Sunday, March 4, 2018

A response to Stuart Glennan's "On our craving for generality"

In "On our craving for generality", Professor Glennan argues for an acceptable need for generality in philosophy, and he invokes Wittgenstein's criticism of that craving.

Wittgenstein also stated,

People are deeply imbedded in philosophical, i.e., grammatical confusions. And to free them presupposes pulling them out of the immensely manifold connections they are caught up in.

I like to paraphrase this by claiming that what he really meant was that philosophers are masters of self-confusion, because within the folds of language there are many interpretations in which to hide your theory, making skepticism impractical and falsification impossible. This is the malaise of philosophy, borne out by the accepted practices of its mendicants.

Glennan is right that generalization allows one to make more useful rules (associations, relationships, if you like), similar to natural philosophy, about the epistemological world, rules that apply to a wider context than would otherwise be possible. I don't see, however, how he can always be correct when philosophy started out 2,000 years ahead of science and has delivered so very little in comparison. This is not disputable by any performance metric one intends to measure. By time. By effort. By accomplishments.

Within the last 200 years, we have seen the capabilities of science-driven thought uncover billion-fold improvements in ability. The abacus may still arrive at instantaneous answers, but binary sequential computers can approach within a Zeno's distance of that goalpost. One can scoff at technology, but the models inherent in those systems have also advanced and accelerated in complexity and generality as they are improved, generation after generation. Contrast this with modern philosophical by-products.

And yet, philosophers remain unmoved. It should be frightfully unnerving for philosophers to see so many pass them by. That indifference, to me, speaks to a relative misapprehension regarding the notions of achievement and advancement. These are all symptoms of a protected workshop, unwilling to change.

Even within science, the electrical disciplines goad the biological disciplines to pick up the pace, as when Intel's CEO, attending a pharmaceutical conference, implored them to ramp up their efforts because more is possible, faster.

Professor Glennan, in the aforementioned article, points out:

"Wittgenstein’s worries about the craving for generality are in some ways reminiscent of Hume’s worries about the principle of induction. Hume argued that inductive inferences are grounded in our unwarranted commitment to a principle of the uniformity of nature. We use past experience to make predictions about future experience, but this can only work if the future is like the past, and we cannot, on pain of circularity, establish by induction that the future is like the past. Nonetheless we persist with our inductions. It is just habit.
The problem with Hume’s way of putting it is that it suggests that in the past nature has always been uniform; we know it has not.

The real question is not whether the future will be like the past, but when it will be."
Let me pierce that assumption (circular logic amounting to false tautologies), and that philosophical cover, by pointing out a simple proposition. While there are no absolutes like beauty and good, to propose these ideas as time-varying breaks neither generality nor specificity. To assign limits to good or bad may make for exclusions outside the frame, but it also makes "better" and "worse" straightforward to distinguish. Therein is a model. Claiming, as Hume did, that we can't induce that this bread is as nourishing as the last may seem rational, but it evades the possibility that if we define the depth and breadth of what bread is, we can make a pronouncement within induction that makes sense. This is where science accelerated away from philosophy.

What philosophy lacks is not generality; it lacks specificity. Ludwig Wittgenstein arrived at philosophy from engineering, and I can assure you, as another engineer witnessing the practices of the philosophical knowledge tribe, that what he found was lacking. Not in the lofty goals nor in the ability of the practitioners, but in the madness masquerading as method.

Badiou pointed out that truth and falsity must exist outside any one philosophy.

If so, then any philosophy is the right starting point to make the same inroads on epistemology as the others.

What the sciences developed that philosophy did not, was a set of standards.

They are not what you might imagine, like a protocol or even the scientific method. They are instead bounded constants that explain the interrelations amongst many concepts. While mass in Newtonian models is incommensurable with mass in an Einsteinian model (in the language of Kuhn), such a standard makes a common reference frame that one can use to compare and contrast models and results.

These standards are mainly embodied as universal physical constants: Boltzmann's constant, the hertz, Avogadro's number, the newton, the ampere, and so on. Physical properties, which might be any tangible, practical units of measure, that allow anyone's circular logic to depart one constant and arrive at another. Many are arbitrary; they could be changed, and sometimes are. The length of a metre, the bounds of a second. If a model or proposition about these standards can't be transformed into another, then it is very easy to falsify. That exposes more error and truth than a messy system where ambiguity is used as cover, not as a reason to define and refine. This system makes a mesh, or a lattice, or a torus of any circular logic. The transcendence isn't in the method but in the patterns it creates in understanding.
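The cross-check described above can be sketched in a few lines. This is my own toy illustration, not from the essay: the molar gas constant R, Avogadro's number N_A, and Boltzmann's constant k_B are interlocked, so a claimed value for one can be tested against the path through the others, and a proposition that cannot be transformed into its neighbours is immediately falsified.

```python
# Toy illustration of constants as a falsifying lattice: Boltzmann's
# constant must be derivable as R / N_A, or the claim fails.

R = 8.314462618        # molar gas constant, J/(mol*K)
N_A = 6.02214076e23    # Avogadro's number, 1/mol
k_B = 1.380649e-23     # Boltzmann's constant, J/K

def consistent(claimed_k_B, rel_tol=1e-9):
    """Check that a claimed Boltzmann constant agrees with R / N_A."""
    derived = R / N_A
    return abs(claimed_k_B - derived) / derived < rel_tol

print(consistent(k_B))        # the accepted value passes
print(consistent(1.5e-23))    # a bad proposition is falsified at once
```

One constant departs, another arrives; any argument that cannot complete the circuit is exposed.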

Now, Glennan might counter with late-Wittgenstein (also from the Blue Book):

The idea that in order to get clear about the meaning of a general term one had to find the common element in all its applications has shackled philosophical investigation; for it has not only led to no result, but also made the philosopher dismiss as irrelevant the concrete cases, which alone could have helped him understand the usage of the general term. 
Late-Wittgenstein was a study in paradox when compared against early-Wittgenstein. At first, Wittgenstein was fortified with an optimist's effervescence, a belief that symbols and systems had no limit in aiding man's comprehension. At the end, he had drifted so far into the riddles of words (language is only one knowledge modality) that he had lost his faith in a better tomorrow.

This all came about not for any lack of talent or dedication. I suggest his conversion came about due to the philosophical company he kept, the poor recognition of his vision, and the plodding, pedestrian nature of other minds unwilling to stake their reputations on achieving a better model in philosophy.

As Bertrand Russell wrote of Wittgenstein:

...every morning he begins his work with hope, and every evening he ends in despair
Logical positivism was the attempt to bridge back to philosophy using the same successful techniques that have engorged natural philosophy with more knowledge than philosophy has achieved in four times the time. More's the pity that it was gradually excised and replaced. Given the progress made, was that wiser for the discipline?

Yes, Kant dictates that experience is king, and while physical laws remain temporary theories, their lifespan may exceed that of the solar system, if not approach infinity. A satisfactory state of affairs to hand to our grandchildren.

Science has sustained its progress because of formalized definitions and refined common reference frames, despite the same tribalistic, political, and sociological difficulties of internecine rivalry inherent in all academia.

When and if string theories supersede relativistic models based on Lorentz transformations, which in turn superseded Newtonian Platonic calculus, then mankind is better off than if no one had extended upon the standards.

Specificity doesn't proclaim that common reference frame logic is infallible, nor that any one set of arguments cannot be demonstrated false when compared to greater knowledge attained elsewhere. Falsifiability is still the goal, but the way to achieve it at every step in science is understood even if the ultimate outcome is not. Older scientists are proven wrong as new theories are proven better. Better may not be quantifiable in absolute terms, but the practical limits are widened nonetheless.

Let me represent the values of common reference points in an analogy.

Suppose that constants are like handholds on the face of a steep mountain. One can advance up the mountain by building a logical argument that clings to one of these constants. If science were a pre-climbed mountain, Mount Science perhaps, then new climbers would arrive at base camp with many visible, understood, and solid points to work from. If one climbs through a point but arrives at a dead end, some impossible vertical, then one can traverse back to another constant in the pursuit of a further plateau. The mountain is nowhere conquered, but there are many beaten paths to ascend in comfort and safety, making attempts at higher points more achievable in a lifetime.

Now, imagine what today's Mount Philosophy looks like.

Saturday, February 24, 2018

Two Steps Ahead

Electronics miniaturization, global technology parity, and the Internet all combine to make awesome and frightening potentialities. When you see things you never imagined you would, it's easy to believe this is a world first; the truth is, in most cases, your epiphany is late to the party. That is the single biggest threat to your appreciation of the value or danger of any technology: the assumption that civilization's evolution will somehow wait for your brain to catch up.

The top image is from 2017, in ISIS-controlled territory; the second is from 2009, in Afghanistan, when General Jonathan Vance was in command. There is an argument by some that this idea, driving remote-controlled explosive vehicles into enemies, really started in 2001 as a counter to IEDs, but again, that's other people's egos talking. Some people recall a Clint Eastwood movie, "The Dead Pool", and a cheesy gasoline explosion made by a remote-controlled (RC) car weapon in the 1980s.

The image below demonstrates that the idea behind all of these goes back to before WWII. The Goliath was made by Germany and used against Russian T-34s in WWII, but the Germans stole the idea from a French prototype they fished out of the Meuse river. French defence research created the idea. One could argue that even they stole the idea, repurposing it in ground-vehicle form from the remote-control airplanes of WWI. Or, in the vernacular of patent lawyers, "its use in the new form was obvious".

It was Field Marshal Foch who stated, to his own legacy's impairment, that the biplane was:
Les avions sont des jouets intéressants mais n'ont aucune utilité militaire
(Airplanes are interesting toys, but of no military value.)
This is as great a folly as Bill Gates reportedly stating, "640K ought to be enough for anybody."
In any case, many people, many powerful and important people, went past dangerous technological things with only mild amusement and no sense of their significance. I do not blame them for it, but I urge you not to let them off the hook either. We all need someone looking at the problem from a wider perspective. Now more than ever.
People, no matter their stature nor visionary prowess, cannot encompass the entirety of everyone else's thought processes. That's a feature, not a bug. When one makes a pronouncement and turns around to more important things, assuming one can forget the danger that lay in front of one, that might seem like a safe bet. Why? Because you determined it's not significant, remember?
Eventually, that idea, created (I suggest) by the French, was cast aside by the Germans as too expensive when the ends and means drifted farther apart than they could accommodate. The Goliath was forgotten, without significance, by the people stumbling past it. But the idea was not lost; it was "de-prioritized" in the minds of people who assumed they were smarter than the rest. They believed their vision must be the right vision. Why?
In all cases, an idea emerges and then lies dormant, because people think at the time they can avoid it, seeing no immediate need to get ready to stop it. This is technological hubris. This is the single biggest, most insidious, and most dangerous human behavior preventing today's mankind from being ahead of tomorrow's adversary. It is one man's arrogance that he must be smarter, or smart enough, to out-think everyone else. A fatal tautology.
Asymmetric warfare, combined with the promulgation of technological parity brought on by electronics miniaturization, globalization, and the Internet, means that no government can avoid actively working on counters to technology. This isn't a self-serving opinion. It is a technological reality.
The luxury of time is no longer on anyone's side. That's not a bug, that's a feature. I have worked with people originating from the entire globe. I have traveled from Hawaii to Hungary. People everywhere are equally inventive, equally creative. Now they all have access to almost everything.
When someone claims that Improvised Explosive Devices (IEDs) are some new threat no one had time to imagine coming, well, let these images determine what you think of that belief:
IEDs, Warsaw Uprising, Poland, 1944.
Petard - satchel charge (IEDs), France, 16th Century.

There is only one way to gain two steps ahead of anyone else, giving you one step to prepare, and that is to make deliberate, consistent, and well-rounded investments in research and development. You must do this not for your own satisfaction, but despite it.
We warned about a threat in 2005 that included internet-based ambush, and made our own versions in 2007 through 2010; the threat itself didn't appear until 2017. We weren't listened to, and got no recognition nor credit, but the information was ready and waiting when it did occur. That's two steps ahead whether it's appreciated or not. I hope I have avoided the impression that I thought I was being originally creative; I wasn't. My point is that it wasn't any one technological novelty that made the idea a threat then. All these technological threats PRECEDE everyone's imagination.
The realization that is important for you to understand is that this idea, combined with many other trends ongoing at the time (other technological ideas that one needs to be mindful of in conjunction), made for a stark change in the likelihood of imminent arrival. That was my novelty. One can never guess, with absolute certainty, the risk-on moment. But appreciating that the lower bound is soon is a truly valuable skill. That is prescience.

If someone claims that they have discovered a new threat, your first response should be skepticism. Unless it's me, of course; then it's gospel. I won't be too arrogant in pointing out I have been ahead of the curve on many things fresh people will tell you are new. I assure you I was there first because I have been at the game longer: I started researching robotics in 1988. I bored my International Baccalaureate English class with a 6-minute presentation on robotics. It took 15 minutes. I placed eighth in the world in the Solar Roller at the 2000 BEAM World Championship Robotics Games. I've been thinking on robotics, explosives, and counter-IED since before 1998.

Most of my good ideas get de-prioritized.
If anyone claims to know a new technological threat, the first thing you should consider is: how old is that person? Just how recent is that corporate knowledge?

The good news is, so long as they last, there are people who have the time and space to think ahead and around for you. You need people not slaved to the development of any one technique or technology, who understand the bigger picture and the danger, and who embody enough corporate knowledge to see past our own collective shortcomings, because that's what they are paid to do. No one person can see all the danger. No one interpretation can encompass all the manifestations that can appear. To believe otherwise is technological hubris.

I believe it was a (the?) Russian Chief of Staff who said it best: the value of research is foresight.

Any commander entering a new conflict from this point forward must accept that he or she will see technological disruption that must seem, to him or her, like the unimaginable. Every one of them must accept that this notion of unimaginability is false and will always be false; that much might be accepted. The hard part will be convincing them that it has never been true.

In case you were wondering in what way we were two steps ahead, this is a demo from 2007, ten years ahead of the emergence of an asymmetric threat.

This was a water disruptor driven on an RC car prototype aimed into a dummy target, with the help of EOD operators of Fleet Dive Unit Pacific at Exercise Desert Rat 2007, to demonstrate a cheap, fast, and real explosive vehicle-borne IED.

(C) 2018 DRE. All rights reserved.

Friday, February 9, 2018

Investment Rules to avoid your Retirement Apocalypse.

I predicted the #StockMarket would crash in 2016, when President Clinton would have been elected to a third #Obama term. Instead, there was so much sidelined money that it came rushing back in on hope in the #TrumpBump. Then the #US Federal government agreed to add hundreds of billions onto a debt it will never repay, against the prospect of higher interest rates. The fundamentals are now worse. It's 1929 all over again.

Whatever that hopefulness was, it wasn't on firm footing, and it was going to take only one small disturbance to crash the market, because the fundamentals are all wrong now: stocks are too over-valued, and companies packed on debt to pay dividends or buy back shares, debt locked in at low interest rates they won't be able to sustain in the future when rates rise. The entire #market was operating on cracking ice, and it has now burst open.

I never wavered from the assessment. It was a matter of when, not if.

As we gaze upon the carnage of the #DowJones at -1032 and the dawn of a fresh bear market, here are my rules for investing:

  1. Everyone is lying to you: once you accept that economic data is not like physical data, that it is skewed and biased to the advantage of the presenter, then you must assess how likely the promises are to come true.
  2. All business reporters are lazy; you can't trust them to hold corporate sociopaths accountable. At the worst times, like now, they can't bring themselves to admit they were wrong, and instead rationalize why they didn't ask hard questions earlier. They are no different from political reporters: they get in bed with powerful interests whether they realize it or not. They become cheerleaders over time whether they realize it or not.
  3. Financial analysts fear being sued for recommending a sell, so they will advise a hold when they really mean sell. You can't trust them either. Look at all these so-called experts who were raving about investments the day before a -1032 daily loss. Why didn't they predict it and become the lone disruptive hero? They can't see their own biases.
  4. If your investment makes a profit, don't be shy about taking it. A win is a win. Later, if that equity goes higher, don't second-guess why you sold at the time. You are always safer making a little than risking it for nothing.
  5. The underlying value of any equity is a fraction of the market value. It is a fantasy that shares outstanding times price is a fair estimate; it has never been borne out in reality, ever. Instead, the real value is how much the company would be worth if you had to liquidate 15% of the total immediately. If one had to sell at a loss, that depressed price times the outstanding shares is more realistic, in other words a fraction of the stated value.
  6. All equities are gambling. You are giving your money to others who are often no smarter or dedicated than you are that "hope" to return a profit. Don't delude yourself they are superhuman.
  7. All equities are risky. You can't let current market sentiment persuade you that a price drop of 50% is inconceivable. It's never inconceivable.
  8. Bankers take their profit first. They walk away from the asset and you own it. All risk is yours. Their job is to get you to give them your money- what happens next is on you. Don't expect anything otherwise.
  9. Stock market workers can liquidate their assets faster than you can, so plan downside conditional trades for when you can't focus on the markets.
  10. Understand both what the market fundamentals are and what the current investor sentiment is before you invest in an equity. You can trade on either.
  11. Traders are motivated by greed and fear. Exploit both.
  12. Predict where the average, normal person will be in the future and get ahead of that. Keeping ahead of the swings is how you avoid disaster.
  13. If you can't babysit a risky investment, why would you invest?
  14. Take your time. Believe in yourself. Be patient when you invest.
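Rule 5's liquidation-discount arithmetic can be sketched as follows. This is a minimal illustration of my own, with made-up numbers; the 20% haircut is a hypothetical assumption standing in for the price damage a forced sale of roughly 15% of the float would do, not a figure from the rules above.

```python
# Sketch of rule 5: value a company not at shares * last price,
# but at the depressed price a forced partial liquidation would fetch.

def headline_value(shares_outstanding, price):
    """The conventional market cap: shares times last price."""
    return shares_outstanding * price

def liquidation_value(shares_outstanding, price, haircut=0.20):
    """Assume a forced sale of ~15% of the float knocks the price
    down by `haircut`; that price times all shares is more realistic."""
    forced_sale_price = price * (1 - haircut)
    return shares_outstanding * forced_sale_price

shares = 100_000_000
price = 50.0
print(headline_value(shares, price))     # 5,000,000,000
print(liquidation_value(shares, price))  # 4,000,000,000
```

The gap between the two numbers is the fantasy that rule 5 warns about: the stated value only exists if nobody has to sell.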