
Monday, December 25, 2017

Badiou and the supernumerary name

Badiou's Being and Event, an older and more traditional text when read in the wake of Logics of Worlds, staggers under inconsistencies like the one I describe here.

His use of the term supernumerary throughout the book brings with it confusion as to the many inferred meanings, the Fregean senses, one can extract. It often strikes me when reading philosophy to ask: did the author use a word slyly, duplicitous in meaning by design, or was he or she unaware of the translations that specific word can produce?

I suspect that philosophers, given the searing attacks anyone can make on any knowledge beneath transcendentals, play a dangerous game of courting many suspicious groups by picking and choosing cognitive synonyms at will.

When a mathematician specifies a word, the reader is at least given the courtesy of a singular meaning. Otherwise, and by convention, other mathematicians will attack the double entendre as a weakness that applies to the entire argument, and therewith destroy it.

Philosophers are masters of self-confusion. Herein is a habit that discredits an otherwise plausible argument. This was Wittgenstein's complaint, and why they, his peers but not his equals, tended to savage his honest work without merit. Wittgenstein doubted from knowledge; they impugned from emotion.

In Badiou's logic, he uses supernumerary for:

Supernumerary axioms
Supernumerary names
Supernumerary elements
Supernumerary being
Supernumerary multiple
Supernumerary symbol
Supernumerary situation
Supernumerary nomination

He uses it 53 times.
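
A count like that is easy to reproduce with a few lines of Python; here is a minimal sketch, assuming you have the book as a plain-text file (the filename is hypothetical):

    import re

    # Count occurrences of "supernumerary" in a plain-text copy of the book.
    # "being_and_event.txt" is a hypothetical filename; point it at your own copy.
    with open("being_and_event.txt", encoding="utf-8") as f:
        text = f.read()

    count = len(re.findall(r"\bsupernumerary\b", text, flags=re.IGNORECASE))
    print(count)  # 53, per the tally above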

In his mind they might all be the same. Within his comprehensive understanding he can elucidate the context of each. To the outside world, they cannot be equal; or at least, they have not been made exclusively the same.

Philosophy will never advance, no matter how dedicated, how intelligent, nor how driven its mendicants are, until and unless they decide to humble their horizons to smaller fundamental expositions that construct unassailable arguments.



Thursday, November 2, 2017

Why autonomous car companies will fail. For now.

Google's Waymo has disabled autopilot features that allow the driver to become a passenger.

https://uk.reuters.com/article/uk-alphabet-autos-self-driving/google-ditched-autopilot-driving-feature-after-test-user-napped-behind-wheel-idUKKBN1D00QM

While this may seem like a minor problem, it is in fact a death knell for profitable autonomous vehicle projects.

I have argued for a while that the people who understand how far autonomy and artificial intelligence have actually come, the people who study it, are far less impressed by how much it can accomplish safely and reliably. Those two words may be marketing-weaponized jargon to impress investors, but they have been promised since the 1970s.

Mobile autonomy is not ready. I attended a talk by Sebastian Thrun at IEEE CVPR 2012 (the IEEE being the Institute of Electrical and Electronics Engineers, the people behind standards like WiFi). At the time he said autonomous driving was "90% solved with 90% to go", in a joking manner, to a room full of people who understood what he meant, not an audience of investors with dewy eyes and dreams of striking it rich. He was admitting that the solution isn't there yet. That's not what they tell investors, is it? He left Google soon after to work on Udacity.

He went on to describe how one autonomous car of that era nearly caused an accident: it was driving along smoothly when it stopped, without warning and without any way to predict it, for a floating plastic bag. The car behind it screeched to a halt.

As Jonathan Schaeffer (who created the AI that beat humans at checkers) warns about AlphaGo: just because you can play one game well doesn't mean you can play ALL games well. Unusual situations come up in games.

When I was a Master's student at the University of Alberta, I was awarded 2nd prize by the IEEE Northern Canada Section for my adaptive AI Rock Paper Scissors (Roshambo) player, which beat all the competitors from the recent RPS championships. I did it with a simple strategy: I made an adaptive player that used ALL the other players' strategies against them. It chose a strategy from the pool of other players, weighted its choices toward the better performers over time, and competed in a non-predictable manner. When it started losing, it would revert to the game theory Nash equilibrium of 1/3 rock, 1/3 paper, 1/3 scissors and play for a tie. It beat all the others, including Iocaine Powder, the reigning champ.
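
To spell out the game-theory step behind that fallback: against any opponent mixture $(p_r, p_p, p_s)$ over rock, paper, and scissors, the uniform mix has expected payoff exactly zero, because whatever the opponent plays, one of my three equally likely moves wins, one loses, and one ties:

$$E[\text{payoff}] = \sum_{m \in \{r,\,p,\,s\}} p_m \left( \tfrac{1}{3}(+1) + \tfrac{1}{3}(-1) + \tfrac{1}{3}(0) \right) = 0.$$

No opponent strategy can exploit the uniform mix, which is exactly the guarantee you want once an adaptive edge has disappeared.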

It was a novel approach, but it didn't have any real insight into how it was winning or what key factors underlie the strategy. That was its novelty: it wasn't playing one defined strategy. That made it unusual, so other computer strategies couldn't store a time-history of its moves and predict how to beat it.

So what it did, in effect, was present an unusual situation to the other AI agents. And they failed. I didn't beat them; they failed to beat me.
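
For concreteness, here is a minimal Python sketch of that kind of meta-player. The three toy sub-strategies, the multiplicative weight update, and the "losing" threshold are all my own illustrative assumptions here; my actual entry used the real championship players as its strategy pool.

    import random

    MOVES = ["rock", "paper", "scissors"]
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    # Toy sub-strategies standing in for the real championship entries.
    def always_rock(history):
        return "rock"

    def copy_opponent(history):
        # Repeat the opponent's last move; random on the first turn.
        return history[-1][1] if history else random.choice(MOVES)

    def beat_last(history):
        # Play whatever beats the opponent's last move.
        if not history:
            return random.choice(MOVES)
        last = history[-1][1]
        return next(m for m in MOVES if BEATS[m] == last)

    class AdaptiveRoshambo:
        """Weighted pool of sub-strategies with a Nash-equilibrium fallback."""

        def __init__(self, strategies):
            self.strategies = strategies
            self.weights = [1.0] * len(strategies)
            self.score = 0        # running total: +1 win, -1 loss, 0 tie
            self.history = []     # list of (my_move, their_move) pairs
            self.last_pick = None

        def move(self):
            if self.score < 0:
                # Losing overall: revert to the 1/3-1/3-1/3 mix, play for a tie.
                self.last_pick = None
                return random.choice(MOVES)
            # Otherwise sample a sub-strategy in proportion to its weight.
            self.last_pick = random.choices(
                range(len(self.strategies)), weights=self.weights)[0]
            return self.strategies[self.last_pick](self.history)

        def observe(self, my_move, their_move):
            if BEATS[my_move] == their_move:
                outcome = 1
            elif BEATS[their_move] == my_move:
                outcome = -1
            else:
                outcome = 0
            self.score += outcome
            if self.last_pick is not None:
                # Reward or penalize the sub-strategy that chose the move.
                self.weights[self.last_pick] *= {1: 1.1, 0: 1.0, -1: 0.9}[outcome]
            self.history.append((my_move, their_move))

    # Usage: play ten rounds against a random opponent.
    player = AdaptiveRoshambo([always_rock, copy_opponent, beat_last])
    for _ in range(10):
        mine, theirs = player.move(), random.choice(MOVES)
        player.observe(mine, theirs)

The design point is that the player has no fixed signature of its own: the mixture over sub-strategies shifts with the weights, so an opponent keeping a time-history of moves has no stable pattern to model.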

It would be a philosophical stretch of epic proportions to say that mobile autonomy AI is the same as AI game players.

But it is a philosophical stretch of even greater proportions on their part to claim that the AI algorithms that work in defined-space games like checkers or Go are up to the challenge of dynamic problems in 4D time-space.

I claim they are similar, yet the mobile autonomy problem space is far more complicated and time-varying than the game player problem space. That supposition is beyond dispute.

The problem with mobile autonomy is not that it works; it's that it only works in the known part of the problem space. It can't guarantee a victory (driving up to users' expectations) in unusual situations, i.e. the blowing plastic bag mistaken for an obstacle. If your robot car depends on a map of the roads, what happens in a construction zone? What happens when a road disappears or a house is put in its place? What happens when there is an accident in the middle of the highway? A flying tire? A cardboard box? What happens if a policeman outside the vehicle gestures for the car to pull over?

I research autonomy. I know the algorithms on the inside of the car. I would not get in an autopilot vehicle.

In fact, I was one of the first autonomous robot wranglers when we made this one in 2005:

[photo: the 2005 autonomous robot]
I work on this one right now:

[photo: the autonomous platform I currently work on]
Waymo is developing autonomous cars that they admit are not autonomous. They blame drivers getting careless, behaviour their own testers exhibited during beta-testing, but they are conceding that they can't make the vehicle work without the driver remaining nearly in control. That makes their AI system an expensive paperweight.

In any case, they are trying to make the driver responsible so they can de-risk their own product, not make drivers any safer. It's like a reverse-liability Jedi mind trick.

But that won't stop them from being sued or from losing huge court rulings.

Why that matters to Waymo and Uber and all the other neophyte mobile autonomy companies: in the US, product law is governed by strict product liability.

https://en.wikipedia.org/wiki/Product_liability#Strict_liability
"Under strict liability, the manufacturer is liable if the product is defective, even if the manufacturer was not negligent in making that product defective."

Slick US lawyers will have no problem pinning the blame for accidents on autonomous vehicles if the human being can't know where the AI fails. They can show the products are defective because the autonomous vehicles fail to understand scenarios that are trivial for humans. It won't matter what machine learning they use or how much data they crunch. These lawyers will outline the details of the accident to a jury full of drivers. The jurors will know from personal experience how easily the accident could have been avoided, and they will see evil billion-dollar companies lying to them about how well their products work. The evidence will be the accident itself, not the assurances nor the technology.

If a robot cannot figure out a plastic bag, it isn't ready for the road. As a driver, you know that plastic bag might be a dog, might be an unannounced construction zone, might be an oversized vehicle, and so on. That means ALL these products are inherently defective. These are huge liability risks given the state of the art right now, and a huge unfunded risk to autonomous vehicle companies.

And the question they will posit that will win huge settlements for clients will be a variation on this:

"I ask the jury to consider: as a reasonable driver, given the facts in evidence surrounding this accident, would you have been able to avoid this tragic accident? If so, then you must find the product defective because it wasn't capable of doing what a reasonable driver can do." 

QED

I recommend you steer clear of autonomous vehicle companies. For now.

Friday, September 29, 2017

My predictions on #Trump and #Obama

I predicted #Trump would win; you can go back and look at my old postings.


I also predicted this, and I write it as I listen to #Trump speak to the Manufacturing Association in Washington, DC:

At the end of 4 years, Trump will sound more like Obama, and Obama will sound more like Trump.

As you listen to Trump now versus 2016, it is already happening.