Tuesday 1 March 2016

What You Really Need To Know About The Google Car That Crashed Into A Bus


So a Google car hit a bus, according to numerous news reports that stretched even across the ocean to the BBC. On the surface, it's a car crash--so what? Go deeper into this simple report, however, and you find strong, deliberate statements about human unpredictability and machine reliability. These statements are critical for swaying government and public opinion toward autonomous vehicles and away from human-driven ones.

According to the report, the Google car was attempting to merge into traffic, and the human operating the vehicle assumed the bus would slow and let the car merge in front of it; the bus kept going, and the autonomous vehicle struck its side. Google's comment? They bear "some" responsibility, but the operator thought the bus would slow down and let the car in, so he did not override the system and take over.

Here's a strange statement by Google: 

"The Google AV [autonomous vehicle] test driver saw the bus approaching in the left side mirror but believed the bus would stop or slow to allow the Google AV to continue. . . ."

Believed? The AV had a belief about what the bus was going to do, and it acted accordingly. Can computers have beliefs? Can a self-driving car? Apparently so. It's a strange way of describing an action taken on the basis of a series of predictions and algorithms.

Nevertheless, what's the key here? Is it a big failure? Of course not. It is simply an opportunity for Google to adjust the belief-centre of the computer: buses most likely will not stop to let you in. And with that adjustment, the computer is better now: it can drive with more confidence, learn from the mistake, and move on.
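To make that concrete, here is a minimal, purely hypothetical sketch (in Python, and emphatically not Google's actual software) of what "adjusting a belief" could look like: the planner keeps a running estimate of how often large vehicles yield, and one bad encounter nudges that estimate down. The function name and the numbers are invented for illustration only.

```python
# Hypothetical illustration of a "belief adjustment": estimate the probability
# that a large vehicle will yield, based on a count of past encounters.

def yield_probability(times_yielded: int, times_refused: int) -> float:
    """Estimated probability that a bus yields, from observed encounters."""
    # Add-one (Laplace) smoothing keeps the estimate away from 0 and 1
    # when there is little data.
    return (times_yielded + 1) / (times_yielded + times_refused + 2)

yielded, refused = 8, 2   # invented history: buses usually yielded before
print(f"belief before: {yield_probability(yielded, refused):.2f}")  # ~0.75

refused += 1              # the bus in this incident did not yield
print(f"belief after:  {yield_probability(yielded, refused):.2f}")  # ~0.69
```

The point of the sketch is only this: a single observation shifts the number the machine acts on, and nothing more dramatic than that counts as "learning from the mistake."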

This report is telling in several ways. First, Google took "some" responsibility, meaning it takes two to tango--even though, let's face it, the bus had the right of way. What they're saying is that humans are unpredictable: the computer acted on algorithms of predictability, and the human being is too spontaneous, and that's a problem. Second, Google tacitly blamed the operator of the AV itself--he or she (it's not specified) did not override the computer, and so allowed it to crash into the bus. Third, this is no simple computer--it has beliefs. Whether AVs or the other machines Google is building, these are highly intelligent systems, more so than you and I may think or wish to credit. Fourth, with each little slip-up, the computer grows in intelligence. Unlike human beings, who get shaken up by an accident and may even slip into post-traumatic stress, the trim little machine gets a simple tune-up, adjusts its belief-system, and moves on. If only, I can hear Google saying, the human brain could be so nimble.

The autonomous vehicle is here to stay. It will overtake human-driven cars in the next decade. Human-driven vehicles will be denigrated in the media and through experiments like this one, and the push will be to reduce their numbers on the roads in the name of safety and road predictability. Computers will crash into one another, but that's the price of learning--they have to make up for over a hundred years of human learning behind the wheel. So this 'incident' with the Google car is not only a matter of roadside logistics and algorithms; there is a much deeper ideological statement being made, one that seeks to sway government and public opinion.

