The recent Uber and Tesla autonomous driving accidents have sparked a hot topic that has truly divided opinion. The questions being asked globally are:
Is AI moving too fast and needs to slow down?
Or does AI need to keep moving forward aggressively, to achieve better results as quickly as possible? One school of thought holds that AI isn't moving quickly enough!
My question is: why does the race for AI need to be a race for perfection? Over 1.2M people die each year from car accidents, and it never seems to raise the question: should humans be allowed to drive cars? So when it comes to the perfection of AI, are we not putting too much pressure on it? Can we not be happy with a potentially huge improvement in safety, one that could lead to 50% fewer deaths and effectively save 600,000 lives a year? Or do we need it to be perfect?
We have all read the scary reports of uncontrolled cars, unstoppable and lacking human reaction, deadly machines let loose on our streets… but here are some other facts:
- AI does not drive under the influence
- AI is not busy playing with the radio
- AI is not getting distracted by kids in the back
- AI does not text while driving
While the goal would be to avoid fatalities from any source, sadly, even as this article is being written, more car accident fatalities will occur due to human error.
We have had autopilot on aeroplanes for a long time now, and it's commonly known that the plane largely flies itself, with the pilots monitoring and intervening when needed.
Aeroplane crashes are a rarity – what does that say about the correlation between human involvement and fatal error?
So, what is the answer? Can we accept that the potential fatal accidents that could happen are the price we pay for a huge reduction in road fatalities in the long run? Or is the answer more public education, to temper people's expectation of AI's perfection?
If we use aeroplanes again as an example: when we were first testing and building planes, there were far more fatalities than, thankfully, there are today. However, if we are not willing to take risks and potentially experience losses for longer-term gain, then we will stand still, and many innovative, world-changing creations will never exist.
With all the above being said, there is also a strong case to be made from the other side of the fence. For me the two biggest issues are:
- Who is responsible when things go wrong? What's clear is that this technology can potentially save far more lives than it endangers; the issue is knowing who is culpable in the event it does go wrong. As humans, we can take responsibility for our actions: you make a personal decision that has inevitable implications. For example, if you make the decision to drive under the influence of drugs or alcohol and cause a crash, there are clear consequences.
- But how do you hold AI accountable in self-driving cars? Can the creators of the self-learning software ever claim it will be 100% faultless? If not, then perhaps there is, at present, too much ambiguity. What about the manufacturer; are they the ones who would, or should, be held responsible? And at what cost?
- Another huge issue is security: how secure will autonomous cars be?
- Will the movies we have loved in the past, such as Terminator and I, Robot, which depict a world where machines eventually take over (scenarios we have considered far-fetched), become a reality? For me, this is arguably the biggest and most frightening uncertainty. If history has taught us anything, it is that there are a lot of evil geniuses out there. What happens if our enemies manage to hack the systems? The consequences could be catastrophic.
There are other economic and social issues also:
- Car enthusiasts. My boss (Michael Bott), for one, would be devastated if he were suddenly unable to drive his classic car. For many people, driving is much more than simply getting from A to B.
- Would it result in an economic disaster? If autonomous cars were adopted everywhere, many industries would be lost. Professional drivers of all types would be made completely redundant. What happens to these people? Or is this no different from any technological advancement that ultimately results in the loss of many jobs, such as the invention of the first computer? To add to this, autonomous cars, I imagine, would be much more expensive to buy.
- Like Uber, autonomous cars rely heavily on GPS satellites. If a signal were blocked, this would significantly affect the car's functionality. What would happen if this occurred on a large, global scale? What would the implications be?
I think it's clear that AI offers key advantages for the practical and ecological advancement of our future. But it is not without its social and economic consequences.
So, back to my original question: does that mean AI should slow down, or should more investment be made so we can speed it up and make quicker advancements to reach that "perfection"?