Artificial intelligence has long had to manage the gap between expectations of its potential benefits and what the technology can actually deliver at any given time. This tension has repeatedly slowed AI's progress by breeding skepticism, reducing funding, and limiting adoption of the technology.
Ideas around AI date back to before the 1950s; in 1950, Alan Turing proposed his now-famous “Turing test” for assessing a machine’s intelligence. As humans, we’ve always had the desire to benchmark the humanity of man-made technology. We’ve romanticized the potential of friendly robots such as Star Wars’ R2-D2 and C-3PO, and envisioned a fearful reality with the T-800 Model 101 robot in Terminator.
But as a society, we’ve never had a clear benchmark for aligning expectations of AI’s capabilities with the reality of the technology. If we want AI to continue making uninterrupted progress, however, we need one quickly.
Enduring several cold AI winters
AI has a long history of hype cycles followed by periods of disappointment and even criticism. This pattern typically results in significant funding cuts from government and commercial sources, the periods commonly referred to as AI winters.
One famous example is the 1954 Georgetown experiment, in which a machine translated more than 60 sentences from Russian into English. Expectations quickly formed that general-purpose machine translation would be available within five to six years. When a further decade of research failed to produce results, funders cut support dramatically. We didn’t see fluid machine translation until 2016, when Google released Google Neural Machine Translation, which today supports translation across more than 100 languages.
There have been two major AI winters so far: 1974-1980 and 1987-1993. Both resulted from underestimating the difficulty of building intelligent machines and misjudging the technological limits of the time. In short, there was a mismatch between end users’ expectations and what the technology could actually achieve.
Where autonomous cars are today
The March 2018 fatal crash involving a Tesla Model X SUV in Mountain View, California, raised serious concerns about the future of autonomous cars. While the accident was certainly tragic, the car’s Autopilot system reportedly did provide visual and audio cues prompting the driver to avoid the concrete divider. The automaker has also built warnings into the AI software that sound when drivers keep their hands off the steering wheel for more than six seconds.
So how should we react when a driver doesn’t follow basic instructions and guidelines aimed at ensuring safety? If a driver dies while not wearing a seatbelt, is that the car manufacturer’s negligence? If a driver runs a red light or makes another fatal error, should we ban all cars from the road?
It’s critical for drivers to understand the level of automation and system capability in their car. The SAE International standard J3016 defines six levels of driving automation that automakers, suppliers, and policymakers use to classify a system’s sophistication. Today, most autonomous cars operate at level 2, partial automation, where manufacturers expect drivers to keep their hands on the steering wheel. This level of automation is provided by technologies such as Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Tesla Autopilot, and Volvo Pilot Assist.
There is a big jump to level 3, conditional automation, in which the car manages most aspects of driving, including monitoring the environment, and prompts the driver to intervene when it encounters a scenario it can’t navigate. Audi released the world’s first level 3 autonomous vehicle, and the manufacturer has confirmed it will take full responsibility in the event of an accident.
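The J3016 taxonomy boils down to a simple lookup, with the key boundary falling between levels 2 and 3: below it the human must continuously supervise, above it the system monitors the environment itself. The Python sketch below encodes that idea; the class and function names are illustrative, not part of any official API, and the level summaries are paraphrased from the standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (summaries paraphrased)."""
    NO_AUTOMATION = 0           # Human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # System assists with steering OR speed
    PARTIAL_AUTOMATION = 2      # Steering AND speed; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # System drives; driver takes over on request
    HIGH_AUTOMATION = 4         # No driver needed within a defined domain
    FULL_AUTOMATION = 5         # No driver needed anywhere, anytime

def driver_must_supervise(level: SAELevel) -> bool:
    """At levels 0-2 the human must continuously monitor the road;
    from level 3 upward the system monitors the environment itself."""
    return level <= SAELevel.PARTIAL_AUTOMATION

# A level 2 system (e.g., today's production driver-assist packages)
# still requires constant human supervision; a level 3 system does not.
print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))      # True
print(driver_must_supervise(SAELevel.CONDITIONAL_AUTOMATION))  # False
```

This framing makes the stakes of the level 2-to-3 jump concrete: it is not a gradual capability increase but a transfer of monitoring responsibility from human to machine.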
So, while we will continue to see progress, it will take time for the technology to evolve to full autonomy. Even Ford’s new CEO, Jim Hackett, reset expectations set by his predecessor Mark Fields, who had promised a “Level 4 vehicle in 2021, no gas pedal, no steering wheel, and the passenger will never need to take control of the vehicle in a predefined area.”
“If you think about a vehicle that can drive anywhere, anytime, in any circumstance, cold, rain — that’s longer than 2021. And every manufacturer will tell you that,” Hackett said in August 2017, months after he replaced Fields.
Avoiding the next AI winter
Autonomous car manufacturers need to develop a more practical guide to, and messaging around, the current capability of autonomous cars and how quickly they can reach level 3 and level 4 autonomy. They need to avoid fanning the flames of hype cycles that will inevitably create expectation gaps and potentially force another AI winter. And they need to stop typecasting humans as the enemy of technological advancement.
Automakers should also consider making more use of parallel autonomy, in which the system acts as a guardian angel that assists human drivers and prevents accidents. Perhaps full autonomy should not be a near-term goal; instead, R&D efforts should go into assistive technology.
The article was originally published on VentureBeat and is reposted here by permission.
The post Overblown expectations for autonomous cars could force the next AI winter appeared first on Virtusa Official Blog.