Would You Feel Safer If Your Self-Driving Car Could Explain Itself?
George Dvorsky
2017-07-02

The Defense Advanced Research Projects Agency (DARPA) is giving $6.5 million to eight computer science professors at Oregon State University’s College of Engineering. The Pentagon’s advanced concepts research wing is hoping these experts can devise a new system or platform that keeps humans within the conceptual loop of AI decision-making, allowing us to weigh in on those decisions as they’re being made. The idea is to make intelligence-based systems, such as self-driving vehicles and autonomous aerial drones, more trustworthy. Importantly, the same technology could also result in safer AI.

Part of the reason humans struggle to understand AI decision-making stems from how AI works today. Instead of being programmed for specific behaviors, many of today’s smartest robots learn on their own from huge numbers of examples, a process called machine learning. Unfortunately, this often leads to solutions that the system’s developers don’t even understand—think computers making chess moves that baffle even the game’s top grandmasters. Worse, the system can’t offer any explanation of how it arrived at its answer.
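
To make that contrast concrete, here is a toy sketch in Python of the difference between programming a behavior by hand and learning it from examples. The braking task, the data, and the model below are invented for illustration; they have nothing to do with the DARPA project itself.

```python
# A toy sketch of programming a behavior by hand vs. learning it from examples.
# The braking task, data, and model are invented for illustration only.

from sklearn.neural_network import MLPClassifier

# Explicit programming: the rule itself is the explanation.
def should_brake_rule(distance_m, speed_kmh):
    """Hand-written rule a human can read and audit directly."""
    return distance_m < speed_kmh * 0.5  # brake if the gap is under half the speed

# Machine learning: the behavior is inferred from labeled examples.
# Each row is (distance to obstacle in meters, speed in km/h);
# labels say whether braking was the right call.
examples = [(5, 50), (40, 30), (10, 100), (80, 60), (2, 20), (70, 120)]
labels = [1, 0, 1, 0, 1, 0]

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(examples, labels)

# Both give a yes-or-no answer, but the trained model's "reasoning" is spread
# across hundreds of numeric weights rather than a rule anyone can read.
print("Rule says brake:", should_brake_rule(12, 80))
print("Model says brake:", bool(model.predict([(12, 80)])[0]))
print("Weights inside the learned model:", sum(w.size for w in model.coefs_))
```

The hand-written rule can be read, audited, and argued over. The trained network answers the same question, but its "reasoning" lives in a few hundred numeric weights that no developer can simply read off.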

Accordingly, we’re becoming increasingly wary of machines that have to make important decisions. In a recent study, most participants agreed that autonomous vehicles should be programmed to make difficult ethical decisions, such as killing the car’s occupant instead of ten pedestrians in the absence of any other options. Trouble is, the same respondents said they wouldn’t want to ride in such a car. Seems we want our intelligent machines to act as ethically and socially responsibly as possible, so long as we’re not the ones being harmed.

Perhaps it would help us to trust our machines more if we could peer under the hood and see how AIs reach their decisions. If we’re not happy with what we see, or with how an AI reached a decision, we could simply pull the plug, or choose not to purchase a certain car. Alternatively, programmers and computer scientists could provide the AI with new data, or different sets of rules, to help the machine come up with more palatable decisions.

Under the new four-year DARPA grant, researchers will work to develop a platform that facilitates communication between humans and AI to serve this very purpose.

“Ultimately, we want these explanations to be very natural—translating these deep network decisions into sentences and visualizations,” said Alan Fern, principal investigator for the grant and associate director of the College of Engineering’s Collaborative Robotics and Intelligent Systems Institute.

During the first stage of this multi-disciplinary effort, researchers will use real-time strategy games, like StarCraft, to train AI “players” that will have to explain their decisions to humans. Later, the researchers will adapt these findings to robotics and autonomous aerial vehicles.
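
What might a game-playing agent that explains its moves look like? The article doesn’t describe the system’s design, so the sketch below is purely hypothetical: a tiny Python agent that scores a handful of named game features for each candidate action and then turns the biggest contributors into a plain-English sentence. The feature names, weights, and wording are all invented for illustration.

```python
# A hypothetical sketch of an AI "player" that explains its moves.
# Feature names, weights, and the sentence template are invented; the actual
# DARPA/OSU design is not described in the article.

from dataclasses import dataclass

@dataclass
class GameState:
    # Toy real-time-strategy features, each scaled to the range 0..1.
    enemy_army_nearby: float
    own_base_health: float
    resources_banked: float

# Each candidate action weights the game features, so the contribution of
# every feature to a decision can be recovered after the fact.
ACTION_WEIGHTS = {
    "attack": {"enemy_army_nearby": -0.8, "own_base_health": 0.3, "resources_banked": 0.5},
    "defend": {"enemy_army_nearby": 0.9, "own_base_health": -0.6, "resources_banked": 0.1},
    "expand": {"enemy_army_nearby": -0.5, "own_base_health": 0.4, "resources_banked": 0.7},
}

def choose_and_explain(state):
    """Pick the highest-scoring action and build a sentence from the
    features that contributed most to that score."""
    features = vars(state)
    contributions = {
        action: {name: weight * features[name] for name, weight in weights.items()}
        for action, weights in ACTION_WEIGHTS.items()
    }
    best = max(contributions, key=lambda action: sum(contributions[action].values()))
    top = sorted(contributions[best].items(), key=lambda item: item[1], reverse=True)[:2]
    reasons = " and ".join(f"{name.replace('_', ' ')} ({value:+.2f})" for name, value in top)
    return best, f"I chose to {best} mainly because of {reasons}."

action, explanation = choose_and_explain(
    GameState(enemy_army_nearby=0.9, own_base_health=0.4, resources_banked=0.2)
)
print(action)       # "defend"
print(explanation)  # "I chose to defend mainly because of enemy army nearby (+0.81) ..."
```

A real system would have to do this for a deep network with millions of learned parameters rather than a handful of hand-set weights, which is what makes translating its decisions into sentences and visualizations, as Fern puts it, so difficult.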

This research may become crucial for improving trust not just between humans and self-driving cars, but between humans and any kind of autonomous machine—including those with even greater responsibilities. Eventually, artificially intelligent war machines may be required to kill enemy combatants. At that stage, we will most certainly need to know why machines are acting in a particular way. Looking even further ahead, we may one day need to peer into the mind of an AI vastly beyond human intelligence. This won’t be easy; such a machine will be able to calculate thousands of decisions in a split second. It may not be possible for us to understand everything our future AIs do, but by thinking about the problem now, we have a better shot at constraining future robots’ actions.

[Oregon State University]