What Google Cars Can Learn From Killer Robots
Patrick Lin
2014-08-04

We can look to the “drone wars” for insight, to the benefit of the autonomous driving industry that includes Audi, Bosch, Daimler, Ford, GM, Google, Nissan, Tesla, Toyota, Volvo, and other heavyweights.

Accidental ethics

Accidents with self-driving cars will happen as a matter of physics and statistics.  Even with perfect software, many things can go wrong that will cause them to crash:

Sensors can be damaged, improperly maintained, or impaired by bad weather; tires can unexpectedly blow out; animals and pedestrians—hidden behind curves, grassy knolls, cars, and other solid objects that sensing technologies cannot see through—can dart out in front of you; passing and oncoming drivers can accidentally swerve into you; insurance-fraud criminals can deliberately crash into you; and you could be hemmed in between cars or against a cliff, leaving no escape path when a big rock falls in front of you or a distracted driver plows into you from behind.

When accidents happen, autonomous cars may have a range of options to reduce the harm—that is, “crash-optimization” programming.  Besides slamming on the brakes, they could also change direction to reduce the force of a collision, which includes choosing what the car crashes into, such as avoiding a wall but striking another vehicle.  Some accidents are worse than others, and an intelligent self-driving car could choose the lesser evil.
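To see what “crash-optimization” programming amounts to, it helps to frame it as cost minimization: score each feasible maneuver by its expected harm and pick the lowest-scoring one.  The sketch below is a minimal illustration in Python; the maneuver names, the severity and probability numbers, and the `expected_harm` model are hypothetical assumptions for illustration, not anything drawn from a real vehicle’s software.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """One candidate evasive action (names and numbers are hypothetical)."""
    name: str
    severity: float     # estimated harm if the collision happens, 0..1
    probability: float  # estimated chance the collision happens, 0..1

def expected_harm(m: Maneuver) -> float:
    # Simplest possible model: expected harm = severity * probability.
    # A real system would also weigh occupants vs. bystanders, uncertainty, etc.
    return m.severity * m.probability

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # "Crash optimization": pick the lesser evil among the feasible options.
    return min(options, key=expected_harm)

options = [
    Maneuver("brake hard, stay in lane", severity=0.9, probability=0.8),
    Maneuver("swerve onto empty shoulder", severity=0.3, probability=0.2),
    Maneuver("swerve toward adjacent car", severity=0.6, probability=0.5),
]
print(choose_maneuver(options).name)  # -> "swerve onto empty shoulder"
```

Note that everything ethically interesting hides inside those harm scores: the moment the model weighs harm to the car’s owner differently from harm to others, the code is encoding a value judgment.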

But some of these decisions will look like judgment calls:  Is it better to crash into a small car or a large truck, or into a motorcyclist with a helmet or one without a helmet, or into a little girl or an elderly grandmother?  Should the car jealously protect its owner, or should it be first concerned with public safety?

Where value judgments like these are being made—to hit x instead of y—we need to think about ethics.  But the autonomous driving industry hasn’t engaged much with ethics, at least not publicly, even though many companies are interested in the issues.  Here’s why this silence could be a very expensive mistake:

Lessons from the drone wars

Because drones—or military robots, such as General Atomics’ MQ-1 Predator and MQ-9 Reaper unmanned aerial vehicles (UAVs)—are genetically related to robot cars, many of their ethical issues may carry over.  Military funding is behind both: the first successful autonomous car was Stanford University’s “Stanley”, an autonomous Volkswagen Touareg that won the Defense Advanced Research Projects Agency (DARPA) Grand Challenge in 2005 by self-navigating a 132 mi (212 km) course in the Mojave Desert.

An MQ-9 Reaper unmanned aerial vehicle. (Photo credit: Wikipedia)

A key lesson from the drone wars concerns the defense industry’s failure to engage in the ethics debate.  In late 2009, the Association for Unmanned Vehicle Systems International (AUVSI), the industry’s leading advocacy group, presented a survey of the top 25 stakeholders in the robotics field.  An astonishing 60% of these industry leaders reportedly did not expect “any social, ethical, or moral problems” to arise from the use of drones.  Whether from disbelief or disregard of ethical problems, the US defense community has largely been silent about the ethics and legality of drone warfare, even though a positive case could possibly be made for it.
