The Ethics of Autonomous Cars
Patrick Lin   Oct 8, 2013   The Atlantic  

Sometimes good judgment can compel us to act illegally. Should a self-driving vehicle get to make that same decision? If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, as it dutifully observes traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid hitting the branch, but it could cause the human drivers behind to crash into the stopped car.

Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes? 

Our laws are ill-equipped to deal with the rise of these vehicles (sometimes called “automated”, “self-driving”, “driverless”, and “robot” cars—I will use these interchangeably). For example, is it enough for a robot car to pass a human driving test? In licensing automated cars as street-legal, some commentators believe that it’d be unfair to hold manufacturers to a higher standard than humans, that is, to make an automated car undergo a much more rigorous test than a new teenage driver.

But there are important differences between humans and machines that could warrant a stricter test. For one thing, we’re reasonably confident that human drivers can exercise judgment in a wide range of dynamic situations that don’t appear in a standard 40-minute driving test; we presume they can act ethically and wisely. Autonomous cars are new technologies and won’t have that track record for quite some time.

Moreover, as we all know, ethics and law often diverge, and good judgment could compel us to act illegally. For example, drivers might legitimately want to exceed the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, then they might refuse to drive in auto-mode if a tire is under-inflated or a headlight is broken, even in the daytime when headlights aren’t needed.

Read the rest of this article at The Atlantic.

Dr. Patrick Lin is a former IEET fellow, an associate philosophy professor at California Polytechnic State University, San Luis Obispo, and director of its Ethics + Emerging Sciences Group. He was previously an ethics fellow at the US Naval Academy and a post-doctoral associate at Dartmouth College.



COMMENTS

There is no ethically significant difference between killing someone and deciding to let that person die, if (as supposed in the trolley problems) there is no doubt that the death will occur.

The reason, in real life, why killing someone is ethically different from letting someone die is that real life is full of surprises: the person might not really die. If you kill him, his death is pretty certain (though not totally). If you merely don’t take action to save him, he might survive anyway. He might jump off the track, for instance, or someone might pull him off. All sorts of things might happen. Likewise, throwing the one person onto the track might not succeed in saving the other five; how could you possibly be sure it would? You might find that you had done nothing but cause one additional death. Thus, in real life it is a good principle to avoid actively killing someone now, even if that might result in other deaths later.

The trolley problems defeat this principle only because of the unlikely certainty that they assume, and precisely for that reason they are not a useful moral guide for most real situations. For real life, the robot car should try to avoid a crash _now_, based on the presumption that anything further in the future is full of uncertainty.
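For concreteness, the decision rule this comment argues for (weigh the near-certain harm of the next instant heavily, and discount speculative downstream consequences) can be sketched in a few lines. The sketch below is purely illustrative: the Maneuver type, the harm scores, and the discount weight are all hypothetical and not part of any real vehicle's software.

    from dataclasses import dataclass

    # Illustrative only: score each candidate maneuver by the harm it is
    # nearly certain to cause right now, and heavily discount speculative
    # downstream harm. Names, weights, and the Maneuver type are hypothetical.

    @dataclass
    class Maneuver:
        name: str
        immediate_harm: float    # expected harm in the next instant (near-certain)
        speculative_harm: float  # guessed harm from distant consequences (uncertain)

    FUTURE_DISCOUNT = 0.1  # assumption: the further future is mostly uncertainty

    def score(m: Maneuver) -> float:
        """Lower is better: near-certain harm dominates speculative harm."""
        return m.immediate_harm + FUTURE_DISCOUNT * m.speculative_harm

    def choose(options: list[Maneuver]) -> Maneuver:
        return min(options, key=score)

    options = [
        Maneuver("brake hard", immediate_harm=0.1, speculative_harm=0.5),
        Maneuver("swerve across the line", immediate_harm=0.7, speculative_harm=0.1),
    ]
    print(choose(options).name)  # -> "brake hard": avoid the crash now

Under this rule the car deals with the hazard directly in front of it rather than gambling on what might happen afterward, which is exactly the comment's point.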

Driverless cars pose another ethical issue that is far larger: massive unemployment. The US does not generate new, good jobs any more, but around two million Americans are still employed as drivers of various kinds. Driverless cars would push most of them into permanent unemployment, and their dependents may join them in poverty, which often leads to an early death. Five million more Americans in poverty, if 20% of them die early because of it, would mean a million deaths over a period of a few decades. That is comparable to 20 to 30 years of traffic deaths (not all of which would be prevented by these cars), and it does not count the suffering they would all experience while alive.
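To make the scale of that claim concrete, the comment's back-of-envelope arithmetic can be checked in a few lines. Every figure below is the commenter's own assumption, not measured data, except the annual traffic-death figure, which is roughly the US rate around 2013.

    # Reproducing the comment's back-of-envelope arithmetic; every input
    # is the commenter's assumption, not measured data.
    affected = 5_000_000      # two million drivers plus dependents pushed into poverty
    early_death_rate = 0.20   # assumed fraction of the poor who die early
    early_deaths = affected * early_death_rate
    print(f"{early_deaths:,.0f} early deaths")            # 1,000,000

    # US traffic deaths ran roughly 33,000 per year around 2013, so a
    # million deaths is on the order of three decades of traffic fatalities.
    annual_traffic_deaths = 33_000
    print(f"{early_deaths / annual_traffic_deaths:.0f} years")  # ~30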

The US does not have a problem of insufficient goods available; its problem is unequal distribution of wealth. Thus, we should place employment over efficiency. In the absence of a radical change so that being unemployed would not be a big problem, it behooves us to reject technological changes that would cause a lot of unemployment. This is why I refuse, in stores, to use the automated replacements for sales clerks, and instead shout, “If you use those machines, you’re putting Americans out of work!”
