Intrigued by IEET Fellow Patrick Lin’s essay “The Ethics of Autonomous Cars,” we asked: “Should your robot car sacrifice your life if it will save more lives?” A third of the 196 of you who responded said no, a third said yes, and a third said it should be the driver’s option.
Of course, we can hope there will be far fewer automotive accidents with robot cars in the first place. But it will be fascinating to see how this software and regulation get written. Will libertarian Americans opt for choice, or save-the-driver as a default, while more social democratic countries implement save-the-most-lives software?

Our new poll is on whether the Turing Test is useful or bollocks.
I chose the first option simply because cars are not designed to protect other people, but rather those inside the car. If a car is programmed to protect others at the expense of its passengers, we’re left with the ethical dilemmas of deciding “which lives are more important?”, “how many people in the sacrificed minority are too many to leave behind?”, and “what if the car was wrong and no lives actually needed saving at its passengers’ expense?”
With each passenger protected by his or her own car, we at least avoid those kinds of dilemmas, or alleviate them to some degree. When every car is programmed to protect a limited, select group of people, the roads will be much safer, because every car will avoid dangers and risks to its own passengers at all times rather than trying to “grab someone else’s steering wheel,” so to speak.
So looking back at these options, the first is the only one that makes sense. Everything balances out when each car defends itself and its passenger(s) as meticulously as possible. The second option is a stupid one, IMO, because that’s exactly what people are doing right now with this poll; it shouldn’t even be an option. And the third and final option would cause more accidents than it prevents, because the car is no longer securing the safety of any particular person, but of others at random, at the expense of those inside. A car doesn’t need to be a hero. It just needs to do its damn job.
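To make the contrast concrete, here’s a deliberately toy sketch in Python of the two policies being debated: a car that always minimizes risk to its own passengers versus one that minimizes expected harm overall. The maneuver names and risk numbers are invented for illustration, and real collision-avoidance software looks nothing like this.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # estimated chance of harm to the car's passengers (hypothetical)
    bystander_risk: float  # estimated chance of harm to people outside the car (hypothetical)

def protect_occupants(options):
    # Policy 1: pick whatever is safest for the people inside the car.
    return min(options, key=lambda m: m.occupant_risk)

def minimize_total_harm(options):
    # Policy 2: pick whatever minimizes expected harm overall,
    # even at the occupants' expense.
    return min(options, key=lambda m: m.occupant_risk + m.bystander_risk)

options = [
    Maneuver("brake hard in lane", occupant_risk=0.2, bystander_risk=0.5),
    Maneuver("swerve off the road", occupant_risk=0.6, bystander_risk=0.0),
]

print(protect_occupants(options).name)    # -> "brake hard in lane"
print(minimize_total_harm(options).name)  # -> "swerve off the road"
```

The same situation, the same estimates, and the two policies steer the car in opposite directions; that divergence is exactly what the poll is asking us to choose between.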
There are, of course, some people who believe that cars should decide to save someone important over someone who’s wasting their life away. Assuming we’re even capable of programming a computer to make that kind of judgment, what then?
By that very “logic,” the car would decide that a progressive politician is more important to save than a car full of junkies. The problem is that this judges what is, not what could be! What if that politician becomes corrupt and those junkies get clean?
Would the car’s decision to spare the politician still be the right one? I wouldn’t think so. I’d rather each car simply protect its own passengers. That way, everything balances itself out in the end.