When machines are anthropomorphized, we risk applying a human standard that should not apply to mere tools.
Patrick Lin Topics
Reproducing in space, lifeboat problems, and other ethical quandaries that could arise if we travel to Mars. Disaster can happen at any moment in space exploration. “A good rule for rocket experimenters to follow is this: always assume that it will explode,” the editors of the journal Astronautics wrote in 1937, and nothing has changed: This August, SpaceX’s rocket blew up on a test flight.
Without clear rules for cyberwarfare, technology workers could find themselves fair game in enemy attacks and counterattacks. If they participate in military cyberoperations—intentionally or not—employees at Facebook, Google, Apple, Microsoft, Yahoo!, Sprint, AT&T, Vodafone, and many other companies may find themselves considered “civilians directly participating in hostilities” and therefore legitimate targets of war, according to the legal definitions of the Geneva Conventions and their Additional Protocol...
Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake.
Within the next few years, autonomous vehicles—also known as robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe robot cars would be “game changing” for law enforcement. The self-driving machines could serve as professional getaway drivers, to name one possibility. Given the pace of development of autonomous cars, this doesn’t seem implausible.
Robot cars and military robots have more in common than you’d think. Some accidents with self-driving cars will result in fatalities, and this may be troubling in ways that human-caused fatalities are not. But is it really worse to be killed by a robot than by a drunk driver—or by a renegade soldier?
The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.
Sometimes good judgment can compel us to act illegally. Should a self-driving vehicle get to make that same decision? If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, as it dutifully observes traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid bumping the object in front, but could then cause a crash with the human drivers behind it.
If you don’t listen to Google’s robot car, it will yell at you. I’m not kidding: I learned that on my test-drive at a Stanford conference on vehicle automation a couple of weeks ago.