The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.
Sometimes good judgment can compel us to act illegally. Should a self-driving vehicle get to make that same decision? If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, as it dutifully observes traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid bumping the object in front, but it could cause a crash with the human drivers behind it.
In the year 2025, a rogue state—long suspected of developing biological weapons—now seems intent on using them against U.S. allies and interests. Anticipating such an event, we have developed a secret “counter-virus” that could infect and destroy their stockpile of bioweapons. Should we use it?
Earlier this month, a report funded by the Greenwall Foundation examined the legal and ethical implications of using biologically enhanced humans on the battlefield. Given the Pentagon's open acknowledgement that it's working to create super-soldiers, this is quickly becoming a pertinent issue. We wanted to learn more, so we contacted one of the study's authors. He told us that the use of enhanced soldiers could very well be interpreted as a violation of international law. Here's why.
Science fiction, or actual U.S. military project? Half a world away from the battlefield, a soldier controls his avatar-robot that does the actual fighting on the ground. Another one wears a sticky fabric that enables her to climb a wall like a gecko or spider would. Returning from a traumatic mission, a pilot takes a memory-erasing drug to help ward off post-traumatic stress disorder. Mimicking the physiology of dolphins and sled dogs, a sailor is able to stand his post all week without sleep and only a few meals.
As the conflict between Israel and Hamas extends into its second week, it has become quite clear that the renewed hostilities are markedly different from those that came before. Unlike previous engagements, this war has been characterized by the innovative use of new technologies — including rockets that target rockets, unmanned drones, and even social media. Given these early precedents, it’s fair to say that the means of war have changed yet again — but in a way that’s certainly not for the better.
It’s time to get serious about the moral questions resulting from our new class of weapons. In the last week or so, cyberwarfare has made front-page news: the United States may have been behind the Stuxnet cyberattack on Iran; Iran may have suffered another digital attack with the Flame virus; and our military and industrial computer chips may or may not be compromised by backdoor switches implanted by China. These revelations suggest that the way we fight wars is changing, and so are the rules.
With the Cyber Intelligence Sharing and Protection Act (CISPA), we’re in a political tug-of-war over who should lead the security of our digital borders: should it be a civilian organization such as the Department of Homeland Security (DHS), or a military organization such as the Department of Defense (DoD)? I want to suggest a third option, in which government need not be involved — a solution that would avoid very difficult issues related to international humanitarian law (IHL) and therefore reduce the risk of an accidental cyberwar or worse.
Sometimes, the creation is better than its creator. Robots today perform surgeries, shoot people, fly planes, drive cars, replace astronauts, baby-sit kids, build cars, fold laundry, have sex, and can even eat (but not human bodies, the manufacturer insists). They might not always do these tasks well, but they are improving rapidly. In exchange for such irresistible benefits, the Robotic Revolution also demands that we adapt to new risks and responsibilities.
Robots are replacing humans on the battlefield—but could they also be used to interrogate and torture suspects? This would avoid a serious ethical conflict between physicians’ duty to do no harm, or nonmaleficence, and their questionable role in monitoring vital signs and health of the interrogated. A robot, on the other hand, wouldn’t be bound by the Hippocratic oath, though its very existence creates new dilemmas of its own.
With some people, you just can’t win. Do you engage them in a debate, or do you hold your tongue and save yourself the frustration from beating your head against a brick wall? That is the dilemma I face.
Dr. Patrick Lin, a Fellow of the IEET and an assistant professor of philosophy at California Polytechnic State University, was a featured guest on a recent edition of the NPR program “Talk of the Nation,” discussing the ethics of robot warfare.
Dr. Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo, has accepted an appointment as Fellow of the Institute for Ethics and Emerging Technologies for 2010.