The Ethics of Killer Robots
Mike Treder
2009-02-23

Military robots have already been deployed by the United States in the occupation of Iraq, and in growing numbers, as this recent article in The New Atlantis makes clear:

When U.S. forces went into Iraq, the original invasion had no robotic systems on the ground. By the end of 2004, there were 150 robots on the ground in Iraq; a year later there were 2,400; by the end of 2008, there were about 12,000 robots of nearly two dozen varieties operating on the ground in Iraq. As one retired Army officer put it, the “Army of the Grand Robotic” is taking shape.

Not only are the quantities of robots increasing, but the variety of their uses and capabilities is also expanding. Again, from The New Atlantis:

It isn’t just on the ground: military robots have been taking to the skies—and the seas and space, too. And the field is rapidly advancing. The robotic systems now rolling out in prototype stage are far more capable, intelligent, and autonomous than ones already in service in Iraq and Afghanistan. But even they are just the start.

As one robotics executive put it at a demonstration of new military prototypes a couple of years ago, “The robots you are seeing here today I like to think of as the Model T. These are not what you are going to see when they are actually deployed in the field. We are seeing the very first stages of this technology.”

And just as the Model T exploded on the scene—selling only 239 cars in its first year and over one million a decade later—the demand for robotic warriors is growing very rapidly.

Most of the military robots currently in use are limited to surveillance purposes. A few are equipped for killing and have been used that way, but those are still in the minority.

This article (subscription required to read online), from "The Annals of Technology" in The New Yorker, describes rapid progress in the development of weaponized military robots.

The author observes demonstrations of automated fighting machines on treads that can climb stairs, use on-board video to identify targets, and accurately fire five shotgun rounds per second with almost no recoil; similar robot warriors are mounted on small remote-control helicopters that can fly even in strong winds and strike targets with deadly accuracy. Jerry Baber, a private designer of machine weapons profiled in the article, says he is also working on a ground robot that could fight its way into an enemy-held building and then deploy six smaller robots for individual combat operations.

So far, few of these advanced systems have been deployed, partly due to ethical questions, partly due to cost, but mostly, I suspect, because there are still fears to overcome about what happens if something goes badly wrong.

When asked about such worries, U.S. military spokespersons are quick to point out their policy of maintaining "Man in the loop." In theory, a human decision is required before robot warriors take human lives. In practice, it may not always work that way -- and it's not hard to foresee a time when so many robots are in the field that the number and pace of decisions to be made will exceed any human's ability to keep up.

P. W. Singer, the author of Wired for War, says:

We've already redefined what 'in the loop' means. It's moving from making the decision to fire to mere veto power. The lines are already fuzzy, and they're disappearing.

Meanwhile, according to the Times Online:

[A] report, compiled by the Ethics and Emerging Technology department of California State Polytechnic University and obtained by The Times, strongly warns the US military against complacency or shortcuts as military robot designers engage in the “rush to market” and the pace of advances in artificial intelligence is increased.

A rich variety of scenarios outlining the ethical, legal, social and political issues posed as robot technology improves are covered in the report. How do we protect our robot armies against terrorist hackers or software malfunction? Who is to blame if a robot goes berserk in a crowd of civilians – the robot, its programmer, or the U.S. President? Should the robots have a “suicide switch” and should they be programmed to preserve their lives?

Any sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.

We're proud to note that the lead author of this report, provided for the U.S. Office of Naval Research, is Dr. Patrick Lin, a member of CRN's Global Task Force on Implications and Policy.

Online reaction to Pat's important report, described by The Times as "the first serious work of its kind on military robot ethics," has been interesting to follow, especially as it takes thinkers beyond the usual questions and into deeper territory.

Nicholas Carr, author of The Big Switch: Rewiring the World, From Edison to Google, comments about the report on his blog:

The good news, according to the authors, is that emotionless machines have certain built-in ethical advantages over human warriors.

"Robots," they write, "would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost."

Of course, this raises deeper issues, which the authors don't address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice - with all the messiness that goes with it?

Excellent points to consider. And taking matters even further, Paul Raven on the Futurismic blog says:

I’d go further still, and ask whether that capacity for emotion and moral action actually obviates the entire point of using robots to fight wars - in other words, if robots are supposed to take the positions of humans in situations we consider too dangerous to expend real people on, how close does a robot’s emotions and morality have to be to their human equivalents before it becomes immoral to use them in the same way?

These are hard questions, the kind many of us would prefer never to have to ask. But the time is near, if not now, when they will need to be answered. And they become especially worrying when you consider the massive numbers and powerful destructive possibilities that molecular manufacturing could introduce.

In typical dystopian scenarios, perhaps most vividly presented by the Terminator movies, these smart killing machines have turned against their human makers in all-out war.

But what if, instead, the recursively improving computer brains of robot warriors allow them to become enlightened and to see the horror of warfare for what it is -- to recognize the ridiculousness of building more and better (and more costly) machines only to command them to destroy each other?

What if they gave a robot war and nobody came?