Ensuring Human Control Over Military Robotics
Wendell Wallach
2015-08-29

The comments below were written in response to a New York Times editorial by Jerry Kaplan titled “Robot Weapons: What’s the Harm?” and published on August 17th. The Times does not print rebuttals but did print the italicized section as a Letter to the Editor under the banner “Banning Land Mines” on August 26th.

Piecemeal Arguments Against Banning Killer Robots


In an attempt to thwart the gathering momentum for a ban on lethal autonomous weapons (LAWs, often dramatically referred to as “killer robots”), supporters of autonomous weapons such as Jerry Kaplan believe it is sufficient to outline situations in which intelligent machines might act better than human soldiers. This misses the central point entirely.

By now, everyone acknowledges that robot soldiers, aircraft, and underwater vehicles offer ethically laudable benefits in certain situations. The simple fact that deploying robot soldiers could save the lives of one’s own troops is enough to make this point. But the more important question is whether the short-term benefits of robotizing warfare are outweighed by long-term risks and consequences. Recognizing those longer-term consequences does not require belief in speculative science fiction scenarios such as those of The Terminator or The Matrix.

One robotic device accidentally starting a war that would not otherwise have occurred would be sufficient to wipe out any benefits accrued. One expansionist political leader willing to start a new war, because he believes he can do so without losing his own troops, gives the lie to any contention that robotic weapons will save lives in the long run.

Jerry Kaplan argues for lethal autonomous weapons, and against a ban on their use, with an example that is irresponsible. Land mines, in his view, could be made “safer and more effective” by building in cameras and software that could discriminate among a child, an adult, and an animal. He proposes this despite the fact that most land mines are buried beneath the surface, where any camera is likely to be covered by dirt.

More important, there has been a successful international campaign to ban the use of indiscriminate weapons such as land mines and cluster bombs. One hundred sixty-one countries have signed the Ottawa Treaty, which bans the use, stockpiling, and manufacturing of anti-personnel mines. Creating the illusion that indiscriminate weapons could be made more acceptable through computerized enhancements would be a major setback in the effort to limit atrocities during warfare.

Even those, such as the roboticist Ronald Arkin, who argue that lethal autonomous weapons could potentially be better than human soldiers at following international humanitarian law propose that they be used only in tightly constrained contexts, such as battlefields, where there is effective human oversight. The more responsible proponents of LAWs recognize that present-day computer technology lacks the discriminatory capabilities needed to ensure that noncombatants are not targeted.

There are many reasons why autonomous killing machines, so-called intelligent weapons that select and dispatch their targets without the immediate involvement of human personnel, should be banned. As thousands of scientists recently noted, offensive autonomous weapons would set off a destabilizing new global arms race. Human soldiers, whose reaction time is much slower than that of computers, could easily be mowed down and would have little or no role in fighting future wars. Autonomous weapons would further dilute human responsibility, eroding the foundational principle that a corporate or individual agent is responsible, culpable, and potentially liable for the actions of all weapons that have been deployed.

Furthermore, machines should not be making life-and-death decisions about humans. Doing so violates existing international humanitarian law and should offend the conscience of humanity.

Unlike the use of land mines and cluster bombs, which can be justified strategically but not morally, LAWs do offer some clear benefits. Those supporting a ban have their work cut out for them in demonstrating why those benefits are clearly outweighed by the risks and consequences. Autonomous weapons will not be easy to ban: it will not be easy to agree on terms, and there will be no foolproof inspection regimes. Nevertheless, at this juncture in history, we must make every effort to ensure that there is meaningful human control over any intelligent systems that are deployed.

Let us stop looking piecemeal at the challenges posed by the robotization of warfare, and begin to reflect comprehensively on how autonomous weapons will alter the future conduct of war.