Terminating the Terminator: What to do About Autonomous Weapons
Wendell Wallach
January 29, 2013


 Two recently released documents bear on the debate over autonomous weapons. The first, a November 19 report from Human Rights Watch and the Harvard Law School Human Rights Clinic, calls for an international ban on killer robots. The second, a Department of Defense Directive titled “Autonomy in Weapons Systems,” was published four days later under the signature of Deputy Defense Secretary Ashton Carter. The two documents may be connected only by the timing of their release, but the directive should nevertheless be read as an effort to quell any public concern about the dangers posed by semiautonomous and autonomous weapons systems, which are capable of functioning with little or no direct human involvement, and to block attempts to restrict the development of robotic weaponry. In the directive, the Department of Defense signals its intention to expand the use of self-directed weapons while explicitly asking us not to worry about autonomous robotic weaponry, promising that it will put adequate oversight in place on its own.



Hidden in the subtext is a plea for the civilian sector not to regulate the Department of Defense’s use of autonomous weapons. Military planners do not want their near-term options limited by speculative possibilities. Neither military leaders nor anyone else, however, wants warfare to expand beyond the bounds of human control. The directive repeats eight times that the Department of Defense is concerned with minimizing “failures that could lead to unintended engagements or loss of control of the system.” Nevertheless, a core problem remains. Even if one trusts that the Department of Defense will establish robust command and control in deploying autonomous weaponry, there is no basis for assuming that other countries and nonstate actors will do the same. The directive does nothing to limit an arms race in autonomous weapons capable of initiating lethal force. In fact, it may well be promoting one.

For thousands of years the machines used in warfare have been extensions of human will and intention. Bad design and flawed programming have been the primary dangers posed by much of the computerized weaponry deployed to date, but this is rapidly changing as computer systems with some degree of artificial intelligence become increasingly autonomous and complex.

The “Autonomy in Weapons Systems” directive promises that the weaponry the U.S. military deploys will be fully tested. Military necessity during the wars in Iraq and Afghanistan, however, prompted then-Secretary of Defense Robert Gates to authorize the deployment of new drone systems, unmanned aerial vehicles (UAVs) with very little autonomy, before they were fully tested. The unique and changing circumstances of the battlefield give rise to situations for which no weapons system can be fully tested. Even the designers and engineers who build complex systems cannot always predict how they will function in new situations with untested combinations of inputs.

Increasing autonomy will increase uncertainty about how weaponry will perform in new situations. Personnel find it extremely difficult to coordinate their actions with “intelligent” systems whose behavior they cannot absolutely predict. David Woods and Erik Hollnagel’s book Joint Cognitive Systems: Patterns in Cognitive Systems Engineering illustrates this problem with the example of a 1999 accident in which a Global Hawk UAV went off the runway, causing a collapsed nose and $5.3 million in damage. The accident occurred because the operators misunderstood what the system was trying to do. Unfortunately, blaming the operators and increasing the autonomy of the system may only make it harder to coordinate the activities of human and robotic agents.

The intent of the military planners who authored the directive is to put in place extensive controls for maintaining the safety of autonomous weaponry. But the nature of complex autonomous systems and of war is such that these controls will be less successful than the directive suggests.

Research on artificial intelligence over the past 50 years has arguably been a contemporary Tower of Babel. While AI continues to be a rich field of study and innovation, much of its edifice is built upon hype, speculation, and promises that cannot be fulfilled. The U.S. military and other government agencies have been the leaders in bankrolling new computer innovations and the AI tower of babble, and they have wasted countless billions of dollars in the process. Buying into such hype is merely wasteful. Failure to adequately assess the dangers posed by new weapons systems, however, places us all at risk.

The long-term consequences of building autonomous weapons systems may well exceed the short-term tactical and strategic advantages they provide. Yet the logic of maintaining technological superiority demands that we acquire new weapons systems before our potential adversaries—even if in doing so we become the lead driver propelling the arms race forward. There is, however, an alternative to a totally open-ended competition for superiority in autonomous weapons.

A longstanding concept in just war theory and international humanitarian law is that certain activities such as rape and the use of biological weapons are evil in and of themselves (what Roman philosophers called “mala in se”). I contend that machines picking targets and initiating lethal or nonlethal force is not just a bad idea, but mala in se. Machines lack discrimination, empathy, and the capacity to make the proportional judgments necessary for weighing civilian casualties against the achievement of military objectives. Furthermore, delegating life-and-death decisions to machines is immoral because machines cannot be held responsible for their actions.

So let us establish an international principle that machines should not be making decisions that are harmful to humans. This principle will set parameters on what is and what is not acceptable. We can then go on to a more exacting discussion as to the situations in which robotic weapons are indeed an extension of human will and when their actions are beyond direct human control. This is something less than the absolute ban on killer robots proposed by Human Rights Watch, but it will set limits on what can be deployed.

The primary argument I have heard against this principle is the contention that future machines will have the capacity for discrimination and will be more moral in their choices and actions than human soldiers. This is all highly speculative. Systems with these capabilities may never exist. If and when robots become ethical actors that can be held responsible for their actions, we can then begin debating whether they are no longer machines and are deserving of some form of personhood. But warfare is not the place to test speculative possibilities.

As a first step, President Barack Obama should sign an executive order declaring that a deliberate attack with lethal or nonlethal force by fully autonomous weaponry violates the Law of War. This executive order would establish that the United States holds that this principle already exists in international law. NATO would soon follow suit, leading to the prospect of an international agreement that all nations will consider computers and robots to be machines that can never make life-and-death decisions. A responsible human actor must always be in the loop for any offensive strike that harms a human. An executive order establishing limits on autonomous weapons would reinforce the contention that the United States places humanitarian concerns as a priority in fulfilling its defense responsibilities.

The Department of Defense directive should have declared a five-year moratorium on the deployment of autonomous weapons. A moratorium would indicate that military planners recognize that this class of weaponry is problematic. More importantly, it would provide an opportunity to explore with our allies the issues in international humanitarian law that impinge upon the use of lethal autonomous weapons. In addition, a moratorium would signal to defense contractors that they lack a ready buyer for any autonomous systems they might develop. A moratorium alone, however, would be a limited gesture: no one anticipates that autonomous weapons capable of precision targeting will be available in the next five years, and a moratorium is unlikely to reassure other countries that look to the United States as they gauge their own defense needs.

There is no way to ensure that other countries and nonstate actors will adopt standards and testing protocols similar to those outlined in the directive before they use autonomous weapons. Some country is likely to deploy crude autonomous drones or ground-based robots capable of initiating lethal force, and that will justify efforts within the U.S. defense industry to establish our superiority in this class of weaponry.

The only viable route to slowing, and hopefully arresting, an inexorable march toward future wars that pit one country’s autonomous weapons against another’s is a principle or international treaty that puts the onus on any party that deploys such weapons. Instead of placing faith in the decisions made by a few military planners within the Pentagon about the feasibility of autonomous weapons, we need an open debate within the Obama administration and within the international community as to whether prohibitions on autonomous offensive weapons are already implicit in existing international humanitarian law. A prohibition on machines making life-and-death decisions must either be made explicit under existing law or be codified in a new international treaty.

The inflection point for setting limits on autonomous weaponry that initiates lethal force is now. This opportunity will disappear, however, as soon as arms manufacturers and countries perceive the short-term advantages that could accrue to them from a robot arms race.