Terminating the Terminator: What to do About Autonomous Weapons


Wendell Wallach
scienceprogress.org

Posted: Jan 29, 2013

“The Terminator” is clearly science fiction, but it speaks to a deep intuition that the robotization of warfare is a slippery slope—the endpoint of which can neither be predicted nor fully controlled. Two reports released soon after the November 2012 election have propelled the issue of autonomous killing machines onto the political radar.

 The first, a November 19 report from Human Rights Watch and the Harvard Law School Human Rights Clinic, calls for an international ban on killer robots. Four days later a Department of Defense Directive titled “Autonomy in Weapons Systems” was published under the signature of Deputy Defense Secretary Ashton Carter. The two documents may only be connected by the timing of their release, but the directive should nevertheless be read as an effort to quell any public concern about the dangers posed by semiautonomous and autonomous weapons systems—which are capable of functioning with little or no direct human involvement—and to block attempts to restrict the development of robotic weaponry. In the directive, the Department of Defense wants to expand the use of self-directed weapons, and it is explicitly asking us not to worry about autonomous robotic weaponry, saying that the Department of Defense will put in place adequate oversight on its own.

Hidden in the subtext is a plea for the civilian sector not to regulate the Department of Defense’s use of autonomous weapons. Military planners do not want their near-term options limited by speculative possibilities. Neither military leaders nor anyone else, however, wants warfare to expand beyond the bounds of human control. The directive repeats eight times that the Department of Defense is concerned with minimizing “failures that could lead to unintended engagements or loss of control of the system.” Nevertheless, a core problem remains. Even if one trusts that the Department of Defense will establish robust command and control in deploying autonomous weaponry, there is no basis for assuming that other countries and nonstate actors will do the same. The directive does nothing to limit an arms race in autonomous weapons capable of initiating lethal force. In fact, it may actually be promoting one.

For thousands of years the machines used in warfare have been extensions of human will and intention. Bad design and flawed programming have been the primary dangers posed by much of the computerized weaponry deployed to date, but this is rapidly changing as computer systems with some degree of artificial intelligence become increasingly autonomous and complex.

The “Autonomy in Weapons Systems” directive promises that the weaponry the U.S. military deploys will be fully tested. Military necessity during the wars in Iraq and Afghanistan, however, prompted then-Secretary of Defense Robert Gates to authorize the deployment of new drone systems—unmanned aerial vehicles, or UAVs, with very little autonomy—before they were fully tested. The unique and changing circumstances of the battlefield give rise to situations for which no weapons system can be fully tested. Even the designers and engineers who build complex systems cannot always predict how they will function in new situations with untested combinations of inputs.

Increasing autonomy will increase uncertainties as to how weaponry will perform in new situations. Personnel find it extremely difficult to coordinate their actions with “intelligent” systems whose behavior they cannot fully predict. David Woods and Erik Hollnagel’s book Joint Cognitive Systems: Patterns in Cognitive Systems Engineering illustrates this problem with the example of a 1999 accident in which a Global Hawk UAV went off the runway, collapsing its nose and causing $5.3 million in damage. The accident occurred because the operators misunderstood what the system was trying to do. Unfortunately, blaming the operators and increasing the autonomy of the system may only make it harder to coordinate the activities of human and robotic agents.

The intent of the military planners who authored the directive is to put in place extensive controls for maintaining the safety of autonomous weaponry. But the nature of complex autonomous systems and of war is such that they will be less successful in doing so than the directive suggests.

Research on artificial intelligence over the past 50 years has arguably been a contemporary Tower of Babel. While AI continues to be a rich field of study and innovation, much of its edifice is built upon hype, speculation, and promises that cannot be fulfilled. The U.S. military and other government agencies have been the leaders in bankrolling new computer innovations and the AI tower of babble, and they have wasted countless billions of dollars in the process. Buying into hype and promises that cannot be fulfilled is wasteful. Failure to adequately assess the dangers posed by new weapons systems, however, places us all at risk.

The long-term consequences of building autonomous weapons systems may well exceed the short-term tactical and strategic advantages they provide. Yet the logic of maintaining technological superiority demands that we acquire new weapons systems before our potential adversaries—even if in doing so we become the lead driver propelling the arms race forward. There is, however, an alternative to a totally open-ended competition for superiority in autonomous weapons.

A longstanding concept in just war theory and international humanitarian law is that certain activities such as rape and the use of biological weapons are evil in and of themselves—what Roman philosophers called “mala in se.” I contend that machines picking targets and initiating lethal and nonlethal force are not just a bad idea, but also mala in se. Machines lack discrimination, empathy, and the capacity to make the proportional judgments necessary for weighing civilian casualties against achieving military objectives. Furthermore, delegating life and death decisions to machines is immoral because machines cannot be held responsible for their actions.

So let us establish an international principle that machines should not be making decisions that are harmful to humans. This principle will set parameters on what is and what is not acceptable. We can then go on to a more exacting discussion as to the situations in which robotic weapons are indeed an extension of human will and when their actions are beyond direct human control. This is something less than the absolute ban on killer robots proposed by Human Rights Watch, but it will set limits on what can be deployed.

The primary argument I have heard against this principle is the contention that future machines will have the capacity for discrimination and will be more moral in their choices and actions than human soldiers. This is all highly speculative. Systems with these capabilities may never exist. If and when robots become ethical actors that can be held responsible for their actions, we can then begin debating whether they are no longer machines and are deserving of some form of personhood. But warfare is not the place to test speculative possibilities.

As a first step, President Barack Obama should sign an executive order declaring that a deliberate attack with lethal or nonlethal force by fully autonomous weaponry violates the Law of War. This executive order would establish that the United States holds that this principle already exists in international law. NATO would soon follow suit, leading to the prospect of an international agreement that all nations will consider computers and robots to be machines that can never make life-and-death decisions. A responsible human actor must always be in the loop for any offensive strike that harms a human. An executive order establishing limits on autonomous weapons would reinforce the contention that the United States makes humanitarian concerns a priority in fulfilling its defense responsibilities.

The Department of Defense directive should have declared a five-year moratorium on the deployment of autonomous weapons. A moratorium would indicate that military planners recognize that this class of weaponry is problematic. More importantly, it would provide an opportunity to explore with our allies the issues in international humanitarian law that impinge upon the use of lethal autonomous weapons. In addition, a moratorium would signal to defense contractors that they lack a ready buyer for autonomous systems they might develop. No one, however, anticipates that autonomous weapons capable of precision targeting will be available in the next five years. Furthermore, a moratorium is unlikely to reassure other countries that look to the United States as they gauge their own defense needs.

There is no way to ensure that other countries and nonstate actors will adopt standards and testing protocols similar to those outlined in the directive before they use autonomous weapons. Some country is likely to deploy crude autonomous drones or ground-based robots capable of initiating lethal force, and that will justify efforts within the U.S. defense industry to establish our superiority in this class of weaponry.

The only viable route to slow and hopefully arrest an inexorable march toward future wars that pit one country’s autonomous weapons against another’s is a principle or international treaty that puts the onus on any party that deploys such weapons. Instead of placing faith in the decisions made by a few military planners within the Pentagon about the feasibility of autonomous weapons, we need an open debate within the Obama administration and within the international community as to whether prohibitions on autonomous offensive weapons are implicit in existing international humanitarian law. A prohibition on machines making life-and-death decisions must either be made explicit under existing law or be codified in a new international treaty.

The inflection point for setting limits on autonomous weaponry initiating lethal force exists now. This opportunity will disappear, however, as soon as many arms manufacturers and countries perceive short-term advantages that could accrue to them from a robot arms race.


Wendell Wallach is a consultant, ethicist, and scholar at Yale University's Interdisciplinary Center for Bioethics, where he chairs the Center's working research group on Technology and Ethics.


COMMENTS


“Furthermore, delegating life and death decisions to machines is immoral because machines cannot be held responsible for their actions.”
This hardly seems relevant. As long as these machines lack personhood, their actions are the direct responsibility of the forces that choose to deploy them. They are but tools, no different in principle except for the more complex intermediate steps between deployment (with human intent) and completion (the result of said human intent). Whether it’s a drone being piloted by some guy in a bunker halfway across the world or by a sophisticated object detection algorithm, the relevant factor is who gets killed as a result.





“So let us establish an international principle that machines should not be making decisions that are harmful to humans.”

If we’re to interpret that as “harmful to any humans”, it won’t work - machines will often need to kill a few bad people in order to save many good people, and it would be wrong to ban them from doing so. If it’s to be interpreted as “harmful to people in general”, then it will allow killing of bad people: murderers and warmongers. The military hopefully wants to use machines to kill bad people and to avoid killing good people, and that would be absolutely the right thing to do. What could rightly be banned is any use of machines which set out to kill good people on behalf of bad people, but the kind of people who want their machines to do that will not respect laws and will have to be defeated militarily. The only solution to this will be a military fight over control of the planet, though most of the action will actually come in the form of assassinations carried out by people on the instructions of artificial intelligence, with no involvement of any robotic capability.





Unfortunately, with Islamic extremism threatening to inflame North Africa from Morocco to Egypt and south to Kenya, the disintegration of Syria threatening Lebanon and Jordan, the threat of an explosion throughout the Turkic “stans” of the old Soviet Union, the increasing intensity of competition for water supplies between China and the countries of South Asia, and the United States’ need for increasing leverage from its military power, the development of exceedingly lethal and increasingly autonomous weapons becomes increasingly predictable in the short and medium term.
So…





We need to develop a machine that will deal exclusively with the extermination of robots - “Robot Fighters”.




