Institute for Ethics and Emerging Technologies


The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.








Ensuring Human Control Over Military Robotics


By Wendell Wallach
New York Times

Posted: Aug 29, 2015


The comments below were written in response to a New York Times editorial by Jerry Kaplan titled “Robot Weapons: What’s the Harm?” and published on August 17th.  The Times does not print rebuttals but did print the italicized section as a Letter to the Editor under the banner “Banning Land Mines” on August 26th.

Piecemeal Arguments Against Banning Killer Robots

In an attempt to thwart the gathering momentum for a ban on lethal autonomous weapons (LAWs, often dramatically referred to as “killer robots”), supporters of autonomous weapons such as Jerry Kaplan believe it is sufficient to outline situations where intelligent machines might act better than human soldiers. This misses the central point entirely.

By now, everyone acknowledges that robot soldiers, aircraft, and underwater vehicles offer ethically laudable benefits in certain situations. The simple fact that deploying robot soldiers could save the lives of one’s own troops is enough to make this point. But the more important question is whether the short-term benefits of robotizing warfare are outweighed by long-term risks and consequences.  Those longer-term consequences do not require a belief in speculative science fiction scenarios such as the Terminator or Matrix.

A single robotic device accidentally starting a war that would not otherwise have occurred would be enough to wipe out any benefits accrued. A single expansionist political leader, willing to start a new war because he believes he can do so without losing any of his own troops, gives the lie to any contention that in the long run robotic weapons will save lives.

Jerry Kaplan argues for lethal autonomous weapons, and against a ban on their use, with an example that is irresponsible. Land mines, in his view, could be made “safer and more effective” by building in cameras and software that could discriminate between a child, an adult, or an animal. This ignores the fact that most land mines are buried beneath the surface, where any camera is likely to be covered by dirt.

More important, there has been a successful international campaign to ban the use of indiscriminate weapons such as land mines and cluster bombs. One hundred and sixty-one countries have signed the Ottawa Treaty, which bans the use, stockpiling and manufacturing of anti-personnel mines. Creating the illusion that indiscriminate weapons could be made more acceptable through computerized enhancements would be a major setback to limiting atrocities during warfare.

Even those, such as the roboticist Ronald Arkin, who argue that lethal autonomous weapons will potentially be better than a human soldier at following international humanitarian law propose that they be used only in tightly constrained contexts, like battlefields, where there is effective human oversight. The more responsible proponents for LAWs recognize that present day computer technology lacks the kinds of discrimination that will ensure it does not target noncombatants.

There are many reasons why autonomous killing machines, so-called intelligent weapons that select and dispatch their targets without the immediate involvement of human personnel, should be banned. As thousands of scientists recently noted, offensive autonomous weapons would set off a destabilizing new global arms race. Human soldiers, whose reaction times are much slower than those of computers, could easily be mowed down and would have little or no role in fighting future wars. Autonomous weapons would also further dilute human responsibility, the foundational principle that a corporate or individual agent is responsible, culpable, and potentially liable for the actions of any weapons that have been deployed.

Furthermore, machines should not be making life and death decisions about humans.  This violates existing International Humanitarian Law, and should violate the conscience of humanity. 

Unlike the use of land mines and cluster bombs, which can be justified strategically but not morally, LAWs do offer some clear benefits. Those supporting a ban have their work cut out for them in demonstrating why those benefits are clearly outweighed by risks and consequences. Autonomous weapons will not be easy to ban. It will not be easy to agree on terms, and there will be no foolproof inspection regimes. Nevertheless, at this juncture in history, we must make every effort to ensure that there will be meaningful human control over any intelligent systems that are deployed.

Let us stop looking at the challenges posed by the robotization of warfare piecemeal, and begin to reflect comprehensively upon the manner in which autonomous weapons alter the future conduct of war.


Wendell Wallach is a consultant, ethicist, and scholar at Yale University's Interdisciplinary Center for Bioethics, where he chairs the Center's working research group on Technology and Ethics.


