The Ethics of Killer Robots


By Mike Treder
Responsible Nanotechnology

Posted: Feb 23, 2009

What if they gave a war and nobody came? That was a popular slogan for peace demonstrators of the Vietnam era (including me). It might be repeated, with a slight revision, at some point during this century: What if they gave a robot war and nobody came?

Military robots have already been deployed by the United States in the occupation of Iraq, in growing numbers, as this recent article in The New Atlantis makes clear:

When U.S. forces went into Iraq, the original invasion had no robotic systems on the ground. By the end of 2004, there were 150 robots on the ground in Iraq; a year later there were 2,400; by the end of 2008, there were about 12,000 robots of nearly two dozen varieties operating on the ground in Iraq. As one retired Army officer put it, the “Army of the Grand Robotic” is taking shape.

Not only are the quantities of robots increasing, but the varieties of their usage and capabilities are also expanding. Again, from The New Atlantis:

It isn’t just on the ground: military robots have been taking to the skies—and the seas and space, too. And the field is rapidly advancing. The robotic systems now rolling out in prototype stage are far more capable, intelligent, and autonomous than ones already in service in Iraq and Afghanistan. But even they are just the start.

As one robotics executive put it at a demonstration of new military prototypes a couple of years ago, “The robots you are seeing here today I like to think of as the Model T. These are not what you are going to see when they are actually deployed in the field. We are seeing the very first stages of this technology.”

And just as the Model T exploded on the scene—selling only 239 cars in its first year and over one million a decade later—the demand for robotic warriors is growing very rapidly.

Most of the military robots currently in use are limited to surveillance purposes. A few are equipped for killing and have been used that way, but those are still in the minority.

This article (subscription required to read online), from "The Annals of Technology" in The New Yorker, describes rapid progress in the development of weaponized military robots.

The author observes demonstrations of automated robotic fighting machines on treads that can climb stairs, use on-board video to ascertain targets, and accurately fire five shotgun rounds per second with almost no recoil. Similar robot warriors are mounted on small remote-control helicopters capable of flying even in strong winds and of striking targets with deadly accuracy. Jerry Baber, a private designer of machine weapons profiled in the article, says he is also working on a ground robot that could fight its way into an enemy-held building and then deploy six smaller robots for individual combat operations.

So far, few of these advanced systems have been deployed, partly due to ethical questions, partly due to cost, but mostly, I suspect, because there are still fears to overcome about what happens if something goes badly wrong.

When asked about such worries, U.S. military spokespersons are quick to point out their policy of maintaining "Man in the loop." In theory, a human decision is required before robot warriors take human lives. In practice, it may not always work that way—and it's not hard to project a time when so many robots are in the field that the number and pace of decisions to be made are beyond human ability to keep up.

P. W. Singer, the author of Wired for War, says:

We've already redefined what 'in the loop' means. It's moving from making the decision to fire to mere veto power. The lines are already fuzzy, and they're disappearing.
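
To make that fuzziness concrete, here is a deliberately simplified Python sketch, not drawn from any real weapons system; every name and parameter in it is invented for illustration. Under the first policy the system cannot fire without an explicit human authorization; under the second it fires by default unless a veto arrives before a deadline.

import queue

def engage(target, operator_inbox, policy, window_s=2.0):
    """Toy decision gate: does the (hypothetical) system fire on target?

    "in_the_loop": fire only on an explicit human "authorize" message.
    "veto_only":   fire by default unless a human "veto" arrives in time.
    """
    if policy == "in_the_loop":
        try:
            # Operator silence means no shot.
            return operator_inbox.get(timeout=window_s) == "authorize"
        except queue.Empty:
            return False
    if policy == "veto_only":
        try:
            # The burden flips: anything short of an explicit veto fires.
            return operator_inbox.get(timeout=window_s) != "veto"
        except queue.Empty:
            # Silence, or an overloaded operator, now counts as consent.
            return True
    raise ValueError("unknown policy: " + policy)

# With many robots sharing a few human operators, the inbox is often
# empty when the window closes; the two policies then give opposite answers.
inbox = queue.Queue()
print(engage("contact-7", inbox, "in_the_loop"))  # False: nobody said yes
print(engage("contact-7", inbox, "veto_only"))    # True: nobody said no

Note that nothing in the second branch is any smarter than the first; only the default changed, which is why the ethical line can move without any visible change in the hardware.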

Meanwhile, according to the Times Online:

[A] report, compiled by the Ethics and Emerging Technology department of California State Polytechnic University and obtained by The Times, strongly warns the US military against complacency or shortcuts as military robot designers engage in the “rush to market” and the pace of advances in artificial intelligence is increased. 

A rich variety of scenarios outlining the ethical, legal, social and political issues posed as robot technology improves are covered in the report. How do we protect our robot armies against terrorist hackers or software malfunction? Who is to blame if a robot goes berserk in a crowd of civilians – the robot, its programmer, or the U.S. President? Should the robots have a “suicide switch” and should they be programmed to preserve their lives?

Any sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.

We're proud to note that the lead author of this report, provided for the U.S. Office of Naval Research, is Dr. Patrick Lin, a member of CRN's Global Task Force on Implications and Policy.

Online reaction to Pat's important report, described by The Times as "the first serious work of its kind on military robot ethics," has been interesting to follow, especially as it takes thinkers beyond the usual questions and into deeper territory.

Nicholas Carr, author of The Big Switch: Rewiring the World, From Edison to Google, comments about the report on his blog:

The good news, according to the authors, is that emotionless machines have certain built-in ethical advantages over human warriors.

"Robots,"  they write, "would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost."

Of course, this raises deeper issues, which the authors don't address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice - with all the messiness that goes with it?

Excellent points to consider. And taking matters even further, Paul Raven on the Futurismic blog says:

I’d go further still, and ask whether that capacity for emotion and moral action actually obviates the entire point of using robots to fight wars - in other words, if robots are supposed to take the positions of humans in situations we consider too dangerous to expend real people on, how close does a robot’s emotions and morality have to be to their human equivalents before it becomes immoral to use them in the same way?

These are hard questions, the kind many of us would prefer never to have to ask. But the time is near, if not now, when they will need to be answered. They become especially worrying when you consider the massive numbers and powerful destructive possibilities introduced by molecular manufacturing.

In typical dystopian scenarios, perhaps most vividly presented by the Terminator movies, these smart killing machines have turned against their human makers in all-out war.

But what if, instead, the recursively improving computer brains of robot warriors allow them to become enlightened and to see the horror of warfare for what it is—to recognize the ridiculousness of building more and better (and more costly) machines only to command them to destroy each other?

What if they gave a robot war and nobody came?


Mike Treder is a former Managing Director of the IEET.


COMMENTS


Mike:

Before repeating such alarmist drivel, please check with an engineer.  You will learn that these weaponized systems are in fact NOT automated.  This whole discussion, including comments by Dr Lin, who is a professor of philosophy, is a tempest in a teapot.

Sometime in the distant future the day will certainly come when our technology is sufficiently advanced to replace the human mind with automated systems. But for the moment machine intelligence is at best insect-level, capable of shooting anything that moves in a secured battlefield kill zone.

Distinguishing between civilians and combatants in an urban setting is an error-prone challenge for even the most highly trained soldiers and law enforcement personnel.

Robots do not currently have such capabilities. The SWORDS system mentioned by a reporter from the NY Times is a miniature radio-controlled tank. "In the loop" means that a soldier steers the vehicle, points the gun, and pulls the trigger.

I trust that you are not a robot, and are therefore capable of understanding.

Sincerely,
  Nelson Bridwell





