Institute for Ethics and Emerging Technologies



Could Artificial Morals and Emotions Make Robots Safer?


By Kevin LaGrandeur
Ethical Technology

Posted: Oct 2, 2015

This past summer saw the release of the film “Avengers: Age of Ultron.” Like so many recent movies, its villains were once again killer robots. But the idea of deadly, weaponized robots isn’t confined to titillating movie plots. Such machines are already with us, in one form or another, in many places around the globe.

The South Korean army has a robotic border guard—the Samsung Techwin security surveillance robot—that can automatically detect intruders and shoot them. Israel is building an armed drone—the Harop—that can choose its own targets. Lockheed Martin is building a missile that seeks out and destroys enemy ships and can evade countermeasures. Amid concerns about how these intelligent weapons decide whom or what to target, and about the looming possibility of completely autonomous weapons in the near future, developers are considering building morals and emotions into their creations, or have already succeeded in doing so, in order to make them safer.

Is this a good idea?

Building weaponized robots that are at once safe and deadly is difficult. As Brookings Institution Fellow P.W. Singer worries in his book Wired for War, robots do just what they are programmed to do, which means that “Shooting a missile at a T-80 tank is just the same to a robot as shooting it at an eighty-year-old grandmother.” This is why designers of autonomous military weapons are interested in building a code of ethics into their programming—an artificial conscience: they want the machines to be safe to operate around friendly forces and civilians. And a true artificial conscience is hard to imagine without also finding a way to embed certain emotions that are integral to ethical behavior in humans, such as guilt and grief.

So when the Department of Defense paid experts such as Georgia Tech robotics professor Ronald Arkin to research an artificial sense of ethics for robotic weapons, Arkin also had to consider building in such emotions. His work is daring and creative, but human emotions, let alone ethics, are very complex things.


In a report he wrote for the D.O.D. titled “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Arkin argued that guilt, remorse, or grief could be programmed to occur. The problem is that the only way such software could be structured is as an after-the-fact system: it activates only after the autonomous weapon commits an atrocity. Unfortunately, that is just the way guilt works. Arkin admits as much, but argues that the machine could still learn to avoid similar situations afterward.
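To make that timing problem concrete, here is a minimal sketch in Python of what an after-the-fact guilt mechanism could look like. It is purely illustrative and is not Arkin's actual architecture: the names GuiltAdaptor, assess_engagement, and the numeric threshold are all hypothetical, invented only to show where in the sequence guilt can act.

```python
# Illustrative sketch of an "after-the-fact" guilt mechanism.
# All names and thresholds here are hypothetical; this is not Arkin's system.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Engagement:
    target_id: str
    civilian_harm: bool  # known only AFTER a post-engagement assessment


@dataclass
class GuiltAdaptor:
    guilt: float = 0.0      # accumulated "guilt" level
    threshold: float = 1.0  # above this, further weapon release is refused
    history: List[Engagement] = field(default_factory=list)

    def assess_engagement(self, e: Engagement) -> None:
        """Runs only after an engagement, once its outcome is known."""
        self.history.append(e)
        if e.civilian_harm:
            # Guilt rises only after harm has already occurred:
            # exactly the after-the-fact problem described above.
            self.guilt += 0.5

    def may_fire(self) -> bool:
        """Later engagements are blocked once accumulated guilt is too high."""
        return self.guilt < self.threshold
```

The sketch makes the article's point visible: guilt is updated only inside assess_engagement, after the fact, so the first harmful engagement is never prevented by it, and any safety benefit depends on how closely later situations resemble earlier ones.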


However, few battlefield situations are identical, so how well would a literal-minded robot adapt what it learned from a previous deadly mistake? How many different atrocities would one robot have to commit before its database held enough models to cover every situation it might encounter? Even if they could not experience rage, hatred, or fear, would these robotic combatants really be safer and better than human soldiers?


There are other philosophical problems with building any kind of emotion into an autonomous weapon. What basis should be used for the robot ethics that would entail these emotions? Arkin proposes using the military’s Rules of Engagement and Rules of War, but the narrowness of these codes could be a liability, because modern battlefields are becoming ever broader in context. And the proposition to reduce guilt to a simple binary variable of True or False seems problematic: in humans, guilt is not something that simply toggles on and off like a one-or-zero program. There are gradations of guilt, and the most extreme can be self-destructive. If a machine could truly be made to “feel” guilt in its varying degrees, would we then have problems of machine suffering and machine “suicide”? If we develop a truly strong Artificial Intelligence (AI), we might, and then we would face the moral problem of creating a suffering being.
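The contrast between a one-or-zero guilt flag and graded guilt can also be sketched in a few lines. Again, this is only a hypothetical illustration: the action lists and cutoff values are invented for this example, not drawn from any real system.

```python
# Hypothetical contrast between binary and graded guilt; names and cutoffs
# are invented for this sketch, not drawn from any real system.

from typing import List


def actions_binary(guilty: bool) -> List[str]:
    # One-or-zero guilt: behavior either changes completely or not at all.
    return ["observe", "report"] if guilty else ["engage", "observe", "report"]


def actions_graded(guilt: float) -> List[str]:
    # Graded guilt: behavior degrades in steps as guilt accumulates.
    if guilt < 0.3:
        return ["engage", "observe", "report"]
    if guilt < 0.7:
        return ["observe", "report"]   # weapons locked out
    return ["report"]                  # near-total self-restriction
```

The graded version behaves more like the human gradations described above, but it is also the version that raises the machine-suffering question at the high end of the scale.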


Efforts to use emotions as a basis for creating safer machines are noble in the sense that they try to make the best of what many see as a bad situation: increasingly smart weapons are inevitable, so we should make sure that whatever is devised in the way of autonomous weapons does not cause more horror than we suffer now. Also admirable is the attempt to take human carnage out of battlefield situations by replacing humans with machines. But as the ethical questions above indicate, it is far from clear that smart weapons can be made safer by adding in simulated human emotions.


IEET Fellow Kevin LaGrandeur is a Faculty Member at the New York Institute of Technology. He specializes in the areas of technology and culture, digital culture, philosophy and literature.


COMMENTS


I thought this was a good article that raises some disturbing questions. As human beings we can call it anything we want, but when we go to war we murder other human beings. Trying to give a moral code of ethics to a machine for war is kind of like trying to give a moral code of ethics to a serial killer or a pedophile: “It’s ok to rape THOSE children over there but not these children here.” I cannot stress enough how INSANE this is. I’m glad that Mr. LaGrandeur is addressing the seemingly inevitable, that AI will have a conscience and even a subconscious. We are still on the ground floor of this event, and it is important for us now to decide how we can impart to AI how we would want it to look at itself and to perceive us.





