Institute for Ethics and Emerging Technologies

Could Artificial Morals and Emotions Make Robots Safer?

By Kevin LaGrandeur
Ethical Technology

Posted: Oct 2, 2015

This past summer saw the release of the new film “Avengers: Age of Ultron.” As in so many recent movies, the villains were once again killer robots. But the idea of deadly, weaponized robots isn’t confined to titillating movie plots. Such machines are already with us, in one form or another, in many places around the globe.

The South Korean army deploys a robotic border guard—the Samsung Techwin security surveillance robot—that can automatically detect intruders and shoot them. Israel is building an armed drone—the Harop—that can choose its own targets. Lockheed Martin is building a missile that seeks and destroys enemy ships and can evade countermeasures. Amid concerns about how these intelligent weapons decide whom or what to target, and about the looming possibility of completely autonomous weapons in the near future, developers are considering building morals and emotions into their creations—or have actually succeeded in doing so—in order to make them safer.

Is this a good idea?

Building weaponized robots that are at once safe and deadly is difficult. As Brookings Institution Fellow P.W. Singer worries in his book Wired for War, robots do just what they are programmed to do, which means that “Shooting a missile at a T-80 tank is just the same to a robot as shooting it at an eighty-year-old grandmother.” This is why designers of autonomous weapons for the military are interested in building a code of ethics—an artificial conscience—into their programming: they want these weapons to be safe to operate around friendly forces and civilians. And a true artificial conscience is hard to imagine without also finding a way to embed certain emotions that are integral to ethical behavior in humans, such as guilt and grief.

So when the Department of Defense paid experts such as Georgia Tech robotics professor Ronald Arkin to research an artificial sense of ethics for robotic weapons, Arkin also had to consider building in such emotions. His work is daring and creative, but human emotions, let alone ethics, are very complex things.

In a report he wrote for the D.O.D. titled “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Arkin argued that guilt, remorse, or grief could be programmed to occur. The problem is that the only way this kind of software could be structured is as an after-the-fact system: it activates only after the autonomous weapon commits an atrocity. Unfortunately, that is just the way guilt works. Arkin admits as much, but goes on to say that the machine could still learn to avoid similar situations afterward.
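To make that structural point concrete, here is a minimal sketch of what such an after-the-fact system might look like. This is purely illustrative Python, not drawn from Arkin’s report; the class name, the similarity test, and the numeric severities are all invented assumptions. Notice that record_outcome can only run once the harm has already occurred, which is exactly the limitation described above.

class GuiltController:
    """Toy 'ethical adaptor': guilt accumulates only after a violation is detected."""

    def __init__(self, guilt_threshold=1.0):
        self.guilt = 0.0                 # accumulated guilt score
        self.guilt_threshold = guilt_threshold
        self.violation_contexts = []     # contexts of past violations

    def authorize(self, context):
        """Decide whether to permit a lethal action in the given context."""
        if self.guilt >= self.guilt_threshold:
            return False                 # too much accumulated guilt: refuse all lethal action
        if any(self._similar(context, past) for past in self.violation_contexts):
            return False                 # context resembles a past violation: refuse
        return True

    def record_outcome(self, context, violation_severity):
        """Called only AFTER the engagement, once the outcome has been assessed."""
        if violation_severity > 0:
            self.guilt += violation_severity
            self.violation_contexts.append(context)

    @staticmethod
    def _similar(a, b):
        # Crude similarity test: the contexts share any feature (e.g. "urban", "night").
        return bool(set(a) & set(b))


controller = GuiltController()
print(controller.authorize({"urban", "night"}))     # True: no history yet, action permitted
controller.record_outcome({"urban", "night"}, 0.6)  # a violation is detected only afterward
print(controller.authorize({"urban", "daylight"}))  # False: shares a feature with the past violation

Even in this toy version, the first harmful engagement is what teaches the system anything at all, and everything then hinges on how well “similar” contexts can be recognized.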

However, few battlefield situations are identical, so how well would a literal-minded robot adapt what it learned from a previous deadly mistake? How many different atrocities would one robot have to commit before its database held enough examples to avoid every situation that might arise? Even if they could not experience rage or hatred or fear, would these robotic combatants really be safer and better than human soldiers?

There are other philosophical problems with building any kind of emotion into an autonomous weapon. What basis should be used for the robot ethics that would entail these emotions? Arkin proposes using the military’s Rules of Engagement and Rules of War, but these codes could be a liability precisely because they are so narrow, whereas modern battlefields are becoming ever broader in context. The proposal to reduce guilt to a simple binary variable of True or False also seems problematic: in humans, guilt is not something that simply toggles on and off like a one-or-zero program. There are gradations of guilt, and the most extreme can be self-destructive. If a machine could truly be made to “feel” guilt in its varying degrees, would we then face problems of machine suffering and machine “suicide”? If we develop a truly strong Artificial Intelligence (AI), we might, and then we would face the moral problem of having created a suffering being.
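As a purely hypothetical illustration of that binary-versus-graded distinction (the thresholds and responses below are invented, not proposed by Arkin), compare a True/False flag with a graded guilt value that can escalate all the way to the machine analogue of self-destruction:

def binary_response(guilty):
    # One-or-zero guilt: either nothing changes or everything does.
    return "restrict all lethal action" if guilty else "operate normally"


def graded_response(guilt):
    # Graded guilt on a 0.0-1.0 scale; the thresholds here are arbitrary placeholders.
    if guilt < 0.2:
        return "operate normally"
    if guilt < 0.6:
        return "require human confirmation before firing"
    if guilt < 0.9:
        return "disable weapons; continue surveillance only"
    return "full shutdown"  # the machine analogue of self-destructive guilt


for g in (0.1, 0.5, 0.95):
    print("guilt =", g, "->", graded_response(g))

The more finely such gradations are made to drive the machine’s behavior, the closer the design drifts toward the problems of machine suffering and machine “suicide” raised above.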

Efforts to use emotions as a basis for creating safer machines are noble in the sense that they try to make the best of what many see as a bad situation: increasingly smart weapons are inevitable, so we should make sure that whatever autonomous weapons are devised do not cause more horror than we suffer now. Also admirable is the attempt to take human carnage out of the battlefield by replacing humans with machines. But as the ethical questions above indicate, it is far from clear that smart weapons can be made safer by adding simulated human emotions.

IEET Fellow Kevin LaGrandeur is a Faculty Member at the New York Institute of Technology. He specializes in the areas of technology and culture, digital culture, philosophy and literature.
COMMENTS (1)


I thought this was a good article that raises some disturbing questions. As human beings we can call it anything we want, but when we go to war we murder other human beings. Trying to give a moral code of ethics to a machine for war is kind of like trying to give a moral code of ethics to a serial killer or a pedophile: “It’s OK to rape THOSE children over there, but not these children here.” I cannot stress enough how INSANE this is. I’m glad that Mr. LaGrandeur is addressing the seemingly inevitable, that AI will have a conscience and even a subconscious. We are still on the ground floor of this event, and it is important for us now to decide how we can impart to AI how we would want it to look at itself and to perceive us.
