This past summer saw the release of the film “Avengers: Age of Ultron.” Like so many recent movies, its villains were once again killer robots. But the idea of deadly, weaponized robots isn’t confined to movie plots. Such machines are already with us, in one form or another, in many places around the globe.
The South Korean army has deployed a robotic border guard—the Samsung Techwin security surveillance sentry robot—that can automatically detect intruders and shoot them. Israel is building an armed drone—the Harop—that can choose its own targets. Lockheed Martin is building a missile that seeks and destroys enemy ships and can evade countermeasures. Amid concerns about how these intelligent weapons decide whom or what to target, and about the possibility of completely autonomous weapons in the near future, developers are considering building, or have actually built, morals and emotions into their creations in order to make them safer.
Is this a good idea?
Building weaponized robots that are at once safe and deadly is difficult. As Brookings Institution fellow P.W. Singer worries in his book Wired for War, robots do just what they are programmed to do, which means that “Shooting a missile at a T-80 tank is just the same to a robot as shooting it at an eighty-year-old grandmother.” This is why designers of autonomous weapons for the military are interested in building a code of ethics into their programming, an artificial conscience: they want these weapons to be safe to operate around friendly forces and civilians. And a true artificial conscience is hard to imagine without also finding a way to embed certain emotions that are integral to ethical behavior in humans, such as guilt and grief.
So when the Department of Defense paid experts like Georgia Tech robotics professor Ronald Arkin to do research on creating an artificial sense of ethics for robotic weapons, he also had to consider building in such emotions. His work is daring and creative, but human emotions, let alone ethics, are very complex things.
In a report he wrote for the D.O.D. titled “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Arkin argued that guilt, remorse, or grief could be programmed to occur. The problem is that such software can only be structured as an after-the-fact system: it activates only after the autonomous weapon commits an atrocity. Unfortunately, that is just the way guilt works. Arkin admits as much, but argues that the machine could still learn from the episode and avoid similar situations afterward.
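The structural problem can be made concrete with a deliberately naive sketch. This is hypothetical code, not anything from Arkin’s report; every name here (the class, its methods, the `civilian_harm` field) is an assumption made for illustration. The point it shows is that the guilt check can only fire once harm is already done:

```python
# Hypothetical sketch (NOT Arkin's actual architecture) of an
# "after-the-fact" guilt mechanism for an autonomous weapon.

class EthicalAdaptor:
    def __init__(self):
        self.guilt = 0.0          # accumulates only AFTER violations
        self.bad_contexts = []    # situations to avoid in the future

    def after_action(self, context, outcome):
        """Runs after an engagement; it cannot prevent the first atrocity."""
        if outcome.get("civilian_harm"):
            self.guilt += 1.0
            self.bad_contexts.append(context)

    def may_engage(self, context):
        """Before future engagements, refuse contexts resembling past mistakes."""
        return context not in self.bad_contexts
```

Note that the weakness the next paragraph raises shows up directly in the code: `context not in self.bad_contexts` is an exact-match test, so any battlefield situation that differs even slightly from a recorded mistake sails through.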
However, few battlefield situations are identical, so how well would a literal-minded robot adapt what it learned from a previous deadly mistake? How many different atrocities would one robot have to commit before its database held enough models to cover every situation that might arise? Even if they could not experience rage, hatred, or fear, would these robotic combatants really be safer and better than human soldiers?
There are other philosophical problems with building any kind of emotion into an autonomous weapon. What basis should be used for the robot ethics that entail these emotions? Arkin proposes using the military’s Rules of Engagement and Rules of War, but these codes could be a liability precisely because they are so narrow, whereas modern battlefields are becoming ever broader in context. And the proposal to reduce guilt to a simple binary variable, True or False, seems problematic: in humans, guilt is not something that simply toggles on and off like a one-or-zero program. There are gradations of guilt, and the most extreme can be self-destructive. If a machine could truly be made to “feel” guilt in its varying degrees, would we then face problems of machine suffering and machine “suicide”? If we ever develop truly strong Artificial Intelligence (AI) we might, and then we would confront the moral problem of having created a suffering being.
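The contrast between a binary guilt flag and graded guilt can also be sketched in code. Again, this is a hypothetical illustration, not a description of any real system; the threshold value and all names are assumptions. In the graded version, guilt accumulates in degrees, and past a threshold the machine refuses all further engagement, a crude analogue of guilt becoming self-destructive:

```python
# Hypothetical sketch of GRADED guilt, as opposed to a one-bit True/False flag.
# Where to set the threshold is itself an ethical judgment, not an engineering one.
GUILT_THRESHOLD = 3.0  # assumed value for illustration

class GradedGuilt:
    def __init__(self):
        self.level = 0.0

    def record_violation(self, severity: float):
        # Guilt accumulates by degrees rather than toggling a single bit.
        self.level += severity

    def weapons_enabled(self) -> bool:
        # Past the threshold the system disables itself entirely:
        # the machine analogue of guilt turning self-destructive.
        return self.level < GUILT_THRESHOLD
```

Even this small sketch surfaces the dilemma in the text: a low threshold makes the weapon useless, a high one makes the guilt meaningless, and nothing in the code tells us which trade-off is morally right.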
Efforts to use emotions as a basis for creating safer machines are noble in the sense that they are attempts to make the best of what many see as a bad situation: that increasingly smart weapons are inevitable, and so we should make sure that whatever is devised in the way of autonomous weapons does not cause more horror than we suffer now. Also admirable is the attempt to take human carnage out of battlefield situations by replacing humans with machines. But as the ethical questions above indicate, it’s just not that clear that smart weapons can be made safer by adding in simulated human emotions.
IEET Fellow Kevin LaGrandeur is a Faculty Member at the New York Institute of Technology. He specializes in the areas of technology and culture, digital culture, philosophy and literature.