Psychobot
Rick Searle
2013-07-07 00:00:00

Robotics is indeed changing the nature of work, and is likely to continue to do so throughout this century and beyond. But, as in most technological revolutions, the impact of change is felt first and foremost in the field of war.

In 2012, IEET Fellow Patrick Lin had a fascinating article in The Atlantic about a discussion he had at the CIA revolving around the implications of the robotics revolution. The use of robots in war raises all kinds of questions in the area of Just-War theory that have yet even to begin to be addressed. An assumption throughout Lin’s article is that robots are likely to make war more, not less, ethical, since robots can be programmed never to target civilians, or never to cross the thin line that separates interrogation from torture.

This idea, that the application of robots to war could ultimately take some of the nastier parts of the human condition out of the calculus of warfare, is also touched upon from the same perspective in Peter Singer’s Wired for War. There, Singer brings up the case of Steven Green, a US soldier charged with the premeditated rape and murder of a 14-year-old Iraqi girl. Singer contrasts the young soldier “swirling with hormones” with the calm calculations of a robot lacking such sexual and murderous instincts.

The problem with this interpretation of Green is that it relies on an outdated understanding of how the brain works. As I’ll try to show, Green is really more like a robot soldier than most human beings are.

Lin and Singer’s idea of the “good robot” as a replacement for the “bad soldier” is based on an understanding of the nature of moral behavior that can be traced, like most things in Western civilization, back to Plato. In Plato’s conception, the godly part of human nature, its reason, was seen as a charioteer tasked with guiding the chaotic human passions. People did bad things whenever reason lost control. The idea was updated by Freud with his id (instincts), ego (self), and super-ego (social conscience). The thing is, this version of why human beings act morally or immorally is most certainly wrong.

The neuroscience writer Jonah Lehrer, in his How We Decide, has a chapter, “The Moral Mind,” devoted to this very topic. The odd thing is that the normal soldier does not want to kill anybody, even enemy combatants. Lehrer cites a study of thousands of American soldiers after WWII conducted by U.S. Army Brigadier General S.L.A. Marshall.

His shocking conclusion was that less than 20 percent actually shot at the enemy even when under attack. “It is fear of killing,” Marshall wrote, “rather than fear of being killed, that is the most common cause of battle failure of the individual.” When soldiers were forced to directly confront the possibility of harming another human being (a personal moral decision) they were literally incapacitated by their emotions. “At the most vital point of battle,” Marshall wrote, “the soldier becomes a conscientious objector.”

After this study was published, the Army redesigned its training to reduce this natural moral impediment to battlefield effectiveness. “What was being taught in this environment is the ability to shoot reflexively and instantly… Soldiers are de-sensitized to the act of killing until it becomes an automatic response.” (pp. 179-180)

Lehrer, of course, has been discredited as a result of plagiarism scandals, so we should accept his claims with caution. Yet they do point to what we already know: the existential condition of war is that it is difficult for human beings to kill one another, and well it should be. If modern training methods are meant to remove this obstruction in the name of combat effectiveness, they also remove the soldier from the actual moral reality of war. This moral reality is the reason why wars should be fought infrequently and only under the most extreme of circumstances. We should only be willing to kill other human beings under the most threatening and limited of conditions.

The designers of robot warriors are unlikely to program this moral struggle with killing into their machines. Such machines will kill, or not kill, a fellow sentient being as they are programmed to do. They will be truly amoral in nature, or to use a loaded and antiquated term, without a soul.

We could certainly program robots with ethical rules of war, as Singer and Lin suggest. Such robots would be less likely to kill the innocent in the fear and haste of the fog of war. It is impossible to imagine that robots would commit the horrible crime of rape, which is far too common in war. All of these are good things. The question for the farther future is: how would a machine with a human or supra-human level of intelligence experience war? What would its moral and existential reality of war be, compared to how the most highly sentient creatures today, human beings, experience combat?

Singer’s use of Steven Green as a flawed human being whose “hormones” have overwhelmed his reason, one ethically inferior to the cold reason of an artificial intelligence with no such passions to control, is telling, and again rests on the flawed Plato/Freud model of the human conscience. A clear way to see this is by looking inside the mind of the rapist and murderer Green, who, before he had committed his crime, had been quoted in the Washington Post as saying:

I came over here because I wanted to kill people…

I shot a guy here when we were out at a traffic checkpoint, and it was like nothing. Over here killing people is like squashing an ant. I mean you kill somebody and it’s like ‘All right, let’s go get some pizza’.

In other words, Green is a psychopath.

Again we can turn to Lehrer, who writes of the serial killer John Wayne Gacy:

According to the court-appointed psychiatrist, Gacy seemed incapable of experiencing regret, sadness, or joy. Instead his inner life consisted entirely of sexual impulses and ruthless rationality. (p. 169)

It is not the presence of out-of-control emotions that explains the psychopath, but the very absence of emotion. Psychopaths are unmoved by the very sympathy that makes it difficult for normal soldiers to kill. Unlike other human beings, they show no emotional response when shown depictions of violence. In fact, they are unmoved by emotion altogether. For them, there are simply “goals” (set by biology or the environment) that they want to achieve. The means to those goals, including murder, are, for them, irrelevant. Lehrer quotes G.K. Chesterton:

The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.

Whatever the timeline, we are in the process of creating sentient beings who will kill other sentient beings, human and machine, without anger, guilt, or fear. I see no easy way out of this dilemma, for the very selective pressures of war appear to be weighted against programming such moral qualities (as opposed to rules for whom and when to kill) into our machines. Rather than ushering in an era of “humane” warfare, on the existential level, that is, in the minds of the beings actually doing the fighting, the moral dimension of war will be relentlessly suppressed. We will have created, in effect, an army of psychopaths.