Teaching Robots to Lie, Cheat, and Deceive
Mike Treder
2010-09-18

A couple of researchers, funded by the U.S. Office of Naval Research, are training robots in the art of deception:

Robots can perform an ever-increasing number of human-like actions, but until recently, lying wasn't one of them. Now, thanks to researchers at the Georgia Institute of Technology, they can. More accurately, the Deep South robots have been taught "deceptive behavior."

This might sound like the recipe for a Philip K. Dick-esque disaster, but it could have practical uses. Robots on the battlefield, for instance, could use deception to elude captors. In a search and rescue scenario, a robot might have to be deceptive to handle a panicking human. For now, however, the robots are using their new skill to play a mean game of hide-and-seek.


Regents Professor Ronald Arkin and research engineer Alan Wagner used interdependence theory and game theory to create algorithms that tested the value of deception in a given situation. For deception to be deemed appropriate, the situation had to involve a conflict between the deceiving robot and another robot, and the deceiving robot had to benefit from the deception. It carried out its dastardly deeds by providing false communications regarding its actions, based on what it knew about the other robot...
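
The article only sketches how these algorithms work, but the decision rule it describes (deceive only when the situation involves a conflict and the deceiver stands to gain) is simple enough to illustrate. Here is a minimal Python sketch, assuming a toy outcome-matrix representation of the hide-and-seek scenario; the class names, the conflict test, and the payoff values are illustrative assumptions, not the authors' published algorithm.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JointOutcome:
    """Payoffs to each party for one course of action (toy numbers)."""
    action: str
    deceiver_payoff: float
    other_payoff: float

def in_conflict(outcomes: list[JointOutcome]) -> bool:
    """Simplified interdependence-style conflict check: the outcome the
    deceiver most prefers is not the one the other robot most prefers."""
    deceiver_best = max(outcomes, key=lambda o: o.deceiver_payoff)
    other_best = max(outcomes, key=lambda o: o.other_payoff)
    return deceiver_best.action != other_best.action

def deception_pays(outcomes: list[JointOutcome], honest_action: str) -> bool:
    """Second condition: the deceiver must gain relative to acting honestly."""
    honest = next(o for o in outcomes if o.action == honest_action)
    best = max(outcomes, key=lambda o: o.deceiver_payoff)
    return best.deceiver_payoff > honest.deceiver_payoff

def should_deceive(outcomes: list[JointOutcome], honest_action: str) -> bool:
    """Deception is deemed 'appropriate' only when both conditions hold."""
    return in_conflict(outcomes) and deception_pays(outcomes, honest_action)

# Hide-and-seek example: the hider can leave a true trail (honest) or a
# false trail toward a decoy hiding spot (deceptive). Payoffs are invented.
outcomes = [
    JointOutcome("true_trail", deceiver_payoff=-1.0, other_payoff=+1.0),   # found
    JointOutcome("false_trail", deceiver_payoff=+1.0, other_payoff=-1.0),  # evades
]
print(should_deceive(outcomes, honest_action="true_trail"))  # True
```

The point of the sketch is only the gating logic: a deceptive action, such as the false communication described above, is considered at all only after both conditions pass.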


The full results of the Georgia Tech experiment were recently published in the International Journal of Social Robotics.


"The experimental results weren't perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment," said Wagner. "The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot."


Gosh, I can't imagine anything that could possibly go wrong with that, unless it's...



Seriously, though, do you think there is reasonable justification for creating robots that are skilled in this way? Do the potential practical uses of deceptive behavior outweigh whatever risks might be involved? Are scary science fiction outcomes, such as the rise of the sophisticated, deceptive machines in the Terminator series, wholly imaginary?

On the other hand, is the possibility that something really could go wrong serious enough that we should forbid such research, starting right now? And even if enough people did want to disallow the training of lying robots or the programming of deceptive computer intelligences, how exactly would we prevent it? Wouldn't such restrictions be essentially impossible to enforce?


We'd like your opinion. Please answer our new poll question for IEET readers:

Do we need a law making it illegal for computers and robots to deceive or be dishonest?

Thanks for your input!