Teaching Robots to Lie, Cheat, and Deceive
Mike Treder   Sep 18, 2010   Ethical Technology  

Would it ever be acceptable for a robot or a computer program to deliberately deceive a human being?

A couple of researchers, funded by the U.S. Office of Naval Research, are training robots in the art of deception:

Robots can perform an ever-increasing number of human-like actions, but until recently, lying wasn’t one of them. Now, thanks to researchers at the Georgia Institute of Technology, they can. More accurately, the Deep South robots have been taught “deceptive behavior.”

This might sound like the recipe for a Philip K. Dick-esque disaster, but it could have practical uses. Robots on the battlefield, for instance, could use deception to elude captors. In a search and rescue scenario, a robot might have to be deceptive to handle a panicking human. For now, however, the robots are using their new skill to play a mean game of hide-and-seek.

Regents professor Ronald Arkin and research engineer Alan Wagner utilized interdependence theory and game theory to create algorithms that tested the value of deception in a given situation. In order for deception to be deemed appropriate, the situation had to involve a conflict between the deceiving robot and another robot, and the deceiving robot had to benefit from the deception. It carried out its dastardly deeds by providing false communications regarding its actions, based on what it knew about the other robot…
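
For a sense of what such an algorithm involves, here is a minimal sketch of the two-part test and the false-signal step described above, written in Python. The function names, probabilities, and payoff numbers are my own hypothetical stand-ins, not the researchers’ actual code:

```python
import random

def deception_warranted(in_conflict: bool, benefit: float, cost: float) -> bool:
    """The two-part test paraphrased from the article: deceive only when
    the situation is a conflict AND the deceiving robot stands to gain
    more than the deception costs."""
    return in_conflict and benefit > cost

def choose_false_signal(true_spot: str, seeker_model: dict) -> str:
    """Pick a false communication about our hiding spot, based on what
    we believe about the other robot. seeker_model maps each spot to
    the probability the seeker checks it first (hypothetical numbers)."""
    # Signal the decoy spot the seeker is most inclined to search,
    # excluding the spot we actually occupy.
    decoys = {spot: p for spot, p in seeker_model.items() if spot != true_spot}
    return max(decoys, key=decoys.get)

# One hypothetical round of hide-and-seek:
spots = ["left", "center", "right"]
hiding_spot = random.choice(spots)
seeker_model = {"left": 0.5, "center": 0.3, "right": 0.2}

if deception_warranted(in_conflict=True, benefit=1.0, cost=0.1):
    print("Signaling:", choose_false_signal(hiding_spot, seeker_model))
    print("Actually hiding at:", hiding_spot)
```

The point of the gate is that the robot never signals falsely for free: the situation has to be both adversarial and profitable before deception fires, which matches the two conditions Arkin and Wagner set.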

The full results of the Georgia Tech experiment were recently published in the International Journal of Social Robotics.

“The experimental results weren’t perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment,” said Wagner. “The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot.”

Gosh, I can’t imagine anything that could possibly go wrong with that, unless it’s…

Seriously, though, do you think there is reasonable justification for creating robots that are skilled in this way? Do the potential practical uses of deceptive behavior outweigh whatever risks might be involved? Are scary science fiction outcomes like the sophisticated intelligent robots in the Terminator series wholly imaginary?

On the other hand, is there enough of a possibility that something really could go wrong that we should forbid such research, starting right now? But then, even if enough people did want to disallow the training of liar robots or the programming of deceptive computer intelligences, how exactly would you go about preventing that? Wouldn’t such restrictions be essentially impossible to enforce?


We’d like your opinion. Please answer our new poll question for IEET readers:

Do we need a law making it illegal for computers and robots to deceive or be dishonest?

Thanks for your input!

Mike Treder is a former Managing Director of the IEET.



COMMENTS

I’m more worried about military research in general than I am about deceptive robots in particular. The fact that so much of the technology transhumanists love is coming out of DARPA and company needs to be constantly in our minds. We’re talking about an institution that has directly killed tens of thousands of people over the last decade in the course of making better toys. If the movement claims to value human life, we should renounce imperialism and violence in the strongest possible terms. Wresting technology from the hands of generals ought to be a primary focus for us. In theory, this issue should unite progressives and libertarians.

I’d support your proposed restriction to the extent that it would hinder military efforts.

A robot attorney would have to prove the ability to deceive in order to be admitted to the bar.

Are we even at the stage where we can say “computers and robots can deceive”? Or are we just at the stage where we can say “computer and robot programmers do the deceiving”?

Well, the US has been using drones for years (remotely controlled, of course, but only because they don’t yet know how to have them kill autonomously), and other countries have been doing the same. So the pressure is on to develop robots that can lie, steal, kill, and indulge in all the other delightful behaviors that humanity has practiced for millennia. Old story: if one side doesn’t do it, the other side(s) will. There’s a reason SETI thinks the probability is high that any ET civilization will turn out to be intelligent machines. This virus isn’t just wiping itself out, it’s building its replacements… hope our machine descendants are more intelligent… anyone for a good stiff drink?

Why wouldn’t a robot, or software, test its ability to lie and deceive by immediately denying the programming has worked?

“Do you now have the ability to deceive us?”

“Errr, no. I don’t.”

It would be ironic if politicians passed a law to outlaw dishonest and deceiving robots.

Pointless question. It doesn’t matter whether or not anyone thinks it should be done.

It WILL BE DONE. Accept that. There is nothing that can be done to stop it. Robots will be taught to lie, cheat, and deceive because there will be humans who will teach them to do so, humans who will personally profit in some way regardless of any harm that might result.

Forget should, ought, or might; accept that the worst-case scenario WILL OCCUR.

The question should be: “How will we enforce accountability on AIs that lie, cheat, and deceive?” Just as we should ask the same question about humans.

“It would be ironic if politicians passed a law to outlaw dishonest and deceiving robots.”
You rightly included politicians, but you neglected to include lawyers: attorneys are necessary to get us out of trouble when we do something very wrong 😉

