The Turkle Test
Kyle Munkittrick
2011-02-10

Could you love a robot? Sherry Turkle, Director of the MIT Initiative on Technology and Self, believes you certainly could. Whether or not you should is the question.

People, especially children, project personalities and emotions onto rudimentary robots. As the Chronicle of Higher Education article about Turkle's work shows, the result of believing a robot can feel is not always happy:

One day during Turkle's study at MIT, Kismet malfunctioned. A 12-year-old subject named Estelle became convinced that the robot had clammed up because it didn't like her, and she became sullen and withdrew to load up on snacks provided by the researchers. The research team held an emergency meeting to discuss "the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child," as Turkle describes in [her new book] Alone Together.


We want to believe our robots love us. Movies like WALL-E, The Iron Giant, Short Circuit, and A.I. are all based on the simple idea that robots can develop deep emotional connections with humans. For fans of the Half-Life video game series, Dog, a large scrapheap monstrosity with a penchant for dismembering hostile aliens, is one of the most lovable and loyal characters in the game. Science fiction is packed with robots that endear themselves to us, such as Data from Star Trek, the replicants in Blade Runner, and Legion from Mass Effect. Heck, even R2-D2 and C-3PO seem endeared to one another.

And Futurama has a warning for all of us:

[embedded video]
Yet these lovable mechanoids are not what Turkle is critiquing.

Turkle is no Luddite, and she does not strike me as a speciesist. What Turkle is critiquing is contentless, performed emotion. Robots like Kismet and Cog represent a class of robots in which brains take a back seat to bonding. Humans have evolved to react to subtle emotional cues that allow us to recognize other minds, other persons.

Kismet and Cog have rather rudimentary A.I. but very advanced mimicking and response abilities. The result is that they seem to understand us. Part of what makes HAL 9000 terrifying is that we cannot see it emote. HAL simply processes and acts.

On the one hand, we have empty emotional aping; on the other, faceless supercomputers. What are we to do? Are we trapped between the mindless bot with the simulated smile and the sterile super-mind calculating the cost of lives?
