The “Uncanny Valley” is the evocative name for the common reaction to realistic-but-not-quite-right simulated humans, whether robotic or animated. Most of us, when encountering such a simulacrum, have an instinctive “it’s creepy” response, one that intensifies when the simulacrum is moving. Coined by roboticist Masahiro Mori, the Uncanny Valley concept is typically applied to beings (broadly conceived) as they become increasingly similar to humans in appearance and action. But what about beings as they become less similar to humans, following the path of transhumans and, eventually, posthumans?
An article in the latest issue of New Scientist (subscription required) prompted this question. Thierry Chaminade and Ayse Saygin of University College London investigated how the Uncanny Valley phenomenon works, performing brain scans on people as they encountered simulacra of varying degrees of human likeness. They found spikes of activity in the parietal cortex.
This area of the brain is known to contain “mirror neurons”, which are active when someone imagines performing an action they are observing. While watching the videos, each of which showed a figure picking up a cup, viewers imagine performing the action themselves. Chaminade says the extra mirror neuron activity when viewing the lifelike robot might be due to the way it moves, which jars with its appearance. This “breach of expectation” could trigger the extra brain activity and produce the uncanny feelings.
The response may stem from an ability to identify, and avoid, people suffering from an infectious disease. Very lifelike robots seem almost human but, like people with a visible disease, aspects of their appearance jar.
Clearly, such a reaction does not require that the observed “human” actually be sick, only that its behavior or physiological characteristics seem a bit off. This could, conceivably, include human beings with “enhanced” characteristics (“H+” in the current jargon).
Science fiction visions of space-adapted posthumans with hands for feet or wings for low-gravity flight would obviously seem at least “a bit off,” but the enhancements need not be that radical. In fact, it’s possible—even likely—that the less-radical changes would end up being more disturbing. Enhancements to optical capabilities might change the appearance of the eye. Improved neuromuscular systems might make everyday actions—grabbing a coffee cup, picking up a child, even walking along the street—look unnatural. Accelerated cognition might make verbal interactions disjointed, even bizarre.
As long as these changes fall into the broad ranges of current human variety, we’d be unlikely to see an unusually negative response. But if they are clearly outside the realm of the “expanded normal,” and if they have external manifestations that are readily identifiable, it may very well be that the reactions of unmodified people—and perhaps even the reactions of other “H+” individuals!—are significantly more negative than one might expect. In this scenario, the enhanced person wouldn’t just seem weird, he or she would seem wrong.
If this is possible, it has profound social and political implications for the agendas of transhumanists and other advocates of human enhancement technologies.
For example, if the typical reaction of unmodified people to enhanced humans is “that guy really creeps me out,” it may be easy for opponents of these technologies to generate a legal and cultural backlash.
Similarly, if the gut reaction to a moderately modified human is to see him or her as no longer human, political struggles could get very ugly very quickly.
It’s unlikely that the first generations of human enhancement technologies, which would most likely just be adaptations of therapeutic medical technologies, would engender this kind of response. But if we follow the logic of the human enhancement model, we will at some point over this century start to introduce changes to human physiology and behavior that fall well outside the realm of current human variability. It’s possible that we’ll have enough other kinds of simulacra and non-human persons in our midst that we’ll take such modifications in stride, and have no qualms about keeping the transhumans in the human family.
But it’s also possible, and arguably more likely, that the emergence of significant modifications to humanity will trigger deep responses in the human brain, ones that we may very well not like.