In my last column, I mentioned that the Turing Test is an important part of determining personhood. The Turing Test determines not necessarily the consciousness of a technological agent, but whether that agent simulates a human being’s consciousness well enough, in conversation, to fool a human interlocutor into believing that ze is communicating with another human being.
As George Dvorsky points out in his Sentient Developments blog and podcast,
“The Turing Test as a measure of consciousness is problematic. It’s an approach that’s purely based on behavioral assessments. It only tests how the subject acts and responds. The problem is that this could be simulated intelligence. It also conflates intelligence with consciousness (as already established, intelligence and consciousness are two different things).
The Turing Test also inadequately assesses intelligence. Some human behavior is unintelligent (e.g. random, unpredictable, chaotic, inconsistent, and irrational behavior). Moreover, some intelligent behavior is characteristically non-human in nature, but that doesn’t make it unintelligent or a sign of lack of subjective awareness.”
So determining a being’s sentience and consciousness will be no easy task.
Indeed, if we take the example of Cartesian doubt, we cannot completely prove the consciousness and sentience of another human being, let alone that of a nonhuman being even more different from us than we are from one another. In his Meditations on First Philosophy, as Descartes attempted to build a philosophy of science derived purely from reason, he realized that because his senses could be fooled, they were fallible and therefore untrustworthy. He decided that sensory input was too unreliable to be included in his new, reason-derived philosophy, and set out to discover what he could prove without it.
He didn’t get far.
Eventually, Descartes realized that all that he could prove, if he maintained his skepticism about his senses, was that he, the being doing the thinking, existed. Hence his well-known Cogito, ergo sum: I think, therefore I am. Descartes’ skepticism is important as we begin to attempt to prove the sentience of other beings. I cannot prove with certainty that you are sentient, even if we’re sitting right across from one another, holding a conversation, if I maintain that level of doubt; I can only rely upon what my senses tell me about you (which is more in line with what the philosopher Bishop Berkeley had to say) and what my experiences with other sentient beings tell me about you.
The same holds true as we attempt to prove the sentience of other beings, human or otherwise. So, since we can know only those things about the world that we learn via our senses, the Turing Test remains, as far as I can tell, a valuable tool — one in a set of tools — for helping to determine a being’s personhood. When we consider the sentience of a nonhuman being, we must do so as objectively as possible, understanding that we can only make a declaration of personhood by observing characteristics associated with intelligence and sentience.
I teach a course on science fiction becoming real-world science fact. I share with students an example of the sorts of problems that can arise with a Turing Test, using this scenario: Say you have a Web page with a button that reads, “Do You Think?”
I go to the decidedly low-tech chalkboard and write a few lines of pseudo-code that would be attached to the button:
print “I think.”
A user visits the Web page and sees the button asking “Do You Think?” The user clicks on the button to ask the question, and receives a response from the Web page, which then prints the words: “I think.”
We ask whether ze’s thinking, ze tells us ze’s thinking, but ze’s not thinking.
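The chalkboard pseudo-code can be made runnable; here is a minimal sketch in Python (the function name is my own invention), which makes the problem plain: the handler never examines the question at all.

```python
# A runnable version of the chalkboard pseudo-code: whatever the user
# asks, the "Do You Think?" button triggers the same fixed response.
def do_you_think(question):
    # The input is ignored entirely; the answer is hard-wired.
    return "I think."

print(do_you_think("Do You Think?"))  # prints: I think.
```

The response is indistinguishable from an honest answer, yet no thinking occurs anywhere in the program.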
More sophisticated simulations do a better job of convincing people and passing Turing’s test. I was in a chatroom online some years back. I was logged into a server that hosted software downloads. The server and its admin shared the same name. Everyone was required to remain in the chatroom while logged onto the server, which I’ll call Beauregard. In the chatroom, each user was represented by an icon, and communication was done in text. One of the users in chat was Beauregard. Or at least, ze said ze was. Ze used the admin’s identifying icon, and from time to time added a comment to the chat, things such as: “Hmmm … I wonder what all these people are doing here. Hmmm ... ” and “Sunny and beautiful in CA today!” and, “HOLLA!” and “Remember: Stay in chat!”
Ze really only said about six or seven different things. I was downloading a huge install file, so I was logged in for a long time, long enough to watch people log in, join the chat, and begin to hold a casual conversation with “Beauregard.”
Beauregard was a chatbot. But that’s nothing. In an episode of the Radiolab podcast, Robert Epstein, a techie guy in his own right, found that the Muscovite woman he’d been exchanging e-mails with via a dating service, and with whom he’d fallen in love, was, in fact, a chatbot zirself.
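A chatbot of Beauregard’s caliber takes only a few lines to sketch. This is my own illustration of the canned-response pattern, not the actual software; the stock lines are drawn from the chat described above:

```python
import random

# A hypothetical sketch of a canned-response chatbot like "Beauregard":
# it posts stock remarks at random, never reading the conversation
# it is posting into.
CANNED_LINES = [
    "Hmmm ... I wonder what all these people are doing here. Hmmm ...",
    "Sunny and beautiful in CA today!",
    "HOLLA!",
    "Remember: Stay in chat!",
]

def beauregard_says():
    # Pick a stock remark at random; no understanding is involved.
    return random.choice(CANNED_LINES)

for _ in range(3):
    print(beauregard_says())
```

Interleaved with real users’ chatter, even so small a pool of remarks can sustain the illusion of a participant for quite a while, as the people who tried to chat with “Beauregard” discovered.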
How I’m going about this
Methods of valuing other beings do something that I find difficult to overcome: They place an artificial hierarchy upon the panoply of beings, arbitrarily ensconcing ourselves, Homo sapiens sapiens (HSS), atop the lot. While the system of classifications of degrees of similarity and Otherness may be thought of as doing this to an extent, I think of it instead as a flatter, Venn-diagram sort of system; instead of declaring us Kings of the Hill, this system simply seeks to lay out relationships between beings, biological or otherwise. At the moment, taking into account only what we’ve discussed, and including technological beings who pass the threshold of personhood, it looks something like this:
The diagram would change, of course, as needed, to include technologically enhanced humans, the Declaration on the Rights of Cetaceans, etc.
In attempting to consider how to value other beings, for my own purposes, I settled long ago on a simple, defining characteristic. For my interactions with other beings, I ask whether they can experience pain, emotionally or physically. If they can experience pain, I have decided to do my best not to inflict pain upon them. That works for me, and it is something each person must decide for zirself. (Ahimsa is a useful term for this philosophy.) Peter Singer, Ira W. DeCamp Professor of Bioethics at Princeton University and Laureate Professor at the Centre for Applied Philosophy and Public Ethics at the University of Melbourne, and the philosopher Jeremy Bentham espouse a philosophy toward other beings that considers whether they can suffer, rather than whether they can reason.
But I really want to emphasize something here: Where one comes down on such a standard is entirely up to oneself. I’ve found my standard, and I try my best to live by it, but that doesn’t mean that I think that anyone who doesn’t live by that same judgment is “bad,” or less moral, only that ze’s standard is one that differs from my own. And that our judgments differ on such decisions is all that anyone can expect.
But Dr. Martine Rothblatt, in the amazing H+ Magazine article/interview “Transgender, Transhuman, Transbeman,” “has taken from English Bioethicist John Harris the idea that that which values itself should be so valued, whether it be an ape or an artificial intelligence. She thinks this is a more useful guide than Jeremy Bentham’s derivation of rights from the ability to suffer.”
Valuing each being as it values itself sounds to me like a colossal undertaking; I haven’t read enough on this idea to have an informed opinion, but even simple organisms strive to survive, and will avoid peril and pain if they can. But Rothblatt’s guideline does allow consideration for claims that artificially intelligent, differently sentient, technological beings might make of their own personhood. Consider the Puppet Master, from the movie “Ghost in the Shell”:
Jared Taglialatela, a Clayton State University primatologist who studies chimpanzee communication, likewise finds that classification of “chimpanzee ‘personhood’ is a judgment that falls on a spectrum of cognitive and social characteristics — a spectrum of subtle gradations, one that doesn’t place humans above and outside the animal kingdom, but within it. Calling great apes ‘people’ is … not a black-and-white judgment.”
And it is not: Breaking the black-and-white mode of thought, we find instead a spectrum of many shades of gray. Speciesism is black-and-white thinking, as substrate chauvinism is about to be. And, as the moment approaches in which technological persons exist apart from biological bodies, legal and moral consideration of such technological beings as persons is paramount, whether we are discussing the uploaded minds of human beings, or other beings who are purely technological in origin. Without it, we simply draw a line, declaring ourselves superior and them as having no interests deserving of protection. That is arbitrary, it is self-limiting, and it would be a disastrous way to treat beings deserving the status of persons.
In a very real way, what we need to embrace to prepare for all of this is an understanding that all oppression of people is oppression — an understanding, in other words, of a sort of unity of oppression: A moral understanding that oppression is abhorrent to an advanced, moral people.
I’ll close this essay with a song that seems to fit our discussion:
Band: Consolidated; Song: Unity of Oppression