Homo in Machina: Ethics of the AI Soul
Steven Umbrello
2015-04-27

Picard: “And what is at stake?”

Commander Data: “My right to choose, perhaps my very life.”

Commander Data, an extremely advanced android, defends his right to be treated as a person in the Star Trek: The Next Generation episode “The Measure of a Man” (1989). It is arguably one of the most famous episodes of the entire series because, aside from its wonderfully written dialogue, it addresses the question of when an artificial intelligence deserves ethical treatment.

The episode revolves around an impending legal hearing in which a Starfleet official wants to seize Commander Data, a key member of the Enterprise crew, for research, citing the fact that no law affords a being such as Data any rights. The hearing must determine whether Data is sentient; if he is, he will be afforded the same rights as humans.

In this Star Trek episode, the following criteria must be met in order for an AI to be deemed sentient:

1. Intelligence
2. Self-awareness
3. Consciousness

In the case of Commander Data, “sentience” takes on a new meaning, one which far surpasses what we usually mean by the word: the ability to feel and perceive things. The Star Trek definition implies requirements for AI sentience that are much stricter than those we currently apply to biological organisms. For example, we gladly give dogs rights and protections because they appear to be sentient, though these rights are limited. Dogs do not seem to possess the intelligence that we have, and they might not have self-awareness, yet we do not require either for a minimum of ethical treatment. In them, consciousness is readily assumed or inferred.

The Star Trek episode attempts to define sentience in human terms; it asks: what is necessary to bestow equal rights on an AI? In this case, requiring intelligence and self-awareness would appear to make more sense. However, if Commander Data fails the Turing test for human sentience, we’re left with a paradox: animals would retain a minimum degree of rights and protections while Commander Data is utterly destroyed without further deliberation. He is either an autonomous being equivalent to humans or nothing but machinery.

Why the double standard?

We want to explore this question. By what criterion would AI count as deserving ethical treatment?

For the rest of this article, when we use the word “sentience” we mean the ability to feel and perceive things.


Knowing Sentience

Firstly, it is not clear that exhibiting sentience is enough for bestowing rights on AI. There’s always a sense that artificial sentience/consciousness is just that: artificial. Should we make having sentience a strict requirement for the ethical treatment of AI? Obviously not, since we can’t access that knowledge. Alan Turing makes the point that we can’t ever access anyone’s inner experience, and if we make that a requirement of how we know whether something thinks, we’d end up in solipsism. At first glance it seems that this argument holds for sentience as well: if a being behaves as if it’s sentient, then for all practical purposes it is sentient.

Turing’s point seems cogent enough, but on closer inspection it’s missing something critical: AI is created by us, whereas animals and other people are natural. It’s not just conceivable but likely that an AI could put on a good show for us without ever experiencing the things we do. It would be preposterous to say the same of other people and animals, though it’s remotely conceivable when we’re sitting around doing philosophy. In natural creatures, consciousness is assumed in a way that just isn’t possible with AI. Logically, our knowledge of natural consciousness/sentience and of AI consciousness/sentience is the same, but the difference in how each is created matters to us. The artificial/natural difference is not something we can easily ignore.

This article will serve as an illustration of how we come to recognize the thresholds for providing an organism, biological or artificial, basic ethical treatment. To that end, we’d like to explore what it is that currently compels us to bestow rights on a being. We look at our ethical treatment of natural creatures in order to see how our current requirements (intuitive, vague and flawed though they may be) could apply to AI. There’s a human bias that underlies our task. We are not interested in attacking or justifying that bias, but simply in elucidating it. We hope that we might reach a greater understanding of some of the questions facing us as AI advances.

Here we will lay out a thought experiment, or “intuition pump” as the philosopher Daniel Dennett calls them, which will help us explore our intuitions on imparting minimal rights—or at least informal ethical treatment—to AIs. We will see how the composition of an organism is of significance when determining whether or not to endow a being with a minimum level of ethical treatment.


Built vs. Born: What It Means to be Made

To better discern how we intuitively understand sentience in other organisms, we use a thought experiment to literally ‘pump our intuition’. The following thought experiment was deliberately constructed to put the subject behind a Rawlsian Veil of Ignorance: in each of the following cases, we must assume that the reality of the AI’s consciousness is unknown, so that we can better compare the subjects of the experiment. The thought experiment is as follows:

1. AI that looks and behaves exactly like a dog vs. a natural dog. Which one most deserves to be treated ethically?

In this case, it appears that the natural dog most deserves to be treated ethically; however, that does not mean the AI dog deserves no ethical treatment at all. As we mentioned before, in natural creatures consciousness is readily inferred or assumed. The AI dog does not get the benefit of the doubt. Since all else is equal, the natural dog wins.

2. AI that behaves exactly like us vs. a natural dog. Which one most deserves to be treated ethically?

Now our intuitions come into conflict. Many would be inclined to extend ethical treatment to the AI, even though its consciousness/sentience is difficult to infer. The assumption here is that the AI’s behavior is so convincing that it supersedes the debate between natural and artificial organisms. That being said, the AI’s rights will most likely be limited and thus not equal to those of a biological human. However, we must take into account the physical manifestation of the AI. If the AI exhibits behavior that is like ours but inconsistent with its outward appearance, even a minor discrepancy may throw all of the AI’s behavior into question. For example, suppose we have a body-less AI that tells us of an experience that only beings with bodies can have. We are drawn to the discrepancy and made skeptical of all the AI’s other reported experiences as well. In such a case we might not want to afford these non-human-looking AIs rights at all.

3. AI that behaves and looks like us vs. a natural dog. Which one most deserves to be treated ethically?

This case seems quite clear given the previous one. Here we would intuitively afford the most ethical treatment to the AI. Not only does the AI behave exactly as we do, but its outward appearance is so human that its method of creation is not yet of concern.

4. AI that looks and behaves like us vs. us. Which one most deserves to be treated ethically?

This is the case in which the first three logically culminate. The trajectory of our intuitions thus far would lead us to believe that the answer depends on how the AI was created, on what it’s made of. If the AI is biological in much the way humans are, we may be inclined to bestow equal rights upon it without question. If instead the AI is more mechanical or computer-like, yet somehow looks like us, debate is more likely to ensue.

The above intuition pump was not meant to show that AI must necessarily be given ethical consideration, but rather that the material from which an AI is made is significant. If one argues that AI composition is irrelevant, then we risk the paradox of treating AI dogs as natural dogs. In doing so we would assume that behavior is all that matters, even if, strictly speaking, behavior is all we have access to. We cannot underplay the importance of real consciousness and real sentience in our treatment of beings. To be sure, our knowledge of these may be founded on assumptions, but those assumptions are not so easily dispensed with, and it’s not clear that they always ought to be.

However, in #4 we’ve reached a critical tipping point. Bestowing rights in a more legal sense requires delving into a critical analysis that leaves aside our inclinations and biases in search of objective criteria. Yet our intuitions and inclinations are what drive us to bestow rights in the first place. Throughout this intuition pump we’ve become aware of the importance of likeness in the way we treat other beings. What underpins this principle of likeness is not something reasoned; reasoning comes after the fact and presents itself as if it had been there all along. In the case of dogs and other creatures, we don’t reason about their abilities and then assume consciousness. We don’t ask them to pass a Turing test. On the contrary, it’s not uncommon for us to treat natural creatures even better than critical analysis of their natures would warrant. We talk to our pets, sometimes in preposterously long narratives. We feel a bit of anxiety when they view us having sex. We sometimes go so far as to clothe them and buy them healthcare that many humans don’t enjoy. We want them to be like us. The same could hold true for AI. Though we cannot ignore artificiality, perhaps our impulse will be to push for likeness. With AI we’ve shown why assuming consciousness would be challenging, but also how such doubts could be left behind in the face of compelling behavior.


Standing on the Edge of the Future

What might in the future drive us to bestow equal rights on AI may have nothing to do with critical analysis of composition, but rather with the point at which withholding those rights would offend us. Once we’ve reached the tipping point where AI looks and behaves like us, it might not be worth debating the artificial/natural distinction. We might be inclined to treat AI as equal, because not doing so is simply too offensive.

Commander Data’s character in Star Trek is a testament to this impulse. The questions surrounding this Star Trek episode resonate with us precisely because Commander Data exhibits behavior similar to ours and yet is treated as less than an animal. This is offensive to viewers; we root for Commander Data.

But this is TV. Our doubts are easily suspended for the sake of entertainment.

Our tendency to anthropomorphize, to push for likeness in our interpretations, is not our only bias. Humans are no strangers to fear, intolerance, oppression and even slavery. For Commander Data the ruling was beneficial, affording him the same rights that humans possess, but we cannot be so certain about our own future. Our decisions about affording rights to sentient machines will “seriously redefine the boundaries of personal freedom and liberty; expanding them some, savagely curtailing them for others” (Star Trek, 1989). The question we will have to ask ourselves when confronted with the inevitable choice of whether to afford AI rights is whether or not we are prepared to “condemn [them]…to servitude and slavery?” (Star Trek, 1989).

These are questions that we will have to leave unanswered.




This piece was co-written by Steven Umbrello and Tina Forsee.




References


Dennett, D. C. Intuition Pumps and Other Tools for Thinking. W. W. Norton & Company, 2013.

Michie, Donald. Turing’s Test and Conscious Thought. Technical Report TR 92-04, Department of Computer Science, 1992.

Mirror test (ScienceDaily).

The Measure of a Man, Star Trek: The Next Generation (IMDb).