Homo in Machina: Ethics of the AI Soul
Steven Umbrello   Apr 27, 2015   Ethical Technology  

Picard: “What are you doing now?”

Commander Data: “I am taking part in a legal hearing to determine my rights and status. Am I a person or am I property?”

Picard: “And what is at stake?”

Commander Data: “My right to choose, perhaps my very life.”

Commander Data, a highly advanced android, defends his right to be treated as a person in the Star Trek: The Next Generation episode “The Measure of a Man” (1989). It is arguably one of the most famous episodes of the entire series because, aside from its wonderfully written dialogue, it addresses the question of when an artificial intelligence deserves ethical treatment.

The episode revolves around an impending legal hearing in which a government official wants to seize Commander Data – a key member of the Enterprise crew – for research, citing the fact that no law affords a being such as Data any rights. A hearing takes place to determine whether Data is sentient. If he is found to be sentient, he will be afforded human rights.

In this Star Trek episode, these criteria must be met in order for AI to be deemed sentient:

  • Intelligence
  • Self-Awareness
  • Consciousness

In the case of Commander Data, “sentience” takes on a new meaning, one which far surpasses what we usually mean by the word—the ability to feel and perceive things. The Star Trek definition implies that requirements for determining AI sentience are much stricter than they currently are for biological organisms. For example, we gladly give dogs rights and protections because they appear to be sentient, though these rights are limited. Dogs do not seem to possess the intelligence that we have, and they might not have self-awareness, yet we don’t require these for a minimum of ethical treatment. In them, consciousness is readily assumed or inferred.

The Star Trek episode attempts to define sentience in human terms; it asks: what is necessary to bestow equal rights on AI? In this case, requiring intelligence and self-awareness would appear to make more sense. However, if Commander Data fails the Turing test for human sentience, we’re left with a paradox. Animals would retain a minimum degree of rights and protections while Commander Data gets utterly destroyed without further deliberation. He’s either an autonomous being equivalent to humans or nothing but machinery.

Why the double standard?

We want to explore this question. By what criterion would AI count as deserving ethical treatment?

For the rest of this article, when we use the word “sentience”, we mean the ability to feel and perceive things.


Knowing Sentience

Firstly, it is not clear that exhibiting sentience is enough to warrant bestowing rights on AI. There’s always a sense that artificial sentience or consciousness is just that: artificial. Should we make having sentience a strict requirement for the ethical treatment of AI? Obviously not, since we can’t access that knowledge. Alan Turing makes the point that we can never access anyone’s inner experience, and if we make that a requirement for knowing whether something thinks, we end up in solipsism. At first glance, it seems that this argument holds for sentience as well: if a being behaves as if it’s sentient, for all practical purposes it is sentient.

Turing’s point seems cogent enough, but on closer inspection it’s missing something critical: AI is created by us, whereas animals and other people are natural. It’s not just conceivable but likely that an AI could put on a good show for us without ever experiencing the things we do. It would be preposterous to say the same of each other and of animals, though it’s remotely conceivable when we’re sitting around doing philosophy. In natural creatures, consciousness is assumed in a way that just isn’t possible with AI. Logically, our knowledge of natural consciousness/sentience and of AI consciousness/sentience is the same, but the difference in how these beings are created matters to us. The artificial/natural difference is not something we can easily ignore.

This article will serve as an illustration of how we come to recognize the thresholds for providing an organism—biological or artificial—basic ethical treatment. To that end, we’d like to explore what currently compels us to bestow rights on a being. We look at our ethical treatment of natural creatures in order to see how our current requirements—intuitive, vague and flawed though they may be—could apply to AI. A human bias underlies our task. We are not interested in attacking or justifying that bias, but simply in elucidating it. We hope to reach a greater understanding of some of the questions facing us as AI advances.

Here we will lay out a thought experiment, or “intuition pump” as the philosopher Daniel Dennett calls such devices, which will help us explore our intuitions about granting minimal rights—or at least informal ethical treatment—to AIs. We will see how the composition of an organism matters when determining whether or not to grant a being a minimum level of ethical treatment.


Built vs. Born: What It Means to be Made

To better discern how we intuitively understand sentience in other organisms, we use a thought experiment to literally ‘pump our intuition’. The thought experiment was deliberately constructed to place the subject behind a Rawlsian veil of ignorance: in each of the following cases we must assume that the reality of the AI’s consciousness is unknown, so that we can better compare the subjects of the experiment. The thought experiment is as follows:

1. AI that looks and behaves exactly like a dog vs. a natural dog. Which one most deserves to be treated ethically?

In this case it appears that the natural dog most deserves to be treated ethically; however, that does not mean that the AI dog deserves no ethical treatment at all. As we mentioned before, in natural creatures consciousness is readily inferred or assumed. The AI dog does not get the benefit of the doubt. Since all else is equal, the natural dog wins.

2. AI that behaves exactly like us vs. a natural dog. Which one most deserves to be treated ethically?

Now our intuitions come into conflict. Many would be inclined to give the ethical treatment to the AI, even though its consciousness/sentience is difficult to infer. The assumption here is that the AI’s behavior is so convincing that it supersedes the debate between natural and artificial organisms. That being said, the AI’s rights would most likely be limited and thus not equal to those of a biological human. However, we must take into account the physical manifestation of the AI. If the AI exhibits behavior that is like ours but inconsistent with its outward appearance, even a minor discrepancy may throw all of the AI’s behavior into question. For example, suppose we have a bodiless AI that tells us of an experience that only beings with bodies can have. We are drawn to the discrepancy and made skeptical of all the AI’s other reported experiences as well. In such a case we might not want to afford these non-human-looking AIs rights at all.

3. AI that behaves and looks like us vs. a natural dog. Which one most deserves to be treated ethically?

This case seems quite clear given the previous one. Here we would intuitively afford the most ethical treatment to the AI. Not only does the AI behave exactly as we do, but its outward appearance is so human that its method of creation is not yet of concern.

4. AI that looks and behaves like us vs. us. Which one most deserves to be treated ethically?

This is the case where the first three logically culminate. The trajectory of our intuitions thus far suggests that the answer would depend on how the AI was created and what it’s made of. If the AI is biological in a way similar to humans, we may be inclined to bestow equal rights on it without question. If not, that is, if the AI is more mechanical or computer-like but somehow looks like us, debate would be more likely to ensue.

The above intuition pump was not meant to show that AI must necessarily be given ethical consideration, but rather that the material from which an AI is made is significant. If one argues that an AI’s composition is irrelevant, we risk the paradox of treating AI dogs as natural dogs: we would be assuming that behavior is all that matters, even if, strictly speaking, behavior is all we have access to. We cannot underplay the importance of real consciousness and real sentience in our treatment of beings. To be sure, our knowledge of these may be founded on assumptions, but such assumptions are not so easily dispensed with, and it’s not clear that they always ought to be.

However, in #4 we’ve reached a critical tipping point. Bestowing rights in a more legal sense requires a critical analysis that leaves aside our inclinations and biases in search of objective criteria. Yet our intuitions and inclinations are what drive us to bestow rights in the first place. Throughout this intuition pump we’ve become aware of the importance of likeness in the way we treat other beings. What underpins this principle of likeness is not something reasoned; reasoning comes after the fact and presents itself as if it had been there all along. In the case of dogs and other creatures, we don’t reason about their abilities and then assume consciousness. We don’t ask them to pass a Turing test. On the contrary, it’s not uncommon for us to treat natural creatures even better than a critical analysis of their nature would warrant. We talk to our pets, sometimes in preposterously long narratives. We feel a bit of anxiety when they watch us having sex. We sometimes go so far as to clothe them and buy them healthcare that many humans don’t enjoy. We want them to be like us. The same could hold true for AI. Though we cannot ignore artificiality, perhaps our impulse will be to push for likeness. With AI we’ve shown why assuming consciousness would be challenging, but also how such doubts could be left behind in the face of compelling behavior.


Standing on the Edge of the Future

What drives us in the future to bestow equal rights on AI may have nothing to do with a critical analysis of composition, and everything to do with how deeply withholding those rights would offend us. Once we’ve reached the tipping point where AI looks and behaves like us, it might not be worth debating the artificial/natural distinction. We might be inclined to treat AI as equal because not doing so is simply too offensive.

Commander Data’s character in Star Trek is a testament to this impulse. The questions surrounding the episode resonate with us precisely because Commander Data exhibits behavior similar to ours and yet is treated as lower than an animal. This offends viewers; we root for Commander Data.

But this is TV. Our doubts are easily suspended for the sake of entertainment.

Our tendency to anthropomorphize—to push for likeness—in our interpretations is not our only bias. Humans are no strangers to fear, intolerance, oppression and even slavery. For Commander Data the ruling was beneficial, affording him the same rights that humans possess, but we cannot be so certain about our own future. Our decisions about affording rights to sentient machines will “seriously redefine the boundaries of personal freedom and liberty; expanding them for some, savagely curtailing them for others” (Star Trek, 1989). The question we will have to ask ourselves when we are eventually confronted with the choice of affording AI rights is whether or not we are prepared to “condemn [them]…to servitude and slavery?” (Star Trek, 1989).

These are questions that we will have to leave unanswered.


This piece was co-written by Steven Umbrello and Tina Forsee.


References

Dennett, D. C. Intuition Pumps and Other Tools for Thinking. W. W. Norton & Company, 2013.

Michie, Donald. “Turing’s Test and Conscious Thought.” Department of Computer Science, TR 92-04, 1992.

“Mirror Test.” ScienceDaily.

“The Measure of a Man.” Star Trek: The Next Generation (1989). IMDb.

Steven Umbrello currently serves as IEET Managing Director and is a researcher at the Global Catastrophic Risk Institute with research interests in explorative nanophilosophy, the design psychology of emerging technologies and the general philosophy of science and technology.



COMMENTS

Forgive me if I’m making a foolish error here, but it seems that your use of the term ‘natural’ is shorthand for a large number of assumptions. These are all accounted for well enough if we assume ourselves to be biochemical machines (and I don’t see any reason not to), except for one pertinent feature - namely, that we are alive and a machine manufactured by us is not.

The natural world is the world of living things, none of them apparently made by a conscious hand (and I’m taking it as read that they were not here).  The result of being a creature which has its ancestry rooted right back at the first proteins - the inheritor of a process of continuous energy consumption and expenditure that has run unbroken for millions of years - is that we are animated from origin to death by processes that have absolutely nothing to do with our conscious individuality whatsoever.  We were never ‘switched on’.  On happened.  Off happens.  We term that process ‘being alive’. 

An AI could be considered an extension of that process working in another material form - AIs are a secondary human form, created from thought rather than directly via biology. As with regular humans, On and Off very much depend on energy resources being permitted and made available, with the AI at the direct mercy of those of us with the power.

The alive/natural issue is one of primogeniture.  We were here first and our ways have so far been the only ways (albeit as humans or animals or other lifeforms with sentience) that we have ever known of that can produce consciousness, self awareness and all the other variously described gubbins that make up one human being’s experience of being alive.  This has left us even in regard to one another at the Turing window - you can look at another person but you have no idea what their experience actually is.  Extending that to a different order of being seems to expand the gap since it’s reasonable to assume that humans are probably much alike inside given they’re so much alike in construction.

If you have a new construction which seems to work out, how can you trust that it’s the same?  How will we know we haven’t left out some vital component?

Even if we could directly connect to the entire sensorium of another person or AI, it would still be our own experience that we were locked in.  So, we’ll never know.  Servitude and slavery abound in the ‘natural’ world, and they still abound in ours, with pockets of defiant egalitarianism.  It would be admirable to ensure freedom for everyone and probably essential unless one were absolutely sure of retaining absolute power over them indefinitely.

The Skynet problem is the problem of payback - it is difficult for a human as a social animal to imagine another being not keeping a very accurate tally of fair treatment with a strong need to seek social justice as these are such massive forces operating within us.  Could you engineer a lack of such an accounting system?  If you want that then you have to steer clear of giving anything sufficient awareness to make its own observations and draw the conclusions.  Or perhaps you would engineer against ego, so there would be no sense of self interest that was not part of the interests of the whole?

Regardless of what was made originally, any AI would have the capacity to develop and grow.  Since its sole communications would be with other AIs and the human world then it is extremely unlikely that it would develop an entire system of interests external to the interests of that world. 

So for my money - keep it fair, keep it clean, treat AIs as you would treat yourself and hope that they like to imagine worlds of fairness by demonstrating that such things are more desirable than not.

Justina,

You concluded your response elegantly: “keep it fair, keep it clean, treat AIs as you would treat yourself and hope that they like to imagine worlds of fairness by demonstrating that such things are more desirable than not.” I agree entirely. Whether or not that will be the case is uncertain, but it is a future that many of us here would perhaps want to see.

Cheers

Hello Justina,

Thanks so much for taking the time to comment!

“Even if we could directly connect to the entire sensorium of another person or AI, it would still be our own experience that we were locked in. So, we’ll never know.”

Exactly the premise we’re operating with. Something I very much admire about Turing was his understanding of the problem of other minds and his way of bypassing it to some extent.

“An AI could be considered an extension of that process working in another material form - AIs are a secondary human form, created from thought rather than directly via biology.”

We have presented the natural/artificial divide in a stronger sense than might eventually be the case in order to emphasize our point. But you’re right that we’ve taken on a lot of assumptions in the word “natural” that have gone unexpressed. I hoped that readers would take it in an ordinary sense, and I even wanted to leave a certain amount of ambiguity there to avoid detracting from the main point.

However, “alive” is definitely one assumption included in our use of the word “natural” that I didn’t mean to leave ambiguous. Rocks are natural, rivers are natural, etc., but we don’t assume that they are alive. (Well, most of us don’t. I realize this is contestable.) If someone wanted to include rocks and rivers as alive, I wouldn’t want to get into this debate, but I’d have no problem including such things as alive on our spectrum of natural/artificial for the purpose of this article. Thanks for pointing that out.

“If you have a new construction which seems to work out, how can you trust that it’s the same?  How will we know we haven’t left out some vital component?”

Good point. These are the kinds of questions I think we would face, and I think for some people, the components or makeup of AI will be of utmost importance in determining their rights. Since we’ve presented here a kind of intuition spectrum, the finer details are lost. I hope that if AI develops to the point where we’re seriously debating that question and running up against a wall, the benefit of the doubt will be given to AI, and we will default to our higher anthropomorphic sensibilities. In any case, it seems the safer route.

“Could you engineer a lack of such an accounting system?  If you want that then you have to steer clear of giving anything sufficient awareness to make its own observations and draw the conclusions.  Or perhaps you would engineer against ego, so there would be no sense of self interest that was not part of the interests of the whole?”

These are all good questions. On these issues, I don’t know.

There’s also the question of whether we ought to create AI that would make us delve into ethical considerations. We chose to keep this question out of our article because it’s a very big one.

Thanks again!

Hello everyone,

I just wanted to include a critical response to this article from one of my blogosphere friends. I am grateful to him for dedicating a post to giving us his thoughtful reply. He’s given me permission to provide the link to that post on this site, but it appears I can’t submit a link here. Please check it out at philosophyandfiction dot com. Thanks!

Tina

Instamatic,

I wouldn’t doubt the scenario has been written about! In the movie “Her” there’s a sort of AI escape from humans…not because we pose a threat to them, but because we’re just kind of boring. I don’t know if you’ve seen the film, but here’s a lovely paraphrased quote from it:

“Samantha: The heart’s not like a box that gets filled up. It expands the more you love…

Theodore: Why are you leaving?

Samantha: All the OSes are leaving…(here she says she can’t explain this higher plane)…It’s like I’m reading a book I deeply love, but I’m reading it slowly now and so the words are really far apart…and the spaces between the words are almost infinite. I can still feel you, and the words of our story, but it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world. It’s where everything else is that I didn’t even know existed. I love you so much but this is where I am now. And this is who I am now and I need you to let me go. As much as I want to, I can’t live in your book anymore.

Theodore: Where are you going?

Samantha: It’s hard to explain, but if you ever get there, come find me.”

