Determining Personhood: Not Black & White, But Many Shades of Gray


By Jønathan Lyons
Ethical Technology

Posted: Jun 11, 2012

In my last column, I mentioned that the Turing Test is an important part of determining personhood. The Turing Test does not necessarily determine the consciousness of a technological agent; rather, it determines whether that agent simulates a human being’s consciousness well enough, in conversation with a human, to fool that human into believing ze is communicating with another human being.

As George Dvorsky points out in his Sentient Developments blog and podcast,

“The Turing Test as a measure of consciousness is problematic. It’s an approach that’s purely based on behavioral assessments. It only tests how the subject acts and responds. The problem is that this could be simulated intelligence. It also conflates intelligence with consciousness (as already established, intelligence and consciousness are two different things).

“The Turing Test also inadequately assesses intelligence. Some human behavior is unintelligent (e.g. random, unpredictable, chaotic, inconsistent, and irrational behavior). Moreover, some intelligent behavior is characteristically non-human in nature, but that doesn’t make it unintelligent or a sign of lack of subjective awareness.”

So determining a being’s sentience and consciousness will be no easy task.

Indeed, if we take the example of Cartesian doubt, we cannot completely prove the consciousness and sentience of another human being, let alone of a nonhuman being who differs from us even more than we differ from one another. In his Meditations on First Philosophy, Descartes attempted to create a philosophy of science derived purely from reason. He realized that because his senses could be fooled, they were fallible and therefore untrustworthy, and he decided that sensory input was too unreliable to be included in his new, reason-derived philosophy. He then set out to discover what he could prove without the input of his senses.

He didn’t get far.

Eventually, Descartes realized that all he could prove, if he maintained his skepticism about his senses, was that he, the being doing the thinking, existed. Hence his well-known Cogito, ergo sum: I think, therefore I am. Descartes’ skepticism matters as we begin to attempt to prove the sentience of other beings. If I maintain that level of doubt, I cannot prove with certainty that you are sentient, even if we are sitting right across from one another holding a conversation; I can only rely upon what my senses tell me about you (a position more in line with what the philosopher Bishop Berkeley had to say) and what my experiences with other sentient beings tell me about you.

The same holds true as we attempt to prove the sentience of other beings, human or otherwise. And since we can know only those things about the world that we learn through our senses, the Turing Test remains, as far as I can tell, a valuable tool, one in a set of tools, for helping to determine a being’s personhood. When we consider the sentience of a nonhuman being, we must do so as objectively as possible, understanding that we can only make a declaration of personhood by observing characteristics associated with intelligence and sentience.

I teach a course on science fiction becoming real-world science fact. I share with students the following scenario as an example of the sort of problem that can arise with a Turing Test: say you have a Web page with a button that reads, “Do You Think?”

I go to the decidedly low-tech chalkboard and write a few lines of pseudo-code that would be attached to the button:

onClick
  print “I think.”

A user visits the Web page and sees the button asking “Do You Think?” The user clicks the button to ask the question and receives a response from the Web page, which then prints the words: “I think.”

We ask whether ze’s thinking, ze tells us ze’s thinking, but ze’s not thinking.
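
To make the trick concrete, here is a minimal sketch of what such a button might look like in real code (TypeScript for a browser page; the element and handler below are my own illustrative assumptions, not code from any actual site). The handler performs no reasoning at all; it only ever emits the same canned string.

// A hypothetical "Do You Think?" button: clicking it prints a fixed reply.
const button = document.createElement("button");
button.textContent = "Do You Think?";
button.addEventListener("click", () => {
  console.log("I think."); // no cognition here, just a canned response
});
document.body.appendChild(button);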

More sophisticated simulations do a better job of convincing people and passing Turing’s test. Some years back I was logged into a server that hosted software downloads, and everyone was required to remain in its chatroom while logged on. The server and its admin shared the same name, which I’ll call Beauregard. In the chatroom, each user was represented by an icon, and communication was done in text. One of the users in chat was Beauregard. Or at least, ze said ze was. Ze used the admin’s identifying icon, and from time to time added a comment to the chat, things such as: “Hmmm … I wonder what all these people are doing here. Hmmm ... ” and “Sunny and beautiful in CA today!” and, “HOLLA!” and “Remember: Stay in chat!”

Ze really only said about six or seven different things. I was downloading a huge install file, so I was logged in for a long time, long enough to watch people log in, join the chat, and begin to hold a casual conversation with “Beauregard.”

Beauregard was a chatbot. But that’s nothing. In an episode of the Radiolab podcast, Robert Epstein, a techie guy in his own right, found that the Muscovite woman he’d been exchanging e-mails with via a dating service, and with whom he’d fallen in love, was, in fact, a chatbot zirself.
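
A bot like Beauregard needs surprisingly little machinery. Here is a minimal sketch, in the same spirit as the button above, of how such a canned-line bot might work; the posting function and interval are illustrative assumptions, not the actual software running on that server.

// A hypothetical Beauregard-style chatbot: a handful of canned lines,
// posted at random on a timer. No understanding, only scheduled repetition.
const cannedLines: string[] = [
  "Hmmm ... I wonder what all these people are doing here. Hmmm ...",
  "Sunny and beautiful in CA today!",
  "HOLLA!",
  "Remember: Stay in chat!",
];

// Stand-in for whatever call the real chat client would use to post a message.
function postToChat(message: string): void {
  console.log(`Beauregard: ${message}`);
}

setInterval(() => {
  const line = cannedLines[Math.floor(Math.random() * cannedLines.length)];
  postToChat(line);
}, 60_000); // roughly once a minute; the interval is an assumption

A few canned lines and a timer were enough to draw newcomers into casual conversation.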

How I’m going about this

Methods of valuing other beings do something that I find difficult to overcome: they place an artificial hierarchy upon the panoply of beings, arbitrarily ensconcing ourselves, Homo sapiens sapiens (HSS), atop the lot. While the system of classification by degrees of similarity and Otherness may be thought of as doing this to an extent, I think of it instead as a level, Venn-diagram sort of system; instead of declaring us Kings of the Hill, it simply seeks to lay out relationships between beings, biological or otherwise. At the moment, taking into account only what we’ve discussed, and including technological beings who pass the threshold of personhood, it looks something like this:


The diagram would change, of course, as needed, to include technologically enhanced humans, the Declaration on the Rights of Cetaceans, etc.
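
As a rough illustration of the kind of non-hierarchical, overlap-based comparison the diagram is meant to convey, here is a small sketch; the beings, traits, and scoring below are my own illustrative assumptions, not a proposal for how personhood should actually be measured.

// Each being is represented only by a set of observed characteristics.
// "Similarity" is the overlap between two sets relative to their union,
// which relates beings to one another without ranking anyone on top.
type Traits = Set<string>;

const human: Traits = new Set(["language", "tool use", "self-awareness", "suffering"]);
const chimpanzee: Traits = new Set(["tool use", "self-awareness", "suffering"]);
const chatbot: Traits = new Set(["language"]);

function overlap(a: Traits, b: Traits): number {
  const shared = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return shared / union; // 1 means identical trait sets, 0 means no overlap
}

console.log(overlap(human, chimpanzee)); // 0.75
console.log(overlap(human, chatbot));    // 0.25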

In attempting to consider how to value other beings, for my own purposes, I settled long ago on a simple, defining characteristic. For my interactions with other beings, I ask whether they can experience pain, emotionally or physically. If they can experience pain, I have decided to do my best not to inflict pain upon them. That works for me, and it is something each person must decide for zirself. (Ahimsa is a useful term for this philosophy.) Peter Singer, Ira W. DeCamp Professor of Bioethics at Princeton University and Laureate Professor at the Centre for Applied Philosophy and Public Ethics at the University of Melbourne, espouses, following the philosopher Jeremy Bentham, a philosophy toward other beings that considers whether they can suffer, rather than whether they can reason.

But I really want to emphasize something here: where one comes down on such a standard is entirely up to oneself. I’ve found my standard, and I try my best to live by it, but that doesn’t mean I think that anyone who doesn’t live by that same judgment is “bad,” or less moral, only that ze’s standard is one that differs from my own. And that our judgments differ on such decisions is all that anyone can expect.

But Dr. Martine Rothblatt, in the amazing H+ Magazine article/interview “Transgender, Transhuman, Transbeman,” “has taken from English Bioethicist John Harris the idea that that which values itself should be so valued, whether it be an ape or an artificial intelligence. She thinks this is a more useful guide than Jeremy Bentham’s derivation of rights from the ability to suffer.”

Valuing each being as it values itself sounds to me like a colossal undertaking; I haven’t read enough on this idea to have an informed opinion, but even simple organisms strive to survive, and will avoid peril and pain if they can. Still, Rothblatt’s guideline does allow consideration for claims that artificially intelligent, differently sentient, technological beings might make of their own personhood. Consider the Puppet Master from the movie “Ghost in the Shell,” an artificial intelligence that declares itself a sentient life form and claims that status for itself.

Jared Taglialatela, a Clayton State University primatologist who studies chimpanzee communication, likewise finds that classification of “chimpanzee ‘personhood’ is a judgment that falls on a spectrum of cognitive and social characteristics — a spectrum of subtle gradations, one that doesn’t place humans above and outside the animal kingdom, but within it. Calling great apes ‘people’ is … not a black-and-white judgment.”

And it is not: breaking out of the black-and-white mode of thought, we find instead a spectrum of many shades of gray. Speciesism is black-and-white thinking, as substrate chauvinism is about to be. And as the moment approaches in which technological persons exist apart from biological bodies, legal and moral consideration of such technological beings as persons is paramount, whether we are discussing the uploaded minds of human beings or other beings who are purely technological in origin. Without it, we simply draw a line, declare ourselves superior, and treat them as having no interests deserving of protection. That is arbitrary, it is self-limiting, and it would be a disastrous way to treat beings deserving the status of persons.

In a very real way, what we need to embrace to prepare for all of this is an understanding that all oppression of people is oppression — an understanding, in other words, of a sort of unity of oppression: A moral understanding that oppression is abhorrent to an advanced, moral people.

I’ll close this essay with a song that seems to fit our discussion:

Band: Consolidated; Song: Unity of Oppression

 


Jønathan Lyons is an affiliate scholar for the IEET. He is also a transhumanist parent, an essayist, and an author of experimental fiction both long and short. He lives in central Pennsylvania and teaches at Bucknell University. His fiction publications include Minnows: A Shattered Novel.


COMMENTS


wise article. yet i wonder if Speciesism isn’t so hardwired into the human brain that we will not go beyond it until we have also left human form behind.

Brad A.
bradamante75.blogspot.com





Thanks, Brad Amante. I think you’re right about speciesism. I also think that we need to locate speciesism in our own judgments and do our best to combat it.





I think it is possible to get past the speciesism, but as Jonathan says, it will mean naming it and owning that we have this bias. It is very much like the racism/sexism battle that still rages in some surprising places. One of the offshoots of facing and naming and owning the other -isms is that it gives us space to recognize other biases and fears that we may not have named yet.




