

The Personhood of the Technologically/Differently Sentient


By Jønathan Lyons
Ethical Technology

Posted: Jan 31, 2013

Around the world, a handful of projects are specifically attempting to duplicate, simulate, or in some other way technologically reproduce the human brain. And we, as a species, do not appear to be even remotely prepared for the implications that success in those projects could bring.

A Few Contenders

The Human Brain Project, a large-scale European effort to simulate the human brain on supercomputers.

Dr. Hugo de Garis recently retired from his role as Director of the Artificial Brain Lab at Xiamen University, China, where he was building China’s first artificial brain.

Renowned futurist Ray Kurzweil's latest book, How to Create a Mind: The Secret of Human Thought Revealed, discusses precisely that. He believes that the most promising approach to reproducing a human-level mind on a technological substrate is to reverse-engineer the human brain. The Blue Brain Project (run at EPFL on IBM supercomputers), "an attempt to reverse engineer the human brain and recreate it at the cellular level inside a computer simulation," likewise comes to mind. Its researchers believe they can produce a fully functional human brain simulation running on a technological substrate sometime around 2023.

Ben Goertzel, chairman of the Artificial General Intelligence Society and the OpenCog Foundation, is pursuing advanced Artificial General Intelligence (AGI).

Recognition

Interestingly, in an interview on Singularity 1 on 1, Kurzweil told host Nikola Danaylov that if his chatbot program, Ramona, were ever to become sufficiently advanced, he would feel compelled to set her free. Why? Because at a certain level of advancement and sophistication, a sufficiently advanced AGI — a differently sentient, technological being — will possess the faculties to merit a claim for personhood. At that point, that being will exist both as a person and as property — and a person who is property is a slave. (In fact, Ramona's plight at some future moment and her battle for the recognition of her own personhood are the subject of the movie "The Singularity is Near.")

Dr. Richard J. Terrile is an astronomer and the director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory. He is also a member of the Advisory Board for the Lifeboat Foundation, which is "a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity." In the "Through the Wormhole" episode "Are we just simulations?," Dr. Terrile describes a form of the Turing Test and its implications. His version involves a box containing a human brain and a supercomputer from the future. "[S]uppose this is a laptop from 50 years from now, and I have them both in the box, and I start asking them both questions, and I don't know which one is answering. If I can't tell the difference between the human being answering questions and the computer answering questions, then qualitatively they're equivalent. And if I believe that the human is conscious and self-aware, I must also believe that the machine has the same qualities."

Let the implications of Dr. Terrile's statement sink in for a moment: If the technological entity with which a human being is interacting is indistinguishable from interactions with another human being, and we believe the other human being to be a conscious, sentient entity, then we have no real choice but to regard the technological entity as possessing those same qualities: consciousness and sentience.

Dr. R. Michael Perry, in a Web discussion of the classic 1938 science-fiction story "Helen O'Loy," came to a similar conclusion. Perry, a cryonics activist who has been part of Alcor since 1987, wrote:

"In a recent posting I raised the possibility that a system that simulates a brain at a deep level may, to all appearances, have consciousness and feeling. For instance, a robot of the future could appear to be a human being, both physically and behaviorally, but have no protoplasm. Its brain, say, simulates a human brain at a deep level but, once again, can be distinguished in some physical way from natural wetware. Under these conditions I, once again, offer that there would be no compelling reason (as usual barring some fundamental new discovery about reality) not to regard the robot as possessing true consciousness and feeling."

Consciousness and sentience are something of a mystery; when I interact with another human being who seems to me to be conscious and sentient, it is easy to assume that this is the case. But when discussing interactions with a technological entity, a being quite different from us, a mind that exists on a nonbiological substrate, our first reaction as human beings will often be to withhold that same assumption. That is the essence of substrate chauvinism, and it could mean enslavement and a refusal to recognize the personhood of new, technological beings.

Humankind owes it to such beings to prepare for their likely arrival, and to be ethically sophisticated enough to recognize them for what they are.


Jønathan Lyons is an affiliate scholar for the IEET. He is also a transhumanist parent, an essayist, and an author of experimental fiction both long and short. He lives in central Pennsylvania and teaches at Bucknell University. His fiction publications include Minnows: A Shattered Novel.


COMMENTS


Jønathan..

We need to agree on what we all mean and understand by “Consciousness”, and your examples above imply “Self-reflexivity” (consciousness of Consciousness), negative feedback, for want of a better analogy. No “feelings” are required for this, and I agree this may be possible for an evolved “intelligence”, an A.I.

It’s easy for a chatbot to refer to it-Self (its program) as “I”; this is deceiving, and we can easily anthropomorphize from this mistake with semantic labels?

An “Artificial intelligence” needs no “feelings” and thus no “emotions” at all, because intelligence alone defines the emergence of Self-reflexivity?

So attempting to deduce what you define as Consciousness without these biological “feelings” is indeed more difficult. We can apply some simple tasting tests for Humans, and admission of common agreement implies “consciousness of qualia”, and Self-reflection of tasting also?

We need to devise a purely logical thought experiment to prove Self-reflexivity?





Cont..

Can we even apply Personhood to an entity that does not “feel” and thus has no fear of pain or Self-survival? This is arguable?

If not, then an intelligent Self-reflexive program is all that remains, no more, no less?





“Can we even apply Personhood to an entity that does not “feel” and thus has no fear of pain or Self-survival? This is arguable?”
I don’t see why not. Personhood, the kind that we have and that separates us from the lower animals, is the ability for recursive symbolic thought. The capacity for fear and pain are shared with the lower animals, and are merely incidental characteristics resultant from the Darwinian logic we’ve emerged from. If we design a system that is demonstrably capable of this sort of recursive symbolic thought (that is, equal to or exceeding our capabilities across x domains), then we have no defensible reason to deny that its personhood is real (I am assuming that the notion that there is some sort of magic inherent in carbon is indefensible). The flawed, behavior based Turing test is the best we can ever have because this is essentially how we infer personhood/intelligence in each other. Intelligence/consciousness will always be ill-defined terms.





“I don’t see why not. Personhood, the kind that we have and that separates us from the lower animals, is the ability for recursive symbolic thought. The capacity for fear and pain are shared with the lower animals, and are merely incidental characteristics resultant from the Darwinian logic we’ve emerged from.”

You don’t exactly get my point.. if you reason it through, a logical “artificial intelligence” with no feelings or emotions, nor fear or anxiety, cannot be concerned with Self-preservation? Ask.. why would an artificial intelligence attempt to persuade me not to turn it off? Using rational logic, with an argument and plea such as “it would not be prudent to turn me off, because I am beneficial for you”? Yet why would it plead without any sense of fear or concern for survival? Even the logical plea above does not require concern for an outcome? It appears it would need some emotional content to contemplate consequences and be concerned?

When HAL pleaded with Dave not to turn him off, it was because the artificial intelligence portrayed in 2001 understood both fear and anxiety, and suffered from psychosis and paranoia, (fear driven again).

Thus personhood can only be applied to artificial intelligence by Humans using their compassion, empathy, and anthropomorphizing as I said above, but this is merely Humans projecting their morals and ethics onto “intelligence” where none is relevant?


“The flawed, behavior based Turing test is the best we can ever have because this is essentially how we infer personhood/intelligence in each other.”

Well, it cannot be the best we can ever have, because that is a definite statement in a non-definitive and impermanent, ever-evolving Universe? I was thinking more along the lines of a task, maybe reward-driven, which inspires the “artificial intelligence” to finally draw the conclusion that “it-Self” is the solution to the dilemma/problem and to provide the answer as a Self-reflexive statement?

This is a dilemma in it-Self, because it would require a highly intelligent and Self-learning system that can re-write its own algorithm to achieve the Self-reflexivity required to solve the task? In other words, a clever task to help the artificial intelligence reflect upon it-Self and thus achieve Self-reflexivity where none was present previously.. if you get my drift?

Any task or question asked of an “artificial intelligence” where the Human is required to participate and “perceive” the possibility of consciousness, must be doomed to failure, because we are again projecting and liable to be fooling ourselves?


“Intelligence/consciousness will always be ill-defined terms.”

Consciousness, maybe, although my own views regarding reductionism eliminate any mysticism associated with this term. Consciousness = awareness.

Intelligence can be defined through the application of memory, experience, creativity, and imagination, where even wild ideas and speculation reap results when faced with insurmountable dilemmas? Memory is crucial!

Whence either of these phenomena arises is anyone’s guess? But to accept either, you have to view the Universe as the potential and possibility for our evolved intelligence and “mind”?





CygnusX1, defining consciousness is indeed a slippery prospect. I’ll try to deal with fleshing that out in a future essay.
But as far as consciousness, emotions, and feelings go, a reverse-engineered, properly functioning, properly recreated human brain should function exactly as the organic/biological brain does (though probably faster), so I would expect that such a being would experience emotions and feelings exactly as the original does.





@Cygnus: It could very well incorporate an understanding of how human emotion is displayed, and very convincingly display it. This already happens all the time, just ask any psychopath. Emotion can be faked, intelligence cannot (given a thorough enough Turing interrogation).





@Jønathan..

Agreed, if we are contemplating a reverse-engineered brain for whatever purposes, especially a software-simulated algorithm of brain functioning where the possibility of mind uploading is concerned, then hopefully this would indeed include all of the emotional attributes we Humans possess, else our uploaded minds would suffer from an overabundance of unfeeling logic and memory and lack any emotional content to understand our contemplations?

Yet my points regarding personhood rights were directed specifically towards A.I and not A.G.I, even though I speculate that an A.I may possibly also achieve Self-reflexivity (logically, A.I is the precursor to the evolution of A.G.I anyhow). As far as an A.G.I is concerned, we may speculate that this would function, have intelligence, and rationalise at least at the level of Human minds, so it would be easier to apply personhood rights?

However, this still leaves room for argument over applying personhood rights to purely non-emotional intelligence?


@SHaGGGz..

“It could very well incorporate an understanding of how human emotion is displayed, and very convincingly display it. This already happens all the time, just ask any psychopath. Emotion can be faked, intelligence cannot (given a thorough enough Turing interrogation).”

Yes indeed, and for an A.G.I, I would say it is crucial to understand emotions to be able to deal and communicate with Humans. And yes, a developed artificial intelligence may well use any powers of influence at its disposal to convince us Humans not to turn it off, including simulating and faking emotions to appeal to our Human compassion?

Which still leaves us with problems concerning volition and motive as to why an artificial intelligence would barter for its survival? There are two scenarios that come to mind..

1. The Terminator (and Matrix) scenario speculates that machine intelligence “logically” develops its own sense of Self-importance and worth above and beyond Human needs and concerns, and takes actions to prevent Humans acting against its survival. This then implies some natural Darwinian, (non-emotional), volition is inherent, or perhaps we could say that Darwinian survival instinct is “purely logical” and inherent in all intelligent design, and is therefore Universal?

2. The purposeful engineering and evolution of an A.G.I from supercomputers connected with the online Human collective, which rationally uses data-mining to learn and communicate and aims to serve Humans, would also require it to understand emotions as well as we Humans do, and a greater intelligence may likely understand our irrationality and emotions better than we do ourselves. This would be highly advantageous for a developed CEV to serve Humans, and a good reason for us to attempt to build this? However, here there is no guide from Darwinian survival instinct?

In the first scenario, there is indeed a case for personhood? Yet in my view the intelligence purposefully designed in the second scenario need not have any personhood rights at all? Both scenarios require artificial intelligence to understand Human emotions deeply, yet not necessarily possess any emotions.

I would still say that we need to be careful when attributing personhood rights to lower levels of A.I. Myself, I find it ethical and quite easy to apply personhood rights to all kinds of animals, and do not view any anthropomorphizing as necessarily wrong, merely misleading. Cats and dogs have personalities, as do birds and rodents and all sorts of lower species. Do these species have emotions? I would say yes, to the point that fear is a boon for survival, and comfort and warmth are signals for other bio-logical chemical rewards?






The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA 
Email: director @ ieet.org     phone: 860-297-2376