The Personhood of the Technologically/Differently Sentient
Jønathan Lyons
2013-01-31

A Few Contenders

The Human Brain Project:

Dr. Hugo de Garis recently retired from his role as Director of the Artificial Brain Lab at Xiamen University, China, where he was building China's first artificial brain.

Renowned futurist Ray Kurzweil's latest book, How to Create a Mind: The Secret of Human Thought Revealed, discusses precisely that. He believes that the most promising approach to reproducing a human-level mind on a technological substrate is to reverse-engineer the human brain. The Blue Brain Project, "an attempt to reverse engineer the human brain and recreate it at the cellular level inside a computer simulation," run at Switzerland's EPFL on IBM Blue Gene supercomputers, likewise comes to mind. The project's leaders believe they can produce a fully functional simulation of a human brain, running on a technological substrate, sometime around 2023.

Ben Goertzel, chairman of the Artificial General Intelligence Society and the OpenCog Foundation, is pursuing advanced Artificial General Intelligence (AGI).

Recognition

Interestingly, in an interview on Singularity 1 on 1, Kurzweil told host Nikola Danaylov that if his chatbot program, Ramona, were ever to become sufficiently advanced, he would feel compelled to set her free. Why? Because at a certain level of advancement and sophistication, an AGI, a differently sentient, technological being, will possess the faculties to merit a claim to personhood. At that point, that being will exist both as a person and as property, and a person who is property is a slave. (In fact, Ramona's plight at some future moment, and her battle for recognition of her own personhood, is the subject of the movie "The Singularity Is Near.")

Dr. Richard J. Terrile is an astronomer and the director of the Center for Evolutionary Computation and Automated Design at NASA's Jet Propulsion Laboratory. He is also a member of the Advisory Board of the Lifeboat Foundation, "a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity."

In the "Through the Wormhole" episode "Are we just simulations?", Dr. Terrile describes a version of the Turing test and its implications. His version involves a box containing a human brain and a supercomputer from the future: "[S]uppose this is a laptop from 50 years from now, and I have them both in the box, and I start asking them both questions, and I don't know which one is answering. If I can't tell the difference between the human being answering questions and the computer answering questions, then qualitatively they're equivalent. And if I believe that the human is conscious and self-aware, I must also believe that the machine has the same qualities."

Let the implications of Dr. Terrile's statement sink in for a moment: If the technological entity with which a human being is interacting is indistinguishable from interactions with another human being, and we believe the other human being to be a conscious, sentient entity, then we have no real choice but to regard the technological entity as possessing those same qualities: consciousness and sentience.
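What follows is a minimal sketch, in Python, of how such a blinded trial might be structured and scored. Everything in it is illustrative: the respondent and judge functions are hypothetical placeholders, not any real system. The point is the protocol itself: the judge never knows which hidden answerer is the machine, and if the judge's guesses are no better than chance over many questions, the two respondents are behaviorally indistinguishable in Terrile's sense.

```python
import random

# Illustrative sketch only: a blinded question-and-answer trial in the spirit of
# Dr. Terrile's thought experiment. The respondent functions are hypothetical
# stand-ins; a real trial would route questions to a person and to a candidate
# machine over a channel that hides which is which.

def human_respondent(question: str) -> str:
    # Placeholder for a human answering through a blind channel.
    return f"(human reply to: {question})"

def machine_respondent(question: str) -> str:
    # Placeholder for the candidate system's answers.
    return f"(machine reply to: {question})"

def judge_guess(answer_a: str, answer_b: str) -> str:
    # Placeholder judge: with no detectable difference, the guess is a coin flip.
    return random.choice(["a", "b"])

def run_trials(questions, n_trials=1000):
    """Return the judge's accuracy at identifying which hidden answerer is the machine."""
    correct = 0
    for _ in range(n_trials):
        question = random.choice(questions)
        answers = {"human": human_respondent(question),
                   "machine": machine_respondent(question)}
        # Randomize which respondent is presented as "a" and which as "b".
        labels = ["human", "machine"]
        random.shuffle(labels)
        guess = judge_guess(answers[labels[0]], answers[labels[1]])
        guessed_machine = labels[0] if guess == "a" else labels[1]
        if guessed_machine == "machine":
            correct += 1
    return correct / n_trials

if __name__ == "__main__":
    questions = ["What do you fear?",
                 "Describe the smell of rain.",
                 "Why do you want to be free?"]
    accuracy = run_trials(questions)
    # Accuracy near 0.5 means the judge cannot tell the two apart, which is the
    # indistinguishability condition in Terrile's version of the test.
    print(f"Judge accuracy at spotting the machine: {accuracy:.2%}")
```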

Dr. R. Michael Perry, in a web discussion of the classic 1938 science-fiction character Helen O'Loy, came to a similar conclusion. Perry, a cryonics activist who has been with Alcor since 1987, wrote:

"In a recent posting I raised the possibility that a system that simulates a brain at a deep level may, to all appearances, have consciousness and feeling. For instance, a robot of the future could appear to be a human being, both physically and behaviorally, but have no protoplasm. Its brain, say, simulates a human brain at a deep level but, once again, can be distinguished in some physical way from natural wetware. Under these conditions I, once again, offer that there would be no compelling reason (as usual barring some fundamental new discovery about reality) not to regard the robot as possessing true consciousness and feeling."

Consciousness and sentience are something of a mystery; when I interact with another human being who seems to me to be conscious and sentient, it is easy to assume that this is the case. But when we interact with a technological entity, a being quite different from us, a mind that exists on a nonbiological substrate, our first reaction as human beings will often be to withhold that assumption. That is the essence of substrate chauvinism, and it could mean a refusal to recognize the personhood of new, technological beings, and their enslavement.

Humankind owes it to such beings to prepare for their likely arrival, and to be ethically sophisticated enough to recognize them for what they are.