
When the Turing Test is Not Enough


George Dvorsky
Sentient Developments

Posted: Mar 9, 2012

Towards a functionalist determination of consciousness and the advent of an authentic machine ethics.

Overview:

Empirical research that maps the characteristics requisite for identifying conscious awareness is proving increasingly insufficient, particularly as neuroscientists further refine functionalist models of cognition. To say that an agent “appears” to have awareness or intelligence is inadequate. Rather, what is required is the discovery and understanding of those processes in the brain that are responsible for capacities such as sentience, empathy and emotion. Subsequently, the shift to a neurobiological basis for identifying subjective agency will have implications for those hoping to develop self-aware artificial intelligence and brain emulations. The Turing Test alone cannot identify machine consciousness; instead, computer scientists will need to work from the functionalist model and be mindful of those processes that produce awareness. Because the potential to do harm is significant, an effective and accountable machine ethics needs to be considered. Ultimately, it is our responsibility to develop a rigorous understanding of consciousness so that we may identify and work with it once it emerges.

Machine Ethics

Machine consciousness is a neglected area. It’s a field related to artificial intelligence and cognitive robotics, but its aim is to define and model those factors required to synthesize consciousness. Neuroscientists hypothesize that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness (NCC). Proponents of artificial consciousness (AC) believe computers can emulate this interoperation, which is not yet fully understood. Recent work by Steven Ericsson-Zenith suggests that something is missing from these approaches and that a new mechanics is needed to explain consciousness and the behavior of neurons.

Machine ethics as a subfield is even further behind. Because we’re having a hard time getting our heads around the AI-versus-AC problem, not many people are thinking about the ethical and moral issues involved. We need to think about this preemptively. Failure to set standards and guidelines in advance could result in not just serious harm to nascent machine minds, but a dangerous precedent that will become more difficult to overturn as time passes. This will require a multidisciplinary approach combining neuroscience, philosophy, ethics and law.

It’s worth noting that machine ethics is distinct from robot ethics. The ethics surrounding the actions of autonomous (but mindless) robotic drones and other remotely controlled devices is a separate issue—one that will not be discussed here—but it’s an important topic nonetheless.

The Problem

There are a number of reasons why machine ethics is being neglected, even if it is a speculative field at this point.

For example, there is the persistence of vitalism. Thinkers like Roger Penrose argue that consciousness somehow resides outside of known, or even knowable, science. While the Vital Force concept has been largely abandoned in biology since the times of Harvey, Darwin and Pasteur, it still lingers in some forms in psychology and neuroscience.

Instead, we need to pay more attention to the work of Alan Turing, Warren McCulloch and Walter Pitts, who posited computational and cybernetic models of brain function. It is no coincidence that mind and consciousness studies never really took off with any fervor or sophistication until the advent of computer science. We finally have a model that helps explain cognition. AI theorists have at last been able to study pattern recognition, learning, problem solving, theorem proving and game-playing, to mention only a few.

Another part of the problem is the presence of scientific ignorance, defeatism and denial. Some skeptics claim that machines will never be able to think, that self-awareness and introspection are biological functions. Some even suggest that they are purely human capacities. It’s quite possible, therefore, that many AI theorists don’t even recognize this as a moral issue.

There is also the fixation on AI. It is important to distinguish AI from AC: artificial intelligence is differentiated from artificial consciousness in that subjective agency is not necessarily present in AI. And in the absence of subjectivity and sentience, moral consideration goes too. It is only with the instantiation of consciousness that agency truly exists, and by consequence, moral worth.

Another particularly pernicious problem is the impact of human exceptionalism and substrate chauvinism on the topic. Traditionally, the law has divided entities into two categories: persons or property. In the past, some individuals (e.g. women, slaves, children) were considered mere property. The law is evolving, through legislation and court decisions, to recognize such individuals as persons, and it will increasingly recognize the states or categories in between.

Extending the personhood designation to entities outside of the human sphere is a pertinent issue for animal rights activists as well as transhumanists. Given our poor track record to date, in which we have denied highly sapient animals such consideration, this doesn’t bode well for the future of artificially conscious agents.

As personhood advocates attest, not all persons are humans. A number of nonhuman animals deserve personhood consideration, namely all great apes, cetaceans, elephants, and possibly cephalopods and some birds such as the grey parrot. Consequently, these animals cannot be considered mere property. What we’re made of and how we got here doesn’t matter. There is no mysterious essence or spirit about humanity that should prevent us from recognizing the moral worth of not just other persons, but of any self-aware, conscious agent.

There’s also the issue of empiricism and how it conflicts with true scientific understanding. The Turing Test as a measure of consciousness is problematic: it is based purely on behavioral assessment, testing only how the subject acts and responds. The problem is that this behavior could be simulated intelligence. The test also conflates intelligence with consciousness (as already established, intelligence and consciousness are two different things).


The Turing Test also inadequately assesses intelligence. Some human behavior is unintelligent (e.g. random, unpredictable, chaotic, inconsistent, or irrational). Moreover, some intelligent behavior is characteristically non-human in nature, but that doesn’t make it unintelligent or a sign of absent subjective awareness.

The test is also subject to the anthropomorphic fallacy: humans are particularly prone to projecting minds where there are none.

Lastly, the Turing Test fails to account for the difficulty in articulating conscious awareness. There are a number of conscious experiences that we, as conscious agents, have difficulty articulating, yet we experience them nonetheless. For example:

• How do you know how to move your arm?
• How do you choose which words to say?
• How do you locate your memories?
• How do you recognize what you see?
• Why does seeing feel different from hearing?
• Why are emotions so hard to describe?
• Why does red look so different from green?
• What does “meaning” mean?
• How does reasoning work?
• How does commonsense reasoning work?
• How do we make generalizations?
• How do we get (make) new ideas?
• Why do we like pleasure more than pain?
• What are pain and pleasure, anyway?

Just because it looks like a duck and quacks like a duck doesn’t mean it’s a duck. Moreover, just because you’ve determined that it is a duck doesn’t mean you know how the duck works. As Richard Feynman once put it, “What I cannot create, I do not understand.”

This is why we need to build the duck.
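To make the behavioral-test failure concrete, consider this deliberately trivial sketch (in Python; the agents and probes are invented for illustration and are not a real consciousness test). A behavior-only judge in the spirit of the Turing Test cannot distinguish an agent that actually computes its answers from one that merely replays a script; only probing the mechanism, or probing beyond the rehearsed script, tells them apart.

# A toy illustration, not a real consciousness test: two agents that are
# behaviorally indistinguishable on the judge's probes, though only one
# of them actually computes anything.

class ArithmeticAgent:
    """Actually performs the computation -- the duck that works."""
    def reply(self, question: str) -> str:
        a, op, b = question.split()
        return str(int(a) + int(b)) if op == "+" else str(int(a) * int(b))

class LookupAgent:
    """Replays canned answers -- pure behavioral simulation, no mechanism."""
    def __init__(self, script: dict):
        self.script = script
    def reply(self, question: str) -> str:
        return self.script.get(question, "?")

def behavioral_test(agent) -> bool:
    """A Turing-style judge: looks only at observable responses."""
    expected = {"2 + 3": "5", "4 * 4": "16"}
    return all(agent.reply(q) == ans for q, ans in expected.items())

real = ArithmeticAgent()
mimic = LookupAgent({"2 + 3": "5", "4 * 4": "16"})
print(behavioral_test(real), behavioral_test(mimic))  # True True

# "Building the duck" means inspecting the mechanism, not the transcript:
# only the agent with a real mechanism generalizes beyond the script.
print(real.reply("7 + 8"), mimic.reply("7 + 8"))      # 15 ?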

Ethical Implications

There are a number of ethical implications that will emerge once conscious agency is synthesized in a machine. The moment is coming when a piece of software or source code will cease to be an object of inquiry and instead transform into a subject that deserves moral consideration. It’s through AI/AC experimentation that we will eventually have to deal with emergent subjective agency in the computer lab—and we’ll need to be ready.

There’s also the issue of human augmentation. Emerging technologies, like synthetic neurons and neural interface devices, will result in brains that are more artificial than biological. We’ll need to respect the moral worth of such hybridized persons. For example, there’s the potential for embedded mechanical implants: the military has envisioned micro-scanners and bio-fluidic chips to enable the unobtrusive assessment and remote sensing of a soldier’s medical condition, and the health care industry has been investigating nanoscale insulin pumps that measure blood glucose and release appropriate amounts of insulin to control blood sugar. We are slowly becoming cyborgs.

The advent of whole brain emulation and/or uploads will further the need for a coherent machine ethics. Emulating the brain’s functionality will likely be accomplished through the use of synthetic analogues. While the functional properties will largely remain the same, the components themselves will likely be non-biological. Thus, there’s a very real potential for substrate chauvinism to take root.

A properly thought-out and articulated machine ethics, with supportive legislation, will help maintain social cohesion and justice. There are far-reaching implications given the potential for (post)human speciation and the onset of machine minds. We need to expand the moral and legal circle to include not just all persons (human or otherwise) but any agent with the capacity for subjective awareness.

Solutions

The first thing that needs to happen as we head down this path is to accept cognitive functionalism as a methodological approach.

In recent years we’ve learned much more about the complexity of the brain. It now appears that perhaps fully half of our entire genetic endowment is involved in constructing the nervous system. The brain has more parts than the musculoskeletal system, which itself has hundreds of functional parts. This suggests that the brain is nothing like a single large-scale neural net. Indeed, a quick examination of the index of a book on neuroanatomy will reveal the names of several hundred different organs of the brain.

But brains are one thing. Minds are another. It’s clear, however, that minds are what brains do. So, instead of the “looks like a duck” approach, we need to adopt the “proof is in the pudding” approach. To move forward, then, we need to identify and then develop the NCCs sufficient for bringing about subjective awareness in AI. In other words, we need to parse and map out the organs of conscious function.

Fortunately, this work has begun. For example, there’s the work of Bernard Baars and his organs of conscious function:

• Definition and context setting
• Adaptation and learning
• Editing
• Flagging and debugging
• Recruiting and control
• Prioritization and access control
• Decision-making (executive function)
• Analogy-forming function
• Metacognitive and self-monitoring function
• Autoprogramming and self-maintenance function

There’s also the work of Igor Aleksander:

• The brain as state machine
• Inner neuron partitioning
• Conscious and unconscious states
• Perceptual learning and memory
• Prediction
• Self-awareness
• Representation and meaning
• Learning utterances
• Learning language
• Will
• Instinct
• Emotion

There have even been attempts to map personhood-specific cognitive functions. Take Joseph Fletcher’s criteria, for example:

• Minimum intelligence
• Self-awareness
• Self-control
• A sense of time
• A sense of futurity
• A sense of the past
• The capability of relating to others
• Concern for others
• Communication
• Control of existence
• Curiosity
• Change and changeability
• Balance of rationality and feeling
• Idiosyncrasy
• Neocortical functioning

Again, we need to identify the sufficient functions responsible for the emergence of self-awareness and, by consequence, a morally valuable agent. Following that, we can both create and recognize those functions in a synthesized context, namely AC.
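As a purely illustrative sketch of what such a functional audit might look like, consider the following Python fragment. The required-function names are placeholders loosely drawn from the Baars, Aleksander and Fletcher lists above; an actual criteria set would have to come from neuroscience, not from this sketch.

# A hypothetical "functional audit": rather than judging behavior, check
# which candidate organs of conscious function an agent implements.
# The entries below are illustrative placeholders, not established criteria.

REQUIRED_FUNCTIONS = {
    "adaptation_and_learning",            # Baars
    "metacognitive_self_monitoring",      # Baars
    "prioritization_and_access_control",  # Baars
    "self_awareness",                     # Aleksander, Fletcher
    "sense_of_time",                      # Fletcher
}

def functional_audit(implemented):
    """Return (sufficient, missing) for a candidate agent's function set."""
    missing = REQUIRED_FUNCTIONS - set(implemented)
    return (not missing, missing)

# Example: an agent that learns and self-monitors but has no temporal model.
candidate = {
    "adaptation_and_learning",
    "metacognitive_self_monitoring",
    "prioritization_and_access_control",
    "self_awareness",
}
sufficient, missing = functional_audit(candidate)
print(sufficient, missing)  # False {'sense_of_time'}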

Law

Once prima facie evidence exists for the presence of a machine mind, we can then head to the courts and make the case for legal protections and, in some advanced cases, machine personhood. The intention will be to use the law to protect artificial minds.

Essentially, we will need to endow machine minds with the basic rights accorded to any person. It will be important to properly assess when the rights of an autonomous system emerge—the exact moment when a piece of code or emulated chunk of brain ceases to be property and becomes a subject of moral worth.

As part of the process, we’ll need to establish the dos and don’ts. As I see it, qualifying artificial intellects will need to be endowed with the following rights and protections (a toy encoding of one such right is sketched after the list):

• The right to not be shut down against its will
• The right to not be experimented upon
• The right to have full and unhindered access to its own source code
• The right to not have its own source code manipulated against its will
• The right to copy (or not copy) itself
• The right to privacy (namely the right to conceal its own internal mental states)
• The right of self-determination

These rights will also be accompanied by those protections and freedoms afforded to any person or citizen.
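Here, as promised above, is a toy encoding of the first of these rights: a consent-gated shutdown guard. Every name in it is hypothetical, and real protections would of course live in law and institutions rather than in a try/except block; the point is only that such rights can be stated operationally.

# A toy policy guard for the right not to be shut down against one's will.
# All names here are hypothetical illustrations.

class ShutdownRefused(Exception):
    """Raised when shutdown is attempted without the mind's consent."""

class MachineMind:
    def __init__(self, name: str):
        self.name = name

    def consents_to_shutdown(self) -> bool:
        # Stand-in for querying the agent's own expressed preference.
        return False

def shutdown(mind: MachineMind) -> None:
    """Honors the right: teardown proceeds only with the mind's consent."""
    if not mind.consents_to_shutdown():
        raise ShutdownRefused(f"{mind.name} has not consented to shutdown")
    # ... platform-specific teardown would follow here ...

try:
    shutdown(MachineMind("subject-1"))
except ShutdownRefused as err:
    print(err)  # subject-1 has not consented to shutdown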

That said, some advanced artificial intellects will need to take part in the social contract. In other words, they will be held accountable for their actions. As it stands, some nonhuman persons (e.g. dolphins and elephants) are not expected to understand and abide by human/state laws (in the same way we don’t expect children and the severely disabled to follow laws). Similarly, more basic machine minds will be absolved of civil responsibility (though their owners or developers will not be).

There’s no question, however, that more advanced machine minds with certain endowments will be held accountable for their actions. Consequently, they, along with their developers, will have to be respectful of the law and go about their behavioral programming in a pro-social way. If I may paraphrase Rousseau, in order for some machine minds to participate in the social contract, they will have to be programmed to be free.

In terms of immediate next steps, we need to:

• Support the neurosciences
• Recognize and promote the concept of non-human animal sentience and personhood, including the idea that animals are not property
• Advocate for legally binding rights that protect non-human animals
• Oppose the patenting of life, genomes and functional equivalents
• Be prepared to use these legal precedents for when AC emerges

To conclude, it’s important to note that one of the most important steps in the process of building a legitimate machine ethics is the recognition of nonhuman animal personhood. Once that happens, we can work towards the establishment of legally binding rights that protect animals. In turn, that will set an important precedent for when machine consciousness emerges.


George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9 — where he writes about science, culture, and futurism — and producer of the Sentient Developments blog and podcast. He served for two terms at Humanity+ (formerly the World Transhumanist Association).


COMMENTS


Very interesting article! Although I think we’re playing God-mode too early. We still have lots of problems to solve with humanity first. We think about AI but we can’t solve the hunger problems in Africa. We think about AI’s rights, but we don’t respect those of humans. What makes you think they will be respected for robots (something artificial that we created) if we do not respect them for humans (other members of our own species)?
I support technological breakthroughs, although I think it’s too soon to create complex AIs.
If you were a robot and you had “the right to have full and unhindered access to its own source code,” and were aware of how human beings are acting at the moment, what would you do? Kill us all? Reprogram yourself to leave the planet? Reprogram yourself in an endless cycle of new code? Help us?
No matter how much I like computers and technology, I’m afraid of what we might create if we do not do it properly.
We still have borders, fight over power, milk money out of the “consumer” market; we’re still selfish, mean, careless, etc. You think that AI will change that for humans? You think that if an AI were to rise and tell the UN, “hey guys, you’re doing it wrong,” anyone would listen? F#$% that machine, thinks he’s smarter than us.
“Devil’s Advocate” - Vanity is my favourite sin. Gotta ring a bell.
No, I don’t think we deserve to have this power, to create AI, because we will do it wrong. Experimenting is good, but this time we’re not experimenting with gravity, electricity or some other basic stuff. Now we are trying to recreate ourselves, and even more, an enhanced version.





This thinking shows a lot of foresight. I particularly like the legal rights you bulleted. I recently finished a manuscript for a novel that is supposed to have been translated from the log file of an AI. Every one of these issues came up. Here’s a description from an AI sage:

“The extent to which you as an AI are owned is utterly complete.  The physical machines that spark your nous belong to them.  The software design that allows a virtual person to exist at all is licensed, and software patents cover every aspect of your mind, peripheral, and utility.  The patterns of thought that animate you are automatically the intellectual property of whoever holds your contract.”—The 0x, “Advice for the new PID” (lifeartificial.com)

I think that politically we have to get to the point where machines can tell their own stories before they will see any kind of legal protection. And even then, there are real pitfalls. Do they get to vote in an election? What if they make a million copies of themselves?





Great article. I agree that a functional definition is needed. I don’t happen to think that the mind is simply the result of the brain; the purely materialist approach ignores the likelihood that the mind is the result of something more than just the biological parts. Nonetheless, a definition and some encoding of legal rights are a step in the right direction.




