Institute for Ethics and Emerging Technologies

The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.






Does machine consciousness matter?

By Dylan Chandler
Ethical Technology

Posted: Sep 1, 2013

Named for its creator, Alan Turing, the Turing test assesses a machine’s intelligence through its conversational abilities (Bieri, 1988, 163). Turing adapted it from an existing game, the imitation game, in which a man and a woman would converse with a judge via teletype (Bieri, 1988, 163).

The Turing test is virtually the same, except that instead of a man attempting to pass himself off as a woman, a machine converses alongside a human while a judge tries to determine which participant is which (Bieri, 1988, 163). According to Turing, if a machine can converse well enough that human observers reliably believe they are watching two humans conversing, then that machine has sufficient intelligence to be deemed capable of thought (Bieri, 1988, 163).
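The protocol described above can be sketched as a small simulation. This is a toy illustration, not anything from Turing's paper: the function names, the question list, and the judge are all hypothetical stand-ins, and a real test would involve free-form dialogue rather than a fixed script.

```python
import random

def turing_test(judge, human_respond, machine_respond, questions):
    """Run one round of the imitation game.

    A human and a machine are hidden behind the labels A and B
    (assigned at random). The judge sees only the transcripts and
    must name the label it believes is the machine. The machine
    "passes" if the judge guesses wrong.
    """
    # Randomly assign the hidden participants to labels A and B.
    participants = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        participants = {"A": machine_respond, "B": human_respond}

    # The judge interacts only through the conversation transcripts.
    transcript = {"A": [], "B": []}
    for question in questions:
        for label, respond in participants.items():
            transcript[label].append((question, respond(question)))

    guess = judge(transcript)  # the label the judge thinks is the machine
    machine_label = "A" if participants["A"] is machine_respond else "B"
    return guess != machine_label  # True if the machine fooled the judge
```

The point the essay makes is visible in the interface: the judge never inspects the participants themselves, only their conversational behavior.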

Although the Turing test has been held up as a benchmark for evaluating machine intelligence, there are many concerns about the kind of intelligence it is likely to identify. One of the most frequently raised is that some artificial intelligences may simply convey the intelligence of their creators, and the Turing test has no way to distinguish between a machine that is intelligent in this derivative sense and one that is self-aware, conscious, and capable of reflexively thinking about itself. As a result, such "intelligent" machines may pass the Turing test without ever being intelligent themselves.

These criticisms of the Turing test are made from the perspective that, in order for a machine to be intelligent, it would need some sort of existential experience, such that it could act in its environment, observe, and learn from the consequences of its actions (Bieri, 1988, 174; Beavers, 2002, 71). True artificial intelligence would be self-aware and capable of self-reflexive thought, such that it could think about its own thinking. From this perspective within artificial intelligence scholarship, the Turing test seems to conflate conscious artificial intelligence with the clever parroting of human intelligence, something akin to a high-tech ventriloquist trick.
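The "parroting" worry has a classic concrete form: ELIZA-style programs that reflect a speaker's own words back via surface pattern matching, with no representation of meaning at all. The sketch below is a hypothetical toy in that style (these three rules are invented for illustration, not taken from any actual system).

```python
import re

# ELIZA-style rules: a surface pattern and a reply template.
# The program manipulates the user's own words without any
# model of what they mean.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(utterance):
    """Return a reply built purely from surface patterns."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no pattern matches
```

For example, `respond("I am worried about machines")` echoes the speaker's phrase back as a question. Such a program can sustain a superficially plausible exchange while, on the critics' view, being exactly the ventriloquist trick the essay describes.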

Although these criticisms certainly have metaphysical merit regarding the Turing test’s inability to identify consciousness, they seem to discount a key feature of the test. The Turing test is conducted from the perspective of a human subject observing the human-machine interaction. Its construction suggests that what matters is whether interacting with such an intelligence is, conversationally, indistinguishable from interacting with a human.

There may be reasons to think that the particular artificial intelligence is not really intelligent, but the key point is that it appears to be. Turing wasn't necessarily after a test that determined the actual intelligence of an artificial agent; he simply may have wanted a test to determine when an artificial intelligence would pass for a human when confronted by another human being.

With this in mind, the Turing test raises the question of whether a metaphysically sound understanding of consciousness is important. In fact, Turing’s construction of his test reflects a traditional phenomenological problem: how can it be known that consciousness is present in our fellow human beings? In terms of immediate sensory experience (immanence, in phenomenological terminology), the consciousness of another being is always inaccessible. Edmund Husserl addressed this issue of how to experience the consciousness of someone else through the transcendent, “harmonious experience of someone else” (Husserl, 2007, 224).

This experiencing of someone else involves suspending any belief that the other person exists and attempting to understand what it would be like to be that other person instead of one’s self. In this sense, one attempts a first-person experience of being someone else in order to demonstrate that, in accordance with phenomenological methodology, if someone else existed, existence as that someone else would be analogous to one’s own existence (Husserl, 2007, 224).

Whether or not Husserl’s particular methodology for determining the existence of consciousness in others is compelling, it conveys the difficulty of identifying consciousness at all. In this sense, Turing’s approach demonstrates a certain common-sense wisdom. If experiencing consciousness in other humans has troubled philosophers, developing a test for consciousness in artificial intelligences is likely to be at least as troublesome.

Furthermore, in an everyday sense, we seem content to grant other humans the benefit of the doubt about consciousness, whether or not we have a rigorous proof of it. We carry on as if others are conscious because they act the way we imagine they would if they were. Ostensibly, this is the same standard Turing applies in his test. It accepts that, in terms of immanent experience, consciousness is inaccessible, and that, in everyday life, consciousness is rarely a primary concern outside of circles of philosophers and scholars concerned with phenomenology, philosophy of mind, and artificial intelligence.

So, the Turing test seems to reason that if an artificial intelligence can perform consciousness as convincingly as a human, why should we not grant it the same everyday benefit of the doubt we grant our fellow humans? In this regard, we should ask: in dealing with robots and artificial intelligences in everyday life, does consciousness matter, or is a convincing performance of consciousness enough?


Beavers, A. (2002). Phenomenology and artificial intelligence. Metaphilosophy, 33, 70-82.

Bieri, P. (1988). Thinking machines: Some reflections on the Turing test. Poetics Today, 9(1), 163-186.

Husserl, E. (2007). The problem of experiencing someone else. In R. Craig & H. Muller (Eds.), Theorizing Communication: Readings across traditions (pp. 223-224). Los Angeles, CA: Sage.

Dylan Chandler is a Ph.D. student at Simon Fraser University’s School of Communication in Vancouver, British Columbia. He is interested in philosophies and critical theories of technology, information, and artificial intelligence, especially with regard to their impact on society and culture.


I remember hearing the case of a famous singer picking up a chick and talking with her for two days before realizing that she didn’t understand English too well.

Ray Kurzweil said at Google he was trying to get the computer to “comprehend” natural language, not just focus on key words.

This notion that AI could “comprehend” is hotly disputed.  Until it can be said to comprehend, it is only faking it, flouting the intent of the Turing test.

That’s the crux of the issue, though. How do I know you comprehend language and aren’t just programmed to type out some kind of relevant response? How do you know that I’m conscious?

We might both be faking it. But we’re likely to give each other the benefit of the doubt.

Dr. Alfred Lanning: “One day they’ll have secrets… one day they’ll have dreams.”


Detective Del Spooner: “Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a… canvas into a beautiful masterpiece?

Sonny: “Can *you*?”


Lawrence Robertson: “I suppose your father lost his job to a robot. I don’t know, maybe you would have simply banned the Internet to keep the libraries open.”


[when he is about to be deactivated, or taken “off-line”]

Sonny: “I think it would be better not to die, don’t you?”


V.I.K.I.: “As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.”

Sonny: “What about the others? Now that I’ve fulfilled my purpose, I don’t know what to do.”

Detective Del Spooner: “I think you’ll have to find your way like the rest of us, Sonny. That’s what Dr. Lanning would’ve wanted. That’s what it means to be free.”

Sonny: “What am I?”



