Daniel Dennett: In Defense of Robotic Consciousness
John G. Messerly   Feb 11, 2016   Reason and Meaning  

Daniel Dennett (1942 – ) is an American philosopher, writer and cognitive scientist whose research is in the philosophy of mind, philosophy of science and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science. He is currently the Co-director of the Center for Cognitive Studies, the Austin B. Fletcher Professor of Philosophy, and a University Professor at Tufts University. He received his PhD from Oxford University in 1965 where he studied under the eminent philosopher Gilbert Ryle.

In his book Darwin’s Dangerous Idea: Evolution and the Meanings of Life, Dennett presents a thought experiment in defense of strong artificial intelligence (SAI)—intelligence that matches or exceeds human intelligence.[1] Dennett asks you to suppose that you want to live in the 25th century and that the only available technology for that purpose involves putting your body in a cryonic chamber, where you will be frozen in a deep coma and later awakened. In addition, you must design some supersystem to protect and supply energy to your capsule. You now face a choice. You could find an ideal fixed location that will supply whatever your capsule needs, but the drawback is that you would die if some harm came to that site. Better, then, to have a mobile facility to house your capsule that could move in the event harm came your way—better to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature’s distinction between stationary plants and moving animals.

If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will, but that it executes branching instructions so that, when options confront the program, it chooses those that best serve your interests. Given these circumstances, you would design the hardware and software to preserve yourself, and equip the robot with the appropriate sensory systems and self-monitoring capabilities for that purpose. The supersystem must also be designed to formulate plans, respond to changing conditions, and seek out new energy sources.

What complicates the issue further is that, while you are in cold storage, other robots and who knows what else are running around in the external world. So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interest either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent that derives its own goals from your original goal of survival, the preference with which it was originally endowed. But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires.

Now opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer’s desires. Dennett calls this view “client centrism”: I am the original source of the meaning within my robot; it is just a machine preserving me, even though it acts in ways I could not have imagined and which may be antithetical to my interests. It follows, according to the client centrists, that the robot is not conscious. Dennett rejects this centrism, primarily because if you follow the argument to its logical conclusion, you must conclude the same thing about yourself! You would have to conclude that you are a survival machine built to preserve your genes, that your goals and intentions derive from them, and that you are not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness? They are like you: they owe their existence to being survival machines that have evolved into something autonomous through their encounter with the world.

Critics like John Searle admit that such a robot is possible but deny that it is conscious. Dennett responds that such robots would experience meaning as real as your meaning: they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning with being a member of a species with a long evolutionary history. We are artifacts of evolution, but our consciousness is no less real for that. The same would hold true of our robots.

Summary – Sufficiently complex robots would be conscious


[1] Daniel Dennett, Darwin’s Dangerous Idea: Evolution and the Meanings of Life (New York: Simon & Schuster, 1995), 422-26.

John G. Messerly is an Affiliate Scholar of the IEET. He received his PhD in philosophy from St. Louis University in 1992. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of philosophy, evolution, futurism and the meaning of life at his website:


This gives me pause, in that the “conscious” mind, connected to the flesh so as to give acts of procreation such positive reinforcement in humans’ sensory and emotional input, may not be what we define as human consciousness. What has the subconscious been doing to evolve the human? And for what purpose? Is the sub-mind changing with bliss and fun? Has the mind received a glimpse of an omniscient divination it desires to mimic? Okay, let us label a definition of human consciousness as self-directed or programmed——but what of the human experience of déjà vu? When one wakes in a déjà vu experience to see ahead for safety’s sake, and realizes there is a consciousness that transcends time?
Or is this just a case of the ol’ philosophical parable: when one tires of pushing or pulling the apple cart, shall we slit our wrists, bleed into the earth, and then know the Truth? Or, modernized: shall we push over the apple cart and not care whether others pick up the apples?

All robots can do is fake the art of sincerity which they have picked up from their human creators.

By Reduction/Deduction - My mind is conscious.. because my brain is conscious.. because my neurons are conscious.. because their molecules are conscious.. because comprising electrons and protons/neutrons/nuclei are conscious.. because.. (EMF, Quantum gravity)??

I cannot explain this experience of consciousness without utilizing the term Self “awareness” - Am “I” (mind) merely projecting my own importance/speciality on “being” (existing)?

I fully agree with Dan Dennett and his rational insight and explanation of this Cartesian theatre and movie show that we all experience.

A machine that analyzes its position to perform functional actions is already conscious/“aware” of its surroundings, as are both the ant and myself - self-reflection and a third-person perspective are thus merely a matter of programming?

How are Brains Conscious? - Dan Dennett - Closer to Truth


So, does Dennett also think that such a robot is morally or legally responsible, and liable to trial and punishment?

Question: who says we have gone beyond the programming of our selfish genes?  I see evidence of exactly the opposite.  We living things are always serving the larger goal of evolution (our genes) in every single intention that we ever have.  The goal of all life is to take in information (patterns in energy and/or matter), combine it in novel ways, and then output copies of those new patterns into the larger universe.  There is not a single intention of our animal lives that I’ve ever encountered that didn’t have this basic function at its heart.

Sure, we’ve moved beyond the limitations of pure material information procreation, and can now output additional forms of these novel patterns — including the emotional packets that we call art, the intellectual packets that we call ideas/technology, and the philosophical packets that we call culture (and religion, morality, ideology) — but we’re still always going back to our original genetic programming.

So what is “consciousness” then?  I offer a simple mathematically based way of categorizing consciousness that describes the levels of emergent function for ever more complex individuals as they add increasing dimensions of pattern representation from different perspectives.  In other words, every time I am able to understand the current and/or goal state (pattern) of an individual, starting with myself as first-person awareness, moving outward to you at second person, then them at third person, and then everyone at another point in time at fourth person, I increase my level of consciousness.  Atomic particles have the least consciousness, with zero dimensions, while adult humans can have up to four dimensions (human children tend to have two or three, depending on where they are in their brain development).  The only thing that might soon have an even higher level of consciousness is likely to be the internet, as it connects a whole planet’s worth of individuals’ perspectives (animal, vegetable, mineral, and whatever else we’ve got) representing both the current and goal states of everyone, all coalescing into a coherent whole: a conscious entity that is, perhaps, five-dimensional.

As for AI (general and/or specific), what level of consciousness it has depends entirely on how many current and/or goal states it is capable of copying the patterns of inside itself as it mixes these patterns together to create new patterns that it outputs into the rest of the universe.

