John Searle’s Critique of Ray Kurzweil

John Searle (1932 – ) is currently the Slusser Professor of Philosophy at the University of California, Berkeley. He received his PhD from Oxford University. He is a prolific author and one of the most important living philosophers.

According to Searle, Kurzweil’s book is an extensive reflection on the implications of Moore’s law.[1] The essence of the argument is that smarter-than-human computers will arrive and that we will download ourselves into this smart hardware, thereby guaranteeing our immortality. Searle attacks this fantasy by focusing on the chess-playing computer “Deep Blue” (DB), which defeated world chess champion Garry Kasparov in 1997.

Kurzweil thinks DB is a good example of the way that computers have begun to exceed human intelligence. But DB’s brute-force method of searching through possible moves differs dramatically from how human brains play chess. To clarify, Searle offers his famous Chinese Room Argument. If I am in a room with a program that answers questions in Chinese even though I do not understand Chinese, the fact that I can output the answer in Chinese does not mean I understand the language. Similarly, DB does not understand chess, and Kasparov was playing a team of programmers, not a machine. Thus Kurzweil is mistaken if he believes that DB was thinking.

According to Searle, Kurzweil confuses a computer’s seeming to be conscious with its actually being conscious, something we should worry about if we are proposing to download ourselves into it! Just as a computer simulation of digestion cannot eat pizza, so too a computer simulation of consciousness is not conscious. Computers manipulate symbols or simulate brains through neural nets, but this is not the same as duplicating what the brain is doing. To duplicate what the brain does, the artificial system would have to act like the brain. Thus Kurzweil confuses simulation with duplication.

Another confusion is between observer-independent (OI) features of the world and observer-dependent (OD) features of the world. The former include features of the world studied by, for example, physics and chemistry; the latter include things like money, property, and governments, which exist only because there are conscious observers of them. (Paper has objective physical properties, but paper is money only because persons relate to it that way.)

Searle says that he is more intelligent than his dog and his computer in some absolute, OI sense because he can do things his dog and computer cannot. It is only in the OD sense that you could say that computers and calculators are more intelligent than we are. You can use intelligence in the OD sense provided that you remember it does not mean that a computer is more intelligent in the OI sense. The same goes for computation. Machines compute analogously to the way we do, but they don’t compute intrinsically at all; they know nothing of human computation.

The basic problem with Kurzweil’s book is its assumption that increased computational power leads to consciousness. Searle says that increased computational power of machines gives us no reason to believe machines are duplicating consciousness. The only way to build conscious machines would be to duplicate the way brains work and we don’t know how they work. In sum, behaving like one is conscious is not the same as actually being conscious.

Summary – Computers cannot be conscious


[1] John Searle, “I Married A Computer,” review of The Age of Spiritual Machines, by Ray Kurzweil, New York Review of Books, April 8, 1999.

John G. Messerly is an Affiliate Scholar of the IEET. He received his PhD in philosophy from St. Louis University in 1992. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of philosophy, evolution, futurism and the meaning of life at his website:


John Searle, admirable though much of his work is, certainly has a bee in his bonnet about AI. Neither he nor anyone else understands precisely how the brain works, nor what consciousness is, so it cannot make sense to say that computers, which seem to work differently from some currently unknown mechanism, cannot possibly exhibit a characteristic that we do not understand! If we consider the Chinese Room, a human in the room could (theoretically) perform database lookups of symbols in a huge collection of books, and could therefore answer questions in Chinese without understanding the language. By this argument, it would follow that humans cannot be conscious! If you give a computer, or the computer selects, a task that does not require consciousness, then clearly it will not display consciousness while doing it. This does not mean that computers, by themselves or in a large unknowable network, could not possibly exhibit consciousness. I am afraid that John Searle has fallen again into the trap of defining consciousness as a purely human trait, then using this axiom to prove that something that is not human is not conscious.

Hi Janus

I completely agree with your assessment. JGM

I agree. This obsession with duplicating computational capabilities does not ever amount to consciousness. Consciousness is a sum total that is not equal to its parts, which is all AIs are. Kurzweil’s coming so-called Singularity is also suspect. It cannot be that. At best an event horizon, and certainly not in my or the next lifetime. We only have to go back half a century to see what has not materialized. Like, say, the paperless office. [laughter optional] Fully automated factories. Cities in the sky. Orbital habitats. Lunar colonies. {I could go on}.

