Philosophy not in the business of producing theories: the case of the computational theory of mind
Massimo Pigliucci
2013-07-30

Here I want to elaborate on a thought I have had for some time, and that keeps bothering the hell out of me: the issue of what sort of intellectual products we get out of philosophical inquiry. In particular, philosophers often speak of producing “theories,” as in the correspondence theory of truth, for instance. But I’ve come to think that it is better to regard such things as “accounts” or “views” — tellingly, terms used by a number of philosophers themselves — lest we confuse them with the meaning of the word “theory” in disciplines like science. (It’s also worth noting that mathematicians don’t seem to talk much about theories either, but rather of conjectures, or — of course — proofs.)



This may seem yet another “just semantics” issue, but I never understood why so many people hold semantics in such disdain. After all, semantics deals with the meaning of our terms, and if we don’t agree at least approximately on what we mean when we talk to each other there is going to be nothing but a confusing cacophony. As usual when I engage in “demarcation” problems, I don’t mean to suggest that there are sharp boundaries (in this case, between scientific theories and philosophical accounts), but rather that there is an interesting continuum and that people may have been insufficiently appreciative of interesting differences along such a continuum.



To fix our ideas a bit more concretely I will focus on the so-called computational theory of mind, largely because it has been on my, ahem, mind, since I’ve been invited to be the token skeptic in a new collection of essays on the Singularity (edited by Russell Blackford and Damien Broderick for Wiley). Specifically, I have been given the task of responding to David Chalmers’ chapter in the collection, exploring the concept of mind uploading. As part of my response, I provide a broader critique of the computational theory, which underlies the whole idea of mind uploading to begin with.



So, what is the computational theory of mind (CTM, for short)? Steven Horst’s comprehensive essay about it in the Stanford Encyclopedia of Philosophy begins with this definition: the CTM is “a particular philosophical view that holds that the mind literally is a digital computer (in a specific sense of ‘computer’ to be developed), and that thought literally is a kind of computation ... [it] combines an account of reasoning with an account of the mental states,” and traces its origins to the work of Hilary Putnam and Jerry Fodor in the 1960s and ‘70s. (Notice Horst’s own usage of the words “view” and “account” to refer to the CTM.)



The connection between the two accounts (of reasoning and of mental states) is that the CTM assumes that intentional states are characterized by symbolic representation; if that is correct (that’s actually a big if), then one can treat human reasoning as computational in nature if (another big one!) one assumes that reasoning can be accomplished by just focusing on the syntactic aspect of symbolic representations (as opposed to their semantic one). To put it more simply, the CTM reduces thinking to a set of rules used to manipulate symbols (syntax). The meaning of those symbols (semantics) somehow emerges from such manipulation. This is the sense in which the CTM talks about “computation”: specifically as symbol manipulation abstracting from meaning.
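To make the idea of “symbol manipulation abstracting from meaning” concrete, here is a minimal sketch (my illustration, not anything from the CTM literature): a modus ponens rule implemented as pure pattern matching, which fires on the shape of its inputs whether or not the symbols mean anything at all.

```python
# Reasoning treated as pure syntax: a modus ponens rule that operates on the
# *shape* of its inputs, with no access to their meaning. The example premises
# are invented for illustration.

def modus_ponens(premises):
    """From X and (X, 'implies', Y), derive Y by pattern matching alone."""
    derived = set()
    for p in premises:
        if isinstance(p, tuple) and len(p) == 3 and p[1] == "implies":
            antecedent, _, consequent = p
            if antecedent in premises:
                derived.add(consequent)
    return derived

# The rule works identically on meaningful tokens and on gibberish:
print(modus_ponens({"it_rains", ("it_rains", "implies", "streets_are_wet")}))
# -> {'streets_are_wet'}
print(modus_ponens({"blorp", ("blorp", "implies", "zweep")}))
# -> {'zweep'}
```

Nothing in the program distinguishes the two runs: on the CTM’s picture, whatever semantics the symbols have must come from somewhere other than the rule itself.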



This focus on syntax over semantics is what led, of course, to famous critiques of the CTM, like John Searle’s “Chinese room” (thought) experiment. In it, Searle imagined a system that functions in a manner analogous to that of a computer: a room, a guy sitting in the middle of it, a rule book of the Chinese language, an input slot, and an output slot. By hypothesis, the guy doesn’t understand Chinese, but he can take pieces of paper passed to him from the outside (written in Chinese), look up the appropriate response in the rule book, and write down an answer (in Chinese) to pass through the output slot. The idea is that from the outside it looks like the room knows Chinese, but in fact there is only symbol manipulation going on inside, no understanding whatsoever.
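As a crude sketch of what goes on inside the room (the rule book entries below are invented placeholders, not Searle’s), the whole setup can be compressed into a lookup table: from the outside the answers may look fluent, but nothing inside represents what any of the sentences mean.

```python
# A toy Chinese room: note in -> rule book lookup -> note out.
# The entries are invented placeholders; the point is only that the mapping
# involves no representation of meaning whatsoever.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(note_passed_in: str) -> str:
    """Return whatever squiggles the rule book pairs with the input squiggles."""
    return RULE_BOOK.get(note_passed_in, "请再说一遍。")  # fallback: "please say that again"

print(room("你好吗？"))  # looks like understanding from the outside; inside, only lookup
```

The translations in the comments are for the reader’s benefit, of course; the function itself never touches them, which is precisely Searle’s point.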



You can look up the unfolding of the long debate (with plenty of rebuttals, counter-rebuttals, and modifications of the original experiment) that Searle’s paper has engendered, which makes for fascinating reading and much stimulation for one’s own thought. Suffice it to say that I think Searle got it essentially right: the CTM is missing something crucial about human thinking, and more precisely it is missing its semantic content. I hasten to say that this does not at all imply that only humans can think, and even less so that there is something “magical” about human thinking (Searle refers to his view as “biological naturalism,” after all). It just means that you don’t get semantics out of simple syntax, period. The failure of the strong Artificial Intelligence program since Searle’s original paper in 1980 bears strong witness to the soundness of his insight. (Before you rush to post scornful comments: no, Deep Blue and Watson — astounding feats of artificial intelligence though they are — are in fact further proof that Searle was right: they are very fast at symbol manipulation, but they don’t command meaning, as demonstrated by the fact that Watson can be stumped by questions that require subtle understanding of human cultural context.)



Interestingly, Jerry Fodor himself — one of the originators of the CTM — has chided its strong adherents, stating that it “hadn't occurred to [him] that anyone could think that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works,” since only some mental processes are computational in nature. (Fodor’s book on this, The Mind Doesn’t Work That Way, MIT Press, is a technical and yet amusing rebuttal of simplistic computationalists like Steven Pinker, whose earlier book was entitled, of course, How the Mind Works.)



At any rate, my focus here is not on the CTM itself, but on whether it is a theory (in the scientific sense of the term) or what I (and others) refer to as an account, or view (in the philosophical sense). Even a superficial glance at the literature on the CTM will show clearly that it is in no sense a scientific theory. It does not have a lot of empirically verifiable content, and much of the debate on it has occurred within the philosophical literature, using philosophical arguments (and thought experiments). This is to be contrasted with actual scientific theories, say, the rival theories that phantom limbs are caused by irritation of severed nerve endings or are the result of a systemic rearrangement of a broad neuromatrix (in Ronald Melzack’s phrasing) that creates our experience of the body. In the case of phantom limbs the proposed explanations were based on previously available understanding of the biology of the brain, and led to empirically verifiable predictions that were in fact tested, clearly giving the edge to the neuromatrix over the irritated nerve endings theory.



Here is a table that summarizes the difference between the two cases:

|  | Source of the ideas | How the debate proceeds | Empirical tests |
| --- | --- | --- | --- |
| Computational theory of mind | philosophical analysis (Putnam, Fodor) | philosophical arguments and thought experiments, largely within the philosophical literature | little empirically verifiable content; no decisive tests |
| Phantom limb theories | previously available understanding of the biology of the brain | rival empirically verifiable predictions | predictions tested, giving the edge to the neuromatrix over irritated nerve endings |

I hope the difference is clear enough (though, of course, there are neuroscientists who have contributed to the debates about the CTM, and there are philosophers of mind who have been interested in neurobiological phenomena such as phantom limbs).



The philosophobes among you may at this point have developed a (premature) grin, seeing how much “better” the last row of the table looks by comparison with the first one. But that would be missing the point: my argument is that philosophical accounts are different from scientific theories, not that they are worse (or better, for that matter). Not only the methods but also the point of such accounts is different, so to compare them on a science-based scale would be silly. Like arguing that the Yankees will never win a Super Bowl. (For the Europeans: that Real Madrid will never win the FIBA championship.)



Remember, philosophy moves in conceptual, rather than empirical space (though of course its movements are constrained by empirical knowledge), and its goals have to do with the clarification of our thinking, not with the discovery of new facts about the world. So it is perfectly sensible for philosophers to use analogies (the mind is like a computer) and to work out the assumptions as well as the implications of certain ways of thinking about problems.



But, you may say, isn’t science eventually going to settle the question, regardless of any philosophical musings? Well, that depends on what you mean by “the question.” IBM has already shown that we can make faster and more sophisticated computers, one of which will eventually pass Turing’s famous test. But is passing the test really proof that a computer has anything like human-type thinking, or consciousness? Of course not, because the test is the computational equivalent of good old behaviorism in psychology. A computer passing the test will have demonstrated that it is possible to simulate the external behavior of a human being. This may or may not tell us much about the internal workings of the human mind.



Moreover, if Fodor (and, of course, Searle) is correct, then there is something profoundly misguided about basing artificial intelligence research on a strict analogy between minds and computers, because only some aspects of minding are in fact computational. That would be a case of philosophy saving quite a bit of time for science, helping the latter avoid conceptual dead ends. Finally, there is the whole issue of what we mean by “computation,” about which I hope to be writing shortly (it is far from an easy or straightforward subject matter in and of itself).



This distinction between philosophical accounts and scientific theories, then, helps further elucidate the relationship between (some branches of) philosophy and science. Philosophers can prepare the road for scientists by doing some preliminary conceptual cleaning up. At some point, when the subject is sufficiently mature for actual empirical investigation, the scientists come in and do the bulk of the work (and all of the actual discoveries). The role of the philosopher, then, shifts toward critically reflecting on what is going on: occasionally re-entering the conceptual debate, if she can add to it; at other times pushing the scientists to clarify the connection between empirical results and theoretical claims, whenever it appears a bit shaky. And the two shall live happily together ever after.