Why “Why Transhumanism Won’t Work” Won’t Work
Phil Torres
2010-06-24



As collective human knowledge grows exponentially, so does individual ignorance. In sententious form: Everyone today knows almost nothing about most things.

This is our common epistemic situation, and unless safe and effective cognitive enhancements soon become available [1] or a global catastrophic risk occurs, this situation is only going to get worse. Thus, there is nothing wrong with someone not knowing about a subject X.

There is something wrong, however, with someone not knowing about X and talking, or pontificating, about X as if he or she does. And that is precisely what Mark Gubrud – a physicist with little or no philosophy background – does in his recent Futurisms article, “Why Transhumanism Won’t Work,” as well as in his related 2003 paper, “Bursting the Balloon.”

In my opinion, Gubrud’s works are egregious examples of uninformed scholarship, or “sciolism.”

Let’s start by considering some general rules for avoiding sciolism.

For any concept/issue/phenomenon/etc. X:

1. If X does not make sense to you, and you have never seriously studied X, do not conclude that X makes no sense at all.
2. Whatever you do conclude about X, couch it in language no stronger than your acquaintance with X warrants. [2]

Gubrud’s article and paper are replete with violations of these rules. He reasons that since X doesn’t make sense to him – someone who’s never actually studied X [3] – it must not make sense at all; and, furthermore, his conclusion that X makes no sense is then couched in unjustifiably strong language. (Gubrud states, for example, that if his thesis is unfamiliar to philosophers, and if it stands apart from the established body of philosophical works on mind and identity, then “perhaps it is something new, and perhaps it ought to be published, in some revised form, in a philosophical journal.”) [4]

One could compose an exegesis of Gubrud’s two pieces the size of a small book. For the present purposes, however, I will single out a small subset of the most flagrant confusions.


Point 1. The main thrust of Gubrud’s Futurisms article is, as the title suggests, that transhumanism won’t work. The (reconstructed) argument proceeds by equating transhumanism with the aim of mind-uploading [5], and then showing that mind-uploading is doomed to fail. Immediately, there are problems – for why would one think that “if mind-uploading fails, then so does transhumanism”? I’ve not found a single characterization of transhumanism in the literature that comes close to this; and certainly the major players in the movement would unanimously concur that the success of mind-uploading is unnecessary for the success of transhumanism overall. (Gubrud himself notes that Patrick Hopkins, of whom Gubrud seems rather fond, is both a transhumanist and a critic of mind-uploading.) In Kyle Munkittrick’s words, Gubrud’s strategy is “like arguing we can’t ever cure cancer because cold fusion is impossible.”


Point 2. Gubrud’s article becomes especially muddled when he attempts to affirm the above antecedent “if mind-uploading fails, …” As Gubrud writes, “not only is the idea of uploading one of the central dogmas of transhumanism, but the broader philosophy of transhumanism suffers from the same kind of mistaken dualism as uploading, a dualism applied not just to human beings but to humanity, collectively understood.”

In other words, despite its commitment to “naturalism” (at least among most of its exponents), transhumanism succumbs to a kind of ontological dualism, and this is a problem. Gubrud suggests that this fact is hidden by a bit of terminological trickery: talk of “souls” is surreptitiously replaced, in transhumanist discourse, with talk of (this is Gubrud’s long list) “patterns,” “processes,” and “essences,” as well as “minds,” “consciousness,” and “identity.” But the referent remains the same – a non-physical thing that transhumanists believe is preservable or transferable, through some sort of “voodoo” (Gubrud’s word), from one chunk of matter to another.


Let me divide this part of the critique into two subsections; both get slightly technical.

Point 2.1. Mentality. Gubrud conflates a number of distinct phenomena, including (but not limited to) those of (a) personal identity, or the self, and (b) mentality [6]. Each of these has its own distinct set of competing theories, which can be combined in various ways. For instance, one might hold that an uploaded mind will have conscious experiences (in the sense that we have conscious experiences), yet it will not be identical to the original (Schneider’s view [pdf]). Or, one might hold that it will be both conscious and identical (Kurzweil’s view); or, one might hold that it will be neither identical nor conscious (Searle’s view). It is a big mistake to confuse these as Gubrud does; in neuroscience, we call this a “lumping error.”

Now, which of the above views one accepts will, of course, depend on one’s philosophy of mind and theory of identity. It is crucial to note, in addition, that there are ontologically dualistic philosophies of mind that allow for the possibility of mind-uploading. This means that even if transhumanism succumbs to a kind of dualism – a view that posits the existence of non-physical mental phenomena – mind-uploading may still be possible. Allow me to explain.

First, talk of patterns and processes gestures at by far the most widely accepted theory of mentality among contemporary philosophers and cognitive neuroscientists, namely computationalism. Computationalism is a kind of functionalism that holds (a) that mental states are type-identical to functional states and token-identical to physical states (Figure 1), and (b) that the interrelations between mental (or functional) states are computational in nature [7]. (See the type-token distinction.) Thus, since every particular mental state is identical to some particular physical state, computationalism is (or may be [8]) an ontologically physicalist theory.

[Figure 1: Mental states are type-identical to functional states and token-identical to physical states.]

According to the computationalist, the mind is quite literally a kind of computer program being run on the “wetware” of the brain. It follows that, as a program, the mind is realizable by any physical substrate that exhibits the right sort of causal-functional organization, just like Microsoft Windows can be run on either a PC or Mac (or any other appropriately organized system). Software is, as it were, substrate independent.
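To make this concrete, here is a minimal sketch in Python of what substrate independence amounts to. (The toy program, the state names, and the two "substrate" classes are illustrative inventions for this post, not anything drawn from Gubrud or the uploading literature.)

```python
# A minimal sketch of multiple realizability / substrate independence.
# The "mind" here is just a toy finite-state program: a transition table
# mapping (state, stimulus) -> (next_state, output). The two "substrates"
# store and update state in physically different ways, but both realize
# the very same functional organization.

PROGRAM = {  # the functional organization: the pattern, not the matter
    ("calm", "pinprick"): ("pain", "ouch"),
    ("pain", "aspirin"):  ("calm", "relief"),
}

class NeuralSubstrate:
    """Realizes the program in one 'material': a plain attribute."""
    def __init__(self):
        self.state = "calm"
    def step(self, stimulus):
        self.state, output = PROGRAM[(self.state, stimulus)]
        return output

class SiliconSubstrate:
    """Realizes the same program in a different 'material': a dict register."""
    def __init__(self):
        self.registers = {"s": "calm"}
    def step(self, stimulus):
        self.registers["s"], output = PROGRAM[(self.registers["s"], stimulus)]
        return output

stimuli = ["pinprick", "aspirin", "pinprick"]
wet, dry = NeuralSubstrate(), SiliconSubstrate()
assert [wet.step(x) for x in stimuli] == [dry.step(x) for x in stimuli]
# Same functional states, same input-output profile -- different physics.
```

The point of the final assertion is just that the two realizations are functionally indiscernible: swap the "wetware" for silicon and the input-output profile is unchanged.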

One way of understanding this approach employs a (Quinean) distinction between the “ontology” and “ideology” of a theory, T. The ontology of T consists of those phenomena that must exist for T to be true, while its ideology is the set of its predicates. In the case of computationalism, the idea is that the predicates that apply to lower-level “implementational” phenomena are not applicable to higher-level “cognitive” phenomena [9].

One can thus view computationalism as entailing a kind of descriptive dualism plus ontological monism: its ontology affirms that “everything is physical,” but its ideology is distinct from that of neuroscience (or any other lower-level science) and non-reducible to it. Hence the functionalist’s principled disregard of neuroscientific detail. This is a point I return to below; for the present it suffices to note that at least some of Gubrud’s confusion seems to arise from a failure to distinguish between descriptive and ontological dualism – he mistakenly holds that the former implies the latter [10].

But one need not hold that every property in the universe is physical to coherently maintain that mind-uploading is possible. The philosopher extraordinaire David Chalmers, for example, argues that consciousness is a strongly emergent [pdf] non-physical property of brains [11]; but he also accepts the possibility of mind-uploading [pdf]. The strategy here is not to construe functionalism as a “constitutive” account of what states of consciousness are (as the functionalist does), but rather as an account of “the conditions under which consciousness exists in the actual world.” Like computationalism above, this view is crucially based on the notion of supervenience.

(“Supervenience” is a philosophical term-of-art that signifies a kind of dependency relation between phenomena. The basic idea is that when X supervenes on Y, X depends on Y such that there can be no change in X without a change in Y. The following paragraphs should make this idea clear, if it is not already.)
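For readers who like it formal: on one standard rendering (a simplified, non-modal version of Jaegwon Kim's definition), a family of properties $X$ supervenes on a family of properties $Y$ just in case

$$\forall a\,\forall b\;\Big[\,\big(\forall F \in Y\big)\big(Fa \leftrightarrow Fb\big)\;\rightarrow\;\big(\forall G \in X\big)\big(Ga \leftrightarrow Gb\big)\,\Big]$$

That is, any two systems indiscernible with respect to their $Y$-properties are thereby indiscernible with respect to their $X$-properties – no difference in $X$ without some difference in $Y$.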

To illustrate, consider the dot matrix below (Figure 2). What do you see? Here we have a property – specified by the predicate “is an ‘H’” – that supervenes (or depends) on the organizational properties of the dots. (You could imagine, then, one theory consisting of the predicate “is an ‘H’,” and another of the predicate “is a dot.” Both would have the same basic ontology, yet they would differ in their descriptions of that ontology. [12]) Thus, it follows from this dependency relation between the symbol “H” and the dots that as long as the organizational properties of the dots stay unchanged, the “H” will be preserved – that is, independent of whatever other property changes may occur to the dots.

[Figure 2: A dot matrix spelling “H.” Red dots represent biological neurons, blue dots another material (e.g., silicon); black arrows depict two scenarios of destructive uploading in which the “H” is transferred from one substrate to the other.]

By analogy, then, we might say that the “H” represents a mind, the red dots represent biological neurons, and the blue dots represent some other material, such as silicon. Two scenarios of destructive uploading are illustrated; in each, the “H” is successfully transferred across the black arrow(s) without invoking anything non-physical whatsoever. Thus, if the mind is like the “H” in the relevant sense, as computationalists maintain, then mind-uploading ought to be just as feasible as “H”-uploading (from the red to the blue dots). How’s that for spooky “voodoo”?
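Here is the same point as a runnable sketch, on the simplifying assumption that a dot matrix is just a set of (row, column, material) triples; the grid, the predicate names, and the "materials" are all invented for illustration:

```python
# The supervenience claim behind Figure 2: "being an H" is defined over
# dot positions alone -- an organizational property -- so it survives
# any swap of the underlying material.

H_POSITIONS = (
    {(r, 0) for r in range(5)}      # left stroke
    | {(r, 4) for r in range(5)}    # right stroke
    | {(2, 1), (2, 2), (2, 3)}      # crossbar
)

def is_H(dots):
    """The supervening property: depends only on where the dots are."""
    return {(r, c) for (r, c, _material) in dots} == H_POSITIONS

def destructive_upload(dots, new_material):
    """Destroy the old dots; recreate the same positions in new stuff."""
    return {(r, c, new_material) for (r, c, _old) in dots}

neurons = {(r, c, "neuron") for (r, c) in H_POSITIONS}
silicon = destructive_upload(neurons, "silicon")

assert is_H(neurons) and is_H(silicon)                # the "H" is preserved...
assert all(m == "silicon" for (_, _, m) in silicon)   # ...the matter is not
```

Note that is_H never inspects the material field: that is precisely what it means for "being an H" to be an organizational invariant, and nothing non-physical is invoked at any step of the transfer.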

The exact same idea applies to Chalmers’ property dualism, which holds that certain mental properties (e.g., “qualia”) are ontologically non-physical. On this view, consciousness is an “organizational invariant,” just like the “H”: as long as the substrate exhibits the right causal-functional organization, consciousness will emerge (in a strong sense, rather than the weak sense of functionalism) from that substrate [13]. As Chalmers – and many other theorists – remind us, virtually all of contemporary cognitive neuroscience supports the proposition that minds are organizational invariants. Thus, mind-uploading is in principle possible.

It’s not surprising that, given his unfamiliarity with the literature, Gubrud stubbornly insists that talk of minds (and “Hs”?) “emerging” from appropriately organized physical systems “[slips] not only into dualism, but worse, magical imagery, as if a sorcerer puts in a pinch of this, a snip of that, and – poof! – the mind ‘arises’ like a conjured demon.” The above paragraphs have attempted to sketch out, with minimal detail, how mind-uploading involves no such thaumaturgy; indeed, even if one were to maintain that certain mental properties are non-physical in nature, this by itself would not mean that mind-uploading is impossible or even problematic.


Point 2.2. Personal Identity. Finally, the second issue. As I understand the view defended by Kurzweil, Moravec, and others, the self is nothing other than – i.e., is identical to – a specific pattern that endures over time despite perturbations in the physical material underlying that pattern. This is, in other words, functionalism/computationalism applied not to mentality but to the self [14]. If it is true, then the self is also like the “H” in Figure 2; and from this it follows that mind-uploading, if done correctly, would indeed preserve the individual’s identity.

Once again, there need not be anything dualistic about this position, at least not in the ontological sense (although there may still be a kind of descriptive or predicate dualism, as discussed above). What one transfers here is not a non-physical “soul,” or anything of the sort, but an abstract entity – a pattern – that supervenes on any appropriately organized system, just as what the black arrows in Figure 2 transfer is nothing more than the abstract pattern that we identify with “H.” The materiality (e.g., neural tissue, silicon, or green slime) of the “supervenience base” is unimportant – all that matters are the base’s organizational properties.

Incidentally, my own view is that “patternism” is flawed as a theory of the self. I find myself in exactly the same camp as Susan Schneider, as explicated here [pdf]. What I do accept, though, is the overwhelmingly plausible claim that mentality – both its qualitative and non-qualitative aspects – is an organizational invariant. It follows that, on my view, if I were to upload my mind, the upload would indeed be conscious, but it would fail to be me. (Although one might add that it would be far closer to being me than any other being [15].) Identity requires at least spatiotemporal continuity in addition to pattern preservation, and mind-uploading fails to maintain such continuity. And even setting continuity aside, problems for patternism abound.

In sum, Gubrud’s paper fails not only as a critique of transhumanism and/or mind-uploading (see Point 1 above), but also as a piece of respectable scholarship.

It fails in the same way the creationist’s claim that Darwinian evolution is false because “all this design cannot be completely random” fails. (Genetic mutation is random; natural selection is absolutely not!) There are no doubt many good critiques of transhumanism waiting to be articulated (I’ve attempted to give one myself), but Gubrud’s is definitely not among them.