Why “Why Transhumanism Won’t Work” Won’t Work
Phil Torres | Jun 24, 2010 | Philosophical Fallibilism

There is nothing wrong with someone not knowing about a given subject. But there is something wrong with someone not knowing about a subject and pontificating about it as if he does.

As collective human knowledge grows exponentially, so does individual ignorance. In sententious form: Everyone today knows almost nothing about most things.

This is our common epistemic situation, and unless safe and effective cognitive enhancements soon become available [1] or a global catastrophic risk occurs, this situation is only going to get worse. Thus, there is nothing wrong with someone not knowing about a subject X.

There is something wrong, however, with someone not knowing about X and talking, or pontificating, about X as if he or she does. And that is precisely what Mark Gubrud – a physicist with little or no philosophy background – does in his recent Futurisms article, “Why Transhumanism Won’t Work,” as well as in his related 2003 paper, “Bursting the Balloon.”

In my opinion, Gubrud’s works are egregious examples of uninformed scholarship, or “sciolism.”

Let’s start by considering some general rules for avoiding sciolism.

For any concept/issue/phenomenon/etc. X:

  • Remember that just because you find X unintelligible doesn’t mean that X really is unintelligible. The failure of intelligibility may fall on your side, rather than the side of X. (See the “mind projection fallacy” of E.T. Jaynes [2].)
  • When discussing X, try to maintain a degree of proportionality between the force of your assertions and the objective depth of your knowledge of X. Craig Venter, for example, pounding his fist about the possibility of synthetic life is quite different from Venter pounding his fist about, say, German Baroque “Stammwort” theory, since chances are he knows nothing about the latter.

Gubrud’s article and paper are replete with violations of these rules. He reasons that since X doesn’t make sense to him – someone who’s never actually studied X [3] – it must not make sense at all; and, furthermore, his conclusion that X makes no sense is then couched in unjustifiably strong language. (Gubrud states, for example, that if his thesis is unfamiliar to philosophers, and if it stands apart from the established body of philosophical works on mind and identity, then “perhaps it is something new, and perhaps it ought to be published, in some revised form, in a philosophical journal.”) [4]

One could compose an exegesis of Gubrud’s two pieces the size of a small book. For the present purposes, however, I will single out a small subset of the most flagrant confusions.

Point 1. The main thrust of Gubrud’s Futurisms article is, as the title suggests, that transhumanism won’t work. The (reconstructed) argument proceeds by equating transhumanism with the aim of mind-uploading [5], and then showing that mind-uploading is doomed to fail. Immediately, there are problems – for why would one think that “if mind-uploading fails, then so does transhumanism”? I’ve not found a single characterization of transhumanism in the literature that comes close to this; and certainly the major players in the movement would unanimously concur that the success of mind-uploading is unnecessary for the success of transhumanism overall. (Gubrud himself notes that Patrick Hopkins, of whom Gubrud seems rather fond, is both a transhumanist and a critic of mind-uploading.) In Kyle Munkittrick’s words, Gubrud’s strategy is “like arguing we can’t ever cure cancer because cold fusion is impossible.”

Point 2. Gubrud’s article becomes especially muddled when he attempts to affirm the above antecedent “if mind-uploading fails, …” As Gubrud writes, “not only is the idea of uploading one of the central dogmas of transhumanism, but the broader philosophy of transhumanism suffers from the same kind of mistaken dualism as uploading, a dualism applied not just to human beings but to humanity, collectively understood.”

In other words, despite its commitment to “naturalism” (at least among most of its exponents), transhumanism succumbs to a kind of ontological dualism, and this is a problem. Gubrud suggests that this fact is hidden by a bit of terminological trickery: talk of “souls” is surreptitiously replaced, in transhumanist discourse, with talk of (this is Gubrud’s long list) “patterns,” “processes,” and “essences,” as well as “minds,” “consciousness,” and “identity.” But the referent remains the same – a non-physical thing that transhumanists believe is preservable or transferable, through some sort of “voodoo” (Gubrud’s word), from one chunk of matter to another.

Let me divide this part of the critique into two subsections; both get slightly technical.

Point 2.1. Mentality. Gubrud conflates a number of distinct phenomena, including (but not limited to) those of (a) personal identity, or the self, and (b) mentality [6]. Each of these has its own distinct set of competing theories, which can be combined in various ways. For instance, one might hold that an uploaded mind will have conscious experiences (in the sense that we have conscious experiences), yet it will not be identical to the original (Schneider’s view [pdf]). Or, one might hold that it will be both conscious and identical (Kurzweil’s view); or, one might hold that it will be neither identical nor conscious (Searle’s view). It is a big mistake to confuse these as Gubrud does; in neuroscience, we call this a “lumping error.”

Now, which of the above views one accepts will, of course, depend on one’s philosophy of mind and theory of identity. It is crucial to note, in addition, that there are ontologically dualistic philosophies of mind that allow for the possibility of mind-uploading. This means that even if transhumanism succumbs to a kind of dualism – a view that posits the existence of non-physical mental phenomena – mind-uploading may still be possible. Allow me to explain.

First, talk of patterns and processes gestures at by far the most widely accepted theory of mentality among contemporary philosophers and cognitive neuroscientists, namely computationalism. Computationalism is a kind of functionalism that holds (a) that mental states are type-identical to functional states and token-identical to physical states (Figure 1), and (b) that the interrelations between mental (or functional) states are computational in nature [7]. (See the type-token distinction.) Thus, since every particular mental state is identical to some particular physical state, computationalism is (or may be [8]) an ontologically physicalist theory.


According to the computationalist, the mind is quite literally a kind of computer program being run on the “wetware” of the brain. It follows that, as a program, the mind is realizable by any physical substrate that exhibits the right sort of causal-functional organization, just like Microsoft Windows can be run on either a PC or Mac (or any other appropriately organized system). Software is, as it were, substrate independent.
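The substrate-independence claim lends itself to a minimal sketch in code. Here the same functional organization (a toy "running total" program, standing in for any program) is realized on two physically different "substrates"; all class and function names are illustrative inventions, not anything from the uploading literature:

```python
# Illustrative sketch of substrate independence (multiple realizability):
# one functional organization, two different physical realizations.

class ListSubstrate:
    """Stores its state as a list of events (the "biological" realization)."""
    def __init__(self):
        self.events = []
    def record(self, x):
        self.events.append(x)
    def total(self):
        return sum(self.events)

class DictSubstrate:
    """Stores its state as a running tally in a dict (the "silicon" realization)."""
    def __init__(self):
        self.state = {"tally": 0}
    def record(self, x):
        self.state["tally"] += x
    def total(self):
        return self.state["tally"]

def run_program(substrate, inputs):
    """The "program": a fixed functional role, indifferent to how the
    substrate physically realizes its state."""
    for x in inputs:
        substrate.record(x)
    return substrate.total()

inputs = [1, 2, 3]
# Functionally identical behavior on two physically different realizations:
assert run_program(ListSubstrate(), inputs) == run_program(DictSubstrate(), inputs) == 6
```

Because `run_program` touches the substrate only through its functional role (`record` and `total`), nothing about it changes when the realization does; that is all "substrate independence" asserts.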

One way of understanding this approach employs a (Quinean) distinction between the “ontology” and “ideology” of a theory, T. The ontology of T consists of those phenomena that must exist for T to be true, while its ideology is the set of its predicates. In the case of computationalism, the idea is that the predicates that apply to lower-level “implementational” phenomena are not applicable to higher-level “cognitive” phenomena [9].

One can thus view computationalism as entailing a kind of descriptive dualism plus ontological monism: its ontology affirms that “everything is physical,” but its ideology is distinct from that of neuroscience (or any other lower-level science) and non-reducible to it. Thus, we get the principled disregard of neuroscience by functionalists. This is a point I return to below; for the present it suffices to note that at least some of Gubrud’s confusion seems to arise from a failure to distinguish between descriptive and ontological dualism – he mistakenly holds that the former implies the latter [10].

But one need not hold that every property in the universe is physical to coherently maintain that mind-uploading is possible. The philosopher extraordinaire David Chalmers, for example, argues that consciousness is a strongly emergent [pdf] non-physical property of brains [11]; but he also accepts the possibility of mind-uploading [pdf]. The strategy here is not to construe functionalism as a “constitutive” account of what states of consciousness are (as the functionalist does), but rather as an account of “the conditions under which consciousness exists in the actual world.” Like computationalism above, this view is crucially based on the notion of supervenience.

(“Supervenience” is a philosophical term-of-art that signifies a kind of dependency relation between phenomena. The basic idea is that when X supervenes on Y, X depends on Y such that there can be no change in X without a change in Y. The following paragraphs should make this idea clear, if it is not already.)

To illustrate, consider the dot matrix below (Figure 2). What do you see? Here we have a property – specified by the predicate “is an ‘H’” – that supervenes (or depends) on the organizational properties of the dots. (You could imagine, then, one theory consisting of the predicate “is an ‘H’,” and another of the predicate “is a dot.” Both would have the same basic ontology, yet they would differ in their descriptions of that ontology. [12]) Thus, it follows from this dependency relation between the symbol “H” and the dots that as long as the organizational properties of the dots stay unchanged, the “H” will be preserved – that is, independent of whatever other property changes may occur to the dots.


By analogy, then, we might say that the “H” represents a mind, the red dots represent biological neurons, and the blue dots represent some other material, such as silicon. Two scenarios of destructive uploading are illustrated; in each, the “H” is successfully transferred across the black arrow(s) without invoking anything non-physical whatsoever. Thus, if the mind is like the “H” in the relevant sense, as computationalists maintain, then mind-uploading ought to be just as feasible as “H”-uploading (from the red to the blue dots). How’s that for spooky “voodoo”?
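To make the supervenience relation concrete, here is a minimal sketch (hypothetical names throughout): dots carry a position and a material, while the predicate "is an H" inspects positions only, so a wholesale swap of material, the analogue of destructive uploading, cannot disturb it:

```python
# The "H" as a set of dots, each with a position and a material.
# The H-pattern supervenes on the positions alone, not the material.

H_POSITIONS = ({(r, 0) for r in range(5)}        # left stroke
               | {(r, 4) for r in range(5)}      # right stroke
               | {(2, c) for c in range(1, 4)})  # crossbar

def make_dots(material):
    return {pos: material for pos in H_POSITIONS}

def is_H(dots):
    """Predicate of the 'H-theory': true iff the occupied positions form
    the H pattern, whatever the dots are made of."""
    return set(dots) == H_POSITIONS

biological = make_dots("neuron")   # the red dots
assert is_H(biological)

# "Destructive upload": every dot's material is replaced...
uploaded = {pos: "silicon" for pos in biological}   # the blue dots

# ...yet the supervenient pattern is preserved, because nothing the
# predicate depends on (the organizational properties) has changed.
assert is_H(uploaded)
assert all(m == "silicon" for m in uploaded.values())
```

The point of the sketch is that `is_H` never mentions material at all, just as (on the computationalist view) the predicates of the "cognitive" level never mention neurons.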

The exact same idea applies to Chalmers’ property dualism, which holds that certain mental properties (e.g., “qualia”) are ontologically non-physical. On this view, consciousness is an “organizational invariant,” just like the “H”: as long as the substrate exhibits the right causal-functional organization, consciousness will emerge (in a strong sense, rather than the weak sense of functionalism) from that substrate [13]. As Chalmers – and many other theorists – remind us, virtually all of contemporary cognitive neuroscience supports the proposition that minds are organizational invariants. Thus, mind-uploading is in principle possible.

It’s not surprising that, given his unfamiliarity with the literature, Gubrud stubbornly insists that talk of minds (and “Hs”?) “emerging” from appropriately organized physical systems “[slips] not only into dualism, but worse, magical imagery, as if a sorcerer puts in a pinch of this, a snip of that, and – poof! – the mind ‘arises’ like a conjured demon.” The above paragraphs have attempted to sketch out, with minimal detail, how mind-uploading involves no such thaumaturgy; indeed, even if one were to maintain that certain mental properties are non-physical in nature, this by itself would not mean that mind-uploading is impossible or even problematic.

Point 2.2. Personal Identity. Finally, the second issue. As I understand the view defended by Kurzweil, Moravec, and others, the self is nothing other than – i.e., is identical to – a specific pattern that endures over time despite perturbations in the physical material underlying that pattern. This is, in other words, functionalism/computationalism applied not to mentality but to the self [14]. If it is true, then the self is also like the “H” in Figure 2; and from this it follows that mind-uploading, if done correctly, would indeed preserve the individual’s identity.

Once again, there need not be anything dualistic about this position, at least not in the ontological sense (although there may still be a kind of descriptive or predicate dualism, as discussed above). What one transfers here is not a non-physical “soul,” or anything of the sort, but an abstract entity – a pattern – that supervenes on the appropriately organized systems, just as what the black arrows in Figure 2 transfer is nothing more than the abstract pattern that we identify with “H.” The materiality (e.g., neural tissue, silicon, or green slime) of the “supervenience base” is unimportant – all that matters are the base’s organizational properties.

Incidentally, my own view is that “patternism” is flawed as a theory about the self. I find myself in exactly the same camp as Susan Schneider, as explicated here [pdf]. What I do accept, though, is the overwhelmingly plausible claim that mentality – both the qualitative and non-qualitative aspects – is an organizational invariant. Thus, it follows that, on my view, if I were to upload my mind it would indeed be conscious, but it would fail to be me. (Although one might add that it would be far closer to being me than any other being [15].) There must be at least spatiotemporal continuity in addition to pattern preservation, and mind-uploading fails to maintain such continuity. But problems still abound.

In sum, Gubrud’s paper fails not only as a critique of transhumanism and/or mind-uploading (see Point 1 above), but as a piece of respectable scholarship too.

It fails in the same way that the creationist’s claim that Darwinian evolution is false because “all this design cannot be completely random” fails. (Genetic mutation is random; natural selection is absolutely not!) There are no doubt many good critiques of transhumanism that could be articulated (I’ve attempted to give one myself); but Gubrud’s is definitely not one of them.


As a card-carrying patternist, Schneider’s “Mindscanproofs” document seems simply confused and wrong to me. Her advising that “I’ve argued that the Mindscan example suggests that one should not upload” seems particularly silly to me.

An artful demolition, Phil. Bravo!

Thank you! I can’t believe how bad this article was. Eww.

Actually Microsoft Windows can’t run on a Mac unless it is Intel-based. Windows can’t run on a PowerPC.

You might have been thinking of Microsoft Office, which does have a Mac variant.

Linux would be a much better example as it runs, properly compiled, on many different architectures.

Just a nit pick. grin

After reading all the comments of the original article, I find it impressive that the writer didn’t change his mind. His arguments are so loose and feeble that I cannot believe he didn’t concede a point, even one.

To add a bit to the conversation, I would like to point out that uploading doesn’t have to be a way to achieve immortality. In the case that we get a functional copy of the brain but consciousness isn’t transferred, uploading still works. In your own words, consciousness could be “arisen” in the new instance, and it would continue to share your ideas and your expertise. This possibility is still not negligible, as it would drastically change the way society works. Imagine if Einstein could still be researching, or even teaching in a university. I would like it if my experiences could be kept to help future generations.

I’d like to point out that all the “humans are this or that” arguments are pointless. We’re talking about transhumanism, guys. Transcending humanity’s limits to become more than biological humans. Controlling our own evolution. Aww..

Yes, nice deconstruction. Intriguing links as well. Combined with all the uploading debates I’ve participated in, reading Schneider’s piece inclines me to adopt option four as my theory of the self. It’s an illusion. Am I meaningfully the same person I was at birth? I don’t even remember anything at all until age three or so. Practically speaking, the problem of identity doesn’t even come up with any form of destructive uploading. Kill the original and the copy has exclusive claims.

I may have to do a blog post on your fascinating critique of transhumanism. As most of my comrades are primitivists, it particularly resonates with me. I think you’re giving their arguments too much credit, but only marginally so.

Well, first of all, Philippe, about the first third of your post recapitulates your posturing as one in a position to put me down as unlearned.  What more can I say about this?  Nothing polite, and nothing that needs to be said.

So I’ll just reply to your Points.

Point 1.  My Futurisms post explained what I meant by calling transhumanism “uploading writ large,”

“Transhumanism posits that ‘the essence of humanity’ is something that can be preserved through any degree of alteration of the human form. In other words, it posits that humanity has an essence that is wholly separable from living human beings, an essence that is transferable to the products of technology.”

Now, you might not agree with that, but your response,

“I’ve not found a single characterization of transhumanism in the literature that comes close to this”

as if I’m not supposed to say anything about transhumanism that hasn’t already been said, or that transhumanists wouldn’t say about themselves, is ridiculous, and of a piece with the rest.

Point 2.  You do not do a terribly bad job of abstracting part of my argument.

Point 2.1  Another ridiculous accusation: that I am the one who “conflates… (a) personal identity or the self, and (b) mentality.”  That I’m too stupid and unlearned to know the philosophical theories distinguishing between these.  But as you just explained, Philippe, my thesis is that transhumanist writers conflate these terms in talking about uploading.  I was not critiquing arguments in academic philosophy; I quoted only Moravec and Kurzweil, although there are many less well-known and similarly sloppy salespeople for uploading.

Yes, Philippe, one might hold that an uploaded mind will have conscious experience, yet not be identical to the original, or that it will be both conscious and identical, or neither.  Very nice enumeration of the relevant authors.  Thank you.

Your little essay on functionalism and computationalism is similarly nice and irrelevant.  There is nothing in it I don’t understand and basically agree with, until we get to your “H” made of dots.

In this example, the “H” is “preserved” only in that you assume the existence of a perceiver, a vision system of some kind, for which an array of dots like this is recognized as the letter “H”.  It is crucial for this example that the change of colors isn’t overly disruptive of the reading of the letter.

If the dots are neurons, natural and artificial, who is the reader?  Who will be able to tell if the “H” is preserved, or if whatever this “H” is supposed to stand for in uploading is preserved or not, as neurons are killed and replaced by neurobots?

In the end, this whole little essay, supposedly written to put me straight, boils down to the assertion that artificial neurons are possible, that the gradual replacement scenario and equally well all-at-once destructive copying could plausibly result in functional copies of the original brain which, at least initially, would appear to behave the same, be conscious, have unchanged personalities, and so on.

I have never denied that some such imaginable technologies might be possible and might work as described in strictly physical terms.  It is the application of dualistic metaphysical concepts to these scenarios, and their manipulation to perform voodoo miracles (transfer of souls, escape from dying bodies), that I object to.

Point 2.2. Of course K & M identify “the self” with “a pattern that endures over time”; as I explained both in my Futurisms post and in Bursting the Balloon, this is what I mean by their using the idea of the soul—the true self, the true identity, and the true criterion of identity. 

Are person A and person B the same person?  If they have the same soul, if they are one soul, then yes.  That is my definition of “the soul”, although I recognize other meanings of the word “soul.”  The soul is the token of true identity.  And it is an object which does not exist in the world, but only in our minds.

The soul exists in our minds from our knowledge of life, or “biological continuity” if you prefer, which is the fact referenced by and appealed to in the notion of personal identity which endures across time.  The soul betokens the person.  It exists also because we know our own conscious experience, in which one beholds one’s self as if in a mirror.  So the mind is, ultimately, found to be identical with the soul, or self, or true identity.

In K & M’s minds, “patterns” endure over time, are transmittable, are copyable, and are our true identities, the stuff of which the soul is made.  But if you take the position that an upload is a continuation of yourself, what do you say to the possibility of two copies, or N many?  Which one do you continue as?  You do not address these arguments, which are briefly sketched in BtB.

Finally, though, as I noted right at the outset of this nasty little conversation, we are basically in agreement, since you conclude that “if I were to upload my mind it would indeed be conscious, but it would fail to be me.”  You haven’t really given any better an account of what you mean by these italicized terms than I think I have given, but basically, Yeah, some kind of copy could be made, but you’d have to have your head cut up into little bits.

As I said, nobody here can prove what would happen to a consciousness during an upload, if an upload was possible.

Case 1 - Upload doesn’t work. The brain can’t be made a 1:1 copy for whatever reason. Or the upload seems to work, but the resulting brain is dead and doesn’t react. (No consciousness.)
Transhumanism doesn’t die. We still have AI, we still have robotic arms or direct interfaces to the Internet that will make it thrive. Even Kurzweil talks about uploading only briefly in his book “The Singularity is Near”. What’s important is the augmentation of the human body and, more importantly, cognitive enhancement. Technically, it’s as if we were arguing over the possibility of a Dyson Sphere. Perhaps you should have written an article called “Why Moravec and Kurzweil’s vision of uploading won’t work”. Then you’d probably have to be clearer in your argumentation too. All the humanity talk is stupid. They’re transhumanists! They’re talking of transcending humanity. Of going further. We won’t be humans like we are now. We want to take this definition and change it and transform it to fit our goals. We want to evolve humanity in the direction we choose. Oh.. and please.. please.. don’t say that we should stay as we are now. Not with wars, BP, crime, death, suffering, poverty, racism, and a hundred other things plaguing us right now. Like it or not, we’re not optimal. Let’s just change that.

Case 2 - Upload works but the consciousness is lost. We can’t prove it, because a new consciousness seems to appear in its place. (Consciousness is an emergent property.)

Victory! Transhumanists are happy! Children can be taught by experts with 200 years of experience in a field. One day, we might even give that new consciousness a body of its own. We can even augment it to be faster and more intelligent by controlling its evolution. The human is dead and his consciousness is too, but it’s like his last child. His last gift to humanity. The Singularity happens in a couple of years.

Case 3 - Upload works and consciousness is transferred

Same as 2, but the person stays forever and is technically immortal. The Singularity happens in a couple of years.

Whatever case we choose, transhumanism still thrives. I don’t see the point. What about all the genetic enhancements we’re doing right now? It just doesn’t make sense to argue over uploading. Nobody can know for sure. People are throwing ideas around; nobody knows for sure what can or will be done. We’ll only know in a few years, once research starts being done on the subject. As for consciousness, like death, we’ll never know for sure if it stays. Therefore, the only way we can talk about that is through academic philosophy.


Thanks for your comments. Just a few points, at the risk of repeating myself.

“Transhumanism posits that ‘the essence of humanity’ is something that can be preserved through any degree of alteration of the human form.” I’m really not sure this is the case. I mean, the whole idea behind transhumanism is to “transcend,” as it were, our humanity. (See, e.g., Bostrom’s “In Defense of Posthuman Dignity,” or the Transhumanist FAQ.)

Second, I’m not sure that transhumanists *do* conflate personal identity and the mind. Either way, you make no distinction in your article between the two, as far as I could tell, often vacillating between them without making it clear that you’re doing so.

Third, I’m not sure that “H”s assume the existence of a percipient; this is somewhat obscure. Imagine that everyone around the world instantaneously died. Would “H”s instantaneously cease to exist as a result? Well, there clearly wouldn’t be anyone around to *read* the “H”s, but I’m not sure that means that “H”s wouldn’t *exist*. It’s really not an important point, though: consider *any* functional kind, such as cells, “a rather conspicuously functional term(!)” (see William Lycan 1987, 38-39). While some phenomena are defined in terms of what they *are*, materially, others are defined in terms of what they *do*, functionally. Compare “hydrogen” to “poison.” Thus, with respect to such functional kinds, it matters not what they *are*, as long as they can still *do* what they do. The example of the “H” works because “H”s are organizational invariants, just like minds (as Chalmers argues, along with virtually all of contemporary cognitive science). All that matters is the abstract pattern, independent of how it’s instantiated.

“I have never denied that some such imaginable technologies might be possible and might work as described in strictly physical terms. It is the application of dualistic metaphysical concepts to these scenarios, and their manipulation to perform voodoo miracles (transfer of souls, escape from dying bodies) that I object to.”

Again, I have no idea why you are talking about souls, voodoo, and the like. Computationalists (or the property dualists discussed) are talking about patterns, not souls, and they are talking about pattern preservation, not voodoo. If this doesn’t make sense to you, then fine. But that doesn’t mean it doesn’t make sense: patterns (as here understood) are *NOT* souls!

“Of course K & M identify “the self” with “a pattern that endures over time”; as I explained both in my Futurisms post and in Bursting the Balloon, this is what I mean by their using the idea of the soul—the true self, the true identity, and the true criterion of identity.”

Take a look at the Schneider paper that I linked in the article. Although the literature on personal identity is oceanic and highly sophisticated, she does an excellent job of telescoping it, and thereby making it intelligible. The point is that “the self = a soul” and “the self = a pattern” are two *distinct* theories. One might argue that transhumanists (or whoever) think they’re espousing one theory when, in fact, they’re *really* espousing the other. But your argument seems to involve a failure to recognize these theories as distinct in the first place, since you vacillate between them without any explicit recognition that they’re distinct. Holding that the self is a pattern is definitely *NOT* the same as holding that the self is a soul.

I would be happy to hear a good argument for why mind-uploading won’t work (either in terms of transferring consciousness or transferring identity, or whatever), but I haven’t yet heard one.


Your case #3 requires the dualistic existence of a transferable “consciousness.”  You’re not serious about that, nor are you serious about the idea that copying just doesn’t work for some reason, so let’s consider the hard case, your #2.  People die and their “last child” is a kind of ghost in the machine, their own ghost.  Their soul.  Creepy.

Well, you say we can’t prove it isn’t the same soul, but what if we make two copies, many copies?  So we have branching identity, and the time-directedness of identity is revealed; today’s child is not tomorrow’s adult, today’s child may not even survive.

If I’m going to wake up as my copy, how can I wake up first as one copy and then wake up as a second?  And be two then?  Clearly this is nonsense.

Thus there is nothing to prove, because the person dies and has her brain cut up, and nothing further that happens alters that fate.

So now, we really know that death is death and you don’t escape it by being “uploaded” or copied in any way.  What’s left to recommend uploading?  Your 200-year-old experts?  Not a good enough reason.  We have enough well-educated people in need of employment.  AI will give us the expert systems.

Why not uploading?  Because it would create these creepy animated ghosts, because such things would have a credible claim to human rights and even a compromised kind of “humanity” or “personhood,” and because the people so “immortalized” in silicon (or whatever) would still really have died, and “be dead” i.e. no longer be.



Ghost? Are you serious? Are you actually talking about ghosts?

Meh.. I guess we will never agree. Just tell me you wouldn’t have liked to attend a conference with Einstein. Is there a difference between Stephen Hawking now and the mind of Stephen Hawking transferred into a computer? If you prefer to just die and stop contributing to society, that’s your thing. Me, I’ll do anything to improve society. To make it more responsible. To make research thrive and to make life easier and eradicate suffering as much as I can. Uploading can work or not. I don’t care. Transhumanism isn’t about uploading. Transhumanism is about transcending individual human restrictions and becoming more. We’re actually doing that right now. Twitter, Facebook, all our blogs, they’re all influencing our choices and triggering discussions. Soon enough, we’ll have the internet’s information available instantly, directly from a brain interface.. or with special glasses with wireless, who cares.

At that time, we’ll be more than human. Uploading or not.

Then your case fails. Transhumanism cannot fail. Transhumanism is just the idea of searching for answers to improve ourselves. The human has never been in evolutionary stasis anyway. We can improve our tools, and our tools can improve us. Uploading, if it works, would just be another tool. Just like nanobots, nanofactories, or AI (expert systems or general AI).

ps.. I don’t agree with your points. We need more experts. As for the nonsense of uploading, I am not qualified, nor agile enough with the English language, to start a debate on that. Still, I think we all seem to have strong opinions that are rooted deep, and none of us is about to change their minds.


Casual conflation of “personal identity” and “the mind” is one of the least of transhumanist transgressions against reason and clarity of thought.  As I have argued, transhumanist authors appeal loosely to terms like “personal identity”, “your mind”, “your consciousness” and “you” as stand-ins for a single underlying idea, true identity, the soul, a numerically unique token of who and what you are, or I am. 

This true identity is a fiction of our minds, a mental token we apply to the images of objects in our senses, and in which we recall each thing that we have identified.  In the case of people, it is the idea of the soul.  If people do have souls, as in the traditional religious view, then the soul, whatever it may be, is certainly the carrier of true identity. 

We like to visualize souls as made of some substance, some insubstantial substance, such as “information” or “pattern” or “pure theory”.  But there is no such substance without substance of the ordinary kind. 

Thus the absurdity of your question, whether the Hs go out of existence with the last reader capable of recognizing an H.  Well, I guess if your ontology of Hs is that they are accepted by some theory, then that theory has to be written somewhere, and so if the last copy of that theory, or its last variant in the form of, say, an OCR device, were to go out of existence, then I guess - poof! - so would the Hs.

But this is silly, and the source of the silliness is the original ontology - pure abstract Hs don’t exist at all, H-recognizers do, and formations of ink and other stuff in arrangements that OCRs and human eyes will recognize. 

That brings us back again to my question about who the reader is if your H is the mind and the blue dots are the neurobots.  Who can tell if the “one mind” is preserved, or if something stranger is happening, such as, for example, the slow replacement of each neuron by two artificial neurons in parallel?  Can we make the “one mind” double while preserving its wholeness? 

So finally, why do I talk about souls, voodoo and magic?  Because the various salespeople for uploading use terms like “information,” “pattern” and “algorithm” to suggest some kind of substance, potentially the substance of which the mind or consciousness is made, potentially the substance of what is my true identity, my soul, which I follow where it goes, and which, by various thought experiments, should be transferable from one mere “computational substrate” to another, so that this becomes a way for me to escape my doomed body and become something greater than a human being. 

All the distinctions made in your various philosophical arguments between consciousness, mind, identity, self, soul, and so on melt away in this spotlight on the underlying psychological game, the verbal voodoo which works apparent miracles.

By suggesting that terms like “pattern” and “information” provide a possible second substance to substantiate the mind, the consciousness, the soul, the true identity, the self…. these shamans of identity transfer appropriate a crudely dualistic theory of “the hard problem,” and with it an object identifiable as the soul, which can be made to appear, in various thought experiments, as if transferred from one embodiment to another.

So now I have restated what is actually the central thesis of BtB, which you have not really engaged and do not seem to have comprehended in any depth.



As Christopher Hitchens once said in a debate with Shmuley Boteach: “There are no statements worth arguing here. All you can do is underline them.”

Philippe - Christopher Hitchens is a pissant.  You write:

Holding that the self is a pattern is definitely *NOT* the same as holding that the self is a soul.

So who’s “underlining” here?  My entire argument is that it is the same.  Where’s your “critique” of that?  It’s the same because “pattern” is presumed to be a thing that exists apart from the substance of the body and separable from it, transferable to another body, another substance.  This is essentially a mythical, magical image of soul transfer.  Well, I guess I’m just “underlining” my argument again.  And I guess you’ve proven (somehow, ad hominem I suspect) that it just isn’t worth your even bothering to critique it.

I’ve been reading these comments all weekend, and can no longer resist voicing some opinion, (well you know me?)

Despite all this “rude-speak” and disrespectful judgment of persons, there are still valid and important points all round here worthy of much further debate. Philippe : glad to see you are a supporter of “mind uploading”, yet surprised you have bailed so early in the argument?

Firstly, the point that everyone has been blatantly ignoring here from Simon Dufour: that transhumanism cannot be equated with mind uploading alone. If this article is a critique against criticisms of transhumanism in general, then I don’t see why mind uploading has been singled out. I rather view the idea of mind uploading and final freedom from the human form and mindset as posthuman, not merely transhuman?

Simon’s points also highlight that it is the drive towards these ideals that is important, and not whether they ultimately fail or succeed? And if we are all adamant that these things are impossible and can never be realised, then we may as well all stop now and accept failure. So thank heavens there are those idealists that do not subscribe to this attitude.

I have been hearing lots of arguments and charges levied at mind uploading proponents by the critics regarding misunderstandings of materialism and physicalism and the leaps of faith, (of identity transfer) which involve dualism. And I have to say that this appears to be valid?

It is only natural that we may rationalise regarding physicalism and the reduction of mind states to patterns, yet these patterns are still only understood phenomenally by our minds? For example these H’s may be transferred between substrates yet the coding H really does need to be understood in a non-physical manner. Once the last mind dies out, then unless replaced, the “phenomenal meaning” of the coding H will be lost. Trees falling do not make a sound if there is no one around to hear them! Dolphins do not use or understand H’s, (..hmm?)

Two questions that always arise where arguments against mind uploading is concerned are..

1. Can my mind, my identity, my intellect be transferred : will it really be “me”?
2. Continuity of consciousness : it is impossible for minds to be transferred without continuity of consciousness, and it is even more impossible for the consciousness of this mind to exist in more than one place, as there can only really be “one” me?

1. Transfer of identity, of me.. To contemplate this possibility it really does all equate to one’s understanding of “Self”. This “Self”, this ego is most certainly created, and guided by our life experiences and intellect (physical properties and limitations of IQ). In other words our memories.

Other physical states affect our ego, our character, and the way we interact: brain topology, chemicals, and cultural mindsets. Memes are not phenomenal but physical, and may be reduced.
All this is reducible to patterns and algorithms, and to an understanding of the interaction of brain and chemicals and the topologies of neural networks, right? Well, maybe so; is this not what all the brain slicing is about?

So the point is that “the most important” attribute of mind uploading, of transfer of identity is the transfer of memories? : which is exactly how Martine Rothblatt explains the purpose and aim of mindfiles, mindware and mindclones. For more about this read all there is to read here at IEET from Martine, who explains it far better than I ever could.

2. Continuity of consciousness : Once again Martine explains this fully, including the possible use of broadband connection of physical mind states and their perceptions/apperceptions via cloud computing, and the speculated consequences of breaks in service and reconnection of minds and shared consciousness.

Also this link to Daniel Dennett’s thought experiment regarding all of the topics and points raised and highlighted here is useful.. Have a read of this, it is both humorous and thought provoking.

“Where am I?” : Daniel Dennett

“...But wait,” I said to myself, “shouldn’t I have thought, ‘Here I am, suspended in a bubbling fluid, being stared at by my own eyes’?” I tried to think this latter thought. I tried to project it into the tank, offering it hopefully to my brain, but I failed to carry off the exercise with any conviction. I tried again. “Here am I, Daniel Dennett, suspended in a bubbling fluid, being stared at by my own eyes.” No, it just didn’t work. Most puzzling and confusing. Being a philosopher of firm physicalist conviction, I believed unswervingly that the tokening of my thoughts was occurring somewhere in my brain: yet, when I thought “Here I am,” where the thought occurred to me was here, outside the vat, where I, Dennett, was standing staring at my brain.”


I seem to recall having had this conversation (in a much briefer form) with Athena Andreadis over at H+ when she tried to make the same claims about the impossibility of mind uploads.

I agree with you Valkyrie Ice. As you said in your comment, at the point where we’re at, we’re just debating beliefs. However, thanks CygnusX1, I still stand that uploading != transhumanism.

I have been following this debate for the last couple of days, and I must admit that I am somewhat confused. Gubrud seems to have argued that an upload might be conscious, but it would not allow for the survival of the original person. Verdoux says that the upload would be conscious, but would fail to be ‘me’. Prisco seemed to be saying that ‘you’ ‘are’ whoever ‘you’ ‘accept’ as ‘you’, Hughes apparently is on record as saying that the self is an illusion, and Anissimov seems to think that his facebook page is an “exoself”. As I recall, a person named Summerspeaker said that whether or not uploading allows you to survive only matters to ‘you’ anyway, and then said some things about how transhumanism is a death cult but how that’s okay because sacrifice is laudable for some cultures. I guess some people are a bit upset about the way transhumanism is becoming “mainstream” and “PC”, and while I’m no PR expert, I would probably advise that even the most hardcore transhumanist splinter group avoid referring to itself as a ‘death cult’.

There have been a lot of arguments as to who the dualists are in this debate, (which kind of seems to be just an irrelevant point-of-honour issue for philosophy of mind wonks) but isn’t there only one really important question in the whole uploading issue? Namely, whether or not you will be alive or dead at the end of it? And have any of the people in this debate actually disagreed on that point?

To clarify, I was exploring the implications of Mark’s view. The whole subject is strange because it’s fairly likely there’ll be no definitive way to determine the nature of the process when and if uploading ever becomes reality. After a destructive upload, there’s just one entity left to claim the mantle of identity. If it professes continuity of experience, what are opponents going to say?

“No, you’re a copy, the human being is dead!”

What would you do if someone yelled that sort of thing at you every morning?

I have difficulty imagining a satisfactory resolution to this problem.

Sorry Brendan.

Transhumanism != Uploading
Transhumanism != Immortality
Uploading != Immortality

It’s crazy how much trouble people have understanding that. They’re all different concepts.

Some people think that uploading could lead to immortality; however, that’s not the only reason to do it. People have tried for hundreds of years to put their brains in books. Now imagine if we could literally do it: make a complex AI mind that is all the knowledge of our ancestors merged into one databank that can be queried at will.

Consciousness or not. Death or not. It does not really matter much. Sure, I wouldn’t kill myself to upload my brain. However, if we find a non-destructive way to do it, I’d be pleased if my knowledge could be used by every human. Or posthumans. Because if all human knowledge could be accessed fast enough, I just can’t imagine the kind of progress we’d see.


You bring up some excellent points. Just to be clear, my main contention with Gubrud is less the conclusion that he arrives at, and more the way he arrives at that conclusion. (Note: it is possible to hold the correct view about a subject X but for the wrong reasons, just as it is possible to be epistemically justified, given the available evidence at some time t, in holding an incorrect view about X. Obviously, science aims for both truth and justification.)

So, for example, Gubrud insists that “the self = a pattern” is equivalent to “the self = a soul.” (See a few comments above this one.) But this is absolutely false(!), and any academically respectable critique of uploading ought to distinguish between these two theories—if for no other reason than because they are *distinct*!

(Incidentally, my argument “if ‘H’-uploading is possible, then self-uploading ought to be possible, on the assumption (which I reject) that the self is nothing more than a pattern,” is an attempt to demonstrate clearly and simply how transferring one’s *self* from one material substrate to another is possible without introducing any immaterial entities. That is to say, if no such immaterial entities are needed for “H”-uploading, then no such entities are needed for uploading one’s self either, since both “H"s and selves (at least according to one theory) are *constituted* by mere patterns supervening on the physical.)

As for your comment: “... isn’t there only one really important question in the whole uploading issue? Namely, whether or not you will be alive or dead at the end of it? And have any of the people in this debate actually disagreed on that point?”

As I state in the article, I think patternism fails as a theory of the self. But there are different ways for theories to fail: they could be simply incorrect (in which case they ought to be discarded), or they could be merely incomplete (in which case they ought to be modified). Patternism is not, in my view, incorrect, but it is nontrivially incomplete. There also needs to be, it seems, something like “spatiotemporal continuity,” in addition to pattern preservation. (This is what Schneider tentatively argues in the paper linked above.)

I then claimed above that an upload would be *conscious*, but it wouldn’t be *me*. This is a very coarse-grained and misleading statement, though; I should have been more precise in my language. Why? Because, given spatiotemporal continuity, there are some *types* of uploading that seem to preserve such continuity. Compare the following scenarios:

(1) You scan your brain using some advanced scanning technology, thus producing a mind-clone in a computer simulation. (This is the scenario of Mindscan.) Here we have psychological but not spatiotemporal continuity, which is why, for most people at least, our intuitions seem to push us in the direction of denying that the uploaded mind is the same as the original. (After all, it is numerically distinct even though qualitatively identical; this is what Summerspeaker just gestured at above.)

(2) You undergo a decade-long procedure that involves gradually replacing one neuron at a time with a functional isomorph: an embedded unit that is materially distinguishable but functionally indistinguishable from your neurons. (This is a scenario similar to the one described by Moravec.) In this case, given that *both* the psychological and spatiotemporal continuity criteria are, at least ostensibly, satisfied, it seems that the resulting upload would indeed be the same as the original.
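The functional-isomorph scenario in (2) can be made concrete with a toy model. The sketch below is a hypothetical illustration, not anyone's proposed method: a tiny fixed-weight network of "biological" units is converted one unit at a time to "artificial" units that are, by construction, functionally identical, and the network's input-output behavior is checked at every step. All names (`bio_neuron`, `artificial_neuron`, `run`) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "brain": eight neurons with fixed random weights and a tanh response.
W = rng.normal(size=(8, 8))

def bio_neuron(x, w):
    """Stand-in for a biological neuron's input-output response."""
    return np.tanh(w @ x)

def artificial_neuron(x, w):
    """A functional isomorph: different 'substrate', same input-output map."""
    return np.tanh(w @ x)  # functionally indistinguishable by construction

def run(units, x):
    """Apply each neuron (biological or artificial) to the shared input."""
    return np.array([f(x, W[i]) for i, f in enumerate(units)])

units = [bio_neuron] * 8
x = rng.normal(size=8)
baseline = run(units, x)

# Replace one neuron at a time; behavior is preserved at every step.
for i in range(8):
    units[i] = artificial_neuron
    assert np.allclose(run(units, x), baseline)

print("all replacement steps preserved the network's behavior")
```

Of course, the toy only captures the functional-preservation premise; whether step-by-step functional identity settles the *identity* question is exactly what the thread is disputing.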

Clearly, though, more work on these issues is needed. Ultimately, the debate may boil down to (as some philosophers have put it) “the dull thud of conflicting intuitions.” But, in the meantime, we ought to be clear about the different theories of the self and of mentality, and avoid conflating them as Gubrud insists on doing.

Summerspeaker: I’m not sure I agree with you or Simon that it is so strange to wonder about whether or not a procedure involving your brain will kill you or not. I feel like I could certainly *imagine* the possibility of a ‘copy’ of myself claiming to be me, which would be quite frightening even if I were still alive to see it. The prospect of *not* being around to see it is a bit more frightening still! Now the fact that I can imagine this is of course not the best kind of evidence or argument, but it is certainly enough to motivate me to think quite carefully about this particular aspect of such a procedure.
I must confess that I am still quite confused by the deflationary attitude taken by so many transhumanists towards what seems to me to be quite literally a life or death question. I suppose I might just be an unregenerate bio-chauvinist, but being alive remains higher on my list of priorities than contributing a hyper-accurate “Life of Brendan” volume to the vast AI data-banks of the future.

Hmmm.. do you guys think that transhumanists want to organize a mass suicide that involves killing themselves, slicing their brains, and uploading into computers?

A procedure for uploading is only viable if it is non-destructive or works postmortem. Nobody would voluntarily kill himself with the goal of uploading.

Am I missing something?

My view is thus.

I do not think we will end up becoming “brains in a box,” though I do believe we will create “backup systems” which will enable us to survive catastrophic body loss.  The reason is that we exist in a physical universe, and I believe we will continue to exist in a physical, non-virtual universe. As such, I believe we will continue to possess “organic” bodies in that they will be primarily composed of CHON, the same elements we currently use, but those elements will be structured through highly advanced nano-engineering. This allows such bodies to self-repair under the widest range of possible conditions using the most common elements in the universe.

However, such a body will be linked to a computer system which will record every detail up to the millisecond of catastrophic body loss, to enable recreation of the entity up to just prior to the point of cessation.

We don’t yet know precisely how much of that pattern needs to be maintained, but we will eventually discover how much fine-grained detail is needed. Like Philippe, I believe that full spatiotemporal continuity needs to be maintained; but if such continuity is kept, then I have every reason to see my “respawned” self as “me”.

Nor do I suffer from the worry that multiple copies of “me” could exist, as I believe the mind is easily capable of the plasticity needed to enable those multiples to be re-merged, so that “I” could maintain the continuity of “both” of us, as well as the willingness to allow a “me” to decide that it does not wish to re-merge but to assume an independent existence.

However I do prefer the “transition” mode as opposed to fast destructive upload, simply to maintain my personal continuity through the process.

I’m not sure I agree with you or Simon that it is so strange to wonder about whether or not a procedure involving your brain will kill you or not.

It’s not strange to wonder about it, but given the questionable nature of identity in the first place, it may be impossible to resolve even if and when the technology becomes available. Personally, I fear suffering rather than the state of being dead. If I were to find myself in line for a destructive upload, my main concerns would be whether the process had any chance of hurting and how those close to me felt about it. If everyone I knew accepted that consciousness would transfer and there wasn’t any possibility of waking up to feel my brain being sliced into pieces, I could hardly complain. Now, I’d opt against the process if I could and certainly wouldn’t seek it out, but I can understand why people would feel differently.

Summerspeaker wrote:

“After a destructive upload, there’s just one entity left to claim the mantle of identity.”

Why only one?  If you can make one copy, you can make another, as many as you like.

Taking that road creates additional complications. Identity obviously fractures with the creation of an arbitrary number of copies. The particularly egoistical might consider this a positive outcome, however.

A followup on the main topic:

The soul or true identity is of course what must be saved in order for uploading to work.  The task of magic is to persuade or create the illusion, enabling make-believe, that miracles have been performed. 

Therefore, to save the dying, uploaders must merely create the illusion of conditions implying the transfer and preservation of true identity, of the token of true identity held by the mind being swayed by the magic. 

If they persuade you that your true identity is a pattern, or that it is “the mind” and that these are independent of “substrates,” they can describe how to transfer a pattern, even while “the mind” continues to be aware, thus seeming to prove the mind’s transferability.  And is my mind, my thinking and feeling and willing, not what I want to save from destruction?

Thus, “the mind” or “consciousness” are used synonymously with “the soul” or as true identity, the true me, the core of my being that must be preserved, that wishes to live, that fears to die, that the shaman must save somehow.

The claim of “slow uploading” is that it proves “the mind” is saved (and transferred).  However, it proves no such thing, as is shown by the possibility that the mind was cloned at the same time.  It is consistent with the fact that death does not require our knowledge or consent.

It needs to be said also that perfect simulation of natural neurons by artificial ones is very unlikely to work as advertised. 

In fact, exact simulation of natural neurons is probably a very hard and impractical, inefficient thing to do.  It is of philosophical interest that even a very sophisticated simulation of neuron growth and response would inevitably deviate substantially, in the long run, from what the natural neuron would have done.  Accordingly, a simple axon-synapse-dendrite network model, which is what one might reasonably hope to pick up from 3D scanning of brain tissue, would probably be strangely “tinny” in its behavior, and possibly run off the tracks in short order due to its lacking subtler cellular mechanisms that underlie learning and the maintenance of network balances.
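The divergence claim above can be illustrated with a stock sensitivity example. The sketch below is hypothetical and deliberately minimal: it uses the logistic map as a stand-in for a neuron's nonlinear dynamics (not a biophysical model), and shows that a tiny modeling error of 1e-10 grows until the two trajectories are entirely uncorrelated, the toy analogue of a coarse simulation "running off the tracks."

```python
def trajectory(x0, r=3.9, steps=60):
    """Iterate the logistic map, a minimal stand-in for nonlinear neural dynamics."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

natural = trajectory(0.4)
simulated = trajectory(0.4 + 1e-10)  # a tiny modeling error in the initial state

# Find the first step where the two trajectories differ by more than 0.1.
drift = next(t for t, (a, b) in enumerate(zip(natural, simulated))
             if abs(a - b) > 0.1)
print(f"trajectories agree early on, then diverge past 0.1 by step {drift}")
```

Whether real cortical tissue is this sensitive in the relevant respects is an empirical question, but the example shows why "deviation in the long run" is the default expectation for any imperfect model of chaotic dynamics.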

That said, a leading candidate for AGI is new chip technology that is neuromorphic at a low level, using memristors or similar components.  Features of this architecture would be noisy signaling and inherent parallelism and redundancy, but relatively sparse coding of distinct objects. 

It is probably much easier to implement an artificial brain that steals a few principles from the human brain and implements them on faster and possibly denser hardware, than it would be to try to emulate an actual human brain, let alone faithfully.

So much for “merging with technology.”  The destroyer (AGI) will arrive well before the savior (uploading), if the latter would ever arrive at all.


I can see your point now. It still doesn’t explain why transhumanism won’t work. It’s working right now. It doesn’t make sense to me.

“Transhumanism is an international intellectual and cultural movement supporting the use of science and technology to improve human mental and physical characteristics and capacities. The movement regards aspects of the human condition, such as disability, suffering, disease, aging, and involuntary death as unnecessary and undesirable. Transhumanists look to biotechnologies and other emerging technologies for these purposes. Dangers, as well as benefits, are also of concern to the transhumanist movement.” - Wikipedia

However, it proves no such thing, as is shown by the possibility that the mind was cloned at the same time.

The mind could be cloned during the normal flow of atoms as well. In fact, nanobots might be able to take the very material discarded and form an identical brain. Who’s the original and who’s the copy then?

The destroyer (AGI) will arrive well before the savior (uploading), if the latter would ever arrive at all.

The destroyer, huh? Do you suspect, like Hugo de Garis, that the artilects will wipe us out? It’s a plausible enough fear, I guess. If so, how do you think we can prevent this from happening?

The point of the article being criticized is valid. If mind uploading is to be possible, the brain and the mind must be separable. Destructive “simulation” (like in Transcendence) and “copying” methods of uploading are pointless pursuits because they obviously destroy the mind. Only gradual neural replacement shows promise as a potential method for preserving the individual’s existence.
This process appears so complex that I cannot imagine it happening soon enough to enable any of us to live indefinitely, but who knows. We need to identify what consciousness is. My guess is that our minds “propagate” continuously in our brain like a wave in water. Breaking the continuous propagation of this wave (by destroying the brain or shutting it down completely) will result in death, and any subsequent waves will be NEW minds. If we integrate the brain with an artificial neural system, it is possible that our “wave” could extend beyond our brain and we would continue to live after our biological brain is destroyed.
It should be noted that this theory has the implication that minds can be divided and combined, and that the “true you” can survive the destruction of all but a few working neurons, although your memories, knowledge, and sense of self would not survive such a catastrophe. It follows that your memories and self-perception “patterns” are NOT “you”; they are just information that you access and use.

This true identity is a fiction of our minds, a mental token we apply to the images of objects in our senses, and in which we recall each thing that we have identified.  In the case of people, it is the idea of the soul.  If people do have souls, as in the traditional religious view, then the soul, whatever it may be, is certainly the carrier of true identity.

Can’t read all the comments and still have time to sleep. If you’ve read the following before in so many words, apologies. It’s not either/or: not either no identity can be preserved or the entire identity can be retained. It could be that 23.55912 percent of identity can be preserved. Or 49.0093312 percent. Or 61.49711. Or 80.462001. Or 98.00007022. Etc.

“We” is a problem. First off, when communicating with solipsists, libertarians, and others who object to collectivism.

Second, “we” in the present tense presents no great difficulties, but in the future tense it most certainly does. For starters, in the context of transhumanism it presumes everyone wants to live indefinitely, something we shouldn’t take for granted. And to the majority, ‘immortalism’ means God/Jesus/reincarnation, etc.
Super-longevity, to the majority, conjures up someone like, say, Methuselah.

The rest of it you have covered above and elsewhere.

Singularity happens in a couple of years.

That’s good to hear, Simon.
