As a critical posthumanist (with speculative leanings), I have always been a little leery of transhumanism in general. Much has been written on the difference between the two, and one of the best and most succinct explanations can be found in John Danaher’s “Humanism, Transhumanism, and Speculative Posthumanism.”
Very briefly, I believe it boils down to a question of attention: a posthumanist, whether critical or speculative, focuses his or her attention on subjectivity, investigating, critiquing, and sometimes even rejecting the notion of a homuncular self or consciousness, and the assumption that the self is some kind of modular component of our embodiment.
... knowing the force and action of fire, water, air the stars, the heavens, and all the other bodies that surround us, as distinctly as we know the various crafts of our artisans, we might also apply them in the same way to all the uses to which they are adapted, and thus render ourselves the lords and possessors of nature. And this is a result to be desired, not only in order to the invention of an infinity of arts, by which we might be enabled to enjoy without any trouble the fruits of the earth, and all its comforts, but also and especially for the preservation of health, which is without doubt, of all the blessings of this life, the first and fundamental one; for the mind is so intimately dependent upon the condition and relation of the organs of the body, that if any means can ever be found to render men wiser and more ingenious than hitherto, I believe that it is in medicine they must be sought for. It is true that the science of medicine, as it now exists, contains few things whose utility is very remarkable: but without any wish to depreciate it, I am confident that there is no one, even among those whose profession it is, who does not admit that all at present known in it is almost nothing in comparison of what remains to be discovered; and that we could free ourselves from an infinity of maladies of body as well as of mind, and perhaps also even from the debility of age, if we had sufficiently ample knowledge of their causes, and of all the remedies provided for us by nature.
- Rene Descartes, Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences, 1637
Being a critical posthumanist does make me hyper-aware of the implications of Descartes’ ideas presented above in relation to transhumanism. Admittedly, Danaher’s statement “Critical post humanists often scoff at certain transhumanist projects, like mind uploading, on the grounds that such projects implicitly assume the false Cartesian view” hit close to home, because I am guilty of the occasional scoff.
But there really is much more to transhumanism than sci-fi iterations of mind uploading and AIs taking over the world, just as there is more to Descartes than his elevation, reification, and privileging of consciousness. From my critical posthumanist perspective, the hardest pill to swallow with Descartes wasn’t necessarily the model of consciousness he proposed. It was the way that model has been taken so literally—as a fundamental fact—that has been one of the deeper issues driving me philosophically.
But, as I’ve often told my students, there’s more to Descartes than that. Examining Descartes’s model as the metaphor it is gives us a more culturally based context for his work, and a better understanding of its underlying ethics. I think a similar approach can be applied to transhumanism, especially in light of some of the different positions articulated in Pellissier’s “Transhumanism: There are [at least] ten different philosophical categories; which one(s) are you?”
Rene Descartes’s faith in the ability of human reason to render us “lords and possessors of nature” through an “invention of an infinity of arts” is, to my mind, one of the foundational philosophical beliefs of transhumanism. And his later statement, that “all at present known in it is almost nothing in comparison of what remains to be discovered,” becomes its driving conceit: the promise that answers could be found which could, potentially, free humanity from “an infinity of maladies of bodies as well as of mind, and perhaps the debility of age.” It follows that whatever humanity can create to help us unlock those secrets is thus a product of human reason. We create the things we need that help us to uncover “what remains to be discovered.”
But this ode to human endeavor eclipses the point of those discoveries: “the preservation of health” which is “first and fundamental ... for the mind is so intimately dependent on the organs of the body, that if any means can ever be found to render men wiser and more ingenious ... I believe that it is in medicine that it should be sought for.”
Descartes sees the easing of human suffering as one of the main objectives of scientific endeavor. But this aspect of his philosophy is often eclipsed by the seemingly infinite “secrets of nature” that science might uncover. As is the case with certain interpretations of the transhumanist movement, the promise of what can be learned often eclipses the reasons why we want to learn it. And that promise can take on mythic properties. Even though progress is its own promise, a transhuman progress can become an eschatological one, caught between a Scylla of extreme interpretations of “singularitarian” messianism and a Charybdis of similarly extreme interpretations of “survivalist transhuman” immortality.
Both are characterized by a governing mythos—or set of beliefs—that is technoprogressive by nature but risks fundamentalism in practice, especially if we lose sight of a very important aspect of technoprogressivism itself: “an insistence that technological progress needs to be wedded to, and depends on, political progress, and that neither are inevitable” (Hughes 2010, emphasis added). Critical awareness of the limits of transhumanism is similar to having a critical awareness of any functional myth. One does not have to take the Santa Claus or religious myths literally to celebrate Christmas; instead one can understand the very man-made meaning behind the holiday and the metaphors therein, and choose to express or follow that particular ethical framework accordingly, very much aware that it is an ethical framework that can be adjusted or rejected as needed.
Transhuman fundamentalism occurs when critical awareness that progress is not inevitable is replaced by an absolute faith and/or literal interpretation that—either by human endeavor or via artificial intelligence—technology will advance to a point where all of humanity’s problems, including death, will be solved. Hughes points out this tension: “Today transhumanists are torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities, and their rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future” (2010).
Transhuman fundamentalism characterized by uncritical inevitabilism would interpret progress as “fact.” That is to say, that progress will happen and is imminent. By reifying (and eventually deifying) progress, transhuman fundamentalism would actually forfeit any claim to progress by severing it from its human origins. Like a god that is created by humans out of a very human need, but whose origins are then forgotten, progress stands as an entity separate from humanity, taking on a multitude of characteristics rendering it ubiquitous and omnipotent: progress can and will take place. It has and it always will, regardless of human existence; humanity can choose to unite with it, or find itself doomed.
Evidence for the inevitability of progress comes by way of pointing out specific scientific advancements and then falling back on the speculation that advancement x will lead to development y, a pattern outlined in Verdoux’s “historical” critique of faith in progress, which targets the “‘progressionist illusion’ that history is in fact a record of improvement” (2009). Kevin Warwick has used rat neurons as CPUs for his little rolling robots: clearly, we will be able to upload our minds. I think of this as a not-so-distant cousin of the intelligent design argument for the existence of God, whose proponents point to the complexity of various organic (and non-organic) systems as evidence that a designer of some kind must exist.
Transhuman fundamentalist positions point to small (but significant) technological advancements as evidence that an AI will rise (Singularitarianism) or that death itself will be vanquished (Survivalist Transhumanism). It is important to note that neither position is in itself fundamentalist in nature. But I do think that these two particular frameworks lend themselves more easily to a fundamentalist interpretation, due to their more entrenched reliance on Cartesian subjectivity, enlightenment teleologies, and eschatological religious overtones.
Singularitarianism, according to Pellissier, “believes the transition to a posthuman will be a sudden event in the ‘medium future’—a Technological Singularity created by runaway machine superintelligence.” Pushed to a fundamentalist extreme, the question for the singularitarian is: when the posthuman rapture happens, will we be saved by a techno-messiah, or burned by a technological antichrist? Both arise by the force of their own wills. But if we look behind the curtain of the great and powerful singularity, we see a very human teleology. The technology from which the singularity is born is the product of human effort. Subconsciously, the singularity is not so much a warning as it is a speculative indulgence of the power of human progress: the creation of consciousness in a machine. And though singularitarianism may call it “machine consciousness,” the implication that such an intelligence would “choose” to either help or hinder humanity always already implies a very anthropomorphic consciousness.
Furthermore, we will arrive at this moment via some major scientific advancement that always seems to be between 20 and 100 years away, such as “computronium,” or programmable matter. This molecularly engineered material, according to more Kurzweilian perspectives, will allow us to convert parts of the universe into cosmic supercomputers which will solve our problems for us and unlock even more secrets of the universe. While the idea of programmable matter is not necessarily unrealistic, its mythical qualities (somewhere between a kind of “singularity adamantium” and a “philosopher’s techno-stone”) promise the transubstantiation of matter toward unlimited, cosmic computing, thus opening up even more possibilities for progress. The “promise” is for progress itself: that unlocking certain mysteries will provide an infinite number of new mysteries to be solved.
Survivalist Transhumanism can take a similar path in terms of technological inevitabilism, but pushed toward a fundamentalist extreme, it awaits a more Nietzschean posthuman rapture. According to Pellissier, Survivalist Transhumanism “espouses radical life extension as the most important goal of transhumanism.” In general, the movement seems to be awaiting advancements in human augmentation which are always already just out of reach but will (eventually) overcome death and allow the self (whether bioengineered or uploaded to a new material—or immaterial—substrate) to survive indefinitely.
Survivalist transhumanism with a more fundamentalist flavor would push to bring the Nietzschean Übermensch into being—literally—despite the fact that Nietzsche’s Übermensch functions as an ideal toward which humans should strive. He functions as a metaphor for living one’s life fully, not subject to a “slave morality” that is governed by fear and by placing one’s trust in mythological constructions treated as real artifacts. Even more ironic is the fact that the Übermensch is not immortal and is at peace with his imminent death. Literal interpretations of the Übermensch would characterize the master-morality human as overcoming mortality itself, since death is the ultimate check on the individual’s development. Living forever, from a more fundamentalist perspective, would provide infinite time to uncover infinite possibilities and thus make infinite progress.
Think of all the things we could do, build, and discover, some might say. I agree. Immortality would give us time—literally. Without the horizon of death as a parameter of our lives, we would—eventually—overcome a way of looking at the universe that has been a defining characteristic of humanity since the first species of hominids with the capacity to speculate pondered death.
But in that speculation is also a promise. The promise that conquering death would allow us to reap the fruits of the inevitable and inexorable progression of technology. Like a child who really wants to “stay up late,” there is a curiosity about what happens after humanity’s bedtime. Is the darkness outside her window any different after bedtime than it is at 9pm? What lies beyond the boundaries of late-night broadcast television? How far beyond can she push until she reaches the loops of infomercials, or the re-runs of the shows that were on hours prior? And years later, when she pulls her first all-nighter, and she sees the darkness ebb and the dawn slowly but surely rise just barely within her perception, what will she have learned?
It’s not that the darkness holds unknown things. To her, it promises things to be known. She doesn’t know what she will discover there until she goes through it. Immortality and death metaphorically function in the same way: Those who believe that immortality is possible via radical life extension believe that the real benefits of immortality will show themselves once immortality is reached and we have the proper perspective from which to know the world differently. To me, this sounds a lot like Heaven: We don’t know what’s there but we know it’s really, really good. In the words of Laurie Anderson: “Paradise is exactly like where you are right now, only much, much better.” A survivalist transhuman fundamentalist version might read something like “Being immortal is exactly like being mortal, only much, much better.”
Does this mean we should scoff at the idea of radical life extension? At the singularity and its computronium wonderfulness? Absolutely not. But the technoprogressivism at the heart of transhumanism need not be so literal. When one understands a myth as that—a set of governing beliefs—transhumanism itself can stay true to the often-eclipsed aspect of its Cartesian, enlightenment roots: the easing of human suffering. If we look at transhumanism as a functional myth, adhering to its core technoprogressive foundations, not only do we have a potential model for human progress, but we also have an ethical structure by which to advance that movement. The diversity of transhuman views provides several different paths of progress.
Transhumanism has at its core a technoprogressivism that even a critical posthumanist like me can get behind. If I am a technoprogressivist, then I do believe in certain aspects of the promise of technology. I do believe that humanity has the capacity to better itself and do incredible things through technological means. Furthermore, I do feel that we are in the infancy of our knowledge of how technological systems are to be responsibly used. It is a technoprogressivist’s responsibility to mitigate myopic visions of the future—including those visions that uncritically mythologize the singularity or immortality itself as an inevitability.
To me it becomes a question of exactly what the transhumanist him or herself is looking for from technology, and how he or she conceptualizes the “human” in those scenarios. The reason I still call myself a posthumanist is that I think we have yet to truly free ourselves of antiquated notions of subjectivity itself. The singularity to me seems as if it will always be a Cartesian one: a “thing that thinks” and is aware of itself thinking and therefore is sentient. Perhaps the reason we have not reached a singularity yet is that we’re approaching the subject and volition from the wrong direction.
To a lesser extent, I think that immortality narratives are mired in rehashed religious eschatologies where “heaven” is simply replaced with “immortality.” As for radical life extension, what are we trying to extend? Are we tying “life” simply to the ability to be aware of ourselves being aware that we are alive? Or are we looking at the quality of the extended life we might achieve? I do think that we may extend the human lifespan to well over a century. What will be the costs? And what will be the benefits?
Life extension is not the same as life enrichment. Overcoming death is not the same as overcoming suffering. If we can combat disease, and mitigate the physical and mental degradation which characterize aging, thus leading to an extended life-span free of pain and mental deterioration, then so be it. However, easing suffering and living forever are two very different things. Some might say that the easing of suffering is simply “understood” within the overall goals of immortality, but I don’t think it is.
Given all of the different positions outlined in Pellissier’s article, “cosmopolitan transhumanism” seems to make the most sense to me. Coined by Steven Umbrello, this category combines the philosophical movement of cosmopolitanism with transhumanism, creating a technoprogressive philosophy that can “increase empathy, compassion, and the unified progress of humanity to become something greater than it currently is. The exponential advancement of technology is relentless, it can prove to be either destructive or beneficial to the human race.” This advancement can only be achieved, Umbrello maintains, via an abandonment of “nationalistic, patriotic, and geopolitical allegiances in favor [of] global citizenship that fosters cooperation and mutually beneficial progress.”
Under that classification, I can call myself a transhumanist. A commitment to enriching life rather than simply creating it (as an AI) or extending it (via radical life extension) should ethically shape the leading edge of a technoprogressive movement, if only to break a potential cycle of polemics and politicization internal and external to transhumanism itself. Perhaps I’ve read too many comic books and have too much of a love for superheroes, but in today’s political and cultural climate, a radical position on either side can unfortunately provoke an equally radical opposite.
If technoprogressivism rises under fundamentalist singularitarian or survivalist transhuman banners, equally passionate luddite, anti-technological positions could potentially rise and do real damage. Speaking as a US citizen, I am constantly aghast at the overall ignorance that people have toward science and the ways in which the very concept of “scientific theory” and the very definition of what a “fact” is have been skewed and distorted. If we have segments of the population who still believe that vaccines cause autism or don’t believe in evolution, do we really think that a movement toward an artificial general intelligence will be taken well?
Transhumanism, specifically the cosmopolitan kind, provides a needed balance of progress and awareness. We can and should strive toward aspects of singularitarianism and survivalist transhumanism, but as the metaphors and ideals they actually are.
Image #1: “Eternal Singularity” by Mark Lloyd
Anderson, Laurie. 1986. “Language is a Virus.” Home of the Brave.
Descartes, Rene. 1637. Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences.
Hughes, James. 2010. “Problems of Transhumanism: Belief in Progress vs. Rational Uncertainty.” (IEET.org).
Pellissier, Hank. 2015. “Transhumanism: There Are [at Least] Ten Different Philosophical Categories; Which One(s) Are You?” (IEET.org)
Verdoux, Philippe. 2009. “Transhumanism, Progress and the Future.” Journal of Evolution and Technology 20(2):49-69.