After the publication of my review of Nassim Taleb’s latest book Antifragile, numerous comments were made by Taleb’s followers – many of them derisive – on Taleb’s Facebook page. (You can see a screenshot of these comments here.) While I will only delve into a few of the specific comments in this article, I consider it important to distill the common misconceptions that motivate them. Transhumanism is often misunderstood and maligned by those who are ignorant of it – or those who were exposed solely to detractors such as John Gray, Leon Kass, and Taleb himself. This essay will serve to correct these misconceptions in a concise fashion. Those who still wish to criticize transhumanism should at least understand what they are criticizing and present arguments against the real ideas, rather than straw men constructed by the opponents of radical technological progress.
Misconception #1: Transhumanism is a religion.
Transhumanism does not posit the existence of any deity or other supernatural entity (though some transhumanists are religious independently of their transhumanism), nor does transhumanism hold a faith (belief without evidence) in any phenomenon, event, or outcome. Transhumanists certainly hope that technology will advance to radically improve human opportunities, abilities, and longevity – but this is a hope founded in the historical evidence of technological progress to date, and the logical extrapolation of such progress. Moreover, this is a contingent hope. Insofar as the future is unknowable, the exact trajectory of progress is difficult to predict, to say the least. Furthermore, the speed of progress depends on the skill, devotion, and liberty of the people involved in bringing it about. Some societal and political climates are more conducive to progress than others. Transhumanism does not rely on prophecy or mystical fiat. It merely posits a feasible and desirable future of radical technological progress and exhorts us to help achieve it. Some may claim that transhumanism is a religion that worships man – but that would distort the term “religion” so far from its original meaning as to render it vacuous and merely a pejorative used to label whatever system of thinking one dislikes. Besides, those who make that allegation would probably perceive a mere semantic quibble between seeking man’s advancement and worshipping him. But, irrespective of semantics, the facts do not support the view that transhumanism is a religion. After all, transhumanists do not spend their Sunday mornings singing songs and chanting praises to the Glory of Man.
Misconception #2: Transhumanism is a cult.
A cult, unlike a broader philosophy or religion, is characterized by extreme insularity and dependence on a closely controlling hierarchy of leaders. Transhumanism has neither element. Transhumanists are not urged to disassociate themselves from the wider world; indeed, they are frequently involved in advanced research, cutting-edge invention, and prominent activism. Furthermore, transhumanism does not have a hierarchy or leaders who demand obedience. Cosmopolitanism is a common trait among transhumanists. Respected thinkers, such as Ray Kurzweil, Max More, and Aubrey de Grey, are open to discussion and debate and have had interesting differences in their own views of the future. A still highly relevant conversation from 2002, "Max More and Ray Kurzweil on the Singularity", highlights the sophisticated and tolerant way in which respected transhumanists compare and contrast their individual outlooks and attempt to make progress in their understanding. Any transhumanist is free to criticize any other transhumanist and to adopt some of another transhumanist’s ideas while rejecting others. Because transhumanism characterizes a loose network of thinkers and ideas, there is plenty of room for heterogeneity and intellectual evolution. As Max More put it in the “Principles of Extropy, v. 3.11”, “the world does not need another totalistic dogma.” Transhumanism does not supplant all other aspects of an individual’s life and can coexist with numerous other interests, persuasions, personal relationships, and occupations.
Misconception #3: Transhumanists want to destroy humanity. Why else would they use terms such as “posthuman” and “postbiological”?
Transhumanists do not wish to destroy any human. In fact, we want to prolong the lives of as many people as possible, for as long as possible! The terms “transhuman” and “posthuman” refer to overcoming the historical limitations and failure modes of human beings – the precise vulnerabilities that have rendered life, in Thomas Hobbes’s words, “nasty, brutish, and short” for most of our species’ past. A species that transcends biology will continue to have biological elements. Indeed, my personal preference in such a future would be to retain all of my existing healthy biological capacities, but also to supplement them with other biological and non-biological enhancements that would greatly extend the length and quality of my life. No transhumanist wants human beings to die out and be replaced by intelligent machines, and every transhumanist wants today’s humans to survive to benefit from future technologies. Transhumanists who advocate the development of powerful artificial intelligence (AI) support either (i) integration of human beings with AI components or (ii) the harmonious coexistence of enhanced humans and autonomous AI entities. Even those transhumanists who advocate “mind backups” or “mind uploading” in an electronic medium (I am not one of them, as I explain here) do not wish for their biological existences to be intentionally destroyed. They conceive of mind uploads as contingency plans in case their biological bodies perish.
Even the “artilect war” anticipated by more pessimistic transhumanists such as Hugo de Garis is greatly misunderstood. Such a war, if it arises, would not come from advanced technology, but rather from reactionaries attempting to forcibly suppress technological advances and persecute their advocates. Most transhumanists do not consider this scenario to be likely in any event. More probable are lower-level protracted cultural disputes and clashes over particular technological developments.
Misconception #4: “A global theocracy envisioned by Moonies or the Taliban would be preferable to the kind of future these traitors to the human species have their hearts set on, because even the most joyless existence is preferable to oblivion.”
The above was an actual comment on the Taleb Facebook thread. It is astonishing that anyone would consider theocratic oppression preferable to radical life extension, universal abundance, ever-expanding knowledge of macroscopic and microscopic realms, exploration of the universe, and the liberation of individuals from historical chains of oppression and parasitism. This misconception is fueled by the strange notion that transhumanists (or technological progress in general) will destroy us all – as exemplified by the “Terminator” scenario of hostile AI or the “gray goo” scenario of nanotechnology run amok. Yet all of the apocalyptic scenarios involving future technology lack the safeguards that elementary common sense would introduce. Furthermore, they lack the recognition that incentives generated by market forces, as well as the sheer numerical and intellectual superiority of the careful scientists over the rogues, would always tip the scales greatly in favor of the defenses against existential risk. As I explain in “Technology as the Solution to Existential Risk” and “Non-Apocalypse, Existential Risk, and Why Humanity Will Prevail”, the greatest existential risks have either always been with us (e.g., the risk of an asteroid impact with Earth) or are in humanity’s past (e.g., the risk of a nuclear holocaust annihilating civilization). Technology is the solution to such existential risks. Indeed, the greatest existential risk is fear of technology, which can retard or outright thwart the solutions to the perils that may, in the status quo, doom us as a species. As an example, Mark Waser has written an excellent commentary on the “inconvenient fact that not developing AI (in a timely fashion) to help mitigate other existential risks is itself likely to lead to a substantially increased existential risk”.
Misconception #5: Transhumanists want to turn people into the Borg from Star Trek.
The Borg are the epitome of a collectivistic society, where each individual is a cog in the giant species machine. Most transhumanists are ethical individualists, and even those who have communitarian leanings still greatly respect individual differences and promote individual flourishing and opportunity. Whatever their positions on the proper role of government in society might be, all transhumanists agree that individuals should not be destroyed or absorbed into a collective where they lose their personality and unique intellectual attributes. Even those transhumanists who wish for direct sharing of perceptions and information among individual minds do not advocate the elimination of individuality. Rather, their view might better be thought of as multiple puzzle pieces being joined but remaining capable of full separation and autonomous, unimpaired function.
My own attraction to transhumanism is precisely due to its possibilities for preserving individuals qua individuals and avoiding the loss of the precious internal universe of each person. As I expressed in Part 1 of my “Eliminating Death” video series, death is a horrendous waste of irreplaceable human talents, ideas, memories, skills, and direct experiences of the world. Just as transhumanists would recoil at the absorption of humankind into the Borg, so they rightly denounce the dissolution of individuality that presently occurs with the oblivion known as death.
Misconception #6: Transhumanists usually portray themselves “like robotic, anime-like characters”.
That depends on the transhumanist in question. Personally, I portray myself as me, wearing a suit and tie (which Taleb and his followers dislike just as much – but that is their loss). Furthermore, I see nothing robotic or anime-like about the public personas of Ray Kurzweil, Aubrey de Grey, or Max More, either.
Misconception #7: “Transhumanism is attracting devotees of a frighteningly high scientific caliber, morally retarded geniuses who just might be able to develop the humanity-obliterating technology they now merely fantasize about. It's a lot like a Heaven's Gate cult, but with prestigious degrees in physics and engineering, many millions more in financial backing, a growing foothold in mainstream culture, a long view of implementing their plan, and a death wish that extends to the whole human race not just themselves.”
This is another statement on the Taleb Facebook thread. Ironically, the commenter is asserting that the transhumanists, who support the indefinite lengthening of human life, have a “death wish” and are “morally retarded”, while he – who opposes the technological progress needed to preserve us from the abyss of oblivion – apparently considers himself a champion of morality and a supporter of life. If ever there was an inversion of characterizations, this is it. At least the commenter acknowledges the strong technical skills of many transhumanists – but calling them “morally retarded” presupposes a counter-morality of death that should rightly be overcome and challenged, lest it sentence each of us to death. The Orwellian mindset that “evil is good” and “death is life” should be called out for the destructive and dangerous morass of contradictions that it is. Moreover, the commenter provides no evidence that any transhumanist wants to develop “humanity-obliterating technologies” or that the obliteration of humanity is even a remote risk from the technologies that transhumanists do advocate.
Misconception #8: Transhumanism is wrong because life would have no meaning without death.
Asserting that only death can give life meaning is another bizarre contradiction, and, moreover, a claim that life can have no intrinsic value or meaning qua life. It is sad indeed to think that some people do not see how they could enjoy life, pursue goals, and accumulate values in the absence of the imminent threat of their own oblivion. Clearly, this is a sign of a lack of creativity and appreciation for the wonderful fact that we are alive. I delve into this matter extensively in my “Eliminating Death” video series. Part 3 discusses how indefinite life extension leaves no room for boredom because the possibilities for action and entertainment increase in an accelerating manner. Parts 8 and 9 refute the premise that death gives motivation and a “sense of urgency” and make the opposite case – that indefinite longevity spurs people to action by making it possible to attain vast benefits over longer timeframes. Indefinite life extension would enable people to consider the longer-term consequences of their actions. On the other hand, in the status quo, death serves as the great de-motivator of meaningful human endeavors.
Misconception #9: Removing death is like removing volatility, which “fragilizes the system”.
This sentiment was an extrapolation by a commenter on Taleb’s ideas in Antifragile. It rests on fundamentally collectivistic premises – that the “volatility” of individual death can be justified if it somehow supports a “greater whole”. (Who is advocating the sacrifice of the individual to the collective now?) The fallacy here is to presuppose that the “greater whole” has value in and of itself, apart from the individuals comprising it. An individualist view of ethics and of society holds the opposite – that societies are formed for the mutual benefit of participating individuals, and the moment a society turns away from that purpose and starts to damage its participants instead of benefiting them, it ceases to be desirable. Furthermore, Taleb’s premise that suppression of volatility is a cause of fragility is itself dubious in many instances. It may work to a point with an individual organism whose immune system and muscles use volatility to build adaptive responses to external threats. However, the possibility of such an adaptive response requires very specific structures that do not exist in all systems. In the case of human death, there is no way in which the destruction of a non-violent and fundamentally decent individual can provide external benefits of any kind worth having. How would the death of your grandparents fortify the mythic “society” against anything?
Misconception #10: Immortality is “a bit like staying awake 24/7”.
Presumably, those who make this comparison think that indefinite life would be too monotonous for their tastes. But, in fact, humans who live indefinitely can still choose to sleep (or take vacations) if they wish. Death, on the other hand, is irreversible. Once you die, you are dead 24/7 – and you are not even given the opportunity to change your mind. Besides, why would it be tedious or monotonous to live a life full of possibilities, where an individual can have complete discretion over his pursuits and can discover as much about existence as his unlimited lifespan allows? To claim that living indefinitely would be monotonous is to misunderstand life itself, with all of its variety and heterogeneity.
Misconception #11: Transhumanism is unacceptable because of the drain on natural resources that comes from living longer.
This argument presupposes that resources are finite and incapable of being augmented by human technology and creativity. In fact, one era’s waste is another era’s treasure (as occurred with oil since the mid-19th century). As Julian Simon recognized, the ultimate resource is the human mind and its ability to discover new ways to harness natural laws to human benefit. We have more resources known and accessible to us now – both in terms of food and the inanimate bounties of the Earth – than ever before in recorded history. This has occurred in spite of – and perhaps because of – dramatic population growth, which has also introduced many new brilliant minds into the human species. In Part 4 of my “Eliminating Death” video series, I explain that doomsday fears of overpopulation do not hold, either historically or prospectively. Indeed, the progress of technology is precisely what helps us overcome strains on natural resources.
The opposition to transhumanism is generally limited to espousing some variations of the common fallacies I identified above (with perhaps a few others thrown in). To make real intellectual progress, it is necessary to move beyond these fallacies, which serve as mental roadblocks to further exploration of the subject – a justification for people to consider transhumanism too weird, too unrealistic, or too repugnant to even take seriously. Detractors of transhumanism appear to recycle these same hackneyed remarks as a way to avoid seriously delving into the actual and genuinely interesting philosophical questions raised by emerging technological innovations.
These are questions on which many transhumanists themselves hold sincere differences of understanding and opinion. Fundamentally, though, my aim here is not to “convert” the detractors – many of whose opposition is beyond the reach of reason, for it is not motivated by reason. Rather, it is to speak to laypeople who are not yet swayed one way or the other, but who might not have otherwise learned of transhumanism except through the filter of those who distort and grossly misunderstand it. Even an elementary explication of what transhumanism actually stands for will reveal that we do, in fact, strongly advocate individual human life and flourishing, as well as technological progress that will uplift every person’s quality of life and range of opportunities.
Those who disagree with any transhumanist about specific means for achieving these goals are welcome to engage in a conversation or debate about the merits of any given pathway. But an indispensable starting point for such interaction involves accepting that transhumanists are serious thinkers, friends of human life, and sincere advocates of improving the human condition.
Gennady Stolyarov II (G. Stolyarov II) is an actuary, science-fiction novelist, independent philosophical essayist, poet, amateur mathematician, composer, and Editor-in-Chief of The Rational Argumentator, a magazine championing the principles of reason, rights, and progress. Mr. Stolyarov regularly produces YouTube Videos discussing life extension, libertarianism, and related subjects.