Most Enlightenment thinkers believed in the inevitability of human political and technological progress, transforming the Christian expectation that history was predetermined to end in the Kingdom of Heaven on Earth into a conviction that humanity would be able to continually improve itself. But the scientific worldview does not support historical inevitability, only uncertainty.
We may annihilate ourselves or regress. Even the normative judgment of what progress is, and whether we have made any, is open to empirical skepticism. Today transhumanists are torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities, and their rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future.
This article is part of a continuing series. See also:
Problems of Transhumanism: Introduction
Problems of Transhumanism: The Unsustainable Autonomy of Reason
Problems of Transhumanism: Atheism vs. Naturalist Theologies
Problems of Transhumanism: Liberal Democracy vs. Technocratic Absolutism
Problems of Transhumanism: Moral Universalism vs. Relativism
Progressive Optimism vs. Radical Uncertainty
Along with the value of Reason, the Enlightenment thinkers shared a faith in the inevitability of human Progress. Enlightenment thinkers portrayed themselves as having invented the idea of human progress, dismissing all pre-Enlightenment views of history as static or cyclical (Bury, 1920). Historians have disputed the novelty of the Enlightenment faith in progress, pointing to precedents in pre-Enlightenment thought (Nisbet, 1979). In fact, the faith in progress appears to be more a secularization of Christian eschatology than its repudiation (Becker, 1932). Nonetheless there is a clear difference between the Enlightenment belief in continual, linear political, intellectual, and material improvement and the dominant Christian historical narrative, in which little would change until the End Times and Christ’s return.
Kant (1784), for instance, in his Idea of a Universal History from a Cosmopolitical Point of View, argued for the inevitable progress and moral perfection of man on religious grounds:
All capacities implanted in a creature by nature are destined to unfold themselves completely and conformably to their end, in the course of time…The history of the human race, viewed as a whole, may be regarded as the realization of a hidden plan of nature to bring about a political constitution, internally, and, for this purpose, also externally perfect, as the only state in which all the capacities implanted by her in mankind can be fully developed. (quoted in Nisbet, 1979)
Another famous statement of the inevitability of progress was written by our proto-transhumanist Enlightenment hero, the Marquis de Condorcet, in his 1795 Sketch for a Historical Picture of the Progress of the Human Mind. In this, his last monograph, Condorcet expresses his conviction that humanity will eventually conquer all oppression, inequality, ignorance, and even death and the need to toil. Human progress, he argues, has advanced through nine stages, and the coming tenth stage will be one of the complete liberation of human possibility.
Such is the aim of the work that I have undertaken, and its result will be to show by appeal to reason and fact that nature has set no term to the perfection of human faculties; that the perfectibility of man is truly indefinite; and that the progress of this perfectibility, from now onwards independent of any power that might wish to halt it, has no other limit than the duration of the globe upon which nature has cast us. This progress will doubtless vary in speed, but it will never be reversed as long as the earth occupies its present place in the system of the universe, and as long as the general laws of this system produce neither a general cataclysm nor such changes as will deprive the human race of its present faculties and its present resources…
It is reasonable to hope that all other diseases may likewise disappear as their distant causes are discovered. Would it be absurd then to suppose that this perfection of the human species might be capable of indefinite progress; that the day will come when death will be due only to extraordinary accidents or to the decay of the vital forces, and that ultimately the average span between birth and decay will have no assignable value?...
Condorcet believed that humanity’s progress could be predicted as certainly as natural phenomena.
This Enlightenment faith in the inevitability of political and scientific progress continues down through Comte’s “positivism” and Marxist theories of historical determinism to neoconservative triumphalism about the “end of history” in democratic capitalism.
Even Darwinism’s theory of natural selection was pressed into the service of the doctrine of inevitable progress, aided in part by Darwin’s own teleological interpretation:
As all the living forms of life are the lineal descendants of those which lived long before the Silurian epoch, we may feel certain that the ordinary succession by generation has never once been broken, and that no cataclysm has desolated the whole world. Hence we may look with some confidence to a secure future of equally inappreciable length. And as natural selection works solely by and for the good of each being, all corporeal and mental endowments will tend to progress towards perfection. (Darwin, 1859)
But this belief in the historical inevitability of progress has also always been in conflict with the rationalist, scientific observation that humanity could regress or disappear altogether. Enlightenment pessimism, or at least realism, has dogged the heels of Enlightenment optimism.
Henry Vyverberg (1958) showed that there were French Enlightenment thinkers who did not believe in linear historical progress, but proposed historical cycles or even decadence instead. Rousseau is generally seen as having believed in the superiority of the “savage” over the civilized. Vico and Montesquieu believed all civilizations were subject to cycles of progress and decay. In D’Alembert’s Dream, Denis Diderot (1769) muses that humanity could regress to inertia, or into a Borg-anism, as easily as it could progress into a society of free individuals.
Who knows if everything isn’t tending to reduce itself to a large, inert, and immobile sediment? Who knows how long this inertia will last? Who knows what new race could result some day from such a huge heap of sensitive and living points? Why not a single animal? … Watch out for the logical fallacy of the ephemeral…when a transitory being believes in the immortality of things. (Diderot, 1769)
Certainly the theory of natural selection provides no support for “progress”; it suggests only that humanity, like all creatures, is on a random walk through a minefield, that human intelligence is an accident, and that we could easily go extinct, as many species have.
As Thomas Henry Huxley noted in 1888 in The Struggle for Existence in Human Society:
It is an error to imagine that evolution signifies a constant tendency to increased perfection. That process undoubtedly involves a constant remodelling of the organism in adaptation to new conditions; but it depends on the nature of those conditions whether the directions of the modifications effected shall be upward or downward.
Faith in the inevitability of progress has waxed and waned with historical events. It can be found in New Age beliefs that the world is headed for a millennial age, and in techno-optimist futurism. But since the rise and fall of fascism and communism, the implosion of New Left and countercultural utopianism, and the mounting evidence of the dangers and unintended consequences of technology, few groups still hold fast to an Enlightenment belief in the inevitability of conjoined scientific and political progress. The transhumanist community, however, is one in which many still hold such a faith.
Transhumanist Optimism vs. Future Uncertainty
Transhumanists have inherited the tension between the Enlightenment conviction that progress is inevitable and the Enlightenment’s scientific, rational realism that human progress, or even civilization, may fail.
In the 1990s, transhumanists were full of exuberant Enlightenment optimism about unending progress. For instance, Max More’s 1998 Extropian Principles defined “Perpetual Progress” as the first precept of the extropian brand of transhumanism:
Seeking more intelligence, wisdom, and effectiveness, an indefinite lifespan, and the removal of political, cultural, biological, and psychological limits to self-actualization and self-realization. Perpetually overcoming constraints on our progress and possibilities. Expanding into the universe and advancing without end. (More, 1998)
For More himself, this principle was more a normative goal than a faith in historical inevitability. In 2002 he said, for instance:
....extremely fast phase change from human to transhuman to posthuman appears as a highly likely scenario. I do not see it as inevitable. It will take vast amounts of hard work, intelligence, determination, and some wisdom and luck to achieve. It’s possible that some humans will destroy the race through means such as biological warfare. Or our culture may rebel against change, seduced by religious and cultural urgings for “stability,” “peace” and against “hubris” and “the unknown.”...History since the Enlightenment makes me wary of all arguments to inevitability… (More and Kurzweil, 2002)
Similarly, for Greg Burch in his “Progress, Counter-Progress, and Counter-Counter-Progress” address to the final, 2001 conference of the Extropians, the Enlightenment and transhumanist commitment to progress is a commitment to a political program, one fully cognizant that there are many powerful enemies of progress and that victory is not inevitable:
...we are poised to continue the program of the Enlightenment, now with a full set of tools only imagined by its founders. Unfortunately, in this last three centuries, the enemies of progress have had time to prepare their positions for this renewal of progress outside of the purely scientific and technological realms….opposition to the core notion of humane progress should give us cause for deep concern. As my graphical depiction of those who stand opposed to continuing with the program of the Enlightenment demonstrates, we are in a very real sense completely encircled in the cultural, social and political realms. (Burch, 2001)
Nonetheless, for many extropians and transhumanists perpetual progress was an unstoppable train; one either got on board for transcension or consigned oneself to the graveyard.
Greg Stock’s 1993 Metaman: The Merging of Humans and Machines into a Global Superorganism, for instance, harked back to Condorcet’s conviction that the spread of global commerce and communication would lead humanity to an inevitable quickening of consciousness. A few transhumanists such as John Smart (2008) even linked this historical teleology to religious eschatologies such as Teilhard de Chardin’s belief that humanity would converge into a divine Omega Point.
Since the 2000 dot-com crash, however, transhumanists have increasingly tempered their expectations about progress. While some transhumanists still press for technological innovation on all fronts and oppose all regulation, others are focusing on reducing the civilization-ending potentials of asteroid strikes, genetic engineering, artificial intelligence and nanotechnology.
One influential example of this anti-millennial realism is Nick Bostrom’s 2001 essay “Analyzing Human Extinction Scenarios and Related Hazards,” which sketched out the “bangs,” “crunches,” “shrieks,” and “whimpers” that could end human existence. Bostrom specifically included not just scenarios that wipe out the species, but also scenarios in which we gradually evolve into dead ends, like H.G. Wells’s Eloi and Morlocks in The Time Machine.
In other words, Bostrom addresses not just how we can ensure that there are descendants of humanity, but also how we can ensure that we will be proud to claim them.
Subsequently, Bostrom began work on catastrophic risk estimation at the Future of Humanity Institute at Oxford, and edited the 2008 volume Global Catastrophic Risks with the transhumanist astrophysicist and IEET fellow Milan Cirkovic. Catastrophic risk is also a programmatic focus for the Institute for Ethics and Emerging Technologies and for the transhumanist non-profit, the Lifeboat Foundation.
Bostrom has urged transhumanists to be more critical of technological progress, since:
...it is far from a conceptual truth that expansion of technological capabilities makes things go better. [And] even if empirically we find that such an association has held in the past (no doubt with many big exceptions), we should not uncritically assume that the association will always continue to hold. (Bostrom 2009)
The tension between eschatological certainty and pessimistic risk assessment has played out in the debate over the Singularity. Ray Kurzweil (2005) staunchly defends the unstoppability of his accelerating trendlines towards a utopian merger of enhanced humanity and godlike artificial intelligence by pointing to the steady exponential march of technological progress through wars and depressions. He gives little weight to the dystopian and apocalyptic predictions of how humanity might fare under superintelligent machines, suggesting that we will merge with them into apotheosis.
Technoprogressive Optimism of Will, Pessimism of the Intellect
The IEET has been a site for teasing out this tension between “optimism of the will and pessimism of the intellect,” as Antonio Gramsci framed it. On the one hand, we have championed the possibility of, and evidence of, human progress. By adopting the term “technoprogressivism” as our outlook, we have placed ourselves on the side of Enlightenment political and technological progress.
On the other hand, we have promoted technoprogressivism precisely in order to critique uncritical techno-libertarian and futurist ideas about the inevitability of progress. We have consistently emphasized the negative effects that unregulated, unaccountable, and inequitably distributed technological development could have on society. Technoprogressivism is an insistence that technological progress needs to be wedded to, and depends on, political progress, and that neither are inevitable.
For instance, in 2005 we published Dale Carrico’s essay “Progress as a Natural Force Versus Progress as the Great Work.”
...there is all the difference in the world between those who profess to believe in progress and those who would work to achieve it.
When progress is imagined to be some kind of force that the knowledgeable can discern in history, a natural force in which one can believe with ones whole heart or to which profess ones full faith, or, better yet, a force in the name of which one can claim to be some kind of priestly mouthpiece, then it tends to be little more than a self-congratulatory fable that the powerful and their orbiting opportunists tell themselves to deny the part luck has played in their attainment of power and then to justify the bad behavior they typically employ subsequently to maintain it…
And for those who are swept up in the exhilaration of some particular narrative of natural progress it is likewise difficult to see past the mandate of inevitability it confers, difficult to perceive the winning streak it celebrates as one that can ever come to an end, that the players it extols can ever lose their way, that the forces it documents can ever peter out.
While it is easy to find examples of this kind of naturalizing idea of progress in the crass champions of Empire from the Edwardian English to the Project for a New American Century, I will offer up as a slightly less obvious example something that strikes closer to home (for me, at any rate): the kind of corporate futurists and science fiction fanboys who sometimes like to glibly handwave about the inevitable consequences of accelerating technological development.
In 2008, I published “Millennial Tendencies in Responses to Apocalyptic Threats,” as an essay in Nick Bostrom and Milan Cirkovic’s Global Catastrophic Risks.
In that essay I argued that millennialism was a psycho-cultural dynamic found throughout world history, in many different civilizations, including in contemporary secular technomillennialism.
I identified four characteristic cognitive biases that millennialism generated: over-optimistic expectation of the inevitability of utopia, over-pessimistic expectation of the certainty of apocalypse, fatalism about the irrelevance of human effort to effect the outcome, and misplaced messianic beliefs about the magical efficacy of particular individuals or actions to avoid apocalypse and ensure utopia. I proposed that Kurzweilian Singularitarianism was a manifestation of millennial over-optimism and fatalism, while people like Hugo de Garis, certain that AI would eventually cause “mega-deaths,” represented over-pessimism and fatalism. Among some followers of the Singularity Institute for Artificial Intelligence, on the other hand, one can find magical messianic thinking about the importance of certain activities that will supposedly create friendly AI, preventing apocalypse and ensuring utopia.
In December 2009 we published Phil Torres’s (writing as Philippe Verdoux) “Transhumanism, Progress and the Future”, which again critiqued transhumanism’s belief in inevitable progress.
Verdoux offers three critiques of the transhumanist and Enlightenment faith in progress: futurological, historical, and anthropological. The futurological argument is that our technological capabilities are exponentially increasing our capacity to wipe ourselves out. The historical argument is that transhumanists tend to cherry-pick the signs of progress, ignoring both signs of stagnation and evidence that “progress” creates the very problems it purports to solve, such as cures for cancers that were themselves caused by industrial toxins. The anthropological argument is that pre-moderns were probably as happy as, or happier than, we moderns are.
Verdoux goes on to argue for transhumanism on moral grounds and as a less dangerous course than any attempt at “relinquishing” technological development, but only after the naive faith in progress has been set aside. In this, Verdoux is very similar to the 21st century Left, arguing for egalitarianism and radical democracy on moral grounds but without any of Marxism’s historical inevitabilism or utopianism, and cautious of the tragic history of communism.
Unfortunately, the “rational capitulationism” to the transhumanist future that Verdoux offers, like the managerial centrism of contemporary social democracy, is not something that stirs men’s souls. We need to embrace these critical, pessimistic voices and perspectives, but also re-discover our capacity for vision and hope.
In 2009, at Nick Bostrom’s urging, the Board of Directors of Humanity+ adopted a new version of the Transhumanist Declaration which replaced this 1998 language:
Transhumanists think that by being generally open and embracing of new technology we have a better chance of turning it to our advantage than if we try to ban or prohibit it….
In planning for the future, it is mandatory to take into account the prospect of dramatic technological progress. It would be tragic if the potential benefits failed to materialize because of ill-motivated technophobia and unnecessary prohibitions. On the other hand, it would also be tragic if intelligent life went extinct because of some disaster or war involving advanced technologies.
With these lines:
We recognize that humanity faces serious risks, especially from the misuse of new technologies. There are possible realistic scenarios that lead to the loss of most, or even all, of what we hold valuable. Some of these scenarios are drastic, others are subtle. Although all progress is change, not all change is progress.
Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.
Reduction of existential risks, and development of means for the preservation of life and health, the alleviation of grave suffering, and the improvement of human foresight and wisdom should be pursued as urgent priorities, and heavily funded.
Voting in favor—while serving as members of the Humanity+ Board of Directors—were myself and IEET managing director Mike Treder, IEET Board members Nick Bostrom, George Dvorsky, and Michael LaTorra, and IEET Fellow Ben Goertzel.
One of the motivations behind the creation of the “technoprogressive” brand has been to distinguish Enlightenment optimism about the possibility of human political, technological and moral progress from millennialist techno-utopian inevitabilism. Without optimism that humans can collectively exercise foresight and invention, and peacefully deliberate our way to a better future, we too easily fall into the traps of utopian or apocalyptic fatalism, or fixation on techno-fixes and dei ex machina.
Remaining always mindful of the myriad ways that our indifferent universe threatens our existence and how our growing powers come with unintended consequences is the best way to steer towards progress in our radically uncertain future.
Becker, Carl L. 1932. The Heavenly City of the Eighteenth-Century Philosophers. New Haven, Conn.: Yale University Press.
Bostrom, Nick. 2001. Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology 9(1).
Bostrom, Nick. 2009. The Future of Humanity, in New Waves in Philosophy of Technology, eds. Jan-Kyrre Berg Olsen, Evan Selinger, & Soren Riis. New York: Palgrave Macmillan.
Bostrom, Nick and Milan M. Cirkovic eds. 2008. Global Catastrophic Risks. Oxford University Press.
Burch, Greg. 2001. Progress, Counter-Progress and Counter-Counter-Progress. Delivered June 16, 2001 at Extro 5.
Bury, J. Bagnell 1920. The Idea of Progress: An Inquiry into its Origin and Growth. Macmillan.
Carrico, Dale. 2005. Progress as a Natural Force Versus Progress as the Great Work. Amor Mundi.
Condorcet, Marie-Jean-Antoine-Nicolas Caritat Marquis de. 1795. Sketch for a Historical Picture of the Progress of the Human Mind.
Darwin, Charles. 1859. On the Origin of Species.
Diderot, Denis. 1769. D’Alembert’s Dream.
Hughes, James. 2008. Millennial Tendencies in Responses to Apocalyptic Threats, in Global Catastrophic Risks, eds. Nick Bostrom and Milan M. Cirkovic. Oxford University Press. pp. 72-89.
Humanity+. 1998/2009. Transhumanist Declaration.
Huxley, Thomas Henry. 1888. The Struggle for Existence in Human Society.
Kant, Immanuel. 1784. Idea of a Universal History from a Cosmopolitical Point of View.
Kurzweil, Ray. 2005. The Singularity is Near. Viking.
More, Max. 1998. The Extropian Principles v3. Extropy Institute.
More, Max and Ray Kurzweil. 2002. Max More and Ray Kurzweil on the Singularity. KurzweilAI.net.
Nisbet, Robert. 1979. The Idea of Progress: A Bibliographical Essay. Literature of Liberty: A Review of Contemporary Liberal Thought 2(1).
Stock, Greg. 1993. Metaman: The Merging of Humans and Machines into a Global Superorganism. Doubleday.
Tuveson, Ernest Lee. 1949. Millennium and Utopia: A Study in the Background of the Idea of Progress. Berkeley: University of California Press.
Verdoux, Philippe. 2009. Transhumanism, Progress and the Future. Journal of Evolution and Technology 20(2):49-69.
Vyverberg, Henry. 1958. Historical Pessimism in the French Enlightenment. Cambridge, Mass.: Harvard University Press.
Walker, M. 2009. Ship of fools: Why transhumanism is the best bet to prevent the extinction of civilization. The Global Spiral.