Problems of Transhumanism: Liberal Democracy vs. Technocratic Absolutism
J. Hughes
2010-01-23


This article is part of a continuing series. See also:

Problems of Transhumanism: Introduction
Problems of Transhumanism: The Unsustainable Autonomy of Reason
Problems of Transhumanism: Atheism vs. Naturalist Theologies
Problems of Transhumanism: Moral Universalism vs. Relativism
Problems of Transhumanism: Belief in Progress vs. Rational Uncertainty



Enlightenment Liberalism and Enlightened Despotism



The Enlightenment rationale for liberalism, most powerfully articulated in Mill’s On Liberty, was that individuals given liberty will generally know how to pursue their interests and potentials better than anyone else will. Society as a whole will therefore become richer and more intelligent if individuals are free to choose their own life ends than if they are forced toward betterment by the powers that be. In order to ensure that all interests and views of the good are equally weighed in the marketplace of ideas and expressed in collective decision-making, society should guarantee free debate and equal legal and political empowerment. The most radical expressions of these ideals were liberal and social democracy, which are often assumed to be the consensual political ideal of the Enlightenment.

In fact, Enlightenment philosophers were intensely conflicted about the virtues of powerful monarchies and technocratic elites versus popular democracy. Some believed an absolute state was the best form of governance. Thomas Hobbes argued that political absolutism was necessary to prevent the war of "all against all." Voltaire said that he "would rather obey one lion, than 200 rats of [his own] species."

Other Enlightenment thinkers argued against absolutism and the divine right of kings, but held out for the desirability of "enlightened despots" who had political legitimacy because they were pursuing their people's interests. Free peoples, as individuals and as polities, often do not choose the ends that are in their best interests. As Spinoza said, “the masses can no more be freed from their superstition than from their fears…they are not guided by reason” (Spinoza, 1670: 56). The benevolent rationale for authoritarianism has always been that rulers and their advisors understand the needs of the people better than the people do themselves.

Before the Enlightenment, the alleged source of this superior understanding was the rulers’ wisdom and spiritual guidance. After the Enlightenment, the idea that some people were further advanced on the path of reason and progress than others lent itself to justifications for enlightened monarchy, colonialism, and scientific dictatorships. Most Enlightenment philosophers placed their hopes for progress in benign governance by modernizing monarchs and reformed aristocrats, certainly not in radicalized peasants. If society needs to be rationally reorganized, it is far more straightforward to make existing elites and monarchs the agents of Reason than to try to convert the masses and establish Reason from the bottom up; once society is rationally reorganized from the top, the masses will find their way to Reason that much more easily -- or so the argument goes.

A number of monarchs, such as Frederick II of Prussia, Joseph II of Austria, and Peter the Great and Catherine the Great of Russia, were directly influenced by and friendly towards the Enlightenment. These enlightened absolutists believed that the monarchical state could embody and advance the new science and Reason. They promoted public education, social reform, and the modernization of laws, economies, and militaries (Outram, 2005). Frederick II of Prussia promoted religious tolerance and abolished serfdom on crown lands. Joseph II centralized the Austrian state, restricted the power of the Catholic Church, and abolished serfdom.

The American Revolution was a step forward from enlightened despotism in Enlightenment political thought. But the founders of the American republic were also almost all suspicious of "mobocracy," and the American state is carefully constructed to cripple direct democracy. The separation of judicial and executive power from legislative power, following the ideas of the Baron de Montesquieu, ensured that the wisdom of landed male elites would temper the passions of the mob, as these structures continue to do today. Even within the legislative branch, the Senate was a landowners' body, originally appointed by state legislatures, designed to check the potential for radical populism in the House.

For two hundred years, Counter-Enlightenment thinkers have argued that the French Revolution's descent into Terror and the Marxist-Leninist totalitarianism of the 20th century were each a natural consequence of the Enlightenment's attempt to apply rationality to governance, ignoring the fact that the liberal tradition is just as much a product of the Enlightenment. In its own violent way, however, the French revolutionary government represented a mix of both popular democratic and elite authoritarian reforms. The expansion of democratic rights under the Assembly was combined with political executions directed by elites and unpopular top-down reforms.

The legacy of enlightened despotism is actually found far less ambiguously in the reigns of modernizing dictators like Napoleon Bonaparte and his many successors, down to the Vladimir Putins of today. Bonaparte established schools, and scholarships to attend them. He promoted meritocracy and thoroughly rationalized French law in a way that institutionalized Enlightenment values of universalism and egalitarianism. He promoted religious tolerance and ended the hostility between Church and state by putting the clergy on the state payroll. The conflicts within the Enlightenment tradition between absolutism and liberalism are found not only on the Left but also on the Right, among latter-day Bonapartists: right-wing modernizing dictators.

Enlightenment arguments for benevolent modernizing dictatorships were also used to rationalize French and British colonialism and the expansion of both the Soviet Union and Pax Americana. Bentham, Condorcet, Diderot, Kant, and Adam Smith were all early critics of imperialism (Muthu, 2003), but even their attacks on Western arrogance and exploitation were muted by their support for ethical universalism, which hoped to see everyone eventually benefit from the Enlightenment. Since de-colonization and the rise of Vietnam-era anti-imperialism, arguments for beneficial, enlightening colonialism sound like thin excuses for exploitation, unless you are a fan of the U.S. occupation of Iraq. But respect for the noble savages and their national self-determination, on the one hand, and the idea that primitive peoples could benefit from a period of tutelage by the enlightened nations, on the other, are woven together throughout the history of Enlightenment thought.


Transhumanist Liberalism vs. Transhumanist Technocracy



Transhumanists are overwhelmingly and staunchly civil libertarian, defenders of juridical equality and individual rights. Most also believe democratic government to be superior to any of the extant alternatives. But many are also suspicious of the capacity of ordinary people to make decisions that are truly in their own interests, individually or as polities. Some transhumanists explicitly argue that rather than try to win popular support for transhumanist values, far more can be accomplished by winning over powerful elites.

The 2005 and 2007 surveys of the members of the World Transhumanist Association (WTA, 2005; WTA, 2007) asked, "Although we may devise better political systems in the future, do you believe that multi-party democracies with civil liberties for individuals are the best of the existing political orders?" A third of the respondents were unwilling to affirm the superiority of liberal democracy among existing political systems. Transhumanist Max More, for instance, looks toward a post-democratic minarchy:
Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom. Surely, as we strive to transcend the biological limitations of human nature, we can also improve upon monkey politics? (More, 2004)
Billionaire transhumanist Peter Thiel (2009) hopes that anarchist utopias at sea, in outer space, or in cyberspace can escape the authoritarian clutches of the democracies:
... the great task for libertarians is to find an escape from politics in all its forms — from the totalitarian and fundamentalist catastrophes to the unthinking demos that guides so-called “social democracy.” The critical question then becomes one of means, of how to escape not via politics but beyond it. Because there are no truly free places left in our world, I suspect that the mode for escape must involve some sort of new and hitherto untried process that leads us to some undiscovered country; and for this reason I have focused my efforts on new technologies that may create a new space for freedom. (Thiel, 2009)
Libertarian transhumanists like Thiel and More are consistent critics of all forms of governance and have never advocated enlightened despotism. However, the belief that mob democracy is hopeless and that the only avenue for progress lies with elites and unbridled technological change does support anti-democratic authoritarian views among some transhumanists.

One of the transhumanist forebears it is important to keep in mind when considering transhumanist ambivalence about liberal democracy is H.G. Wells. Wells was a Fabian socialist, an advocate for the evolution of liberal democracies toward democratic socialism. But he also believed that this evolutionary process would be accelerated by global war and catastrophe.

In his classic 1933 novel The Shape of Things to Come, a technocratic world government is established in the wake of a devastating global war and plague. The new "Dictatorship of the Air" rules benevolently for a hundred years, eradicating religion and promoting science, until it is overthrown and the state withers away. For Wells, as for many transhumanists, the urgent catastrophic risks humanity faces trump any preference for liberal democracy.

For instance, in considering how best to awaken and prepare society for global catastrophic risks, such as the emergence of machine minds, Eliezer Yudkowsky weighs attempts to convince people of the risks, and dismisses them:
Majoritarian strategies take substantial time and enormous effort... (it is) vastly easier to obtain a hundred million dollars of funding than to push through a global political change. (Yudkowsky, 2008)
In particular, it is supposed, a hundred million dollars from Peter Thiel put toward the project of making a benevolent super-AI will do far more to improve the world than any political movement, since the first super-AI will, in Yudkowsky's view, be the last form of government humans will ever know. AI is either the solution to all of humanity's problems, or its final solution.

Nick Bostrom also has argued the need for a global “singleton” to mitigate “existential risks” (Bostrom, 2001), though he is far more open-minded about the possible nature of the global dictator than is Yudkowsky. Global government of some kind, Bostrom argues, is necessary in order to mitigate threats such as nuclear war and bioterrorism, but also in order to avoid humanity’s unthinking evolution into something we might regret. For instance, international competition might encourage the engineering of workers for some form of hyper-capitalism, while a global government of some kind could impose restrictions on this kind of competition and guide global civilization past these shoals.
A singleton does not need to be a monolith. It can contain within itself a highly diverse ecology of independent groups and individuals. A singleton could for example be a democratic world government or a friendly superintelligence. (Bostrom, 2001)
In his subsequent “What is a Singleton?” (Bostrom, 2006), Bostrom defines the singleton as:
A world order in which there is a single decision-making agency at the highest level. Among its powers would be (1) the ability to prevent any threats (internal or external) to its own existence and supremacy, and (2) the ability to exert effective control over major features of its domain (including taxation and territorial allocation).
He again specifies that a singleton could be a democratic world republic, a dictatorship, or a superpowerful intelligent machine or posthuman. Such a global agency would be able to suppress wars and arms races, protect our common planetary and solar system resources from wasteful competition, relieve inequality, and establish a more rational economy. Technological innovations such as "improved surveillance, mind-control technologies, communication technologies, and artificial intelligence," as well as the proliferation of apocalyptic technologies that require global invasive suppression, would all increase the likelihood of the emergence of a singleton.

Bostrom leaves open the possibility that the singleton could evolve from liberal democratic self-governance and be accountable to human beings in an equal and transparent way. But the prospect of a radical improvement in the cognitive powers and moral character of posthumans and machine minds has led transhumanists like Yudkowsky to advocate that humanity abdicate self-governance to more enlightened successors.

Yudkowsky has focused much of his writing on the problem of human cognitive biases. Like other believers in a coming artificial intelligence “Singularity,” he believes that human cognitive limitations will be quickly superseded by the super-rationality of a recursively self-improving artificial intelligence unconstrained by biology and evolutionary drives. Human brains, he argues, will never have the same capacity for self-improvement and perfect rationalization, since machine minds will have “total read/write access to their own state,” the ability to “absorb new hardware,” “understandable code,” “modular design,” and a “clean internal environment” (Yudkowsky, 2008). In fact, argues Yudkowsky, human cognition is so irredeemably constrained by bias, and our motivations so driven by aggression and self-interest, that we should give up on the project of self-governance through rational debate and do our best to hasten the day when we can turn our affairs over to a super-rational artificial intelligence programmed to act in our best interests.

In his 2004 essay “Coherent Extrapolated Volition” (CEV), Yudkowsky argues that a super-AI would be able to intuit the desires and needs of all human beings and make the decisions necessary to satisfy them. In this, Yudkowsky and his followers (unconsciously) echo Marxist-Leninist theories of scientific socialism and the perfect reflection of the general will through the Party.

As described by Kaj Sotala in a refutation of fourteen objections to Yudkowsky's theory of "friendly AI":
In the CEV proposal, an AI will be built ... to extrapolate what the ultimate desires of all the humans in the world would be if those humans knew everything a superintelligent being could potentially know; could think faster and smarter; were more like they wanted to be (more altruistic, more hard-working, whatever your ideal self is); would have lived with other humans for a longer time; had mainly those parts of themselves taken into account that they wanted to be taken into account. The ultimate desire -- the volition -- of everyone is extrapolated, with the AI then beginning to direct humanity towards a future where everyone's volitions are fulfilled in the best manner possible... Humanity is not instantly "upgraded" to the ideal state, but instead gradually directed towards it. (Sotala, 2007)
The masses, on this logic, labor under "false consciousness," unaware of their true interests, which can only be revealed through submitting to the tutelage of the scientific dictatorship. At the end of his original 2004 essay, Yudkowsky asks, “What if someone disagrees with the CEV?” to which he answers:
Imagine the silliness of arguing with your own extrapolated volition. It's not only silly, it's dangerous and harmful; you're setting yourself in opposition to the place you would have otherwise gone… (Yudkowsky, 2004)
Any objection to rule by this godlike AI, the argument goes, is based on anthropocentric projections of the fallibility of human despotism. As Michael Anissimov explains it, enlightened AI despotism will be completely trustworthy. In fact, he suggests, only a godlike AI, built from pure code and free of evolved Darwinian behaviors but somehow programmed for human friendliness, can be trusted as a global totalitarian singleton:
The fear of patriarchy objection stems largely from history, wherein all of the relevant actors were members of our unique species, for which power is proven to corrupt. Power corrupts humans for evolutionary reasons -- if one is on top of the heap, one had better take advantage of the opportunity to reward one’s allies and punish one’s enemies. This is pure evolutionary logic and need not be consciously calculated. AIs, which can be constructed entirely without selfish motivations, can be immune to these tendencies. Insofar as significant power asymmetries in general bother people, this seems hard to avoid in the long term -- technological development will lead to a diversity of possible beings, and with this diversity will inevitably come a diversity in levels of capability and intelligence. (Anissimov, 2007)
Dictatorship by friendly AI is by no means the only form of incipient illiberal and anti-democratic theory possible or extant among transhumanists. As the transhumanist movement grows, there will undoubtedly be a growing conflict between transhumanist defenders of democratic self-governance and advocates of enlightened technocracy. Russian transhumanists, for instance, include both radical liberals and supporters of Putin's authoritarianism. Just as Chinese advocates for market liberalization are divided between political liberals and defenders of the wise stewardship of the Chinese Communist Party, we are likely to see Chinese enthusiasts for human enhancement divided over the virtues of state-mandated eugenics.

In response, we defenders of liberal democracy need to marshal our arguments for the virtuous circle of reinforcement between human technological enablement and self-governance. In Citizen Cyborg, for instance, I argue that cognitive liberty, bodily autonomy, and reproductive freedom are core Enlightenment and transhumanist values, not to be lightly trumped by corporate power and state projects for betterment. I argue that cognitive enhancement, assistive artificial intelligence, and electronic communication all would strengthen the ability of the average citizen to know and pursue their own interests and would make liberal democracy increasingly robust. I also argue against a pessimistic view that transhumanists are a permanent minority, and make the case that political majorities can be won for a technoprogressive platform.

A faith in the possibility of progress through liberal democracy is certainly difficult to sustain in the wake of the failure of a Democratic super-majority to pass health care reform in the United States, the collapse of meaningful climate change negotiations, the hand-wringing impotence of international institutions to intervene against genocide and the proliferation of weapons of mass destruction, and the persistence of myriad forms of popular ignorance and superstition. If I could convince myself that turning our fate over to the enlightened despotism of HAL or Khan Noonien Singh was the only way forward, I too would be tempted. I am certainly looking forward to new forms of governance that satisfy my Enlightenment values better than do the existing forms of imperfect liberal democracy. For now, however, I think transhumanists need to focus on achieving our better world through liberal democracy.


References



Anissimov, Michael. 2007. Objections to Coherent Extrapolated Volition. Singularity Institute for Artificial Intelligence.

Bostrom, Nick. 2001. Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology 9(1).

_____. 2006. What is a Singleton? Linguistic and Philosophical Investigations 5(2): 48-54.

More, Max. 2004. Democracy and Transhumanism. Extropy Institute.

Muthu, Sankar. 2003. Enlightenment against Empire. Princeton University Press.

Outram, Dorinda. 1995. The Enlightenment. Cambridge, UK: Cambridge University Press.

_____. 2005. The Enlightenment, 2nd ed. Cambridge, UK: Cambridge University Press.

Sotala, Kaj. 2007. 14 objections against AI/Friendly AI/The Singularity answered.

Spinoza, Benedict de. (1670) 1989. Tractatus Theologico-Politicus. Translated by S. Shirley, with an introduction by B.S. Gregory. Leiden: E.J. Brill.

Thiel, Peter. 2009. The Education of a Libertarian. Cato Unbound. April 13.

World Transhumanist Association. 2005. Report on the 2005 Interests and Beliefs Survey of the Members of the World Transhumanist Association.

_____. 2007. Report on the 2007 Interests and Beliefs Survey of the Members of the World Transhumanist Association.

Yudkowsky, Eliezer. 2004. Coherent Extrapolated Volition. Singularity Institute for Artificial Intelligence.

_____. 2008. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks, ed. Nick Bostrom and Milan Cirkovic. Oxford University Press.