Problems of Transhumanism: The Unsustainable Autonomy of Reason
J. Hughes   Jan 8, 2010   Ethical Technology  

Reason is not self-legitimating. Like all Enlightenment advocates for reason, transhumanists find that the project of Reason erodes all premises including the superiority of reason over unreason. Consequently transhumanists, like Enlightenment advocates in general, need to defend our values with nonrational a prioris. Unfortunately some transhumanists continue to advocate a naïve conception of pure rationality as an end in itself.

This article is part of a continuing series. See also:

Problems of Transhumanism: Introduction
Problems of Transhumanism: Atheism vs. Naturalist Theologies
Problems of Transhumanism: Liberal Democracy vs. Technocratic Absolutism
Problems of Transhumanism: Moral Universalism vs. Relativism
Problems of Transhumanism: Belief in Progress vs. Rational Uncertainty

The Enlightenment and Reason

Reason was the central value of the Enlightenment. Some historians see the beginning of the Enlightenment in the early seventeenth century “Age of Reason,” associated with Descartes, Spinoza, Leibniz, Hobbes, Locke, and Berkeley. Historian Dorinda Outram defined the central claims of the Enlightenment around its appeal to reason:

Enlightenment was a desire for human affairs to be guided by rationality rather than by faith, superstition, or revelation; a belief in the power of human reason to change society and liberate the individual from the restraints of custom or arbitrary authority; all backed up by a world view increasingly validated by science rather than by religion or tradition. (Outram, 1995: 3)

When Kant wrote his essay (1784) “Was ist Aufklärung” or “What is Enlightenment?” for the Berlinische Monatsschrift, he summed up the slogan of the Enlightenment as “sapere aude” or “dare to know.”  Though divided by epistemology and theology, these thinkers attempted to ground philosophy on uncontestable propositions such as “cogito ergo sum.”

This thoroughgoing undermining of all irrational a prioris led to a number of philosophical dead-ends, however, immediately generating a score of post-rationalist movements. In the midst of the Enlightenment, Jean-Jacques Rousseau valorized the primitive and decried the harmful effects of hyper-rationalism on morality (Glendon, 1999). After all, as Hume underlined, the Enlightenment had severed any connection between the IS and the OUGHT. Although Kant and the utilitarians would attempt to re-ground ethics on what appeared to be empirical observations about human nature, they could never answer the next question: why should ethics be grounded on observations about human nature and not something else, like ancient religious dogmas?

Eighteenth century Romanticism was also a reaction to the overreach of reason in its assertion of the value of aesthetic and emotional experience. From the eighteenth century through World War Two, movements on both the right and left turned against Enlightenment rationalism. On the Left, the Frankfurt School writers criticized the Enlightenment’s instrumental rationality for its complicity in authoritarianism (Adorno and Horkheimer, 2002; Marcuse, 1964; Saul, 1992; Gray, 1995). Various strains of feminism and anti-imperialism attacked the patriarchal and Eurocentric construction of Enlightenment reason (Harding, 1982). These post-rationalist movements rejected the autonomy and universality of reason because it came into conflict with other values of the Enlightenment, such as respect for the rights of persons and for cultural diversity. Meanwhile, theologians and philosophers of the Right blamed communism on the totalizing logic of the Enlightenment’s assertion of utopian reason.

In the 20th century, Enlightenment rationalism also began to question its own first principles. One example is found in Wittgenstein’s turn from logical positivism. The logical positivists attempted to ban from philosophical discourse all terms and concepts without empirical referents. Ludwig Wittgenstein, although an early and influential advocate of this position, eventually changed his mind as he further investigated how language actually worked. Having turned empirical investigation on the process of reasoning itself, and attempting to purify language of all irrationality, Wittgenstein concluded that the goal was chimerical (Wittgenstein, 1953). Language is a series of word games in which meanings are created only in reference to other words and not to empirical facts. The positivist project of building a rational philosophy from uncontestable empirical observations is impossible.

Foucault, Derrida, and the postmodernists also represent an implosion of Enlightenment reason.  Although I believe postmodernist “criticism” to be mostly a dead end, the essential insight is true: all claims for Enlightenment reason are historically situated and biased by power and position. The Enlightenment is just one historical narrative among many and there is no rational reason to choose the Enlightenment narrative over any other. Reason can only be argued for from metaphysical and ethical a prioris, even if those are only such basic assumptions as ‘it is good to be able to accomplish one’s intended goals.’

Most tangibly, contemporary neuroscience, also a product of Enlightenment reason, now recognizes that reason severed from emotion is impotent. In Damasio’s (1994) now classic studies of patients with brain damage that severed the ties between emotion and decision-making, the victims were incapable of making decisions. The desire to stop deliberating and make a decision is not itself rational; it is a product of temperament. Reason was built to serve, but is incapable of generating its own commands.

Transhumanists and Reason

Most transhumanists argue the Enlightenment case for Reason without acknowledging its self-undermining nature. For instance Max More’s Extropian Principles codified “rational thinking” as one of its seven precepts (More, 1998):

Like humanists, transhumanists favor reason, progress, and values centered on our well being rather than on an external religious authority. (More, 1998)

The Transhumanist FAQ defines transhumanism as the consistent application of reason:

The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason…We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings… Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. (Humanity+, 2003)

One of the central transhumanist blogs is Less Wrong, based at Oxford University under the aegis of transhumanist philosopher Nick Bostrom and dedicated to “the art of refining human rationality.” A frequent contributor there is Eliezer Yudkowsky, an autodidact writer on artificial intelligence and human cognitive biases and a co-founder of the Singularity Institute for Artificial Intelligence. Yudkowsky has said that one of his goals is to lead a “mass movement to train people to be black-belt rationalists.” The Less Wrong blog highlights Yudkowsky’s definitions of rationality and their importance as its raison d’être:

What Do We Mean By “Rationality”?

We mean:

1. Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.

2. Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning.”

But why should we want a map that corresponds to the territory? Where do the values that rationality helps us achieve come from? What if the valuation of instrumental rationality is in fact an obstacle to achieving the things we value, as the romantics claim, such as beauty, meaning, contentment, and awe? Yudkowsky goes so far as to acknowledge the problem in order to define it as something that is simply not to be discussed:

… many of us will regard as controversial—at the very least—any construal of “rationality” that makes it non-normative. For example, if you say, “The rational belief is X, but the true belief is Y” then you are probably using the word “rational” in a way that means something other than what most of us have in mind…  Similarly, if you find yourself saying, “The rational thing to do is X, but the right thing to do is Y” then you are almost certainly using one of the words “rational” or “right” in a way that a huge chunk of readers won’t agree with.

Fortunately for Yudkowsky, he has been ceded authority by his readers to write off all philosophical debate about the relationship of IS and OUGHT. But this will leave his transhumanist rationality experts defenseless when debating those with different metaphysics, or when they face their own dark nights of the soul.

One of the central philosophical debates between bioconservatives and transhumanists, and “bioliberals” more generally, over the last two decades has been over the legitimacy of emotivist arguments such as Leon Kass’ (1997) “wisdom of repugnance” (Roache and Clarke, 2009). In 2003, the bioconservative Yuval Levin wrote in “The Paradox of Conservative Bioethics” of the tragic dilemma faced by conservatives trying to devise rational arguments in defense of irrational taboos. Once liberal democracy forces the conservative to abandon appeals to tradition or intuition, democratic debate naturalizes the new.

The very fact that everything must be laid out in the open in the democratic age is destructive of the reverence that gives moral intuition its authority. A deep moral taboo cannot simply become another option among others, which argues its case in the market place. Entering the market and laying out its wares takes away from its venerated stature, and its stature is the key to its authority. By the very fact that it becomes open to dispute—its pros and cons tallied up and counted—the taboo slowly ceases to exist… A conservative bioethics…is forced to proceed by pulling up its own roots, and to begin by violating some of the very principles it seeks to defend. (Levin, 2003)

Transhumanists and the Enlightenment face the opposite dilemma: how to advocate for rationality in a way that avoids its potential for self-erosion. Just as the bioconservatives cannot validate their taboos and ethical a prioris in the public square, there is likewise no rational reason why society should reject taboos and superstition in favor of a transhuman future; value judgments in favor of tradition, faith, and taboo, or in favor of progress, reason, and liberty both stem from pre-rational premises.

Transhumanists need to acknowledge their own historical situatedness and defend their normative and epistemological first principles as existential choices instead of empirical absolutes somehow derived from reason. One example of a transhumanist acknowledging the pre-rational roots of transhumanist values is anti-aging activist and IEET Fellow Aubrey de Grey’s 2008 essay “Reasons and methods for promoting our duty to extend healthy life indefinitely.” De Grey directly addresses Leon Kass’ emotivist argument and turns it on its head. What, de Grey asks, is more repugnant than sickness, aging, and death? Those arguing the anti-aging cause, de Grey concludes, should start from these shared intuitions and prejudices instead of starting from reasoned arguments that presume the “objectivity of morality” and the “unreliability of gut feelings.” When I first heard de Grey’s argument, I demurred, thinking he had given away too much to the emotivists. But that was simply my own fear of letting go of my superior rational ethical viewpoint.
When I imagine the project of Reason, I think of building a house in mid-air. I look over at the other houses floating in mid-air, the pre-Enlightenment houses, and they are ramshackle huts of mud daub and random flotsam, tied up with string. To get from one room to another in our neighbors’ houses, you have to crawl to the basement and then up a laundry chute. They sit in darkened rooms with few windows, and none that show that the house is not in fact rooted to the earth.

With the pure, lean precision of Reason we have built our houses of Kantianism, utilitarianism, liberal democracy, and other clean architectural marvels, Frank Lloyd Wright structures of thought with lots of windows, and even glass floors. But most of us steadfastly ignore the fact that, just like our neighbors, we are floating in mid-air. Acknowledging that we are all in mid-air and don’t know how we got aloft in the first place is damned scary, and we have repeatedly seen people defect from our Enlightenment houses with glass floors to our neighbors’ houses of faith and dogma where they are not forced to look down. We need to learn the courage to acknowledge that we got this thing in the air through an act of will—that Reason is a good tool but that our values and moral codes are not grounded in Reason—or else we will lose many more people to the forces of irrationality in the future.


Adorno, T. W., and Max Horkheimer. 2002. Dialectic of Enlightenment. Trans. Edmund Jephcott. Stanford: Stanford UP.

Berlin, Isaiah. 1998. The Proper Study of Mankind: An Anthology of Essays. Farrar Straus Giroux.

Damasio, Antonio. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. Putnam.

de Grey, Aubrey D.N.J. 2008. Reasons and methods for promoting our duty to extend healthy life indefinitely. Journal of Evolution and Technology 18(1): 50-55.

Glendon, Mary Ann. 1999. Rousseau & the Revolt Against Reason. First Things 96 (October 1999): 42-47.

Gray, John. 1995. Enlightenment’s Wake: Politics and Culture at the Close of the Modern Age. Routledge.

Harding, Sandra. 1982. Is Gender a Variable in Conceptions of Rationality? A Survey of Issues. Dialectica 36: 226-241.

Humanity+. 2003. Transhumanist FAQ.

Kant, Immanuel. 1784. Was ist Aufklärung. Berlinische Monatsschrift Dezember-Heft: 481-494.

Kass, Leon R. 1997. The wisdom of repugnance. The New Republic 216(22): 17-26.

Levin, Yuval. 2003. The Paradox of Conservative Bioethics. The New Atlantis 1(1): 53-65.

Marcuse, Herbert. 1964. One-dimensional man: Studies in the ideology of advanced industrial society. Boston: Beacon Press.

More, Max.  1998. The Extropian Principles v3. Extropy Institute.

Outram, Dorinda. 1995. The Enlightenment. Cambridge, UK: Cambridge University Press.

_____. 2005. The Enlightenment, 2nd ed. Cambridge, UK: Cambridge University Press.

Roache, Rebecca and Steve Clarke. 2009. Bioconservatism, Bioliberalism and Repugnance. Monash Bioethics Review 28(1):4.1-21.

Wittgenstein, Ludwig. 1953/2001. Philosophical Investigations. Blackwell Publishing.

James Hughes Ph.D., the Executive Director of the Institute for Ethics and Emerging Technologies, is a bioethicist and sociologist who serves as the Associate Provost for Institutional Research, Assessment and Planning for the University of Massachusetts Boston. He is author of Citizen Cyborg and is working on a second book tentatively titled Cyborg Buddha. From 1999-2011 he produced the syndicated weekly radio program, Changesurfer Radio. (Subscribe to the J. Hughes RSS feed)


Nice article, in general I agree.  Rationality is a choice, though seemingly a useful one—in most cases, having a more accurate map is the best way to achieve whatever your goals are.  Certainly, rational means could be applied to many different ends, including evil or dishonest ends.

With regards to rationality, my favorite way of seeing it is through the lens of Bryan Caplan’s “rational irrationality”.  Irrationality is an economic good, just like any other, and people often demand it because it can be psychologically satisfying and/or the path of least resistance.  However, when it comes to life-or-death personal choices that matter to the individual, they can be surprisingly rational, because they are willing to forgo the good of irrationality for a compelling enough reason.

It isn’t true that Less Wrong discourages discussion about the relationship of IS and OUGHT.  In fact, such discussions are common there.  For instance, Roko Mijic posted a link to a long doctoral thesis on the topic.  Eliezer’s definition of rationality merely says that it is synonymous with having a map that reflects the territory—he himself points out that it has to do more with IS than OUGHT.  It’s somewhat of an odd accusation, in fact, because much of Yudkowsky’s work, such as his Human Values sequence on Less Wrong, goes into great detail on the question of untangling OUGHT from IS.

Well I agree values can’t be grounded on reason but that doesn’t mean we’re floating in mid-air, values could still be grounded in conscious experience. 

Human conscious intuitions may differ so much simply because human consciousness is fairly weak and unreliable.  Transhumans with stronger powers of introspection (ability to reflect on themselves better) combined with new modes of consciousness (e.g. ability to ‘see’ their own thoughts directly) may agree on much more.  So there could still be some kind of ‘universal morality’.  That is to say, there may be a way to make ‘intuition’ reliable, even though I agree that it can’t be grounded by reason. 

The LW guru you refer to appears to be ‘running scared’ of consciousness.  For instance in his video answers to reader questions he does his best to evade the questions related to subjective experience, and refers to social emotions as ‘danger zones’ and ‘land mines’.  Unfortunately for him, there may be those who know exactly what consciousness is and how to implement it in code.

An excellent article, really. I have always seen rationality as a useful tool, the best tool to achieve measurable goals. But these goals and the values they are based upon are not a product of reason. They are usually, on the contrary, a product of the more veiled and fractal world of emotions, hopes and fears.

I think reason and emotion have common cognitive roots in brain software and underlying physics, which we will soon understand in sufficient detail to modify in a precise and measurable way, or re-engineer in AIs. But I think also AIs, in order to function, will need some kind of emotional systems (not necessarily similar to ours).

I have never been a big fan of Less Wrong and its companion Overcoming Bias, because I question the a-priori need to overcome bias. I love my doggy and don’t like cockroaches, which is certainly a bias. But why should I want to overcome it? I am happy enough with loving my doggy.

Thoroughly fascinating and enjoyable writing. Very glad to find it, especially here. Don’t think I have anything to add, except that at the end of it I really wanted to know where J feels we should go from here. Also wonder if my own preoccupation with the emergence and modulation of values in the brain can help moving forward.

(Also, it’s a small matter but the sentence “Where do the values that from that rationality help us achieve?” looks like it could use some TLC.)

[Ed. - thanks, that’s been corrected.]

@ Anissimov

Glad to hear that LW does indeed discuss the disconnect between IS and OUGHT. I get the impression that many of the advocates attracted to the project are still scrambling for a modern version of natural law, a way to ground ethics and values on reason and empiricism. If the blog and the pursuit of friendly AI helps lay bare that value and ethics have non-rational origins while preserving a respect for the power and importance of rationality - i.e. inoculating against relapses into counter-Enlightenment romanticism and irrationality - that’s great.

@ Marc

I don’t think grounding value in conscious experience gets us much farther. I observe that I have desires, and that I share those desires with many others. But that doesn’t tell me if they are good or bad. That is the IS-OUGHT problem.

@ Giulio

Yes, I think one of the errors of the artificial intelligence crowd has been the presumption that cognitive complexity and rationality would lead to intelligent, self-willed behavior. That is why I have presumed that - for better or worse - intentional or accidental a-life, where the creature is built from the ground level with the goal of self-preservation and reproduction, is far more likely to generate machine intelligence than the coding of elaborate expert systems in boxes. In other words I think a lot of AI research is an effort to program Damasio’s patients, reasoning machines without desires and wills. That is probably a very good thing from a catastrophic risks point of view.


Where do we go from here? For me the Buddhist attempt at dodging the IS-OUGHT problem and building on a naturalistic fallacy has always been the most satisfying: if we examine our minds they want to stop suffering; desire causes suffering; deeply understanding the mind unravels clinging and brings contentment; when we stop tying ourselves up in counter-productive ego knots we become nicer people who want to help others. I don’t claim that the Buddhist system of thought is the only set of conclusions one can draw from empirical investigation of the human condition, only that it has great feng shui and the people living in this house (at least the upper stories) appear to be very happy.  Hopefully neurotechnologies will help us unravel more of the processes of the mind so we all can build our own metaphysics from first principles.

>I don’t think grounding value in conscious experience gets us much farther. I observe that I have desires, and that I share those desires with many others. But that doesn’t tell me if they are good or bad. That is the IS-OUGHT problem.

The relevant wiki links to my favored position:

“Ethical intuitionists claim that only an agent with a moral sense can observe natural properties and through them discover the moral properties of the situation. Without the moral sense, you might see and hear all the colors and yelps, but the moral properties would remain hidden, and there would be in principle no way to ever discover them (except, of course, via testimony from someone else with a moral sense).”

“Moral sense might inform us of the existence of objective morality, just as eyesight informs us of the existence of colors.”

There are different types of conscious feelings and desires, one type must be a particular sense of good/bad (only your conscious awareness of this sense of good/bad can motivate you to consider such questions important in the first place). This type of feeling is judging your other desires.

@ Marc

Yes, I don’t think there is any empirical evidence to support moral intuitionism or moral realism, which is basically a restatement of natural law/naturalistic fallacy. IMO there are no self-evident moral facts or values, no objective morality. There were certainly Enlightenment thinkers like Jefferson who made that argument, but I think Humean skepticism has basically won that Enlightenment argument.

I really find it hard to understand what, if anything, you are saying here.  Our facts should be reasonable, but our values are whatever they are, and values are required to choose acts.  Those of us who don’t want to die, we might sign up for cryonics, and those who do want to die, well it makes sense for them to not sign up.  But if you want to live yet don’t sign up because your beliefs about its effectiveness are not based on enough reason, well you are making a serious and costly mistake.  If you James are trying to defend that kind of unreason, well I can at least understand you, even if I’d disagree.  If not, I just don’t know what if anything you are saying.


I’m critiquing the transhumanist tendency to fetishize rationality and fall back into the naturalistic fallacy, and recommending an acknowledgement of the need to examine and embrace our irrational first principles, desires and values. You appear to be commendably clear on our inability to ground values in reason but many are not.

@James: not the transhumanist tendency to fetishize rationality, but the tendency of some transhumanists to fetishize rationality. I am a transhumanist and, as I said in my previous comment, I don’t fetishize rationality.

I get the impression that many of the advocates attracted to the project are still scrambling for a modern version of natural law, a way to ground ethics and values on reason and empiricism.

This is quite mistaken, and shows a major failure in either the community’s ability to communicate our views or your ability to infer them.  A major part of the Less Wrong arc is about how values are complex, often viewed as a black box when they aren’t, and how different kinds of minds could have entirely different values and still operate just fine.  That’s one of the overarching points of the whole thing.  See the “complexity of value” entry in the Less Wrong Wiki:

You might also recall how Joshua Greene’s Ph.D. thesis gets brought up often on both my blog and Less Wrong.  The whole argument of that thesis is that moral realism is wrong.

I really get the impression here that you’re taking a classic “rationalist” mistake, as in the Objectivist idea that we can use rationality to determine objective morality, and sort of assuming that the current Less Wrong community is making the same mistake, even though I can’t recall any post or instance that ever makes this claim.

Basically, the LW community puts more emphasis on rationality than anyone else, and what you seem to be expressing is the danger of a potential over-fetishism of rationality.  But one of the goals of the whole LW project has been to flesh out the definition of “rationality” in more detail so that it doesn’t retain the simplistic flaws that have given it a bad reputation in the past.  One reason why Eliezer’s definition of rationality works well is that it doesn’t make any specific moral claims.  This dovetails with the AI/economics idea of a utility function being distinct from the machinery that implements it.

James, I heartily agree that ethics are not based in epistemics. Rather, epistemics are based in ethics. Rationality is charity, which gambles (perhaps wrongly) that an altruistic pursuit of power will prove superior to an egotistic pursuit. Rationality presupposes the possibility of commonality, communication and congruence; that I can overcome estrangement with the other, and together we will share experience and understanding. Rationality is epistemic atonement, to use the Christian term.

Excellent article.

Critical thinking may stand in the way of free thought and new ideas however, if too closely scrutinised for rationality and validity. All ideas and views are important and of value enough for argument and debate. But your points concerning lofty heights and houses are well described.

Do transhumanists have lofty values and ideals? I’m still not sure what the real terms are that define the difference from a non-transhumanist: someone that does not agree with cloning? Or someone who frowns on overcoming disease, blindness, or having a robotic arm? My point is that the lines between transhumanist and bio-conservative and those that think critically may be too rigidly defined here. I think that most people would align themselves with transhuman ideals where they are explained and can be understood fully.

Transhumanism is not a philosophy, yet perhaps it should be embraced as one. An extension of humanitarianism and described as the furtherance and progress of humanity and not misunderstood as different from being human and the precursor to posthumanity, (whatever that may be described as). If transhumanism is promoted as being different or even elitist or idealist, then it may be taken and scorned for such.

It is good to hear your views on Buddhism and suffering. Certainly the Buddha was perhaps the forefather of existentialism and personal responsibility, and his doctrines concerning mindfulness and integrity promote rationality and critical thinking. I do believe that progress for trans-humanity lies in debate, ethics and philosophy, and the understanding of who we are which naturally leads to an understanding and embrace of potential and thus of what we may achieve and what we may become.

>IMO there are no self-evident moral facts or values, no objective morality. There were certainly Enlightenment thinkers like Jefferson who made that argument, but I think Humean skepticism has basically won that Enlightenment argument.

According to Wikipedia, Hume is actually equally compatible with both moral realism and moral anti-realism:

Hume’s essential point (which you and I seem to both agree on) is quoted in the ‘Ethics’ section:

“Morals excite passions, and produce or prevent actions. Reason itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason”

But this doesn’t actually contradict moral intuitionism and moral sense theory:

“Arguably the most prominent defender of moral sense theory in the history of philosophy is David Hume (1711-1776). While he discusses morality in Book 3 of his Treatise of Human Nature (1739-40), Hume’s most mature, positive account of the moral sense is found in An Enquiry Concerning the Principles of Morals (1751).”

As to LW folks, they are just confused.  I’m sure Anissimov doesn’t even understand the views of his own guru, who has stated he isn’t a moral anti-realist.  Here’s an example of what appears to be a self-evident moral fact:

*Changes to the brain that improve the capability to make moral judgements are good.

LW folks are contradicting themselves.  First they say they agree with Hume that moral judgements are not based on reason, then they say that they can make a CEV (Coherent Extrapolated Volition) AGI that is entirely based on reason (has no consciousness) yet can perform reflective decision making (deciding what changes are ‘good’) to its own valuation system!

In fact (by Hume) an AGI with no consciousness would be quite unable to assess the truth or falsity of the above * statement, much less perform any reflective decision making!

James, you write: “Most transhumanists argue the Enlightenment case for Reason without acknowledging its self-undermining nature. For instance Max More’s Extropian Principles codified “rational thinking” as one of its seven precepts.’ That is rather frustrating, since I’m extremely familiar with the issues you discuss in this article. The Principles of Extropy are not meant to be, and cannot reasonably be, a complete philosophical statement. What’s more, it seems that you didn’t come across this piece of mine: Pancritical Rationalism: An Extropic Metacontext for Memetic Progress.

If you read that piece and understand PCR, you will see why I agree with you about rationality not being self-justifying and yet disagree that we have a problem in figuring out “how to advocate for rationality in a way that avoids its potential for self-erosion.”

I agree on the silliness of natural law and moral realism, as would most Less Wrong participants. Note that if there were such a natural law, which could magically rewrite the software of a superintelligence (in defiance of physics) to comply with it, there wouldn’t be a ‘Friendly AI problem’. Here’s a link for a Yudkowsky post that entertainingly critiques moral realism:

This article seems like a straw man attack to me.  Who exactly is supposed to believe this stuff?  Yudkowsky is singled out—but he is definitely not in the moral muddle described in this article.

I think there is a lot of noise here. It’s all about what we want and how to achieve it. Ethics is at best a means to an end, and rationality is just a matter of definition. It’s all about practicability, what works. If prayer worked, we’d use it if we wanted to use it. Meanings are not created in reference to other words but ultimately rely on the all-purpose desire within the entity uttering these words. These desires, which turn into volition, are the ultimate truth: the ultimate purpose on which all meaning is based, the subjective first-person knowledge of volition. A truth which is self-evident. Volition is a truth that is adequately proven by circular reasoning. I want what I want, by reason that that’s what I want.

Agreed, XiXiDu—well put. I assume you’ve read Nietzsche, but if you have not then I recommend it.

A very useful intervention. I’m a little leery of some of the historical generalizations, though, and a contextualist insistence on ‘historical situatedness’ implies a synchronic conception of history that is hard to sustain. Derrida, interestingly, is absolutely not a postmodernist in the sense implied here - as his trenchant critique of Foucault’s archaeology of ‘unreason’ testifies. Maybe one way through this impasse is to recognize that the critique of the claims of reason - whether in Hume, Rousseau or Kant - is internal to the tradition of Enlightenment thought.

@ David Roden

Thanks! I completely agree that “the critique of the claims of reason… is internal to the tradition of Enlightenment thought.” That’s why this is framed as the unfinished internal contradictions of the Enlightenment tradition. As for the distinctions between Derrida and Foucault, I confess I’m not as up on them as I should be, and any clarifications are welcome.

@ Max

I don’t grok Pancritical Rationalism. Does it help answer, for instance, why one should believe in extropic principles as opposed to some other set? The self-undermining of values is the tendency to constantly ask why. The Extropian Principles actually always seemed relatively free of any first principle obfuscation in that they basically said “here they are - take ‘em or leave ‘em.”

Rationality is by all means a good tool, but spirituality - that intuitive introspection written of in another comment - should not be ignored in favor of it.  A healthy blend of both will be required as we move forward.  People are not rational creatures to begin with, and quite frankly, spiritual practices such as meditation, controlled breathing, and yoga might actually *increase* rational thinking in an individual, as all three are examples of practices which are for strengthening the mind.

Also remember that some 2,400 years ago, a couple of guys named Leucippus and Democritus had this wild superstitious idea that everyone was made of invisible particles called “a-toms.”  Ben Franklin pondered life extension in one of his essays.  The guiding principles behind transhumanism have their roots in the intuitive, the world of dreams and creativity, the people not afraid to ask “what if?” and then actively try to answer.

When a person dies, they are anatomically no different from when they were alive.  But we all know that something - dare I call it the soul? - leaves the body.  In a world where, for example, no one dies, it is all that much more important to exercise the soul like the muscle it is instead of continuing to rationalize it out of existence.  Leave the rational to rational topics.

I’m not a scholar of the Derrida/Foucault debate on madness, but the key text is Derrida’s ‘Cogito and the History of Madness’ (in Writing and Difference) - a response to Foucault’s Madness and Civilisation (published in French as Folie et déraison). Derrida argues that Foucault’s project of writing from the position of madness as the radical ‘other’ of reason presupposes the logical structures of rational discourse and thus is self-vitiating. Christopher Norris usefully compares Derrida’s argument with Donald Davidson’s anti-relativist arguments in ‘On the Very Idea of a Conceptual Scheme’ in his ‘Textuality, Difference and Cultural Otherness’ in Truth and the Ethics of Criticism.

More broadly, Derrida, like Davidson, consistently argued against the claim that a set of cultural or linguistic practices could constitute a concept or system of concepts. For Davidson our ability to interpret one another by applying norms of truth and logical coherence is more fundamental than the existence of a shared language. For Derrida, the ‘iterability’ or differential repeatability of signs or signifying states is a more basic condition of meaning than the existence of shared conceptual schemes or cultural practices. If we buy into this, the idea of situated rationality (while attractive in many respects) needs to be complicated.

Best wishes,


Amazing, just utterly amazing! The only other thinker/group of thinkers that demonstrated such a breadth of insight is the American philosopher Ken Wilber. Dr. J has an astounding capacity for integrating values of traditional religion, modern Enlightenment, postmodernity, and post-postmodernity in a way that I have not seen for a long time. I look forward to more of Dr. J’s opus integrating spirituality, reason, and transhumanism.

Yes. Postmodernism is a dead end; and so is reason severed from emotion.

Transhumanism is historically situated; there are invariably some hidden, possibly fallacious, a priori assumptions that will be unveiled in post-transhumanist thought. And that post-transhumanist thought will as likely contain some hyperinflated assumptions that need to be, as our postmodern friends like to say, deconstructed. But beyond postmodernism and post-rationalism there is post-postmodernity, or what some of us call “integral thought.” But like transhumanism, integralism is situated, contextual, and only one rung on the spiral of the Grand Dialectics. Any form of post-postmodern thought, be it integral or transhumanist - the healthiest thing it can usher in is to help facilitate the drive toward higher complexity and human consciousness, and not to be an end unto itself, or, as the article states, “a house floating in mid-air.”

In the meanwhile, I think an integration of the Human Potential movement, contemplative spirituality, Ken Wilber’s Integral thought, and general transhumanism is the most sophisticated worldview that we can achieve in the early 21st century—that is, until the advent of the cyberbrain…

As an outsider, I think this article is a good start towards creating a fruitful public discussion of transhumanist objectives and arguments. I respect the courageous self-reflection it takes to realize and acknowledge that your own values do not stem from some purely logical origin. From the standpoint of virtue epistemology, I think this is a worthwhile attitude to model to the members of your own clan, as well as an opportunity to call your interlocutors to greater self-awareness.

However, you should also be aware that rigorous examination does indeed happen (though certainly not often enough) in the so-called “houses of faith”. See, for instance, William P. Alston’s “Perceiving God: The Epistemology of Religious Experience”, Cornell UP, 1993. In fact, though your metaphor of houses up in the air is useful in pointing out the provincial status of so-called “rational arguments” that various groups with competing interests often hurl at one another, what is it exactly that gives transhumanists the rationalist high ground to designate other “houses” as being “ramshackle huts of mud daub and random flotsam, tied up with string”? In other words, what is the standpoint from which you critique ancient and medieval philosophy, for instance? What is it that makes, say, someone like Aristotle or Aquinas “unreasonable”, or “pre-rational”? I would also add that, if transhumanists want to see themselves in continuation with certain aspects of the Enlightenment movement, in addition to being wary of hyper-rationalism, they would be well advised to avoid the similar temptation of scientific hyper-empiricism.

To Carl Shulman: Natural Law and Moral Realism are not the same thing, nor are they necessarily directly related to one another. And, I always find it interesting when someone introduces a general argument into a public forum against moral realism, especially when that argument owes its very existence to realist democratic political principles that provide it with a right to be heard. Not that more nuanced critiques of MR are invalid, but this loose association between the Natural Law and MR seems bereft of, shall we say, rationality. For instance, I don’t think that one can successfully argue that Rawls based his “Theory of Justice” on a set of premises derived from some theory of Natural Law.

David Roden: To what exactly are you referring when you say: “contextualist insistence on ‘historical situatedness’ implies a synchronic conception of history that is hard to sustain”?

Vadim, I’m fully aware of the differences between natural law theory and moral realism, and that people with a vast variety of views describe those views as moral realism (some much less extravagant than others). The “and” between “natural law” and “moral realism” was not superfluous.

Hi Vadim,

Some anti-foundationalists assert that because there are no self-grounding principles we must ultimately appeal to historically contingent epistemic or ethical practices or, as in James’ case, ‘existential choices’. There are a number of problems with this position: it provides a knock-down defense of almost any anti-Enlightenment craziness you might care to imagine on the grounds that ‘this is what folks do around here’. But more problematically, it assumes that the relevant contingencies are not only historical facts but (more importantly) metaphysically constitutive (of our form of life, our culture, our conceptual schemes, whatever…).

This approach reifies forms of life, cultures or praxis in a way that ignores problems of semantic/translational indeterminacy (Quine/Davidson), the structural possibility of re-contextualizing or re-inscribing any practice or choice within a new context (Derrida), or the dependence of all culture upon dynamically changing material conditions (Marx). So in response to the claim that we are situated by this or that, we need to insist on the problem of framing situations and contexts, and their inherent dynamism and openness. If that’s right, it’s a forlorn hope to supplement the deficit of rational self-grounding with irrational self-grounding.

Thanks for the helpful clarification.
“So in response to the claim that we are situated by this or that, we need to insist on the problem of framing situations and contexts, and their inherent dynamism and openness. If that’s right, it’s a forlorn hope to supplement the the deficit of rational self-grounding with irrational self-grounding.” - I agree. Though I think that the impulse of this article is properly directed, it betrays a rather narrow purview of rationality. For instance, I don’t think that emotional, psychological, or moral processes are devoid of rationality, or that faith is necessarily the antithesis of reason. Not all people of faith are fideists, and Carl Jung, for an example, had (in?)famously responded to the question, do you believe in God, by saying: “I don’t believe, I know.”

If someone wants to take a rational stance, that is absolutely legit. It does not, however, by virtue of association, grant one a dictatorship over all things rational and the multitude of contexts where reason is manifested (i.e. science labs, philosophical debates, schools, churches, families, economic and political forums, etc). What I look forward to is a “thick” rational, ethical discourse among all of these spheres, rather than a usurpation of reason by one set of values against all the rest.

This is where the argument goes off the rails: “why should ethics be grounded on observations about human nature and not something else, like ancient religious dogmas?”

It is not ‘human nature’/rationality, but human survival/rights that must be the foundation of ethics for any kind of (Trans)Humanist. The inadequacies of Kant, Postmodernism, etc. are irrelevant. Let’s stick with modern philosophy:

Jonathan Glover’s “Causing Death and Saving Lives” is a modern statement of a Humanist foundation for ethics (that also criticizes Kant, etc.).

“Why should ethics be grounded on observations about human nature and not something else, like ancient religious dogmas?” Do I even need to answer this? Come on, think, you can do it.

“Reason can only be argued for from metaphysical and ethical a prioris, even if those are only such basic assumptions as ‘it is good to be able to accomplish one’s intended goals.’” Some goals are general, and reason can be argued for from an empirical perspective as better enabling us to accomplish those general goals. “Empirical” subjective experience also seems to confirm that good feelings are good and that bad feelings are bad.

“Most tangibly, contemporary neuroscience, also a product of Enlightenment reason, now recognizes that reason severed from emotion is impotent.” Although it could be said that ethics, and thus decision making, is based on the positiveness or negativeness of feelings (not on emotions; there is a difference), the fact that some humans cannot make decisions without emotions (and were emotions ever really separated from thought?) is not an absolute fact, only a contingent characteristic of human brains.

I agree that de Grey’s defense of immortality has a weak base, because immortality is not really necessary, although it may make one feel better, among other things, and thus have utilitarian ethical value.

Nietzsche taught me years ago that pure reason is like a gun which one obtains to protect one’s own but which fires off randomly, rendering it just as likely to blow one’s knees off as to scare away intruders…

“I agree that de Grey’s defense of immortality has a weak base, because immortality is not really necessary”

Think again:

