Transhumanists Coming Out of the Closet


By Phil Torres
Ethical Technology

Posted: Feb 14, 2011

It wasn’t that long ago that listing transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on, as academic interests on one’s CV might result in a bit of embarrassment.

Over just the past decade and a half, though, there seems to have been a sea change in how these issues are perceived by philosophers and other scholars: many now see them as legitimate subjects of research; they have, indeed, acquired a kind of academic respectability that they didn’t previously possess.

There are no doubt many factors behind this shift. For one, it seems to be increasingly apparent, in 2011, that technology and biology are coming together to form a new kind of cybernetic unity, and furthermore that such technologies can be used to positively enhance (rather than merely alter) features of our minds and bodies.

In other words, the claim that humans can “transcend” (a word I don’t much like, by the way) our biological limitations through the use of enhancement technologies seems to be increasingly plausible - that is, empirically speaking.

Thus, it seems to be a truism about our contemporary world that technology will, in the relatively near future, enable us to alter ourselves in rather significant ways. This is one reason, I believe, that more philosophers are taking transhumanism seriously. (In fact, the subject of human enhancement has become a rather prominent one in contemporary Ethics, with philosophers from Frances Kamm to Peter Singer writing about it.)
An important recent event that’s changed the perception of techno-futurological issues among philosophers (if you’ll excuse the odd coinage) was David Chalmers’ presentation at the 2009 Singularity Summit in New York. Chalmers is, by all accounts, one of the most influential contemporary philosophers of mind, and as a result of his work, many in the philosophical community have been persuaded that the Singularity hypothesis is not silly speculation but a robust extrapolation into the future worth thinking about.*

In fact, I recently received an email requesting submissions for an upcoming Springer book entitled The Singularity Hypothesis: A Scientific and Philosophical Assessment. What struck me most is that the request was sent via a large academic mailing list specifically for philosophers. The fact that this announcement went out on such a list - right next to information about conferences on “Psycho-Ontology” and “Emergence and Panpsychism” in my inbox - suggests that techno-futurological issues like the Singularity are gaining a significant degree of academic respect.

Not only are such issues being discussed and written about by established academic philosophers, but philosophers with a prior interest in issues like these are getting quite good positions at venerable institutions. As many readers are aware, Nick Bostrom, who co-founded the World Transhumanist Association (WTA, now dba H+) with David Pearce in 1998, has a position at Oxford University. And the IEET’s own Susan Schneider currently holds a position in the Department of Philosophy at the University of Pennsylvania.

On a personal note, when I first discovered transhumanism, I was extremely skeptical about its claims (which, by the way, I think every good scientific thinker should be). I take it that transhumanism makes two claims in particular, the first “descriptive” and the second “normative”: (i) that future technologies will make it possible for us to radically transform the human organism, potentially enabling us to create a new species of technologized “posthumans”; and (ii) that such a future scenario is preferable to all other possible scenarios. In a phrase: we not only can but ought to pursue a future marked by posthumanity.

Now, if accepting both claims is necessary for one to be a transhumanist, then one might fail to be a transhumanist by rejecting either the first only, or the second only, or both the first and second together.

As I mentioned above, condition (i) is becoming increasingly difficult to reject, if only for empirical reasons. I take it that most bioconservatives, for example, agree with (i) while rejecting (ii): in their view, there are futures preferable to the one in which we cognitively, physically, and emotionally enhance ourselves with technology. It would nonetheless be possible for one to accept (ii) but, for various reasons, reject (i) as implausible.
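To make the combinatorics explicit, here is a schematic gloss (the labels are informal shorthand of mine, with D standing for the descriptive claim (i) and N for the normative claim (ii)):

\[
\begin{aligned}
D \wedge N &: \text{the transhumanist} \\
D \wedge \neg N &: \text{e.g., the bioconservative} \\
\neg D \wedge N &: \text{the sympathetic skeptic, who endorses the goal but doubts the means} \\
\neg D \wedge \neg N &: \text{rejection of transhumanism on both counts}
\end{aligned}
\]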

(I argue in a forthcoming paper, for example, that philosophers didn’t much consider the “meta-philosophical” position that Mark Walker calls “inflationism” - i.e., that we should use technology to create better philosophers rather than, say, deflating the goals of philosophy to better fit our limited philosophical abilities - because (i) didn’t seem all that plausible until relatively recently. Thus, I suspect that philosophers like William James, as well as Bertrand Russell and many of the Logical Positivists, would have agreed with (ii) while rejecting (i).)

Two simple considerations, though, were sufficient to convince me that talk of posthumanity is not - or at least need not be - the result of “irresponsible fantasizing.” First of all, I recognized that the cyborg is already among us: the contemporary world is increasingly cluttered with organism-artifact hybrids, thanks to pacemakers, cochlear implants, pharmaceuticals, and even more mundane objects like glasses, which (some philosophers argue) become an embodied part of our phenomenological selves.

And second, I realized that Darwinian evolution is a “non-teleological” process, which simply means that life isn’t evolving towards any end goal or telos. If there is any global progress in evolution, it is backwards-looking rather than forwards-looking.

Thus, there’s absolutely no reason to think that Homo sapiens will remain in its present form for any significant period of time, since species are dynamically plastic entities, not static types with unchanging essences. It follows that even if radical human enhancements are never actualized, we should still expect our species to undergo evolutionary changes, that is, to the extent that fitness-driven differential reproduction occurs.

Put these two reasons together and one has pretty good reason for thinking that posthumanity is not some wacky or fantastical idea.

But what about the normative question concerning whether we should enhance? Probably the two best arguments, in my opinion, for why the transhumanist “map” for the future ought to be taken seriously are these: first, unless an existential catastrophe occurs, the genetics, nanotechnology, and robotics (GNR) revolution is going to happen. I call this, somewhat facetiously, “ceteris paribus inevitability” - i.e., unless something huge happens to disrupt current trends, these trends will continue no matter what.

Put differently, the idea is this: Yes, a profound catastrophe like an existential risk could indeed prevent radical enhancement technologies from being developed to their fullest. But the development of such technologies will not be stopped by government-imposed moratoria, by international policies of relinquishment, by technophobic disapproval among the population, and so on. The realization of radical human enhancements is in exactly this sense “inevitable, ceteris paribus.”

And second, I became persuaded that a future full of posthumans might actually be preferable (in addition to being virtually unavoidable). One good argument for this comes from Mark Walker, who begins by noting that many GNR technologies are “dual use” in nature: they’ll not only produce great benefits for humanity, but they’ll also introduce a whole panoply of potentially catastrophic risks. Thus, Walker argues (to paraphrase): who better to lead us through this perilous future than a new species of superintelligent and, at best, morally superior posthumans?

It was these (and other related) considerations that convinced me that transhumanism, human enhancement, and so on, have the potential to be as philosophically respectable as any other issue traditionally studied. A similar change of mind, a similar move towards acceptance of and respect for these issues, appears to be occurring now among philosophers throughout academia - especially young philosophers, although I have only anecdotal evidence to support this claim.

I will conclude with this: Nick Bostrom writes in his “A History of Transhumanist Thought” (2005) that he and David Pearce co-founded the WTA “to develop a more mature and academically respectable form of transhumanism, freed from the ‘cultishness’ which, at least in the eyes of some critics, had afflicted some of its earlier convocations.” As far as I can tell, from as distant and disinterested a perspective as I can manage, Bostrom and Pearce’s goal seems to be well on its way to being met.

* Chalmers later published a paper in the Journal of Consciousness Studies expounding the ideas from his Singularity Summit lecture. A PDF of the paper can be found at this link; it’s well worth the read.




COMMENTS


Perhaps something that goes unnoticed by people talking about the singularity is the effect that belief in its inevitability has on those who believe it. Jaron Lanier, author of “You Are Not a Gadget: A Manifesto,” just did an interview with Jerry Brito of the Mercatus Center at George Mason University, and he warned that techno-utopians may be unnecessarily giving up their personal information, and consequently their freedom, to technology and those who control it. The interview can be found at http://surprisinglyfree.com





I think it’s only natural that the idea will keep getting more mainstream as technology progresses. The article in Time magazine about the singularity surprised me, but I think that from now on we will see it pop up more often. Like you said: as time goes on, (i) is becoming very hard to ignore, so the real debate concerning (ii) is about to get a hell of a lot more serious.

Although the majority of people will probably not realise what is going to happen until it hits them in the face, many new voices will start speaking up. It’s going to be nice to see what members of the non-sci-fi-loving demographic have to say about (ii).





China is already in a position to pull the plug on the US economy. Most seem now to assume that she is committed to a capitalist dynamic. Not so. Marxism intends to destroy capitalist economies as an element in global revolution. What grounds are there for imagining that China’s rulers are no longer Marxist?





I’m on board - thanks for the article :)





Mark Walker’s notion of “a new species of superintelligent and, at best, morally superior posthumans” leaves a dual impression on me.

The first part (intellectual superiority) I have no problem with: there is a relatively robust metric for grading intelligence, and virtually every element of intelligence can be improved to enhance overall performance.

The second part (moral superiority) sounds to me both nonsensical and dangerous. How exactly are we to measure this superiority? How can we improve morality? I am fundamentally skeptical that there is anything coherent in that notion, except perhaps eliminating “weakness of the will,” which only pushes the problem one step further back. Moreover, history, from ancient to recent, has shown that moral superiority is more often a dangerous notion than a beneficial one, prone as it is to giving justification for otherwise unconscionable behavior. Virtually every atrocity has been committed from a standpoint of moral superiority. Which gets back to the central question: what constitutes moral superiority?

On a more general point, doesn’t the descriptive claim (i) effectively negate any significance that could be attached to the normative claim (ii)? If GNR posthumanism is, in fact, inevitable, what sense is there in asking whether or not it’s a good thing? Supposing it is a most terrible development in the history of our species, of what significance is that judgment if such an outcome is inevitable? Suppose that our considered normative answer is that we should not enhance ourselves. So what? Claim (i) makes this judgment similar to saying “I ought not fall down when I fall off a ledge” and similarly empty of meaningful content. The only way the normative question has any importance, to my mind, is if the descriptive question is amenable to our control, that is, if it is within our power to make it the case that GNR posthumanism will not happen.





Dmitri, you correctly point out the questionable implications of the syllogism. I think a more useful way to think of this is to see the singularity as not technically inevitable (it isn’t a law of physics, after all), but rather very hard to suppress, and treating it as if it were inevitable serves as ammunition against the bioconservative position.

Combating this position will be very important in the coming years/decades, because attempts to suppress this technology will have far more dire consequences than the supposed pitfalls of the technology itself, and such attempts will likely fail anyway.

Effectively suppressing these technologies would likely require a repressive government, not only foregoing possible gains such as cures for diseases, but also creating evils such as jailing and demonizing technoprogressives, driving research underground where harmful and dangerous consequences are likelier to occur, and, in the case of America, ceding the sure-to-be-massive gains from arriving at such technologies to other countries, which may not be as bound by democratic considerations and principles.





