The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States. Please give as you are able, and help support our work for a brighter future.



What are ‘biological limitations’ anyway?


By Phil Torres
Ethical Technology

Posted: Feb 18, 2010

The express aim of enhancement technologies is to overcome our biological limitations: cognitive, emotional and healthspan-related. But what is almost always tacit in discussions of human enhancement is the issue of what exactly constitutes a biological limitation.

There are, of course, an indefinite number of things—actions that could be done, thoughts that could be had, etc.—that we humans cannot do or have. But the mere fact that we cannot accomplish some action A does not mean that the biological feature F preventing us from doing A automatically constitutes a limitation. Or at least not in the sense of “limitation” used by transhumanists. The sea squirt, for example, can “eat its own brain,” but few would consider the human inability to “eat its own brain” limiting. How, then, do we decide that this feature, and not that one, constitutes a limitation arising from our human biology?
A couple of important points should be made about the notion of biological limitations. First, it is normative: labeling some feature F a “biological limitation” is equivalent* to saying (something like) that F is undesirable, that F ought to be overcome. This contrasts with, for example, the statement that “I am exactly five feet tall,” which purports to be a value-free report. But if one were to say “Being five feet tall is a limitation,” one would immediately suggest that this particular height is a problem in need of fixing. Maybe there are people who actually believe this (say, a basketball player), in which case being five feet tall really would be a limitation for them. This leads to a second important point: biological limitations are not objective properties that exist independent of any system of values. Rather, they only exist, and are thus only specifiable, relative to a particular value-system, or axiology.

(Note that it’s possible for something to be both normative and objective: moral realists, for example, hold that certain moral imperatives are both objective and normative.)

In the Socratic spirit, philosophers (and most other academics) are motivated to give reasons for the various beliefs they hold. It is not enough to simply believe that (e.g.) the Singularity is just around the corner; one must justify this claim. (Nick Bostrom’s Simulation Hypothesis provides a good example of an assertion that seems at first glance to be outrageously false, yet it becomes increasingly plausible as one considers the reasons Bostrom gives for accepting it.) The question thus is: Can the transhumanist give good reasons for or justify the system of values he or she holds dear and from which his or her particular list of biological limitations derives? (A list that includes “not being able to live longer than a tortoise = a limitation” but does not include “not being able to eat our own brain = a limitation.”)

Take a closer look at what sort of things transhumanists identify as falling within the extension of “biological limitations.” In my perusal of the literature, I have often come across transhumanists complaining about such things as: the slow speed of cerebration, the mind’s limited data-storage capacities, the unreliability of love and other interpersonal relations, our inability “to visualize an [sic] 200-dimensional hypersphere or to read, with perfect recollection and understanding, every book in the Library of Congress,” and so on. While I am not (at least not necessarily) arguing against the claim that such features are limitations, I am urging special caution in labeling them “limitations.” Why? Because, as far as I can tell, many of the values hiding behind the transhumanist’s list of limitations derive from (the domain of) technology itself—or at least it is not unreasonable to be suspicious of the origin of such values.

A number of critics** of technology, at least since the 1960s, have expressed concern that the “norms and standards” of technology might be insidiously infecting our human value-system in ways that are not always obvious to us.
Consider the difference between wanting to make love more reliable: a) because reliability conduces to greater human happiness (e.g., by decreasing the probability of a traumatic break-up, or so one argument might go); and b) because reliability is an attribute exemplified by machines, and as machines ourselves (albeit “squishy” ones) it therefore follows that we should be reliable too (and, of course, the way to do this is through enhancive techno-interventions). In the first case, the end is a wholly non-technological value, namely human happiness, while in the second the end is reliability itself—that is, reliability as a characteristic feature of technology. (Note that while technology may provide a model for reliability in the first case, it is still relegated to being a means rather than, as in the second case, being both a means and the end’s source.)

Some theorists use the term “normative determinism” to refer to the phenomenon whereby the norms of technology come to dominate most, if not all, of the domains of human experience, thought and activity. Technology is not neutral in this (axiological) sense, contrary to what the common “tool-use” model suggests. In a different terminology, Langdon Winner calls this “reverse adaptation,” tersely defining it as “the adjustment of human ends to match the character of the available means.”

Winner further explicates:

Persons adapt themselves to the order, discipline, and pace of the organizations in which they work. But even more significant is the state of affairs in which people come to accept the norms and standards of technical processes as central to their lives as a whole. A subtle but comprehensive alteration takes place in the form and substance of their thinking and motivation. Efficiency, speed, precise measurement, rationality, productivity, and technical improvement become ends in themselves applied obsessively to areas of life in which they would previously have been rejected as inappropriate.

In my view, the fact that such values (“norms and standards”) might have been previously “rejected as inappropriate” in a given domain of human experience, thought or activity is completely immaterial. What matters are the reasons such (technologically-derived) norms and standards come to be constitutive, in some way, of one’s value-system, influencing means and/or ends. If these values, whatever their origin, serve human goals, then fine; if not, then we ought to think hard about whether or not we should accept them.

In closing, consider this: Benjamin Franklin and Thomas Jefferson were both Enlightenment progressionists (i.e., they believed that technology-driven progress is an historically real phenomenon). But “for them, progress meant the pursuit of technology and science in the interest of human betterment (intellectual, moral, spiritual) and material prosperity. Both men emphasized that prosperity meant little without betterment; a proper balance between them had to be maintained.”


In opposition to this technology-as-a-means position, there arose a more “technocratic” view—championed by the likes of Alexander Hamilton and Tench Coxe—in which technological innovation and economic dominance came to be seen more as ends-in-themselves. As one scholar puts it, Hamilton’s and Coxe’s position “openly attributed agency and value to the age’s impressive mechanical technologies and began to project them as an independent force in society,” thereby “[shifting] the emphasis away from human betterment and toward more impersonal societal ends, particularly the establishment of law and order in an unstable political economy.”

My thesis here is not that we shouldn’t use technology to increase the speed of cerebration, indefinitely extend our lifespans, or help us “read, with perfect recollection and understanding, every book in the Library of Congress,” etc. Rather, I merely want to emphasize that those of us on the transhumanist side of the biopolitical spectrum, those of us who hold permissive views about the development of “person-engineering” technologies, should be constantly reflecting on, and critical of, the (sometimes hidden) sources from which our values derive. After all, the kinds of technologies we develop will depend on which features of the human organism we identify as “limitations”—and which features we identify as “limitations” crucially depends on the system of values that we espouse, whether consciously or unconsciously. The point therefore is to be aware of these values and their origins, and then to scrutinize them.

Technology should, in my opinion, be a means to our own ends—to human or posthuman betterment. But what I often find in the transhumanist literature is evidence of Winnerian reverse adaptation—of an infatuation with technology leading to a list of biological limitations that, I believe, cannot always be justified.


* At least in the present context. There is clearly much more to be said about this issue: for example, one might call the giraffe’s neck a “limitation” when it’s not long enough to reach the leaves of a tall tree (as in Lamarck’s hackneyed example). In the present article, I ignore such evolutionary cases.

** See, for example, Jacques Ellul’s The Technological Society (1964).




COMMENTS


Very thoughtful comments, Philippe. You crystallize what I believe is a general qualm about transhumanism: that the thrall of technology obscures the search for eudaimonia.





There is a tautology in many Transhumanist arguments that seems to equate to, “The ends justify the means, which justify the ends.”

Many Transhumanists ignore the human factor. They propose speculative futures where they will or might do various things to the rest of the world with the use of technology. During such speculation, they speak as if the seven billion other people on earth don’t have any input and don’t deserve to have any input.

Do I want to see various and numerous technological enhancements available to humanity? Of course. (I’d like to have my 22 y/o body returned to me, please. This 52 y/o version is a bit limited.)

...but if I want to live a non-technological lifestyle, I do hope the Transhumanists allow me the freedom to do so. The way most of them present their ideas, I won’t have any choice in the matter. In other words, the author is quite correct. Transhumanists haven’t presented a coherent philosophy for their ‘movement,’ nor have they fully explored the epistemology of their own movement in any real depth.

  Otherwise, bring it on. ‘Cause trying to read everything in what appears to be six-point font w/o corrective lenses is a real pain.





@Warren: ...but if I want to live a non-technological lifestyle, I do hope the Transhumanists allow me the freedom to do so.

At least this transhumanist can only be happy for you, if you find a way to be happy without making others unhappy.

I respect everyone’s right to self-ownership. Provided, of course, they respect mine.





For me, a limitation is whatever I consider as a limitation. Not being able to live longer than a tortoise is a limitation, because I wish to live longer than a tortoise.  Not being able to eat my own brain is not a limitation, because I don’t wish to eat my own brain.

For you, a limitation is whatever you consider as a limitation, and we do not necessarily have to agree.

I support everyone’s right to try overcoming whatever it is that they consider as limitations, provided they don’t concretely harm others.





Philippe, I think you make great points here about the unwarranted claims of many transhumanists who decontextualize their claims of “biological limitations.” A similar point can be made about “disabilities,” which are inabilities within a social and physical context (e.g., a person with dyslexia is not disabled in an illiterate society).

I take something of an issue with Giulio’s construction of “limitation” because it remains decontextualized. A solipsistic definition of “limitation” fails on both moral and social grounds. On the moral ground, I could treat people I compete with for resources as limitations, and thereby justify the means necessary to eliminate them. On the social ground, the hyper-subjectivity of “you believe what you believe, I believe what I believe” results in a failed social structure, whereby meaning, and therefore communication and social interaction, are radically undermined.

What makes Philippe’s analysis so pertinent here is his introduction of axiology, which demands context and a secure foundation for one’s construction of values. The limitation preventing one value, say health, must be judged in relation to others, such as justice and liberty. Our differences in values are what cause our different conceptions of limitation, but the interaction of values, both externally among others and within our own internal hierarchy, is what actually defines our understanding of that limitation as something to be fixed or something to be respected.

@ Warren: Awesome last name. It’s, dare I say, transhumanesque? My favorite short piece on freedom and transhumanism is Ronald Bailey’s “Transhumanism and the Limits of Democracy.” Check it out: <http://reason.com/archives/2009/04/28/transhumanism-and-the-limits-o>





I hadn’t read that particular article by Bailey. Thanks for pointing it out, Kyle.

Although I have doubts about some of Rawls work, parts of it are worth consideration. I have similar problems with some of Rousseau’s works. I’m more of a Hobbes/Locke man, really.

I have no problems with anyone embracing technology to any extent they choose.

As for my surname, it originates in the region north of the Caucasus Mountains. Old, ancient kinda stuff. Bohnenstiehl; alternatively, Bohnenstiehlen. The best translation I can work out is “The Green’s Lady’s Castle.”





@Kyle: I take something of an issue with Giulio’s construction of “limitation” because it remains decontextualized. A solipsistic definition of “limitation” fails on both moral and social grounds. On the moral ground, I could treat people I compete with for resources as limitations, and thereby justify the means necessary to eliminate them. On the social ground, the hyper-subjectivity of “you believe what you believe, I believe what I believe” results in a failed social structure, whereby meaning, and therefore communication and social interaction, are radically undermined.

No, because I said the magic words: “provided they don’t concretely harm others”.

I don’t understand what you mean by “a failed social structure”. In my book, a failed social structure is one which negates the autonomy and self-ownership of individual members of society for no practical and useful reason. Contrary to what you say, “you believe what you believe, I believe what I believe” is one of the main requirements for a healthy society.

As long as I don’t concretely harm you, what I believe is my own business. I insist on concretely because “moral indignation” and similar crap does not qualify as “harm”. If you don’t like my chosen lifestyle, sexual or religious preferences, just look the other way and live your own life. Same for what I choose to consider as a limitation, and what I choose to do to overcome it (provided, once again, nobody else is harmed as a result).





First, it is normative: labeling something F a “biological limitation” is equivalent* to saying (something like) F is undesirable, that F ought to be overcome.

No. It is equivalent to saying that the speaker considers F undesirable, and that (s)he wishes to overcome it.





An interesting, thought provoking essay.  Thank you.




