What are ‘biological limitations’ anyway?
Phil Torres
2010-02-18

There are, of course, an indefinite number of things -- actions that could be done, thoughts that could be had, etc. -- that we humans cannot do or have. But simply not being able to accomplish something A does not mean that the biological feature F preventing us from doing A automatically constitutes a limitation. Or at least not in the sense of “limitation” used by transhumanists. The sea squirt, for example, can “eat its own brain,” but few would consider the human inability to do the same to be limiting. How, then, do we decide that this feature, and not that one, constitutes a limitation arising from our human biology?
A couple of important points about the notion of biological limitations are worth making. First, it is normative: labeling something F a “biological limitation” is equivalent* to saying (something like) that F is undesirable, that F ought to be overcome. This contrasts with, for example, the statement that “I am exactly five feet tall,” which purports to be a value-free report. But if one were to say “Being five feet tall is a limitation,” one would immediately suggest that this particular height is a problem in need of fixing. Maybe there are people who actually believe this (e.g., a basketball player), in which case being five feet tall really would be a limitation for them. This leads to a second important point: biological limitations are not objective properties that exist independent of any system of values. Rather, they only exist, and are thus only specifiable, relative to a particular value-system, or axiology.

(Note that it’s possible for something to be both normative and objective: moral realists, for example, hold that certain moral imperatives are both objective and normative.)

In the Socratic spirit, philosophers (and most other academics) are motivated to give reasons for the various beliefs they hold. It is not enough to simply believe that (e.g.) the Singularity is just around the corner; one must justify this claim. (Nick Bostrom’s Simulation Hypothesis provides a good example of an assertion that seems at first glance to be outrageously false, yet becomes increasingly plausible as one considers the reasons Bostrom gives for accepting it.) The question, then, is: Can the transhumanist give good reasons for, or justify, the system of values he or she holds dear and from which his or her particular list of biological limitations derives? (A list that includes “not being able to live longer than a tortoise = a limitation” but does not include “not being able to eat our own brain = a limitation.”)

Take a closer look at what sort of things transhumanists identify as falling within the extension of “biological limitations.” In my perusal of the literature, I have often come across transhumanists complaining about such things as: the slow speed of cerebration, the mind’s limited data-storage capacities, the unreliability of love and other interpersonal relations, our inability “to visualize an [sic] 200-dimensional hypersphere or to read, with perfect recollection and understanding, every book in the Library of Congress,” and so on. While I am not (at least not necessarily) arguing against the claim that such features are limitations, I am urging special caution in labeling them as “limitations.” Why? Because, as far as I can tell, many of the values hiding behind the transhumanist’s list of limitations derive from (the domain of) technology itself -- or at least it is not unreasonable to be suspicious of the origin of such values.

A number of critics** of technology, at least since the 1960s, have expressed concern that the “norms and standards” of technology might be insidiously infecting our human value-system in ways that are not always obvious to us.
Consider the difference between wanting to make love more reliable: a) because reliability conduces to greater human happiness (e.g., by decreasing the probability of a traumatic break-up, or so one argument might go); and b) because reliability is an attribute exemplified by machines, and as machines ourselves (albeit “squishy” ones) it therefore follows that we should be reliable too (and, of course, the way to do this is through enhancive techno-interventions). In the first case, the end is a wholly non-technological value, namely human happiness, while in the second the end is reliability itself -- that is, reliability as a characteristic feature of technology. (Note that while technology may provide a model for reliability in the first case, it is still relegated to being a means rather than, as in the second case, being both a means and the end’s source.)

Some theorists use the term “normative determinism” to refer to the phenomenon whereby the norms of technology come to universally dominate (at most all, or at least most, of) the domains of human experience, thought and activity. Technology is not neutral in this (axiological) sense, as the common “tool-use” model suggests. In a different terminology, Langdon Winner calls this “reverse adaptation,” tersely defining it as “the adjustment of human ends to match the character of the available means.”

Winner further explicates:

Persons adapt themselves to the order, discipline, and pace of the organizations in which they work. But even more significant is the state of affairs in which people come to accept the norms and standards of technical processes as central to their lives as a whole. A subtle but comprehensive alteration takes place in the form and substance of their thinking and motivation. Efficiency, speed, precise measurement, rationality, productivity, and technical improvement become ends in themselves applied obsessively to areas of life in which they would previously have been rejected as inappropriate.


In my view, the fact that such values (“norms and standards”) might have been previously “rejected as inappropriate” in a given domain of human experience, thought or activity is completely immaterial. What matters are the reasons such (technologically-derived) norms and standards come to be constitutive, in some way, of one’s value-system, influencing means and/or ends. If these values, whatever their origin, serve human goals, then fine; if not, then we ought to think hard about whether or not we should accept them.

In closing, consider this: Benjamin Franklin and Thomas Jefferson were both Enlightenment progressionists (i.e., they believed that technology-driven progress is an historically real phenomenon). But “for them, progress meant the pursuit of technology and science in the interest of human betterment (intellectual, moral, spiritual) and material prosperity. Both men emphasized that prosperity meant little without betterment; a proper balance between them had to be maintained.”

In opposition to this technology-as-a-means position, there arose a more “technocratic” view -- championed by the likes of Alexander Hamilton and Tench Coxe -- in which technological innovation and economic dominance came to be seen more as ends-in-themselves. As one scholar puts it, Hamilton’s and Coxe’s position “openly attributed agency and value to the age’s impressive mechanical technologies and began to project them as an independent force in society,” thereby “[shifting] the emphasis away from human betterment and toward more impersonal societal ends, particularly the establishment of law and order in an unstable political economy.”

My thesis here is not that we shouldn’t use technology to increase the speed of cerebration, indefinitely extend our lifespans, or help us “read, with perfect recollection and understanding, every book in the Library of Congress,” etc. Rather, I merely want to emphasize that those of us on the transhumanist side of the biopolitical spectrum, those of us who hold permissive views about the development of “person-engineering” technologies, should be constantly reflecting on, and critical of, the (sometimes hidden) sources from which our values derive. After all, the kinds of technologies we develop will depend on which features of the human organism we identify as “limitations” -- and which features we identify as “limitations” crucially depends on the system of values that we espouse, whether consciously or unconsciously. The point therefore is to be aware of these values and their origins, and then to scrutinize them.

Technology should, in my opinion, be a means to our own ends -- to human or posthuman betterment. But what I often find in the transhumanist literature is evidence of Winnerian reverse adaptation -- of an infatuation with technology leading to a list of biological limitations that, I believe, cannot always be justified.

* At least in the present context. There is clearly much more to be said about this issue: for example, one might call the giraffe’s neck a “limitation” when it’s not long enough to reach the leaves of a tall tree (as in Lamarck’s hackneyed example). In the present article, I ignore such evolutionary cases.

** See, for example, Jacques Ellul’s The Technological Society (1964).