Rational Capitulationism
Phil Torres
2009-12-28

Intellectual movements that traffic in utopian visions have a way of generating apocalyptic literatures as well, and transhumanism appears to be no exception: the exploding literature on existential risks (no pun intended) is largely due to transhumanist efforts, while the very same intellectuals working on issues of “techno-eschatology” are also responsible for (in my opinion) some rather implausibly utopian visions of the future. In my recent paper [PDF] in the Journal of Evolution and Technology, I attempt a critique of the latter current of transhumanist thought, while simultaneously defending the course of action prescribed by the core transhumanist project: that is, to both world- and person-engineer (in Mark Walker’s terms) according to a calculus that weighs both the potential risks and the potential benefits of specific technological interventions.
 
My critique, which consists of three distinct arguments, specifically targets the transhumanist notion of progress -- a bold, absolute kind of progress possibly governed by one or more cosmic “laws” (as in the “law of accelerating returns”). On my reading, progress constitutes a kind of default background assumption in transhumanism’s futurological picture of human evolution, where such “evolution” occurs through technological “enhancement” and aims to engender a novel species of posthumans. While I certainly grant that progress is easy to believe in, given the undeniable and profound change technology has brought about, I can’t help but scratch a persistent itch of skepticism about whether such change should count as progress in any meaningful sense. As everyone agrees, “progress” is a value-laden term that requires, as its satisfaction condition, change toward an improved state (a black box of sorts left open for an individual, group or society as a whole to fill in).
 
One argument that I provide against progressionism is in no way original: it is the anthropological argument that there exists no transhistorical correlation between technological development and human well-being. My reason for recapitulating this idea is that, while it has become constitutive of the standard anthropological paradigm of today, it remains conspicuously absent from transhumanist discussions of technology and progress. For many techno-progressionists, “primitive” is not a term of description but one of derogation. But as a large mass of anthropological data confirms (accumulated since Marshall Sahlins’ thesis about the “original affluent society” appeared in the 1960s), our ancestors from before the Neolithic revolution lived relatively healthy -- not to mention egalitarian, leisurely, low-social-stress, and so on -- lives. At the risk of oversimplifying, juxtapose their situation with that of modern humans: today, for example, we actually need an academic discipline dedicated entirely to studying the eschatological consequences of our own technological activity. At least from one perspective, then, it seems extremely odd to describe this change as “progressive.”
 
My second argument focuses on how progressionists (both laypersons and scholars) typically present history; it is therefore a point of historiography. As far as I can tell, there exists a strong bias in such presentations toward emphasizing technology’s ability to solve the multifarious problems hindering human well-being. But problems do not exist in a vacuum -- they have causes in addition to their effects. Without a doubt, technology is a magnificent problem-solver. But when one adopts a more panoptic (vs. myopic) view of history -- that is, one that includes within one’s field of vision not just the ways that clever technological fixes eliminate or mitigate the effects of problems but also the causes of those problems in the first place -- the history of technology appears rather less impressive. This is because, on my reading of history, a significant majority of the problems that technology has solved were themselves caused, or at least enabled, by prior technological systems. Such problems are in some sense “technogenic.”
 
For example, the New York Times has been reporting recently on the “worsening” problem of toxins in public water systems. Although the focus is the U.S., unhealthy water supplies are a problem of global proportions. The most obvious solution, but also the most difficult, would be for every citizen of Earth to immediately stop participating in any technological activity that produces water pollutants. (This would include not taking the many prescription pharmaceuticals most Americans ingest each day.) But, given the inextricable connection between modern society and technology, this possible solution fails for the obvious logistical reasons. More likely, specialized apparatuses allowing one to filter toxins out of drinking water will be designed and subsequently distributed around the world. While this would certainly be, in itself, a good thing, it would hardly count as genuine progress. After all, our ancient ancestors drank toxin-free water too, but they did it without expending huge amounts of energy inventing, manufacturing and distributing such corrective apparatuses.
 
Thus, when one adopts a historiography that looks at the causes of our problems in addition to their solutions -- or so I argue -- a far less progressive picture of history emerges. This thesis leads directly to my final “futurological” argument (which actually appears first in the paper): not only does history appear non-progressive, but phenomena like the “existential risks” discussed by sober transhumanists actually suggest a regressive historical trend. (Note that being non-progressive does not entail being regressive, since history could merely be neutral with respect to this binarism.) But when one peruses the literature on existential risks, an ominous and indeed rather unambiguous pattern appears through the hazy mist of one’s techno-optimism: not only has the number of existential risks* been increasing rapidly (beginning in 1945), but the probability of one being actualized is quickly growing too. This observation leads me to wonder about the possibility of an “existential risk singularity,” or “ERS,” which would (by stipulative definition) virtually guarantee the self-immolation of our species at some future point.
 
The ERS hypothesis is, of course, highly speculative (as is “the Singularity,” for that matter). But it appears to be at least relatively well-supported by the available data. Consider Bostrom’s 2002 paper, in which he adds the technical notion of existential risk to the futurologist’s lexis: since WWII, Bostrom notes, the number of civilization-ending scenarios has expanded significantly, from ~1 to ~23, and indeed the most worrisome risks seem to derive not from present-day artifacts but from anticipated future technologies -- especially those of the inchoate genetics, nanotechnology and robotics (GNR) revolution. Thus, it is not a large inferential leap from this ostensible fact to the hypothesis that, at some unspecified point in the future, a kind of historically unique “singularity” might be reached, beyond which the growth of existential risks would transcend what we “normals” could possibly comprehend -- or something like that.**
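To make the structure of this worry concrete, here is a minimal toy model -- my own illustration, not anything found in Bostrom’s paper. It stipulates that the risk count grows exponentially (calibrated to the ~1-in-1945 and ~23-in-2002 figures above) and that each risk independently carries a small, fixed annual probability of actualization; both assumptions, and the particular numbers, are for illustration only.

```python
import math

# Stipulated calibration: ~1 existential risk in 1945, ~23 by 2002
# (the counts mentioned above); exponential growth in between.
R0, T0 = 1.0, 1945
R1, T1 = 23.0, 2002
GROWTH = math.log(R1 / R0) / (T1 - T0)  # implied annual growth rate (~5.5%)

P_PER_RISK = 1e-4  # stipulated annual actualization probability per risk

def risk_count(year):
    """Exponentially extrapolated number of existential risks."""
    return R0 * math.exp(GROWTH * (year - T0))

def annual_catastrophe_prob(year):
    """P(at least one risk actualizes that year), risks independent."""
    return 1.0 - (1.0 - P_PER_RISK) ** risk_count(year)

def cumulative_catastrophe_prob(start, end):
    """P(some risk actualizes at some point between start and end)."""
    survival = 1.0
    for year in range(start, end):
        survival *= 1.0 - annual_catastrophe_prob(year)
    return 1.0 - survival

for horizon in (2050, 2100, 2200):
    print(horizon, round(risk_count(horizon)),
          round(cumulative_catastrophe_prob(2009, horizon), 3))
```

On these stipulated numbers the cumulative probability of self-annihilation approaches unity well before 2200. The point is not the particular figures, which are arbitrary, but the structural one: exponential risk growth unmatched by falling per-risk probabilities makes actualization all but inevitable -- which is just what the ERS would stipulate.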
 
Now, the significance of the ERS hypothesis is fundamentally cautionary in nature: if anything, this hypothesis should serve to underline the importance of proceeding down the road of GNR technologization with as large a dose of reflective thought and contemplative wisdom as of instrumental rationality and puzzle-solving ingenuity. (I am reminded of Erich Fromm’s lament that we have the “know-how,” but not the “know-why” or “know-what-for.”) And this leads to the final thesis of my cluttered paper, namely that there is no contradiction in both rejecting the existence of an absolute kind of technological progress (leading to improved states of human existence) and endorsing the moral imperative of transhumanism to develop more “advanced” posthuman creatures.***
 
This is, in fact, the position that I tentatively adopt -- “rational capitulationism” -- since it seems to me not just that the future is (in some important sense) rather dismal, but also that transhumanism fares comparatively well with respect to the alternative futurological programs available. Relinquishing an entire domain of emerging technology, or imposing a blanket moratorium on person-engineering, seems entirely implausible to me. While transhumanism might be difficult to implement and vexed by certain philosophical problems (e.g., those concerning personal identity), it does not seem “entirely implausible.” Thus, by a process of elimination, I come to endorse the transhumanist option.

Another way of thinking about this thesis is by way of a distinction between practical optimism and theoretical pessimism: while I am, intellectually, rather pessimistic about the prospects of (post)human survival, I am nonetheless enthusiastic about working assiduously to do everything possible to ameliorate our common existential situation (not unlike Dale Carrico’s position outlined in this post). And based on my analysis of the alternative programs, transhumanism seems to provide the best way of doing this -- even if the outcome turns out to be sub-optimal, or even (in some sense) undesirable.

The arguments I provide are incipient. My goal was to articulate a reasonable point of departure for further discussion about transhumanism, progress, and the future, rather than to provide a definitive conclusion (even if points are sometimes couched in definitive language). And such discussion should, I believe, be conducted in the fallibilistic spirit of what Bostrom has described as the “willingness to reexamine assumptions as we go along.” It is for precisely this reason that my endorsement of normative transhumanism -- although robust -- remains tentative. More work remains to be done.


* Note an ambiguity with the term “existential risk,” which could refer either to a type of risk or to a particular token risk. This dual signification could be of import in discussions about existential risks: for example, one might read that the number of existential risks has not increased, and therefore (ceteris paribus) infer that the likelihood of self-annihilation -- through error, terror, or whatever -- remains stable. But this inference would only be warranted on the token reading of “existential risk.” It may well turn out that the number of existential risk types has not changed while the total quantity of those types’ tokens has grown prodigiously; if so, the likelihood of self-annihilation may well have significantly increased. Thus, in any serious discussion of existential risks, the term ought to be type-token disambiguated.
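A toy calculation makes the stakes of the disambiguation vivid (again, the numbers are stipulated purely for illustration, not drawn from any actual risk inventory): hold the number of risk types fixed and let only the token count grow.

```python
P_TOKEN = 1e-5  # stipulated annual actualization probability per token

def annual_self_annihilation_prob(n_tokens):
    """P(at least one token actualizes), tokens treated as independent."""
    return 1.0 - (1.0 - P_TOKEN) ** n_tokens

N_TYPES = 23  # the type count is held constant throughout
for n_tokens in (23, 2300, 230000):
    print(N_TYPES, n_tokens, round(annual_self_annihilation_prob(n_tokens), 4))
```

The type count never moves, yet the annual probability of self-annihilation grows by several orders of magnitude as tokens proliferate -- exactly the equivocation the type-token distinction guards against.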

** One could, of course, accuse me of swinging the historiographic (or futurological) pendulum too far in the other direction by focusing exclusively on the creation of new problems, thus neglecting the possibility of technologically fixing them too. Technologists may well construct “nano-immune” systems to protect against ecophagy, just as Dean Kamen is developing an effective water purification system for citizens of developing nations (U.S. Patent 7,340,879), and so on. An exponential growth of existential risks may be matched by a commensurate growth of defensive technologies sufficient for obviating disaster. Nonetheless, for the purposes of assessing the potential drawbacks of advanced technologies, and thus acting in an appropriate manner, I have asymmetrically focused my attention on existential risk etiology rather than on possible courses of treatment.

*** I put “advanced” in scare quotes because, like “progress,” this term is value-laden (at least in the present context). Simply put, it makes no sense to talk about anything X being advanced simpliciter -- one can only evaluate how advanced X is relative to some set of axiological criteria. (Of course, once one has such criteria, it is an objective matter whether X satisfies them.) And, furthermore, getting others to accept one’s own preferred criteria requires convincing argumentation. One should, therefore, always be at least initially suspicious of talk about “advancement,” as well as “limitations” and “enhancement,” until the particular values hiding behind these terms are made explicit.