Against Relinquishment
Franco Cortese
2013-07-22

But they both agree on the underlying premise that technologies can and likely will have profoundly transformative effects on self and society. They agree not only that we have the power to shape the outcomes such technologies foster – to affect and to a large extent determine their ultimate embodiments and repercussions – but also that such technologies impel us to make concerted efforts toward determining those embodiments and repercussions!

It may not look that way from the inside out, but they are fighting to realize their vision of Humanity’s brightest future. Until we reach the day when the majority of humanity has extensively acknowledged the expansive power such transformative technologies hold, Techno-optimists & Techno-pessimists, Transhumanists & Luddites, and Revolutionaries & Revivalists alike are on the same side! Both camps are on a campaign to alert planet earth to the titanic transformations rushing foreforth upon its horizons. Both agree on the underlying potential such technologies hold for changing the world and the self – whether encased as Prized Present or in Pandora’s Box – and both are eager for the world to wake up and smell the rising.

And besides, we’re all in it together, no? At least Techno-pessimists are thinking about such issues, and putting forth their appraisals. At least they’ve begun to consider what is at stake. Is a techno-pessimist closer to a Technoprogressive or Transhumanist than one who doesn’t take a stance either way? Perhaps, even if it makes for a tough pill to swallow and a gruff pull to follow.

Not that the likes of Leon Kass, Francis Fukuyama and other such Neo-Luddites, Developmental Critics, or Anarcho-Primitivists are to be heralded or left to lie without rebuttal. Their pessimism still does palpable harm, as the delays in stem-cell research caused by G.W. Bush’s President’s Council on Bioethics evidenced. Thus we shouldn’t simply smile politely and send them on their merry way… But neither should we automatically jump to out-snuff their wild-fires of panic.

We should instead let them whip up their frenzies, let them promulgate the sentiment that technology can and likely will have profoundly transformative effects on self and society, but be there waiting in the wings, to attest to Icarus’s insight and to offer Prometheus a light.

Let them have their say, because it increases public awareness of the transformative potential of high technology, and because it clues people in to the fact that many dangers are possible with these technologies (even if we disagree on the nature and extent of those dangers) – but be sure to be there waiting, ready to refute their specific and untenable solutions, not their call for caution in the first place. We are right to simultenaciously fear and hope for technology’s transformative potentials. But considering that Neo-Luddites and Technoprogressives alike agree on the transformative and world-whirling capabilities of high technology, is it more likely that we can take these technologies in hand and shape the course of their eventual realization by outright relinquishment, or by taking advantage of those very transformative potentialities so as to increase our ability to shape them, in a self-recursive feedback loop fitting for Man, the self-shaping shaper?

The very beliefs that Neo-Luddism shares with Technoprogressivism and Transhumanism constitute one of the best reasons for arguing that its specific approach – outright relinquishment – is untenable. Neo-Luddites seek to point out the massively transformative potential of technology, and then use this as justification for relinquishment, as a way to mitigate technology’s dangers and ameliorate its potential downfalls. We take their approach, pat them on the back (not too heartily, of course) for their starting point, and then flip the course around. We too seek to point out the massively transformative potential of technology, but instead of arguing for its relinquishment (using those transformative potentialities as justification), we argue that those same transformative potentialities actually increase our potential to successfully shape technology’s outcomes and mitigate its potentially problematizing aspects!

This appears to be a distinct and heretofore-marginalized, if not wholly unacknowledged, position on the impact of high technology. The predominant opinion is that our potential to shape technology into forms that embody our values (e.g. ethicacy, safety) will decrease as we move into the future and as high technology (1) becomes more and more transformative and (2) develops at an ever-accelerating rate. By contrast, the position argued here holds that the very same technologies that constitute the source of increasing unpredictability and increasing variability can be used to increase our own ability to shape the ultimate embodiments and consequences of such technologies.

We forget that the very same technological infrastructure that has the potential to bring about Singularitarian Strong AI (i.e. computers), the epitome of unpredictability, is the very same infrastructure that we have, since the computer’s inception, used to track trends and to evaluate and extrapolate statistical correlations. We forget that the radically disconcerting potential for existential risk posed by an intelligence explosion à la I.J. Good can itself be mitigated by implementing a maximally distributed intelligence explosion, in which everyone is given the opportunity to amplify their intelligence at an equal rate so as to prevent the accumulation of too much power (i.e. capability to effect change in the world) in one mind.

We forget, in short, that it is because high technology has such transformative potentials that our ability to utilize it so as to increase our own ability to shape it is even possible.

What are the chances that as soon as it becomes possible to use technology in massively immoral ways, we also gain the ability to shape and affect the parameters of our own morality? That as soon as we gain the potential to use technology in stupendously stupid ways (i.e. without consideration of consequences), we also gain the potential to amplify and augment our own intelligence?

What are the chances that as soon as technology seems to be building upon itself in an unending upward avalanche of momentous momentum, we also gain – through the use of those very same technologies – the ability to better forecast cascading causes and effects into the postmost outpost and to better track trends into the forward-flitting future? I’m not saying that this is inevitable or ipso facto the case – only that it is a conceivable notion, and possible to a much greater extent than has heretofore been realized, I think. A closed circle can seem like just that, until adding a vertical dimension reveals that it was an upward spiral all along. We’ve turned upon ourselves to find (or realize) ourselves at least once before, when meat went meta and matter turned upon itself to make mind. Perhaps this was but an echo through time of that final feedback for forward freedom we stand to face, upright and with eyes sun-undaunted, in a future so near that it might as well be here (or so near that we’d better start acting as though it were), where the fat of fate is now kindled anew to light our own spindled fires aspiring ever higher, into parts and selves wholly unknown and holier for it.

I think that dichotomies like the techno-optimist vs. techno-pessimist distinction make it easy for those relatively immersed in the issues at hand to assume that techno-optimists wish to realize the beneficial potentials of high technology without regard for their unbeneficial potentials, and that techno-pessimists wish to negate the dangerous potentials of high technology without much care for their transformatively beneficial potentials. Such two-bit distinctions lack the praxis of perspective and are more misleading than they are illuminating.

Indeed, techno-optimism is to some extent a distinction that confuses more than it clarifies. Does it denote the sentiment that technology is biased toward being beneficial on the whole rather than harmful, or does it denote the sentiment that technology’s good potentials are not inevitable but can nonetheless be fostered with proper foresight and deliberation? I know very few people who would endorse the former claim, and many who endorse the latter, which I associate more with technoprogressivism than with techno-optimism.

A Technoprogressive is not the same thing as a Techno-Optimist, if we accept the first characterization of “techno-optimism”. I wouldn’t endorse the claim that all technologies are freedom-expanding ipso facto. To do so would be to forget or ignore the moral ambiguity of technology – the fact that, generally speaking, most technologies can be used to foster good and bad, creation and destruction, expanding autonomy and constricting autonomy. I don’t think it’s a hard rule (some technologies are more biased towards destruction than creation, or systemically embody a certain end-purpose or ideological bias, like guns; guns can be used to free or to disenfranchise, yes, but in the end they’re for killing people), but moral ambiguity seems generally applicable to most new technologies until proven otherwise.

In other words, morally non-ambiguous technologies appear to be the statistical minority. I do not think all technologies are unambiguously good, but I do think that whether their beneficial or destructive potentialities are fostered depends on us and us alone – both in terms of our use of such technologies and in terms of our efforts to shape the ultimate embodiments of emerging, converging, disruptive and transformative (e.g. NBIC) technologies through deliberative discussion and, to some extent, advocacy and awareness-raising.

In the end, the moral ambiguity of technology (i.e. that it can foster good or bad) is a virtue, not a vice. It merely represents the transformative, open-ended upwardness of technology. If it were unambiguously and inevitably one way or the other, then we couldn’t do much in the way of shaping it. If its effects on society and its relationship to humanity were concretely set in stone, even in the positive direction, then our ability to determine its extents, effects and embodiments would be deterred, not increased! The slipperiness of technology should not be agonized over but exalted. Progress is not a thing; it is us! Progress cannot fail; only we can fail progress.

To be wary of the negative potentials of high technology is not to be a neo-luddite, unless technological relinquishment is your chosen solution-paradigm. Indeed, if one looks, one will find that most of the people making the biggest noise about concerns over the safety and ethicacy of emerging, converging and high technology are Technoprogressives. The safe and ethical use of technology has been a predominant aspect of Transhumanism, especially Democratic Transhumanism, since at least the 1998 Transhumanist Declaration, as well as prior to that in the works of FM-2030 and of Max More and the Extropy community. Moreover, the people and organizations working the hardest to analyze existential risk and Global Catastrophic Risk, and to strategize how best to mitigate them (the best current solution-paradigm being Bostrom’s notion of Differential Technological Development) – like the Future of Humanity Institute and the Lifeboat Foundation – express H+ or TechProg inclinations or else come from H+ roots. They are also the ones speaking most frequently and making the most salient points about X-Risk and GCR.

And I don’t think all Techno-Pessimists are Neo-Luddites, nor are all people who are wary of the dangerous potentialities of emerging technologies. Neo-Luddites are the very specific subset of these larger categories who advocate or endorse technological relinquishment. That is the categorical qualifier. And the only reason technological relinquishment is a big enough concern to warrant a category distinct from Techno-Pessimism is that it is simply an ineffectual solution-paradigm, ultimately bound to fail in securing its intended end-goal.

The central problem is that any technological relinquishment will not be global technological relinquishment unless we have global governance (which is itself problematic for a variety of reasons: one being that if it ripens with corruption there isn’t any other global force to do anything about it, and another being that a single set of policies stifles not only innovation and progress but also robustness, by eliminating evolutionary diversity – in the sense of Universal Darwinism, not necessarily genomic and phenotypic evolution per se). If we ban a technology to avoid its destructive capabilities, then it will likely be developed somewhere else (i.e. in a foreign and potentially non-democratic country) where we will have less oversight, less developmental transparency, less control over its ultimate embodiment and less potential to shape its development into one that embodies our own desires and values. This is why differential technological development is a better solution-paradigm than relinquishment, even if your projected end-objective is ultimately to negate or deter technology’s destructive potentialities rather than to foster its autonomy-expanding and empowering potentialities.

Outright relinquishment just won’t work because it’s never global relinquishment, which ultimately just gives us less oversight and less capability to shape technology’s outcome into a good one. Thus even if you seek only to mitigate high technology’s dangerous potentialities (as opposed to having a real desire to foster its transformatively beneficial potentialities), outright relinquishment only works to undermine your objective. Increased deliberation on what forms we want our technology to take, and an increased effort to (1) determine what we want from technology and (2) realize it safely and ethically, in conjunction with differential technological development, is the best solution-paradigm currently on the table, regardless of whether you’re a techno-optimist or a techno-pessimist, and indeed regardless of whether you’re a technoprogressive or a neo-luddite.

So you can be a Techno-Pessimist Techno-Progressive. You can be very wary of NBIC technologies being used to facilitate harm or a loss of freedom, and still think that the best way forward – and the best method of mitigating emerging technology’s potentially destructive, disenfranchising or marginalizing effects – is to deliberate upon (1) what constitutes the best embodiment of emerging technologies and (2) how best to shape emerging tech so as to embody our values and what we think are its best (i.e. safest, most ethical) ultimate embodiments. In such a case you’d be a Techno-Pessimist Techno-Progressive.

The emphasis on technology in H+ and TechProg communities does not come from disdain for our humanity (i.e. the silly “contempt-of-the-flesh” trope) or from sheer technophilia. Rather, it is because (1) we seek to better determine the determining conditions of world and of self (which are to some extent symbiotic and interconstitutive), to leave more up to choice and less down to chance, and (2) technology is simply Man’s foremost mediator of change, of effecting our affectation, and thus the best means of shaping the state of self and world for the better, whatever your definition of better happens to be.

Techno-pessimists, Neo-Luddites, Revivalists and Relinquishists alike are not wholly wrong, just mostly. Rather, the backlash against technology’s profoundly transformative potentials represents one small step in the right direction, and one giant leap into left field. So let’s unite in their plight to ignite consideration of the dangerous potentialities of technology in the eyes of humanity, but fight them when they move to stop the motion with a whimpered halt, rather than continue the discussion with daring determination and impassioned exalt of aug and of alt.

We should not shy away from the transformative essence of high technology but instead utilize it to transform the technological landscape into one that heeds our desire for safety and ethicacy. I have argued that relinquishment simply isn’t a tenable solution-paradigm to the dangerous potentialities of high technology because relinquishment here is never relinquishment everywhere. Even if your goal is to mitigate the dangerous potentialities of high technology, your most promising solution-paradigm is not relinquishment but rather deliberative, directed development.




* For the purposes of this essay, neo-luddism denotes specifically those strains of neo-luddism that criticize high technology on ethical and safety grounds, which are legitimate concerns, as opposed to those that criticize it on moral or ontological grounds (e.g. that a static human nature must remain static to be “dignified”). Safety and ethical concerns are legitimate topics of debate that can move the rhetoric of technoethics and technopolitics forward; moral and ontological claims, I argue, have bogged down debate more than they have facilitated it.