Are We Passing Through a Bottleneck, or Will the Explosion of Existential Risks Continue?
Phil Torres
2015-01-25

The fact is that less than a century ago, there were only about three or four ways our species could have kicked the bucket. The possibilities included an asteroid or comet impact, a supervolcano eruption, and maybe a global pandemic. Today, the situation is strikingly different. It’s hard to count exactly how many contemporary existential threats there are, since some are quite speculative, but if one had to give a number, the x-riskologist might identify at least twenty distinct ways that “Earth-originating intelligent life” could end. The possibilities here include an engineered pandemic, the infamous grey goo scenario, a superintelligence takeover, and even a simulation shutdown (which may be far more likely than one might at first think, or so I have argued here).



Thus, the past several decades have seen a sudden and rapid proliferation of existential risks. There are more ways for our species to skip from this side of the grave to the other than ever before in our 2.5-million-year history on earth. Not surprisingly, as the number of annihilation scenarios has increased, so has the probability of perishing. For most of our history, the likelihood of an existential catastrophe was quite small: a species-killing impactor, for example, hits the earth every ~100,000 years, and a supervolcano erupts every ~50,000 years.i The probability of a global pandemic is harder to estimate, but it is similarly small.
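To put a rough number on “quite small” (a back-of-the-envelope illustration that uses only the frequencies just cited and treats the two hazards as independent):

\[ P(\text{natural catastrophe per century}) \approx \frac{100}{100{,}000} + \frac{100}{50{,}000} = 0.1\% + 0.2\% = 0.3\% \]

That is roughly three chances in a thousand per century from nature alone, a baseline worth keeping in mind when the anthropogenic estimates below are introduced.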



And then 1945 inaugurated the Age of Anthropogenic Apocalypses. The best estimates we have right now from respectable scholars, such as Nick Bostrom and Sir Martin Rees, generally put the probability of annihilation in the present century somewhere between 25% and 50%. Thus, the past several decades have also seen a profound and startling increase in the likelihood of extinction, ever since the first atomic bomb was detonated in the deserts of New Mexico.



* * *



So, what should we make of this situation? What should we infer from the dramatic rise in both the number of existential risk scenarios and the probability that one will occur? Is the Age of Anthropogenic Apocalypses reversible or permanent?



We can distinguish between two general hypotheses about the future of existential risks. Let’s call one the “bottleneck hypothesis” (Figure A). According to this view, the recent increase in apocalyptic uncertainty is a blip. It is a passing phase. Once we get beyond this temporary era of heightened hazards, everything will be okay: technological progress will persist, but the threat of annihilation will subside. (For this paper, I’m bracketing a number of nontrivial problems with the concept of “progress.”)



The bottleneck hypothesis finds a home in phrases like “Humanity is at a crossroads” and “This is the most important century in human history.” It has been advocated by luminaries such as Nick Bostrom, Sir Martin Rees, and Ray Kurzweil. For example, Rees writes that “Our choices and actions could ensure the perpetual future of life (not just on Earth, but perhaps far beyond it, too). Or in contrast, through malign intent, or through misadventure, twenty-first century technology could jeopardise life's potential, foreclosing its human and posthuman future. What happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.”



Bostrom echoes this idea: “One might argue,” he claims, “that the current century, or the next few centuries, will be a critical phase for humanity, such that if we make it through this period then the life expectancy of human civilization could become extremely high.” Similarly, Bostrom writes in this paper that “there are many reasons to suppose that the total such risk confronting humanity over the next few centuries is significant” (italics added). The bottleneck hypothesis can also be found in Bostrom’s “Letter from Utopia,” in which he describes a paradisiacal world where our posthuman descendants live with minimal suffering and a profusion of joy.



As for Kurzweil, the bottleneck hypothesis is more or less built into his singularitarian eschatology: after the Singularity occurs, the universe will “wake up” and “the ‘dumb’ matter and mechanisms of the universe will be transformed into exquisitely sublime forms of intelligence, which will constitute the sixth [and final] epoch in the evolution of patterns of information.” Kurzweil adds that this desirable state “is the ultimate destiny of the Singularity and of the universe.”



In sum, the bottleneck hypothesis says that if only we can make it through the current squeeze, then things will settle down and our future prospects will become more secure than they are today.





Figure A. Technological development continues, perhaps leading to some sort of techno-utopia (Bostrom, Kurzweil), but the threat of existential annihilation declines.



The alternative position rejects this as unwarranted optimism. Let’s call it the “parallel growth hypothesis” (Figure B). On this view, the post-1945 trend of proliferating existential risk scenarios is no less projectable into the distant future than advances in computer hardware, biotechnology, or nanomedicine. The idea here is that as long as technological development continues, the threat of existential annihilation will continue to grow more or less in proportion. In other words, the overall threat of extinction will roughly track the growth of future advanced technologies.



If this growth is linear, then the danger will expand linearly. If this growth is exponential, then we might expect something like an existential risk singularity, or a point at which new annihilation scenarios are introduced too fast for us to keep up with them and the probability of death approaches 1.
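To see what an existential risk singularity would amount to quantitatively, consider a toy model (the functional form and any numbers here are illustrative assumptions, not predictions). If p_k denotes the probability of annihilation during century k, then the probability of surviving n centuries is

\[ P(\text{survive } n \text{ centuries}) = \prod_{k=1}^{n} (1 - p_k). \]

With a constant hazard, p_k = p, survival shrinks geometrically as (1 - p)^n. But if the hazard grows in step with technological capability, say p_k = min(1, p_0 · g^k) for some growth factor g > 1, then the per-century risk saturates at 1 after finitely many centuries, and survival beyond that point becomes all but impossible. That saturation point is one way of making the idea of an existential risk singularity precise.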





Figure B. The threat of existential annihilation parallels the growth of dual use technologies in the future, perhaps leading to something like an existential risk singularity.



The parallel growth hypothesis is predicated on the fact that virtually all of the biggest threats facing us today arise from advanced technologies, not nature. In particular, they arise from two specific properties of these technologies: first, their extraordinary power, and second, their dual usability.



Power means that these technologies are able to rearrange the world with unprecedented efficiency, and dual usability means that their space of possible application is morally ambiguous: they can be used to rearrange the world in good and bad ways. Crucially, being dual use is an all-or-nothing phenomenon: you can’t have the good applications without the bad ones, and to eliminate either is to eliminate both. So each new benefit of advanced technology is attended by a danger that is often of equal significance. Right now, the benefits of some artifacts are so great that their attendant dangers have ascended to the level of existential relevance.



The question, then, is whether future technologies will be dually usable like those around today. As far as I can tell, there aren’t any good arguments for thinking that the technologies of the twenty-second century will be different in kind from those populating the contemporary world. It appears to be an intrinsic feature of design that there will always be some wiggle-room with respect to a technology’s potential use: no matter how well-designed a laptop is, for example, it can always be dropped from a 10-story window to kill someone.ii Future technologies will be no less dually usable than the centrifuges that enrich uranium for nuclear power.



If this is true, it follows that the total number of existential risk scenarios will continue to grow in the future. There will be more possible ways to “go the way of the Dodo” in the twenty-second century than in the twenty-first, just as there are many more in the twenty-first century than when Charles Darwin was alive.



Many advocates of the bottleneck hypothesis seem to accept this, if they’ve thought about it. Kurzweil may be an exception here. He appears to think, on my reading of The Singularity is Near, that if we can contain the “panoply” of threats posed by the genetics, nanotechnology, and robotics (GNR) revolution, then we’ll be in the clear. But what reason is there for thinking that another revolution won’t follow this one, even after the Singularity occurs? If such a revolution were to occur – call it the XYZ revolution – then it could bring with it a new wave of apocalyptic worries. The urgent task of preventing an existential catastrophe thus wouldn’t end with the full realization of present-day emerging technologies; indeed, it may even become more acute in later epochs, as the power of technology spreads throughout the cosmos.



* * *



An ever-growing number of existential risk scenarios in the future, though, doesn’t necessarily entail that the overall probability of annihilation will go up. A recluse can have many options for going to a party without this making it more likely that she will go. This leads us to a central motivating idea behind the bottleneck hypothesis, namely the hope of posthumanity.



Perhaps Bostrom says it best when he writes, “One might believe that superintelligence will be developed within a few centuries, and that, while the creation of superintelligence will pose grave risks, once that creation and its immediate aftermath have been survived, the new civilization would have vastly improved survival prospects since it would be guided by superintelligent foresight and planning.” While Bostrom doesn’t explicitly endorse this idea in the paper quoted from, it appears to be among his repertoire of futurological beliefs. If superintelligence isn’t the worst thing to ever happen to us, it will be the absolute best.iii



Kurzweil says something similar. He writes that “the window of malicious opportunity for bioengineered viruses, existential or otherwise, will close in the 2020s when we have fully effective antiviral technologies based on nanobots. However, because nanotechnology will be thousands of times stronger, faster, and more intelligent than biological entities, self-replicating nanobots will present a greater risk and yet another existential risk. The window for malevolent nanobots will ultimately be closed by strong artificial intelligence.” Kurzweil adds that “not surprisingly, ‘unfriendly’ AI will itself present an even more compelling existential risk,” yet he remains thoroughly optimistic about the “ultimate destiny” of the universe.iv



So, one reason for thinking that our predicament will improve after this century or the next is that superintelligent posthumans (if they aren’t humanicidal) will come to the rescue and help us manage the risks. But the parallel growth advocate has a response. She or he can rejoin that this line of reasoning is severely undercut by Bostrom’s “orthogonality thesis,” according to which superintelligence can be combined with just about any goal or motivation, from eliminating poverty to fighting a holy war in the name of Allah. Avoiding existential risks requires more than intelligence, as most researchers in the cognitive sciences would define it.v One could even imagine situations in which superbeings with superior instrumental rationality actually exacerbate the situation of Earth-originating intelligent life, perhaps by becoming completely obsessed with paperclips (to their own detriment, even).



It follows that becoming – or begetting – a race of superintelligent posthumans doesn’t entail that the bad side of the dual use coin will be effectively neutralized. The exact opposite could be the case: a superintelligence could be even more self-destructive than we are. Fortunately, though, there are some additional reasons for thinking the bottleneck hypothesis might be true. Bostrom gives two.



First, he writes that “one might believe that self-sustaining space colonies may have been established within such a timeframe, and that once a human or posthuman civilization becomes dispersed over multiple planets and solar systems, the risk of extinction declines.” This seems reasonable. Most current existential risks – from nuclear war to a total collapse of the global ecosystem – are planetary rather than galactic in their spatial scope. By analogy, stepping outside a building is a good way of avoiding sick building syndrome.



Interestingly, colonizing the galaxy might not be enough to obviate a handful of existential risks. For example, the “breakdown of a metastable vacuum state” could “result in an expanding bubble of total destruction that would sweep through the galaxy and beyond at the speed of light, tearing all matter apart as it proceeds.”vi Perhaps even more risks with catastrophic galactic consequences will arise in later centuries, as future technologies reach new levels of unimaginable power. Perhaps some existential catastrophes of the twenty-second century will be inescapable no matter where one hides in our civilization’s light cone. As it stands, though, we should count this as a point in favor of the bottleneck hypothesis.



Bostrom’s second reason is that technological progress itself could come to a halt: “One might also believe that many of the possible revolutionary technologies (not only superintelligence) that can be developed will be developed within the next several hundred years; and that if these technological revolutions are destined to cause existential disaster, they would already have done so by then.”



This could be the case, of course, but it doesn’t problematize the parallel growth hypothesis per se, since this hypothesis merely states that the trajectory of the existential threat from dual use technologies will parallel the development of those technologies. If this development declines, then so will the danger. (This is, in fact, the very basis of Bill Joy’s thesis of relinquishment.)vii But most champions of the bottleneck hypothesis are also champions of technological progress, so the possibility of stagnation pushes against a primary value of many bottleneck advocates: to foster technological development indefinitely.



The arguments for the bottleneck hypothesis thus encounter a number of problems. On the other hand, there is at least one notable reason for thinking that the probability of annihilation will continue to rise in the future. Consider the fact that some emerging technologies are becoming not only more powerful, but more accessible as well. As a result, more destructive power is being concentrated in the hands of fewer individuals – at the extreme, in the hands of lone wolves working completely under the radar of societal awareness, regulation, and control.



This fear is amplified significantly by the statistical fact that there are far more terrorist groups than evil empires, and far more deranged individuals than terrorist groups. So the risks posed by the Ted Kaczynskis of the world are far greater than those posed by the Islamic States and North Koreas, however significant these risks may be. Making matters worse, the world population is growing: it is projected to reach 9 billion by 2050. It follows that the likelihood of a single individual ruining it for everyone will skyrocket in the coming centuries.
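The arithmetic behind this worry is unforgiving (the per-person probability used here is purely an illustrative assumption). Suppose each person independently had only a one-in-ten-billion chance per century of both wanting and managing to trigger an existential catastrophe. With 9 billion people, the chance that at least one of them succeeds is already

\[ 1 - \left(1 - 10^{-10}\right)^{9 \times 10^{9}} \approx 1 - e^{-0.9} \approx 59\%. \]

And as powerful technologies become more accessible, the relevant per-person probability can only be expected to rise, and the overall risk with it.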



The key idea is that for us to be safe, it’s not enough that most people refrain from the misuse and abuse of such technologies. It must be the case that every last person in every dark corner of civilization perpetually chooses, every minute of every day, not to wreak existential havoc on the world. Perhaps in the end a solitary psychopath with a death wish for humanity will bring about our collective demise without us even knowing that he or she was ever a threat. This is the precarious predicament of the future.



* * *



The bottleneck and parallel growth hypotheses specifically concern the future evolution of our existential risk predicament. They are not hypotheses about the future of humanity per se, although they have obvious implications for the topic. It could be the case, for example, that the bottleneck hypothesis is true yet we go extinct in 2034. Or it could be that the parallel growth hypothesis leads to an interminable cycle of recurrent collapse. In other words, these hypotheses can be combined in different ways with the four narratives of humanity’s future presented in Bostrom’s “The Future of Humanity.”



The primary aim of this paper is to foreground an idea that’s often implicit in discussions of existential risks and the future of sentient life, but is almost never explicitly stated. Is there really good reason for thinking that we’re caught in an existential risk storm that will soon blow past us, and that if only we can take cover and not drown for the next century or so then fair weather will follow? I have argued that, at the very least, (a) it appears likely that future, increasingly powerful technologies will be dual use, and (b) if future technologies are dual use, they will continue to introduce brand new existential risk scenarios, some of which may be as unforeseeable to us as grey goo was to Darwin.



Beyond this, it could be that posthumanity saves us from realizing the bad applications of new artifacts, which could take the form of either error or terror. Or alternatively, it could be that we approach something like an existential risk singularity, which would result in a situation of virtually inescapable annihilation. Perhaps the combination of technological growth, power, and dual usability explains why the skies are silent – why we hear not a murmur, much less a shout, for cosmic companionship in the coldness of space.




i Page 215 of Global Catastrophic Risks.





ii In other words, I’m advocating a kind of soft determinism with respect to use. Guns “push” us towards some uses and away from others by design, although they can still be used (in dual fashion) as paperweights or doorstops.





iii A “step risk,” as Bostrom puts it in Superintelligence, meaning that once the transition to superintelligence is over, the risk will subside.





iv Notice here that Kurzweil seems to think that the more powerful a technology is, the more risk it poses. This is actually what the parallel growth hypothesis – which leads to less than optimistic conclusions – states.





v According to Legg and Hutter’s paper “A Collection of Definitions of Intelligence,” many in the cognitive sciences see intelligence as measuring “an agent’s ability to achieve goals in a wide range of environments.”





vi From Bostrom’s seminal “Existential Risks” paper.





vii An unworkable thesis, I should add.