Top Three Strategies for Avoiding an Existential Risk
Phil Torres
2016-02-13

But our existential predicament in this morally indifferent universe underwent a significant transformation in 1945, when scientists detonated the first atomic bomb in the New Mexican desert of Jornada del Muerto (meaning “single day’s journey of the dead man”). This event inaugurated a qualitatively new epoch in which existential risks can derive from both nature and humanity. (In some cases, the same risk category, such as pandemics, can derive from both.) Today, existential riskologists have identified a swarm of risks looming ominously on the threat horizon of the twenty-first century — risks associated with climate change, biodiversity loss, biotechnology, synthetic biology, nanotechnology, physics experiments, and superintelligence. The fact is that there are more ways for our species to bite the dust this century than ever before in history — and extrapolating this trend into the future, we should expect even more existential risk scenarios to arise as novel dual-use technologies are developed. One might even wonder about the possibility of an “existential risk Singularity,” given the exponential development of dual-use technologies. 



But not only has the number of scenarios increased in the past 71 years; many riskologists believe that the probability of a global disaster has also significantly risen. Whereas the likelihood of annihilation for most of our species’ history was extremely low, Nick Bostrom argues that “setting this probability lower than 25% [this century] would be misguided, and the best estimate may be considerably higher.” Similarly, Sir Martin Rees claims that a civilization-destroying event before the year 02100 is as likely as a coin flip coming up “heads.” These are only two opinions, of course, but to paraphrase the Russell-Einstein Manifesto, my experience confirms that those who know the most tend to be the most gloomy.



In my forthcoming book on existential risks and apocalyptic terrorism, I argue that Rees’ figure is plausible. To adapt a maxim from the philosopher David Hume, wise people always proportion their fears to the best available evidence, and when one honestly examines this evidence, one finds that there really is good reason for being alarmed. But I also offer a novel — to my knowledge — argument for why we may be systematically underestimating the overall likelihood of doom. In sum, just as a dog can’t possibly comprehend any of the natural and anthropogenic risks mentioned above, so too could there be risks that forever lie beyond our epistemic reach. All biological brains have intrinsic limitations that constrain the library of concepts to which one has access. And without concepts, one can’t mentally represent the external world. It follows that we could be “cognitively closed” to a potentially vast number of cosmic risks that threaten us with total annihilation. This being said, one might argue that such risks, if they exist at all, must be highly improbable, since Earth-originating life has existed for some 3.5 billion years without an existential catastrophe having happened. But this line of reasoning is deeply flawed: it fails to take into account that the only worlds in which observers like us could find ourselves are ones in which such a catastrophe has never occurred. It follows that a record of past survival on our planetary spaceship provides no useful information about the probability of certain existential disasters happening in the future. Taken together, cognitive closure and this observation selection effect suggest that our estimates of the probability of total annihilation may be systematically too low, perhaps by a lot.
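To make the selection-effect argument explicit (a minimal Bayesian sketch, assuming, as above, that observers can only ever find themselves in worlds whose survival record is unbroken): let $r$ denote the underlying probability of an existential catastrophe per unit of time. Then

$$P(\text{we observe an unbroken survival record} \mid r) = 1 \ \text{for every } r, \quad \text{and so} \quad P(r \mid \text{observed record}) = P(r).$$

Whatever prior we hold over $r$ passes through the observation untouched; 3.5 billion years of survival cannot, on its own, distinguish a safe universe from an extraordinarily dangerous one.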



However theoretically pessimistic one might be given such considerations, the practical optimist wants to know what can be done to maximize the probability of an “okay outcome” for humanity. In other words: What can we do to avoid a worst-case scenario for the self-described “wise man” (Homo sapiens)? What strategies could help us navigate the expanding wilderness of risks before us? Obvious answers include making sure that NASA’s Near Earth Objects project (which aims to spy on possible assassins from the sky) is properly funded; implementing environmental regulations to reduce carbon emissions; improving our educational system and making Epistemology 101 a required course (a project that I strongly advocate); supporting institutions like the IEET, FHI, and CSER; creating permanent bunkers underground; and staggering the development of advanced technologies like molecular manufacturing and superintelligence. (As I put it in the book, “in the end, there may be a way to slalom around the threats before us, since we’re whooshing downhill anyways.”)

But three strategies in particular stand out as especially promising, at least when considered in the context of what I call “Big Future” (the other half of “Big History”). I explore these in detail in The End, but for now I’ll provide an abbreviated account of them, in no particular order:



(1) Superintelligence. To modify an often-cited phrase from I.J. Good, the creation of a superintelligent mind could be the last problem we ever have to solve. This includes problems of the existential risk variety. The idea is that a friendly superintelligence that genuinely cares about our well-being and prosperity could help guide us through the bottleneck of heightened hazards that defines the current century. It could help us neutralize the explosion of anthropogenic — or more specifically, technogenic — risks that threaten civilization with a catastrophe of existential proportions. Even more, a “qualitative” superintelligence with different concept-generating mechanisms could potentially see cosmic risks to which humanity is conceptually blind. Just as a dog wandering the streets of Hiroshima on August 6, 1945 couldn’t possibly have understood that it was about to be vaporized, given the conceptual limitations intrinsic to its evolved mental machinery, so too could we be surrounded by risks that are utterly inscrutable to us, as discussed above. A qualitative superintelligence could potentially identify such risks and warn us about them, even if the danger itself remained unintelligible to the cleverest human beings.



Unfortunately, the creation of a superintelligence also poses perhaps the most formidable long-term risks to the future of our lineage. To paraphrase Stephen Hawking, if superintelligence isn’t the best thing to ever happen to us, it will probably be the worst. There are several issues worth mentioning here. First, the amity-enmity problem: the AI could dislike us for whatever reason, and therefore try to kill us. Second, the indifference problem: the AI could simply not care about our well-being, and thus destroy us because we happen to be in the way. And finally, the clumsy fingers problem: the AI could inadvertently nudge us over the cliff of extinction rather than intentionally pushing us. This possibility is based on what might be called the “orthogonality thesis of fallibility,” which states that higher levels of intelligence aren’t necessarily correlated with the avoidance of certain kinds of mistakes. (The avoidance of some mistakes, of course, would be a convergent instrumental value of intelligent agents.)



Consider the case of Homo sapiens. We have highly developed neocortices and much greater encephalization quotients than any other species, yet we’re also the culprits behind the slow-motion catastrophes of climate change and the sixth mass extinction event, both of which threaten our planet with environmental ruination. Even more, the fruits of our ingenuity — namely, dual-use technologies — have introduced brand new existential risk scenarios never before encountered by Earth-originating life. If intelligence is “a kind of lethal mutation,” as Ernst Mayr once intimated in a debate with Carl Sagan, then what might superintelligence be? Indeed, given the immense power that a superintelligence would wield in the world — perhaps being able to manipulate matter in ways that appear to us as pure magic — it could take only a single error for such a being to trip humanity into the eternal grave of extinction. What can we say? It’s only superhuman.



(2) Transhumanism. I would argue that we’re entering a genuinely unique period in human history in which the “instrumental rationality” of our means is advancing far more rapidly than the “moral rationality” of our ends. (In a forthcoming Humanist article, I suggest that this means-ends mismatch could offer a specific solution to the Fermi Paradox.) Our ability to manipulate and rearrange the physical world is growing exponentially — perhaps according to Ray Kurzweil’s “law of accelerating returns” — as a result of advanced technologies like biotechnology, synthetic biology, nanotechnology, and robotics. Some of these technologies are also becoming more accessible to smaller and smaller groups. At the extreme, the dual trends of power and accessibility could enable terrorist organizations or lone wolves to wreak genuinely unprecedented havoc on society. The future, it seems, is one in which the power of individuals could eventually equal the power of the state itself, at least in the absence of highly invasive surveillance systems.



This being said, if the capacity to destroy the world becomes widely distributed among populations, what kind of person would want to press the “obliterate everything” button? A few possibilities come to mind: first, a deranged nutcase with a grudge against the world. An example of this mindset comes from Marvin Heemeyer, who committed suicide after building an improvised armored bulldozer and demolishing more than a dozen buildings in the small Colorado town with which he had a dispute. Another possibility is an ecoterrorist group that’s convinced that Gaia would be better off without Homo sapiens. These are both major agential risks moving forward, and thus existential riskologists ought to keep their eyes fixed on them as the twenty-first century unfolds. But there’s yet another possibility that could pose an even greater overall threat in the future, namely apocalyptic religious cults. Unfortunately, few existential risk scholars are aware of the extent to which history is overflowing with apocalyptic groups that not only believed in an imminent end to the world, but enthusiastically celebrated it.



At the extreme, some of these groups have adopted what the scholar Richard Landes calls an “active cataclysmic” approach to eschatology, according to which they see themselves as active participants in an apocalyptic narrative that’s unfolding in real time. For groups of this sort, the value of post-conflict group preservation that guided the actions of Marxist, anarchist, and nationalist-separatist terrorists in the past simply doesn’t apply. Active cataclysmic movements don’t merely want a fight; they want a fight to the death. On their view — held with the unshakable firmness of faith — the world must be destroyed in order to be saved, and sacrificing the worldly for the otherworldly is the ultimate good in the eyes of God. This is why I focus specifically on religion in my book: history is full of apocalyptic movements, and in fact (as I elaborate in a forthcoming Skeptic article) there are compelling historical, technological, and demographic reasons for believing that a historically anomalous number of such movements will arise in the future. For reasons such as these, apocalyptic activists constitute arguably the number one agential threat moving forward. Existential riskologists must not overlook this fact — we must not shy away from criticizing religion — because tools without agents aren’t going to initiate a global catastrophe.



The point is that such considerations make it hard to believe that our species can be trusted with advanced technologies. We’re no longer children playing with matches; we’re children playing with flamethrowers that could easily burn down the whole global village. Either we need a “parental” figure of some sort to watch over us — option “(1)” above — or we need to grow up as a species. And it’s here that transhumanism enters the picture in a huge way. The transhumanist wants to use technology to modify the human form, including our brains, in various desirable ways. Researchers have already begun testing drugs that appear to modify aspects of our moral character, for instance by making us more empathetic. Perhaps there are cognitive enhancement technologies that can augment our mental faculties and, in doing so, inoculate us against certain kinds of delusions — and therefore neutralize the agential risk posed by apocalyptic extremism.



To put this idea differently, transhumanists actively hope for human extinction. But not the sort of extinction that terminated the dinosaurs or the dodo. Rather, the aim is to catalyze a techno-evolutionary process of anagenetic cyborgization that results in our current population being replaced by a smarter, wiser, and more responsible form of posthuman: Posthumanus sapiens. It may be that, as I write in the book, what saves us from a “bad” extinction event is a “good” extinction event. Or, to put the idea in aphoristic form: to survive, we must go extinct.



(3) Space colonization. I would argue that this offers perhaps the most practicable strategy for avoiding an existential catastrophe, all things considered. It requires neither the invention of a superintelligence nor the sort of radical cognitive enhancements discussed above. The idea is simple: the more widely we spread out in the universe, the smaller the chance that any single event will destroy us all at once. A collapse of the global ecosystem on Earth wouldn’t affect colonies on Mars, nor would a grey goo disaster on (say) Gliese 667 Cc affect those living on spaceship Earth. Similarly, a disaster that wipes out the Milky Way in 1,000 years might be survivable if our progeny also resides in the Andromeda Galaxy.



As it happens, NASA recently announced plans for Earth-independent colonies on Mars by the 2030s, and Elon Musk has said that he’s hoping to launch the first flight to Mars “in around 2025.” As Musk described his motivation in 2014, “there is a strong humanitarian argument for making life multi-planetary . . . in order to safeguard the existence of humanity in the event that something catastrophic were to happen.” This sentiment was echoed by the former NASA administrator Michael Griffin, who claimed that “human expansion into the solar system is, in the end, fundamentally about the survival of the species.” Similarly, Hawking has opined that he doesn’t “think the human race will survive the next thousand years, unless we spread into space.” So there’s growing momentum behind distributing the human population throughout this strange universe in which we find ourselves, and numerous intellectuals have explicitly recognized the existential significance of space colonization. Given the minimal risks involved, the comparatively modest cost of colonization programs (which, notably, require neither “(1)” nor “(2)” above to be realized), and the potential gains of establishing self-sustaining colonies throughout the galaxy, this strategy ought to be among the top priorities for existential risk activists. To survive, we must colonize.



It’s worth noting here that while colonization would insulate us against a number of potential existential risks, there are some risks that it wouldn’t stop. A physics disaster on Earth, for example, could have consequences that are cosmic in scope: the universe might not be in its most stable state, and a high-powered particle accelerator could tip the balance, resulting in a “catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light.” (Again, perhaps a superintelligence could help us avoid a mistake of this sort.) Another possibility is that a bellicose extraterrestrial species with a powerful space military and a foreign policy that encourages preemptive war could conquer our astronautical descendants by destroying one inhabited planet at a time. At the extreme, no amount of spreading throughout the universe would protect us against an aggressive, rogue civilization like this. And finally, space colonization won’t protect us against the ultimate Great Filter, namely the heat death of the universe. This distant future event appears inevitable given the second law of thermodynamics, although some cosmologists have suggested that there may be a way to pull off the greatest prison escape of all by slipping into a neighboring universe.

Our situation in the universe has always been precarious, but it’s even more so this century. These are, in my view, the three most promising “big-picture” strategies for surviving the obstacle course of existential risks in front of us.