We May Be Systematically Underestimating the Probability of Annihilation
Phil Torres
2015-05-27

* * *

Many riskologists identify nuclear weapons as having introduced the first anthropogenic existential risk in human history. (Existential risks are the red and black dot events here, on the condition that the black dots are particularly severe.) Nick Bostrom, for example, writes that “The first manmade existential risk was the inaugural detonation of an atomic bomb.” [1] But this is probably incorrect. The Holocene extinction event, for example, probably began in the Pleistocene, when our ancestors started to “overkill” the megafauna. Global warming also began prior to the Atomic Age, and in fact scientists in the 1930s were the first to uncover a warming trend in temperature records dating back to 1865. [2] So there were at least two anthropogenic risks that began to unfold before the Trinity explosion at the Jornada del Muerto (meaning “journey of the dead man”) in 1945.

Let's take a closer look at global warming for a moment. Greenhouse gases (GHGs) have many sources in the contemporary world. The number one source in the US is the generation of electricity, followed by transportation, which is almost entirely reliant upon the combustion of fossil fuels. [3] As it happens, the story of modern transportation – specifically, of the automobile – is an interesting one, and marked by a fair bit of irony. By the end of the 1800s, many cities were facing an urban pollution nightmare: they were being overrun with horse manure, urine, and carcasses in the streets. According to the Times of London, 9 feet of excrement was projected to cover the streets of London by 1950; on the other side of the Atlantic, an observer suggested that the accumulation of crap would reach the third story of Manhattan's buildings by 1930. The result was an unbearable odor, major sanitation problems, congestion, gridlock, and swarms of flies (which studies have found were responsible for “deadly infectious maladies like typhoid and infant diarrheal diseases” in the nineteenth century [4]). The situation was so dire that the first ever urban planning conference, which hosted delegates from around the world, convened to fix it, but “stumped by the crisis, [they] declared [their] work fruitless and broke up in three days instead of the scheduled ten.” [5]

Enter the automobile, which offered a surprising solution to this rapidly worsening public health snafu. “As difficult as it may be to believe for the modern observer,” writes Eric Morris, a professor of City and Regional Planning at Clemson University, “at the time the private automobile was widely hailed as an environmental savior.” Indeed: no more manure, and the automobile proved to be just as effective a means of transporting goods and people from one place to another. It was, consequently, adopted en masse by both the upper and middle classes. What no one foresaw at the time, of course, was that the automobile's internal combustion engine, which converts fossilized plant matter into usable energy by burning it (and thereby releasing CO2), would become a major contributor to one of the most significant big picture hazards of the following century.



Thus, we could say that global warming constitutes an unintended consequence of the automobile (although we are fully aware of the connection between automobiles and global warming today). The theorist Langdon Winner defines an unintended consequence as an effect that's “not not intended,” meaning “that there is seldom anything in the original plan that aimed at preventing them.” While unintended consequences have been ubiquitous throughout the great experiment that we call human civilization – a driving force of innovation, in fact – global warming is unique in that it was the first unintended consequence with existential implications.

But it almost certainly won't be the last. If history has taught us anything about purposive human behavior, it's that intended causes proliferate unintended effects. This leads to an absolutely crucial point: as advanced technologies become more and more powerful, we should expect the unintended consequences they spawn to become increasingly devastating in proportion. In other words, the future will almost certainly be populated by a growing number of big picture hazards that were not not intended by the “original plan,” as it were, and which are significant enough to threaten humanity with large-scale disasters, or even extinction.

Unintended consequences are a kind of “unknown unknown,” or a fact about which we are not only ignorant, but ignorant of our ignorance. The term “unknown unknown” is perhaps most famously associated with the former US Secretary of Defense, Donald Rumsfeld, who in a 2002 news briefing explained the concept as follows: “Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.”

(Somewhat amusingly, two years after Rumsfeld's comments, the Slovenian philosopher Slavoj Žižek published an article in which he notes that Rumsfeld forgot the fourth possibility: unknown knowns, or “the disavowed beliefs, suppositions and obscene practices we pretend not to know about.” If there's one other category that tends to pose difficulties for the US, I would argue, it's the unknown knowns. Global warming and biodiversity loss could be seen as unknown knowns with respect to the American public.)

I will refer to unknown unknowns somewhat playfully as monsters. They constitute an umbrella category, of which (as noted) unintended consequences are just one type. The monster category also includes (a) phenomena from nature that we are currently ignorant of, and which could potentially bring about a catastrophe. For example, there might be risky phenomena lurking about the universe that could destroy our Solar System in a whimper or a bang. These might be so rare that we haven't yet observed one, or they might require advanced theories in quantum physics to infer – that is, theories we haven't yet developed. Our Solar System could thus be obliterated next Friday by an event that we're not only unaware of, but unaware of being unaware of.

Other monsters are (b) currently unimagined risks posed by future, not-yet-conceived-of technologies. After all, many of the risks discussed in the literature today were quite unimaginable to people only a few decades ago – and certainly to those living in the 1800s. If the development of dual use technologies continues into the coming centuries (as I've explored here), then we should expect there to arise brand new existential risk scenarios that, from our current vantage, we can't even glimpse. (To be clear, these aren't mere unintended consequences of future technologies – although those will pose a constant hazard too – but threats arising from the moral ambiguity of future artifacts' dual usability.) Such scenarios are hidden beneath the horizon of our collective imagination, and as such are unknown unknowns. Perhaps a book on existential risks written in 2100 would contain a wholly different and completely novel set of existential risks than those around today.

Finally, (c) while most of the risk scenarios discussed in the literature are presented as if each constitutes a discrete possible future, virtually all of them can be combined in various ways to produce complex scenarios. Indeed, in some cases, the realization of one scenario will positively increase the probability of others occurring. The cliché of a “domino effect” is apt here, as a recent document by the Global Challenges Foundation notes. [6] When two catastrophe scenarios happen simultaneously, their effects can be either additive or synergistic. For example, consider a scenario A that results in 1 billion deaths, and a scenario B that results in 2 billion. In the additive case, of course, the result of A and B together would be 3 billion casualties. If A were also to amplify the probability of B – perhaps in a reliable way – then we might want to prepare for 3 billion deaths as soon as evidence for A is uncovered.

In the synergistic case, by contrast, A and B occurring together could result in greater than 3 billion deaths. Imagine that scenario A represents extreme biodiversity loss, but not to the point of initiating an irreversible collapse of the global ecosystem (as scientists in a 2012 Nature paper claim could soon happen). Now imagine that B involves a regional nuclear exchange between India and Pakistan. It could be that A and B together are synergistically interactive such that the nuclear exchange not only kills 2 billion people (ignoring population details here), but ultimately pushes the global ecosystem past a “critical threshold.” The result is an irreversible collapse of the global ecosystem that causes a total of 6 billion deaths. Many other such interactions could be imagined involving nanotechnology and biodiversity loss, biodiversity loss and a pandemic, a pandemic and superintelligence, or even a nanoterrorist attack, biodiversity loss, a global pandemic, and a nuclear exchange (all in the same few months).
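To make the arithmetic of the two cases explicit, here is a minimal sketch in Python. The death tolls are the placeholder figures from the example above, and the size of the interaction term is a pure assumption:

```python
# Toy model of additive vs. synergistic catastrophe interactions.
# All figures are illustrative placeholders, not empirical estimates.

DEATHS_A = 1_000_000_000  # scenario A: extreme biodiversity loss
DEATHS_B = 2_000_000_000  # scenario B: regional nuclear exchange

def additive_toll(a, b):
    # Non-interacting scenarios: the combined toll is simply the sum.
    return a + b

def synergistic_toll(a, b, interaction):
    # Interacting scenarios: an extra term captures deaths that occur
    # only because both scenarios happened together (e.g., crossing an
    # ecological "critical threshold").
    return a + b + interaction

print(additive_toll(DEATHS_A, DEATHS_B))                    # 3000000000
print(synergistic_toll(DEATHS_A, DEATHS_B, 3_000_000_000))  # 6000000000
```

The point of the extra parameter is that a synergistic interaction cannot be recovered by studying the two scenarios separately; it has to be modeled as a property of the pair.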

Unfortunately, the possible causal interactions between different risk scenarios unfolding in parallel constitute a subject that's woefully understudied by contemporary riskologists. Yet it appears likely that – the conjunction fallacy notwithstanding – a future in which multiple risks strike at once is more probable than one in which a single risk occurs in isolation.
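A back-of-the-envelope calculation shows why such couplings matter. If scenario A amplifies scenario B, the probability of both occurring can dwarf the estimate we'd get by treating them as independent. The probabilities below are invented purely for illustration:

```python
# Invented probabilities, for illustration only.
p_a = 0.10          # chance that scenario A occurs this century
p_b = 0.05          # unconditional chance of scenario B
p_b_given_a = 0.40  # chance of B once A has occurred (A amplifies B)

p_both_independent = p_a * p_b      # 0.005 under (false) independence
p_both_coupled = p_a * p_b_given_a  # 0.040 -- eight times higher

print(p_both_independent, p_both_coupled)
```

Any risk analysis that estimates each scenario in isolation implicitly uses the first number when the second may be closer to the truth.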

* * *


There are three important distinctions to be made that cut across the umbrella category of monsters. For example, imagine that meteorologists predict that a hurricane will hit the Florida coast on August 1. Mr. Earle lives in Florida, but pays no attention to the news, and so doesn't know about the coming storm. Because of this, he goes out for a drive on August 1 and, consequently, gets caught in 130 mph winds, torrential rains, and flash floods. In this situation, the hurricane was an unknown unknown for Earle, although it was knowable in both practice and principle: Earle could have turned the unknown into a known, but he didn't, because of his own decisions, apathy, or perhaps laziness. In this sense, one might say that the unknown unknown was person-relative.

In contrast, the global climate consequences of dumping huge quantities of CO2 into the atmosphere were unknowable in practice to those who, in the early twentieth century, thought the automobile would ameliorate the problem of urban pollution. The fact is that climatology simply hadn't yet developed the theories needed to predict that large numbers of automobiles turning fossilized plant matter into greenhouse gases would threaten later generations with a global catastrophe. As NASA points out in an article on climate change, “prior to the mid 1960s, geoscientists believed that our climate could only change relatively slowly, on timescales of thousands of years or longer.” [7] It follows that, as the relevant theories were constructed, the connection between fossil fuel consumption and global warming became known. In this sense, global warming may have been practically unknowable, but it was in principle knowable. In other words, this type of monster is knowledge-relative. [8]

The final category is of critical importance, yet has hardly been mentioned in the existential risks literature. It includes monsters that are unknowable in both principle and practice. (Notice that anything unknowable in principle must also be unknowable in practice.) As Noam Chomsky and many others have argued, there are intrinsic limitations to the mental machinery between our ears: just as the concept-generating mechanisms of the canine brain are inadequate for a dog to ever, in principle, understand the workings of an internal combustion engine, there are phenomena that we'll never, ever, be able to grasp because the concepts needed to understand such phenomena forever lie beyond our cognitive reach. How many such concepts – and therefore phenomena – might the human mind be cognitively closed to? Who knows. Perhaps an infinite number. It follows that there could be unknown unknowns that are permanently unknowable to the human mind; such monsters are, in this sense, mind-relative.

(Notice that all three distinctions are, terminology aside, ultimately relative to some state of knowledge: the first compares what the individual knows to what the collective as a whole knows; the second compares what the collective knows at one moment in time to what it knows at a later moment; and the third pertains to what the collective is capable of knowing in principle.)



The second and third categories are most relevant to existential risks. Thus, applying these to a concrete, real world example: consider the physics experiments being conducted right now in the Large Hadron Collider, the biggest and most powerful particle accelerator in the world. Given our current knowledge of physics – which is both extensive and highly sophisticated, to be sure – there’s no reason to think such experiments pose a risk to human existence. But this sense of security may be false. On the one hand, our theories in physics might not be complete or flawless enough to anticipate a potential catastrophe. Before the first atomic bomb was detonated, there was “concern that the explosion might start a runaway chain-reaction by ‘igniting’ the atmosphere.” [9] Further investigation revealed that this is physically impossible. Perhaps we’re in the exact opposite situation with respect to the LHC: perhaps we think there’s no risk of an experiment X destroying the earth, but further investigation – say, five years from now – shows that there really is. Perhaps experiment X destroys us in three years, though, before the relevant theory is developed.

On the other hand, there may be side-effects of such experiments that we couldn’t fathom even if God himself were to climb out of the sky and explain them in perfectly lucid detail. The fact is that quantum phenomena straddle the Chomskyan puzzle-mystery boundary of human intelligibility. There are many advanced physics concepts that not even the brightest minds can comprehend, such as what the fourth – not to mention the eleventh – spatial dimension is actually like. [10] We grasp such ideas mathematically, not conceptually. It is, therefore, entirely possible that a completely inscrutable unknowable pops out of the darkness when no one, not even the most brilliant physicists, expects it. The result could be something like, for example, “an expanding bubble of total destruction that [sweeps] through the galaxy and beyond at the speed of light, tearing all matter apart as it proceeds” (to quote Bostrom).

A dog wandering through the streets of Hiroshima on that tragic day in August, 1945, couldn’t have possibly anticipated that he was about to be vaporized. Nuclear weaponry is an in principle (and therefore in practice) unknowable unknown for the canine mind. Perhaps we are this dog on the Japanese archipelago, and the bomb heading towards us is a monster wrapped in a mystery. The lesson of cognitive closure is, therefore, that we may be systematically underestimating the likelihood of disaster. This is the same lesson taught by the Doomsday Argument (which I won't explore here), except that the considerations above rest on far more robust reasoning. In other words, the fact that our minds are conceptually limited should lead us to increase our prior probability estimates of annihilation, whatever they happen to be. Cognitive closure should make us more pessimistic about the future, however pessimistic we might (or might not) already be.

[This article excerpts significantly from my forthcoming book The End: What Religion and Science Tell Us About the Apocalypse, 2015, Pitchstone Publishing.]

[1] See Bostrom's paper "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards," published in the Journal of Evolution and Technology (2002).

[2] See "The Modern Temperature Trend," published by the American Institute of Physics.

[3] See "Overview of Greenhouse Gases," published by the EPA (2015).

[4] See page 6 of Eric Morris' article "From Horse Power to Horsepower."

[5] The phenomenon of city pollution from horses is well-known. This paragraph relied heavily on a nice little paper called "From Horse Power to Horsepower," published in Access and written by Eric Morris.

[6] See page 20 of "12 risks that threaten human civilization."

[7] See "Taking a global perspective on Earth's climate," published by NASA.

[8] This yields a robust argument for funding and advancing science as much as possible: there may be risks right around the corner that we currently can't see!

[9] See Bostrom's "Existential Risks" paper: http://www.nickbostrom.com/existential/risks.html

[10] As Richard Feynman once wrote, "I think I can safely say that nobody understands quantum mechanics."