A New Doomsday Argument
Phil Torres
2016-02-22

All biological brains have mechanisms responsible for generating concepts. Concepts are mental representations of different aspects of the world. For example, the concept of an electron is a mental entity that represents a particular aspect of reality, namely one of the subatomic particles found in atoms. It follows that if the mechanisms in one’s brain can’t generate the concept of an electron, then one can’t mentally represent electrons, and if one can’t mentally represent electrons, then one can’t have knowledge about them.

By analogy, consider the chipmunk: it lacks the mental machinery needed to generate the concept of an electron. Consequently, it can never grasp, understand, or comprehend what an electron is, even in principle. As philosophers put it, electrons are unknowable “mysteries” for the chipmunk rather than knowable, if not yet known, “puzzles.” The chipmunk’s brain is “cognitively closed” to the concept of an electron.



This being said, why think that humans are any different? Why think that the computers behind our eyes can generate all the concepts needed to mentally represent every type of phenomenon in the universe? We’re animals, after all, and evolution is an open-ended process that hasn’t stopped with Homo sapiens (and our particular brains). No doubt there are aspects of reality that are not merely unknown but unknowable to us — no doubt there are features of the world with respect to which we are cognitively closed. Like a square peg trying to fit through a round hole, our minds simply aren’t designed to grasp the relevant concepts.



But what if some of these concepts correspond to phenomena in the universe that pose risks to our survival? To use the chipmunk analogy again: the universe is full of cosmic dangers — such as asteroid impacts, supervolcanoes, solar flares, black hole explosions or mergers, supernovae, galactic center outbursts, and gamma-ray bursts, to name a few — that the chipmunk can’t possibly comprehend. It’s not merely ignorant of such risks; it’s ignorant of its own ignorance. It has no idea that annihilation could take so many exotic forms, and no way of ever finding out.



So, once more: why should we think that our epistemic situation in the universe is any different? What are the chances that the nervous system of Homo sapiens can generate all the concepts needed to exhaustively comprehend every aspect of our existential risk predicament? I see no reason for thinking that there aren’t risks that fall outside the domain of human knowability, just as there are risks that fall outside the domain of chipmunk knowability. We could, indeed, be surrounded by a vast swarm of risks with respect to which we’re ignorant in the second-order sense mentioned above. We have no idea that we’re in danger — nor could we.



Now, one might agree that our biological brains are conceptually limited and that some conceptually unknowable phenomena could be catastrophically risky, but respond that if such risks exist, they must be highly improbable, since the universe has existed for 13.7 billion years, our planet for 4.5 billion years, Earth-originating life for 3.5 billion years, and our genus, Homo, for roughly 2 million years. There’s been plenty of opportunity across cosmic history for a cataclysm to have destroyed the universe, the solar system, our planet, Earth-originating life, our genus, or our species. The fact that we’ve survived surely implies that a cosmic cataclysm — whether knowable or unknowable — probably isn’t going to happen anytime soon.



But this line of reasoning is deeply flawed. The problem is that certain types of annihilation risks are incompatible with the existence of observers like us, meaning that a record of past survival provides no useful information about the probability of such risks happening in the future. As Nick Bostrom and Milan Cirkovic write in Global Catastrophic Risks, “We are bound to find ourselves in one of those places and belonging to one of those intelligent species which have not yet been destroyed, whether planet or species-destroying disasters are common or rare” (italics added). It follows that certain annihilation scenarios could be highly probable, even though one hasn’t yet materialized.
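
To make the selection effect concrete, here is a minimal sketch in Python (my numbers are purely illustrative: the per-epoch hazard rates and epoch counts are made up, not drawn from the literature). It simulates many worlds that each face a fixed per-epoch chance of a species-destroying catastrophe. Observers exist only in worlds that happen to survive, so every observer looks back on a spotless record whether the true hazard is one in a thousand or one in fifty.

```python
import random

def fraction_surviving(per_epoch_hazard, n_worlds=50_000, n_epochs=100):
    """Simulate worlds that each face a fixed per-epoch chance of a
    species-destroying catastrophe; return the fraction that survive
    all epochs. Illustrative numbers only."""
    survivors = sum(
        all(random.random() > per_epoch_hazard for _ in range(n_epochs))
        for _ in range(n_worlds)
    )
    return survivors / n_worlds

# Observers exist only in surviving worlds, so in every world that still
# contains observers, the number of catastrophes recorded so far is zero;
# this is true regardless of whether the underlying hazard is small or large.
for hazard in (0.001, 0.02):
    print(f"per-epoch hazard {hazard}: "
          f"{fraction_surviving(hazard):.1%} of worlds still have observers; "
          f"catastrophes in every surviving world's record: 0")
```

Both groups of observers see exactly the same evidence, which is why a clean track record, by itself, tells us so little about the underlying hazard.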



This leads to the central idea of the present article: when one combines the fact of cognitive closure with the observation selection effect, it becomes clear that our existential predicament could be far more precarious than we would otherwise suspect — or could possibly suspect. Just as the chipmunk (if you will) has been systematically underestimating the probability of annihilation given the intrinsic limitations of its evolved mental machinery, so too might we be systematically underestimating it. The universe could be teeming with highly probable, human-unknowable hazards capable of eliminating our species — or even the universe — in a flash. This should lead us to boost our prior probability estimates of total annihilation, whatever they happen to be.



* * * * *



Before ending this article, it’s worth pointing out an ambiguity with the term “unknowable.” In my recent book on existential risks and apocalyptic terrorism, I distinguish between knowledge-relative and mind-relative instances of unknowability. The discussion above concerned only the latter, but the former is worth mentioning as well. By way of example, consider that, given the nascent state of climatological research in the early twentieth century, the causal link between burning fossil fuels and climate change was unknowable to scientists at the time. We simply lacked the theoretical apparatuses and pool of evidence necessary to make this connection. (In fact, many people at the time saw the automobile as an environmental panacea: it would help clean up cities overflowing with horse urine, feces, and carcasses.) This contrasts with situations in which we fail to grasp an idea not because the human enterprise of science is insufficiently advanced, but because human science could never reveal certain truths no matter how advanced it becomes.



For instance: at present, there could be existential risks associated with experiments at the Large Hadron Collider that can only be known once one has developed Theory X, but Theory X won’t be developed for another 10 years. Or, alternatively, there could be existential risks that can only be known once one has developed Theory Z, but Theory Z requires one to grasp concepts A, B, and C that aren’t included in the library of concepts accessible to human minds. The upshot of these considerations is this: knowledge-relative unknowability provides a strong argument for accelerating the advancement of science however we can.

Rather than wait 10 years for Theory X, it would surely be better for us to construct it in 5 years, or even 1. After all, perhaps the dangerous experiment is set to occur in 6 years, but would be halted if Theory X were developed beforehand.



Meanwhile, mind-relativity provides a strong argument for the creation of superintelligence. As I write in another article, a superintelligence whose mystery-puzzle boundary is drawn differently (and more expansively) than ours could potentially see risks to which our minds are forever conceptually blind.

Again, such risks could be numerous and highly probable. In both cases, the situation could very well be urgent.