Existential Risks
Nick Bostrom
2002-03-01

ABSTRACT




Because of accelerating
technological progress, humankind may be rapidly
approaching a critical phase in its career. In addition
to well-known threats such as nuclear holocaust, the
prospects of radically transforming technologies like
nanotech systems and machine intelligence present us
with unprecedented opportunities and risks. Our future,
and whether we will have a future at all, may well be
determined by how we deal with these challenges. In the
case of radically transforming technologies, a better
understanding of the transition dynamics from a human to
a "posthuman" society is needed. Of particular
importance is to know where the pitfalls are: the ways
in which things could go terminally wrong. While we have
had long exposure to various personal, local, and
endurable global hazards, this paper analyzes a recently
emerging category: that of existential risks.
These are threats that could cause our extinction or
destroy the potential of Earth-originating intelligent
life. Some of these threats are relatively well known
while others, including some of the gravest, have gone
almost unrecognized. Existential risks have a cluster of
features that make ordinary risk management ineffective.
A final section of this paper discusses several ethical
and policy implications. A clearer understanding of the
threat picture will enable us to formulate better
strategies.



 




1   Introduction


It's dangerous to be alive and risks are everywhere. Luckily, not all risks are equally serious. For present purposes we can use three dimensions to describe the magnitude of a risk: scope, intensity, and probability. By "scope" I mean the size of the group of people that are at risk. By "intensity" I mean how badly each individual in the group would be affected. And by "probability" I mean the best current subjective estimate of the probability of the adverse outcome.[1]



1.1   A typology of risk


We can
distinguish six qualitatively distinct types of risks based
on their scope and intensity (figure 1). The third
dimension, probability, can be superimposed on the two
dimensions plotted in the figure. Other things equal, a risk
is more serious if it has a substantial probability and if
our actions can make that probability significantly greater
or smaller.




Scope
  global      Thinning of the ozone layer     X
  local       Recession in a country          Genocide
  personal    Your car is stolen              Death

              endurable                       terminal        Intensity

Figure 1. Six risk categories


 


"Personal", "local", or "global" refer to the size of the population that is directly affected; a global risk is one that affects the whole of humankind (and our successors). "Endurable" vs. "terminal" indicates how intensely the target population would be affected. An endurable risk may cause great destruction, but one can either recover from the damage or find ways of coping with the fallout. In contrast, a terminal risk is one where the targets are either annihilated or irreversibly crippled in ways that radically reduce their potential to live the sort of life they aspire to. In the case of personal risks, for instance, a terminal outcome could be death, permanent severe brain injury, or a lifetime prison sentence. An example of a local terminal risk would be genocide leading to the annihilation of a people (this happened to several Indian nations). Permanent enslavement is another example.
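
As a minimal illustrative sketch (in Python; the example risks and the classify helper below are hypothetical and not from the paper), the typology can be read as a simple two-dimensional lookup on scope and intensity:

```python
# Illustrative sketch only: classify a risk by the two dimensions of figure 1.
# The example risks and the classify() helper are hypothetical, not from the paper.

SCOPES = ("personal", "local", "global")
INTENSITIES = ("endurable", "terminal")

def classify(scope: str, intensity: str) -> str:
    """Return the cell of figure 1 that a risk falls into."""
    if scope not in SCOPES or intensity not in INTENSITIES:
        raise ValueError("unknown scope or intensity")
    if scope == "global" and intensity == "terminal":
        return "existential risk (the cell marked X)"
    return f"{scope} {intensity} risk"

examples = {
    "your car is stolen": ("personal", "endurable"),
    "genocide": ("local", "terminal"),
    "thinning of the ozone layer": ("global", "endurable"),
    "all-out nanotech war": ("global", "terminal"),
}

for name, (scope, intensity) in examples.items():
    print(f"{name}: {classify(scope, intensity)}")
```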



1.2   Existential risks


In this paper we
shall discuss risks of the sixth category, the one marked
with an X. This is the category of global, terminal
risks. I shall call these existential risks.



Existential risks are distinct
from global endurable risks. Examples of the latter kind
include: threats to the biodiversity of Earth's ecosphere, moderate global warming, global economic recessions (even major ones), and possibly stifling cultural or religious eras such as the "dark ages", even if they encompass the whole global community, provided they are transitory (though see the section on "Shrieks" below). To say that a
particular global risk is endurable is evidently not to say
that it is acceptable or not very serious. A world war
fought with conventional weapons or a Nazi-style Reich
lasting for a decade would be extremely horrible events even
though they would fall under the rubric of endurable global
risks since humanity could eventually recover. (On the other
hand, they could be a local terminal risk for many
individuals and for persecuted ethnic groups.)


 I shall use the following
definition of existential risks:






Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.




An existential
risk is one where humankind as a whole is imperiled.
Existential disasters have major adverse consequences for
the course of human civilization for all time to come.


2   The unique challenge of existential risks



Risks in this
sixth category are a recent phenomenon. This is part of the
reason why it is useful to distinguish them from other
risks. We have not evolved mechanisms, either biologically
or culturally, for managing such risks. Our intuitions and
coping strategies have been shaped by our long experience
with risks such as dangerous animals, hostile individuals or
tribes, poisonous foods, automobile accidents, Chernobyl,
Bhopal, volcano eruptions, earthquakes, droughts, World War
I, World War II, epidemics of influenza, smallpox, black
plague, and AIDS. These types of disasters have occurred
many times and our cultural attitudes towards risk have been
shaped by trial-and-error in managing such hazards. But
tragic as such events are to the people immediately
affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven't significantly affected the
total amount of human suffering or happiness or determined
the long-term fate of our species.


With the exception of a
species-destroying comet or asteroid impact (an extremely
rare occurrence), there were probably no significant
existential risks in human history until the mid-twentieth
century, and certainly none that it was within our power to
do something about.


The first manmade existential
risk was the inaugural detonation of an atomic bomb. At the
time, there was some concern that the explosion might start
a runaway chain-reaction by "igniting" the atmosphere.
Although we now know that such an outcome was physically
impossible, it qualifies as an existential risk that was
present at the time. For there to be a risk, given the
knowledge and understanding available, it suffices that
there is some subjective probability of an adverse
outcome, even if it later turns out that objectively there
was no chance of something bad happening. If we don't know whether something is objectively risky or not, then it is risky in the subjective sense. The subjective sense
is of course what we must base our decisions on.[2]
At any given time we must use our best current subjective
estimate
of what the objective risk factors are.[3]



A much greater existential risk
emerged with the build-up of nuclear arsenals in the US and
the USSR. An all-out nuclear war was a possibility with both
a substantial probability and with consequences that
might
have been persistent enough to qualify as global
and terminal. There was a real worry among those best
acquainted with the information available at the time that a
nuclear Armageddon would occur and that it might annihilate
our species or permanently destroy human civilization.[4] 
Russia and the US retain large nuclear arsenals that could
be used in a future confrontation, either accidentally or
deliberately. There is also a risk that other states may one
day build up large nuclear arsenals. Note however that a
smaller nuclear exchange, between India and Pakistan for
instance, is not an existential risk, since it would not
destroy or thwart humankind's potential permanently. Such a
war might however be a local terminal risk for the cities
most likely to be targeted. Unfortunately, we shall see that
nuclear Armageddon and comet or asteroid strikes are mere
preludes to the existential risks that we will encounter in
the 21st century.


The special
nature of the challenges posed by existential risks is
illustrated by the following points:





       


•  Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach – see what happens, limit damages, and learn from experience – is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.

•  We cannot necessarily rely on the institutions, moral norms, social attitudes or national security policies that developed from our experience with managing other sorts of risks. Existential risks are a different kind of beast. We might find it hard to take them as seriously as we should simply because we have never yet witnessed such disasters.[5] Our collective fear-response is likely ill calibrated to the magnitude of threat.

•  Reductions in existential risks are global public goods [13] and may therefore be undersupplied by the market [14]. Existential risks are a menace for everybody and may require acting on the international plane. Respect for national sovereignty is not a legitimate excuse for failing to take countermeasures against a major existential risk.

•  If we take into account the welfare of future generations, the harm done by existential risks is multiplied by another factor, the size of which depends on whether and how much we discount future benefits [15,16] (see the sketch following this list).
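
The size of that multiplier depends heavily on the discount rate. As a hedged illustration (the constant-discounting model, the horizon, and the rates below are assumptions made for this example, not figures from the paper), one can compare how much total weight future welfare receives under different annual discount rates:

```python
# Illustrative sketch: how the weight given to future generations' welfare
# depends on the discount rate. All numbers here are made up for illustration.

def discounted_weight(annual_rate: float, years: int) -> float:
    """Sum of per-year weights 1/(1+r)^t for t = 0 .. years-1."""
    return sum(1.0 / (1.0 + annual_rate) ** t for t in range(years))

horizon = 10_000  # years of future history considered (an arbitrary choice)
for rate in (0.0, 0.001, 0.01, 0.03):
    w = discounted_weight(rate, horizon)
    print(f"rate {rate:.3f}: total weight ~ {w:,.0f} (vs. {horizon:,} undiscounted)")
```

Even a modest positive discount rate shrinks the weight of the distant future by orders of magnitude, which is why the choice of discount rate matters so much for how bad an existential catastrophe is judged to be.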




In view of its
undeniable importance, it is surprising how little
systematic work has been done in this area. Part of the
explanation may be that many of the gravest risks stem (as
we shall see) from anticipated future technologies that we
have only recently begun to understand. Another part of the
explanation may be the unavoidably interdisciplinary and
speculative nature of the subject. And in part the neglect
may also be attributable to an aversion to thinking
seriously about a depressing topic. The point, however, is
not to wallow in gloom and doom but simply to take a sober
look at what could go wrong so we can create responsible
strategies for improving our chances of survival. In order
to do that, we need to know where to focus our efforts.


3   Classification of existential risks



We shall use the following four
categories to classify existential risks[6]:




Bangs – Earth-originating intelligent life goes extinct in a relatively sudden disaster resulting from either an accident or a deliberate act of destruction.



Crunches – The potential of humankind to develop into posthumanity[7] is permanently thwarted although human life continues in some form.




Shrieks – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.



Whimpers – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.





Armed with this
taxonomy, we can begin to analyze the most likely scenarios
in each category. The definitions will also be clarified as
we proceed.


4   Bangs


This is the most obvious kind of
existential risk. It is conceptually easy to understand.
Below are some possible ways for the world to end in a bang.[8]
I have tried to rank them roughly in order of how probable
they are, in my estimation, to cause the extinction of
Earth-originating intelligent life; but my intention with
the ordering is more to provide a basis for further
discussion than to make any firm assertions.




4.1   Deliberate misuse of nanotechnology


In a mature form, molecular
nanotechnology will enable the construction of
bacterium-scale self-replicating mechanical robots that can
feed on dirt or other organic matter [22-25].
Such replicators could eat up the biosphere or destroy it by
other means such as by poisoning it, burning it, or blocking
out sunlight. A person of malicious intent in possession of
this technology might cause the extinction of intelligent
life on Earth by releasing such nanobots into the
environment.[9]


The technology to
produce a destructive nanobot seems considerably easier to
develop than the technology to create an effective defense
against such an attack (a global nanotech immune system, an
"active shield" [23]). It is therefore likely that there will be a period of vulnerability during which this technology must be prevented from coming into the wrong hands. Yet the technology could prove hard to regulate, since it doesn't require rare radioactive isotopes
or large, easily identifiable manufacturing plants, as does
production of nuclear weapons [23].



Even if effective
defenses against a limited nanotech attack are developed
before dangerous replicators are designed and acquired by
suicidal regimes or terrorists, there will still be the
danger of an arms race between states possessing
nanotechnology. It has been argued [26]
that molecular manufacturing would lead to both arms race
instability and crisis instability, to a higher degree than
was the case with nuclear weapons. Arms race instability
means that there would be dominant incentives for each
competitor to escalate its armaments, leading to a runaway
arms race. Crisis instability means that there would be
dominant incentives for striking first. Two roughly balanced
rivals acquiring nanotechnology would, on this view, begin a
massive buildup of armaments and weapons development
programs that would continue until a crisis occurs and war
breaks out, potentially causing global terminal destruction.
That the arms race could have been predicted is no guarantee
that an international security system will be created ahead
of time to prevent this disaster from happening. The nuclear
arms race between the US and the USSR was predicted but
occurred nevertheless.



4.2   Nuclear holocaust


The US and Russia
still have huge stockpiles of nuclear weapons. But would an
all-out nuclear war really exterminate humankind? Note that:
(i) For there to be an existential risk it suffices that we
can't be sure that it wouldn't. (ii) The climatic effects of a large nuclear war are not well known (there is the possibility of a nuclear winter). (iii) Future arms races between other nations cannot be ruled out and these could lead to even greater arsenals than those present at the height of the Cold War. The world's supply of plutonium has
been increasing steadily to about two thousand tons, some
ten times as much as remains tied up in warheads ([9],
p. 26). (iv) Even if some humans survive the short-term
effects of a nuclear war, it could lead to the collapse of
civilization. A human race living under stone-age conditions
may or may not be more resilient to extinction than other
animal species.




4.3   We're living in a simulation and it gets shut down


A case can be
made that the hypothesis that we are living in a computer
simulation should be given a significant probability
[27]. The basic idea behind this
so-called "Simulation argument" is that vast amounts of
computing power may become available in the future (see e.g.
[28,29]), and that it could be used,
among other things, to run large numbers of fine-grained
simulations of past human civilizations. Under some
not-too-implausible assumptions, the result can be that
almost all minds like ours are simulated minds, and that we
should therefore assign a significant probability to being
such computer-emulated minds rather than the (subjectively
indistinguishable) minds of originally evolved creatures.
And if we are, we suffer the risk that the simulation may be
shut down at any time. A decision to terminate our
simulation may be prompted by our actions or by exogenous
factors.
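
A minimal numerical sketch of the core step (the specific counts and the simple indifference-based credence formula are assumptions introduced here for illustration; they are not part of the argument as stated in [27]):

```python
# Illustrative sketch of the indifference step in the simulation argument:
# if almost all minds with experiences like ours are simulated, then (absent
# other evidence) our credence in being simulated should track that fraction.
# The counts below are made up purely for illustration.

real_minds = 1e11       # assumed number of non-simulated human-like minds
sims_run = 1e6          # assumed number of fine-grained ancestor simulations
minds_per_sim = 1e11    # assumed human-like minds per simulation

simulated_minds = sims_run * minds_per_sim
credence_simulated = simulated_minds / (simulated_minds + real_minds)

print(f"fraction of human-like minds that are simulated: {credence_simulated:.8f}")
# With these hypothetical numbers the fraction is ~0.999999, i.e. almost 1.
```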


While to some it may seem frivolous to list such a radical or "philosophical" hypothesis next to the concrete threat of nuclear holocaust, we must seek to base these evaluations on reasons rather than untutored intuition. Until a refutation appears of the argument presented in [27], it would be intellectually dishonest to neglect to mention simulation-shutdown as a potential extinction mode.




4.4   Badly programmed superintelligence


When we create
the first superintelligent entity [28-34],
we might make a mistake and give it goals that lead it to
annihilate humankind, assuming its enormous intellectual
advantage gives it the power to do so. For example, we could
mistakenly elevate a subgoal to the status of a supergoal.
We tell it to solve a mathematical problem, and it complies
by turning all the matter in the solar system into a giant
calculating device, in the process killing the person who
asked the question. (For further analysis of this, see
[35].)




4.5   Genetically engineered biological agent


With the fabulous
advances in genetic technology currently taking place, it
may become possible for a tyrant, terrorist, or lunatic to
create a doomsday virus, an organism that combines long
latency with high virulence and mortality [36].


Dangerous viruses
can even be spawned unintentionally, as Australian
researchers recently demonstrated when they created a
modified mousepox virus with 100% mortality while trying to
design a contraceptive virus for mice for use in pest
control [37]. While this particular virus
doesn't affect humans, it is suspected that an analogous
alteration would increase the mortality of the human
smallpox virus. What underscores the future hazard here is
that the research was quickly published in the open
scientific literature [38]. It is hard to
see how information generated in open biotech research
programs could be contained no matter how grave the
potential danger that it poses; and the same holds for
research in nanotechnology.



Genetic medicine
will also lead to better cures and vaccines, but there is no
guarantee that defense will always keep pace with offense.
(Even the accidentally created mousepox virus had a 50%
mortality rate on vaccinated mice.) Eventually, worry about
biological weapons may be put to rest through the
development of nanomedicine, but while nanotechnology has
enormous long-term potential for medicine [39]
it carries its own hazards.



4.6   Accidental misuse of nanotechnology ("gray goo")


The possibility
of accidents can never be completely ruled out. However,
there are many ways of making sure, through responsible
engineering practices, that species-destroying accidents do
not occur. One could avoid using self-replication; one could
make nanobots dependent on some rare feedstock chemical that
doesn't exist in the wild; one could confine them to sealed
environments; one could design them in such a way that any
mutation was overwhelmingly likely to cause a nanobot to
completely cease to function [40].
Accidental misuse is therefore a smaller concern than
malicious misuse [23,25,41].



However, the distinction between
the accidental and the deliberate can become blurred. While
"in principle" it seems possible to make terminal
nanotechnological accidents extremely improbable, the actual
circumstances may not permit this ideal level of security to
be realized. Compare nanotechnology with nuclear technology.
From an engineering perspective, it is of course perfectly
possible to use nuclear technology only for peaceful
purposes such as nuclear reactors, which have a zero chance
of destroying the whole planet. Yet in practice it may be
very hard to avoid nuclear technology also being used to
build nuclear weapons, leading to an arms race. With large
nuclear arsenals on hair-trigger alert, there is inevitably
a significant risk of accidental war. The same can happen
with nanotechnology: it may be pressed into serving military
objectives in a way that carries unavoidable risks of
serious accidents.


In some
situations it can even be strategically advantageous to
deliberately
make one�s technology or control systems
risky, for example in order to make a "threat that leaves something to chance" [42].



4.7   Something unforeseen


We need a
catch-all category. It would be foolish to be confident that
we have already imagined and anticipated all significant
risks. Future technological or scientific developments may
very well reveal novel ways of destroying the world.


Some foreseen
hazards (hence not members of the current category) which
have been excluded from the list of bangs on grounds that
they seem too unlikely to cause a global terminal disaster
are: solar flares, supernovae, black hole explosions or
mergers, gamma-ray bursts, galactic center outbursts,
supervolcanos, loss of biodiversity, buildup of air
pollution, gradual loss of human fertility, and various
religious doomsday scenarios. The hypothesis that we will
one day become "illuminated" and commit collective suicide or stop reproducing, as supporters of VHEMT (The Voluntary Human Extinction Movement) hope [43], appears unlikely. If it really were better not to exist (as Silenus told King Midas in the Greek myth, and as Arthur Schopenhauer argued [44], although for reasons specific to his philosophical system he didn't advocate suicide), then we should not count this scenario as
an existential disaster. The assumption that it is not worse
to be alive should be regarded as an implicit assumption in
the definition of Bangs. Erroneous collective suicide
is an existential risk albeit one whose probability seems
extremely slight. (For more on the ethics of human
extinction, see chapter 4 of [9].)




4.8   Physics disasters


The Manhattan
Project bomb-builders' concern about an A-bomb-derived
atmospheric conflagration has contemporary analogues.


There have been
speculations that future high-energy particle accelerator
experiments may cause a breakdown of a metastable vacuum
state that our part of the cosmos might be in, converting it
into a "true" vacuum of lower energy density [45]. This would result in an expanding bubble of
total destruction that would sweep through the galaxy and
beyond at the speed of light, tearing all matter apart as it
proceeds.



Another
possibility is that accelerator experiments might produce
negatively charged stable �strangelets� (a hypothetical form
of nuclear matter) or create a mini black hole that would
sink to the center of the Earth and start accreting the rest
of the planet [46].


These outcomes
seem
to be impossible given our best current physical
theories. But the reason we do the experiments is precisely
that we don't really know what will happen. A more
reassuring argument is that the energy densities attained in
present day accelerators are far lower than those that occur
naturally in collisions between cosmic rays
[46,47]. It's possible, however, that factors other
than energy density are relevant for these hypothetical
processes, and that those factors will be brought together
in novel ways in future experiments.



The main reason
for concern in the "physics disasters" category is the
meta-level observation that discoveries of all sorts of
weird physical phenomena are made all the time, so even if
right now all the particular physics disasters we have
conceived of were absurdly improbable or impossible, there
could be other more realistic failure-modes waiting to be
uncovered. The ones listed here are merely illustrations of
the general case.



4.9   Naturally occurring disease


What if AIDS were
as contagious as the common cold?


There are several
features of today�s world that may make a global pandemic
more likely than ever before. Travel, food-trade, and urban
dwelling have all increased dramatically in modern times,
making it easier for a new disease to quickly infect a large
fraction of the world's population.




4.10   Asteroid or comet impact



There is a
real but very small risk that we will be wiped out by the impact of an asteroid or comet [48].



In order to cause the extinction
of human life, the impacting body would probably have to be
greater than 1 km in diameter (and probably 3 - 10 km).
There have been at least five and maybe well over a dozen
mass extinctions on Earth, and at least some of these were
probably caused by impacts ([9], pp.
81f.). In particular, the K/T extinction 65 million years
ago, in which the dinosaurs went extinct, has been linked to
the impact of an asteroid between 10 and 15 km in diameter
on the Yucatan peninsula. It is estimated that a 1 km or
greater body collides with Earth about once every 0.5
million years.[10]
We have only catalogued a small fraction of the potentially
hazardous bodies.
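
As a rough back-of-the-envelope illustration (assuming, purely for the sake of the arithmetic, that such impacts arrive at a constant average rate), a 1-in-500,000-year event implies a probability of roughly 0.02% per century:

```python
# Back-of-the-envelope sketch: per-century probability implied by an average
# rate of one >=1 km impact per 500,000 years. A constant-rate (Poisson)
# process is assumed here purely as a simplification for illustration.
import math

mean_interval_years = 500_000
window_years = 100

rate_per_year = 1 / mean_interval_years
p_at_least_one = 1 - math.exp(-rate_per_year * window_years)
print(f"P(at least one >=1 km impact in {window_years} years) ~ {p_at_least_one:.5f}")
# ~0.0002, i.e. about 0.02% per century, and most such impacts would not be
# large enough to cause human extinction.
```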


If we were to
detect an approaching body in time, we would have a good
chance of diverting it by intercepting it with a rocket
loaded with a nuclear bomb [49].



4.11   Runaway global warming


One scenario is
that the release of greenhouse gases into the atmosphere
turns out to be a strongly self-reinforcing feedback
process. Maybe this is what happened on Venus, which now has
an atmosphere dense with CO2 and a temperature of about 450 °C. Hopefully, however, we will have
technological means of counteracting such a trend by the
time it would start getting truly dangerous.


5   Crunches



While some of the
events described in the previous section would be certain to
actually wipe out Homo sapiens (e.g. a breakdown of a
metastable vacuum state), others could potentially be
survived (such as an all-out nuclear war). If modern
civilization were to collapse, however, it is not completely
certain that it would arise again even if the human species
survived. We may have used up too many of the easily available resources that a primitive society would need in order to work itself back up to our level of technology. A primitive human society may or may not be more likely to face extinction than any other animal species. But let's not try that
experiment.


If the primitive
society lives on but fails ever to get back to current technological levels, let alone go beyond them, then we have
an example of a crunch. Here are some potential causes of a
crunch:



5.1   Resource depletion or ecological destruction



The natural
resources needed to sustain a high-tech civilization are
being used up. If some other cataclysm destroys the
technology we have, it may not be possible to climb back up
to present levels if natural conditions are less favorable
than they were for our ancestors, for example if the most
easily exploitable coal, oil, and mineral resources have
been depleted. (On the other hand, if plenty of information
about our technological feats is preserved, that could make
a rebirth of civilization easier.)



5.2   Misguided world government or another static social equilibrium stops technological progress


One could imagine
a fundamentalist religious or ecological movement one day
coming to dominate the world. If by that time there are
means of making such a world government stable against
insurrections (by advanced surveillance or mind-control
technologies), this might permanently put a lid on
humanity's potential to develop to a posthuman level. Aldous Huxley's Brave New World is a well-known scenario of
this type [50].



A world government may not be
the only form of stable social equilibrium that could
permanently thwart progress. Many regions of the world today
have great difficulty building institutions that can support
high growth. And historically, there are many places where
progress stood still or retreated for significant periods of
time. Economic and technological progress may not be as
inevitable as it appears to us.



5.3   "Dysgenic" pressures


It is possible
that advanced civilized society is dependent on there being
a sufficiently large fraction of intellectually talented
individuals. Currently it seems that there is a negative
correlation in some places between intellectual achievement
and fertility. If such selection were to operate over a long
period of time, we might evolve into a less brainy but more
fertile species, homo philoprogenitus ("lover of many offspring").



However, contrary
to what such considerations might lead one to suspect, IQ
scores have actually been increasing dramatically over the
past century. This is known as the Flynn effect; see e.g.
[51,52]. It's not yet settled whether
this corresponds to real gains in important intellectual
functions.


Moreover, genetic
engineering is rapidly approaching the point where it will
become possible to give parents the choice of endowing their
offspring with genes that correlate with intellectual
capacity, physical health, longevity, and other desirable
traits.


In any case, the
time-scale for human natural genetic evolution seems much
too long for such developments to have any significant
effect before other developments will have made the issue
moot [19,39].



5.4   Technological arrest


The sheer technological
difficulties in making the transition to the posthuman world
might turn out to be so great that we never get there.



5.5   Something unforeseen[11]


As before, a
catch-all.



Overall, the
probability of a crunch seems much smaller than that of a
bang. We should keep the possibility in mind but not let it
play a dominant role in our thinking at this point. If
technological and economic development were to slow down
substantially for some reason, then we would have to take a
closer look at the crunch scenarios.


6   Shrieks


Determining which
scenarios are shrieks is made more difficult by the
inclusion of the notion of desirability in the
definition. Unless we know what is "desirable", we cannot
tell which scenarios are shrieks. However, there are some
scenarios that would count as shrieks under most reasonable
interpretations.




6.1   Take-over by a transcending upload


Suppose uploads
come before human-level artificial intelligence. An upload
is a mind that has been transferred from a biological brain
to a computer that emulates the computational processes that
took place in the original biological neural network
[19,33,53,54]. A successful uploading
process would preserve the original mind's memories, skills,
values, and consciousness. Uploading a mind will make it
much easier to enhance its intelligence, by running it
faster, adding additional computational resources, or
streamlining its architecture. One could imagine that
enhancing an upload beyond a certain point will result in a
positive feedback loop, where the enhanced upload is able to
figure out ways of making itself even smarter; and the
smarter successor version is in turn even better at
designing an improved version of itself, and so on. If this
runaway process is sudden, it could result in one upload
reaching superhuman levels of intelligence while everybody
else remains at a roughly human level. Such enormous
intellectual superiority may well give it correspondingly
great power. It could rapidly invent new technologies or
perfect nanotechnological designs, for example. If the
transcending upload is bent on preventing others from
getting the opportunity to upload, it might do so.


The posthuman
world may then be a reflection of one particular egoistical
upload's preferences (which in a worst-case scenario would
be worse than worthless). Such a world may well be a
realization of only a tiny part of what would have been
possible and desirable. This end is a shriek.



6.2   Flawed superintelligence


Again, there is
the possibility that a badly programmed superintelligence
takes over and implements the faulty goals it has
erroneously been given.



6.3   Repressive totalitarian global regime


Similarly, one
can imagine that an intolerant world government, based
perhaps on mistaken religious or ethical convictions, is
formed, is stable, and decides to realize only a very small
part of all the good things a posthuman world could contain.



Such a world
government could conceivably be formed by a small group of
people if they were in control of the first
superintelligence and could select its goals. If the
superintelligence arises suddenly and becomes powerful
enough to take over the world, the posthuman world may
reflect only the idiosyncratic values of the owners or
designers of this superintelligence. Depending on what those
values are, this scenario would count as a shriek.



6.4   Something unforeseen[12]


The catch-all.


These shriek
scenarios appear to have substantial probability and thus
should be taken seriously in our strategic planning.



One could argue
that one value that makes up a large portion of what we
would consider desirable in a posthuman world is that it
contains as many as possible of those persons who are
currently alive. After all, many of us want very much not to
die (at least not yet) and to have the chance of becoming
posthumans. If we accept this, then any scenario in
which the transition to the posthuman world is delayed for
long enough that almost all current humans are dead before
it happens (assuming they have not been successfully
preserved via cryonics arrangements [53,57])
would be a shriek. Failing a breakthrough in life-extension or widespread adoption of cryonics, even a smooth transition to a fully developed posthuman world eighty years from now would constitute a major existential risk, if we define "desirable" with special reference to the people who are currently alive. This "if", however, is loaded with a
profound axiological problem that we shall not try to
resolve here.


7   Whimpers



If things go
well, we may one day run up against fundamental physical
limits. Even though the universe appears to be infinite
[58,59], the portion of the universe that
we could potentially colonize is (given our admittedly very
limited current understanding of the situation) finite
[60], and we will therefore eventually
exhaust all available resources or the resources will
spontaneously decay through the gradual decrease of
negentropy and the associated decay of matter into
radiation. But here we are talking astronomical time-scales.
An ending of this sort may indeed be the best we can hope
for, so it would be misleading to count it as an existential
risk. It does not qualify as a whimper because humanity
could on this scenario have realized a good part of its
potential.


Two whimpers
(apart from the usual catch-all hypothesis) appear to have
significant probability:



7.1   Our potential or even our core values are eroded by evolutionary development



This scenario is
conceptually more complicated than the other existential
risks we have considered (together perhaps with the "We are living in a simulation that gets shut down" bang scenario).
It is explored in more detail in a companion paper
[61]. An outline of that paper is
provided in an Appendix.


A related
scenario is described in [62], which
argues that our "cosmic commons" could be burnt up in a
colonization race. Selection would favor those replicators
that spend all their resources on sending out further
colonization probes [63].


Although the time
it would take for a whimper of this kind to play itself out
may be relatively long, it could still have important policy
implications because near-term choices may determine whether
we will go down a track [64] that
inevitably leads to this outcome. Once the evolutionary
process is set in motion or a cosmic colonization race
begun, it could prove difficult or impossible to halt it
[65]. It may well be that the only
feasible way of avoiding a whimper is to prevent these
chains of events from ever starting to unwind.




7.2   Killed by an extraterrestrial civilization



The
probability of running into aliens any time soon appears to
be very small (see section on evaluating probabilities
below, and also [66,67]).




If things go
well, however, and we develop into an intergalactic
civilization, we may one day in the distant future encounter
aliens. If they were hostile and if (for some unknown
reason) they had significantly better technology than we
will have by then, they may begin the process of conquering
us. Alternatively, if they trigger a phase transition of the
vacuum through their high-energy physics experiments (see
the Bangs section) we may one day face the consequences.
Because the spatial extent of our civilization at that stage
would likely be very large, the conquest or destruction
would take relatively long to complete, making this scenario
a whimper rather than a bang.



7.3   Something unforeseen


The catch-all
hypothesis.


The first of
these whimper scenarios should be a weighty concern when
formulating long-term strategy. Dealing with the second
whimper is something we can safely delegate to future
generations (since there's nothing we can do about it now
anyway).



8   Assessing the probability of existential risks



8.1   Direct versus indirect methods


There are two complementary ways
of estimating our chances of creating a posthuman world.
What we could call the direct way is to analyze the
various specific failure-modes, assign them probabilities,
and then subtract the sum of these disaster-probabilities
from one to get the success-probability. In doing so, we
would benefit from a detailed understanding of how the
underlying causal factors will play out. For example, we
would like to know the answers to questions such as: How
much harder is it to design a foolproof global nanotech
immune system than it is to design a nanobot that can
survive and reproduce in the natural environment? How
feasible is it to keep nanotechnology strictly regulated for
a lengthy period of time (so that nobody with malicious
intentions gets their hands on an assembler that is not
contained in a tamperproof sealed assembler lab [23])? How likely is it that superintelligence will come before advanced nanotechnology? We can make guesses about these and other relevant parameters and form an estimate on that basis; and we can do the same for the other
existential risks that we have outlined above. (I have tried
to indicate the approximate relative probability of the
various risks in the rankings given in the previous four
sections.)
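
As a minimal numerical sketch of the direct method (the failure modes listed and the probabilities assigned below are placeholders for illustration, not estimates made in this paper), one sums the assigned disaster-probabilities and subtracts the total from one:

```python
# Illustrative sketch of the "direct way": assign a probability to each
# specific failure mode, sum them, and subtract from one to get a rough
# success-probability. The numbers below are placeholders, not estimates.

disaster_probabilities = {
    "deliberate misuse of nanotechnology": 0.05,
    "nuclear holocaust": 0.02,
    "badly programmed superintelligence": 0.05,
    "engineered biological agent": 0.03,
    "all other bangs, crunches, shrieks, and whimpers": 0.05,
}

p_disaster = sum(disaster_probabilities.values())
p_success = 1 - p_disaster
print(f"total existential-risk probability (toy numbers): {p_disaster:.2f}")
print(f"implied success-probability: {p_success:.2f}")

# Note: simple subtraction treats the failure modes as mutually exclusive;
# a more careful treatment would model overlaps and correlations between them.
```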


Secondly, there is the indirect way. There are theoretical constraints that can be brought to bear on the issue, based on some general features of the world in which we live. There are only a small number of these, but they are important because they do not rely on making a lot of guesses about the details of future technological and social developments:


(Keep reading at Dr Bostrom's site)