Why Steven Pinker’s Optimism About the Future of Humanity is Misguided
Phil Torres
2015-12-13 00:00:00

Yet if one actually looks at the statistics, the world is steadily becoming more peaceful. This is the conclusion of Steven Pinker’s monumental 2011 book The Better Angels of Our Nature, as well as Michael Shermer’s excellent 2015 follow-up The Moral Arc (essentially a “sequel” to Pinker’s tome). The surprising, counterintuitive fact is that the global prevalence of genocide, homicide, infanticide, domestic violence, and violence against children is declining, while democratization, women’s rights, gay rights, and even animal rights are on the rise. The probability that any one of us dies at the hands of another human being rather than from natural causes is perhaps the lowest it’s ever been in human history, lower even than before the Neolithic Revolution. If that’s not Progress with a capital ‘P’, then I don’t know what is.



The ocean of evidence that Pinker and Shermer present is robust and cogent. Yet I think there’s another story to tell — one that hints at a possible future marked by unprecedented human suffering, global catastrophes, and even our extinction. The fact is that while the enterprise of human civilization has been making significant ethical strides forward in multiple domains, a range of emerging technologies is, by nearly all accounts, poised to introduce brand new existential risks never before encountered by our species (see Figure a).





Our species has of course always been haunted by a small number of improbable hazards, such as pandemics, supervolcanoes, and asteroid/comet impacts. In my forthcoming book on existential risks and apocalyptic terrorism, I refer to these as our cosmic risk background. But since 1945, the number of existential risk scenarios has increased (far) beyond historical norms. These new risks are anthropogenic in nature. Obvious examples include global warming and biodiversity loss, which scientists say could lead to the sixth mass extinction event in life’s entire 3.5-billion-year history on Earth, or even turn Earth into an uninhabitable cauldron like our planetary neighbor Venus (which succumbed to a runaway greenhouse effect).



But the most worrisome threats are not merely anthropogenic; they’re technogenic. They arise from the fact that advanced technologies are (a) dual-use in nature, meaning that they can be employed for both benevolent and nefarious purposes; (b) becoming more powerful, thereby enabling humans to manipulate and rearrange the physical world in new ways; and (c) in some cases, becoming more accessible to small groups, including, at the limit, single individuals. This is notable because just as there are many more terrorist groups than rogue nations in the world, there are far more deranged psychopaths than terrorist groups. Thus, the number of possible offenders armed with catastrophic weaponry is likely to increase significantly in the future.



It’s not clear how the trends that Pinker and Shermer identify could save us from this situation. Even if 99% of human beings in the year 2100 were peaceable, the remaining 1% could find themselves with enough technological power at their fingertips to initiate a disaster of global proportions. Or, forget 1% — what about a single individual with a death wish for humanity, or a single apocalyptic group hoping to engage in the ultimate mass suicide event? In a world cluttered with doomsday machines, exactly how long could we expect to survive?
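To make the worry concrete, here is a rough, back-of-the-envelope sketch (the numbers are hypothetical assumptions of mine, not estimates drawn from Pinker, Shermer, or anyone else): suppose n actors each have a small, independent probability p of triggering a civilization-ending catastrophe in any given year. The probability that we survive t years is then

\[
P_{\text{survive}}(t) \;=\; (1 - p)^{\,n t} \;\approx\; e^{-n p t},
\]

so the expected time to disaster is on the order of 1/(np) years. With a thousand such actors and a one-in-a-million chance per actor per year, that works out to roughly a thousand years; with a million such actors, roughly a single year. On this toy model, what matters most is not the average person’s peaceableness but how many hands the doomsday machines end up in.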



The trends that Pinker and Shermer identify also won’t protect us against the looming threat of superintelligence, which I take to be the most significant (known) threat to our long-term future. (It’s important to include the word “known” because it appears highly likely that future artifacts currently hidden beneath the horizon of our technological imaginations will introduce brand new existential risk scenarios that we can’t currently anticipate.) Even if civilization were to become a moral utopia in which war, homicide, and other forms of violence were non-existent, we could still be destroyed by a superintelligent machine that prefers harvesting the atoms in our bodies over ensuring a prosperous future for our children. Indeed, the Oxford philosopher Nick Bostrom argues in his book Superintelligence that we should recognize the “default outcome” of a successfully engineered superintelligence to be “doom.”



It’s considerations like these that have led many riskologists to conclude that the probability of an existential disaster happening this century is shockingly high. For example, the 2006 Stern Review on the Economics of Climate Change, led by the economist Sir Nicholas Stern, assigns a 9.5% probability of human extinction before 2100. Similarly, a survey taken during a Future of Humanity Institute (FHI) conference on global catastrophic risks places the likelihood of annihilation this century at 19%. Bostrom argues in a 2002 paper that it “would be misguided” to assign a probability of less than 25%, adding that “the best estimate may be considerably higher.” And the Astronomer Royal and cofounder of the Centre for the Study of Existential Risk (CSER), Sir Martin Rees, states in his 2003 book Our Final Hour that our species has a mere 50/50 chance of surviving into the next century — a pitiful coin toss! I myself would put the probability around 50%, mostly due to a phenomenon that I term monsters in my forthcoming book.



To put these figures in perspective, prior to the Atomic Age, the probability of human extinction from a natural catastrophe was extremely small — perhaps even negligible on a timescale of centuries. It follows that the past 70 years have witnessed a sudden and rapid increase in both the number of existential risk scenarios and the probability of an irreversible tragedy occurring.

Projecting such trends into the future, I don’t think it’s crazy to wonder whether the rate at which future technologies will introduce brand new existential risks might be exponential — perhaps tracking the Moorean trend of exponential growth found in fields like computer science, biotechnology, synthetic biology, and nanotechnology. If so, we might expect something like an existential risk singularity, in Ray Kurzweil’s sense of the term “Singularity,” at which point doom would become practically inescapable.
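To see why compounding risk would make doom “practically inescapable,” consider a toy model of my own (the starting value and doubling time are illustrative assumptions, not forecasts): let the probability of an existential catastrophe in decade k be p_k = p_0 · 2^k, doubling every decade (and capping each p_k at 1). The probability of surviving K decades is then

\[
P_{\text{survive}}(K) \;=\; \prod_{k=0}^{K-1} \bigl(1 - p_0 \cdot 2^{k}\bigr).
\]

Even with a tiny starting value of p_0 = 0.1% per decade, the per-decade risk passes 50% after nine doublings, that is, within roughly a century on this schedule, and the cumulative probability of survival collapses toward zero shortly thereafter. Exponential growth in destructive capability would simply swamp incremental moral progress.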



While the historical trajectory of moral progress revealed by Pinker’s and Shermer’s studies suggests a rather sanguine picture of the future, the fact is that population-level statistics are ultimately irrelevant to the longer-term prospects of human survival, given the increasing power and accessibility of dual-use technologies. Even if the fringe of lone psychopaths, apocalyptic cults, terrorist groups, and rogue states were to shrink considerably in the future, just one misstep could be enough to catapult us back into the Stone Age, or worse. (As Einstein once said, “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”) The current trends – both technological and sociological – suggest that the world is simultaneously becoming safer and far more dangerous.





There are a few reasons for hope. Perhaps future humans will manage to solve the “amity-enmity problem” and engineer a friendly superintelligence. As Stephen Hawking has suggested, if superintelligence isn’t the worst thing to happen to humanity, it could very well be the best. Or perhaps space colonization will significantly reduce the risk of an existential disaster, since the more widely we spread out, the less chance there is that a single event will have species-wide consequences. It could also be the case that we successfully integrate technology into our cognitive wetware, or use a process like iterated embryo selection to create more intelligent, sagacious, and responsible posthumans. As I put it in my book, it might be the case that to survive we must go extinct — that is, through a techno-evolutionary process of anagenetic cyborgization, as transhumanists advocate. This may sound somewhat fantastical, but we may have no other choice.

The fact is that we’re no longer children playing with matches; we’re children playing with flamethrowers. Either we need parents to watch over us, or we need to grow up ourselves.



In the absence of such futuristic solutions, it could be that the world really is going to hell – despite the hopeful noises made by smart people like Pinker and Shermer.

For more on my forthcoming book, The End: What Science and Religion Tell Us About the Apocalypse (Pitchstone Publishing), go here. Also check out the literature on the X-Risks Institute website, here.



Figure a: A typology of risks, organized according to the properties of spatial scope, temporal scope, and intensity. Existential risks are (a) all red-dot events, and (b) any black-dot event that's sufficiently severe. (Cf. Bostrom's 2002 typology.)



Figure b: A Bostrom-style graph showing the dual trends of (a) a decline in global violence, and (b) an increase in the capacity of state and, especially, nonstate actors to wreak unprecedented havoc on society.