Reducing the risk of major, permanent global catastrophe is arguably the most important priority for humanity today. The reason is simple: such a catastrophe threatens countless members of future generations. Indeed, it is the difference between success and failure for human civilization. If humanity succeeds at avoiding catastrophe, it can go on to achieve amazing things across the universe. If humanity fails, everyone could die. Clearly, reducing the risk of such a global catastrophe is a worthy goal. But, in practical terms, what are the best ways to reduce the risk?
Back in 2012, I was invited to spend a few weeks visiting at the Research Institute for Humanity and Nature (RIHN), a federally funded Japanese research institute based in the beautiful city of Kyoto. I was invited by my colleague Itsuki Handoh of RIHN. During my visit, Handoh and I came up with an idea for how to fuse two important lines of research on major global threats.
It could be difficult for human civilization to survive a global catastrophe like rapid climate change, nuclear war, or a pandemic disease outbreak. But imagine if two catastrophes strike at the same time. The damage could be even worse. Unfortunately, most research looks at only one catastrophe at a time, so we have little understanding of how they interact.
The Syrian civil war has already caused over 100,000 deaths. As tragic as this is, it is minuscule compared to the massive and potentially permanent global destruction that could come from the gigaton gorilla lurking in the background: nuclear war between the United States and Russia. While the U.S. and Russia find themselves on opposite sides in Syria, their diplomacy over Syria's chemical weapons could help build the trust and confidence needed to reduce the risk of nuclear war.
Emerging technologies like bioengineering, nanotechnology, artificial intelligence, and geoengineering have great promise for humanity, but they also come with great peril. They could revolutionize everything from pollution control to human health—imagine a bioengineered microbe that converts CO2 into liquid fuels for automobiles, or nanotechnologies that target cancer cells.
But they also have the potential to cause a global catastrophe in which millions or even billions of people die.
This past December I was at the 2012 Annual Meeting of the Society for Risk Analysis. Several sessions focused on the governance of emerging technologies. Each presentation nominally focused on a single technology, mainly synthetic biology or nanotechnology. But most of the ideas discussed applied equally well to any emerging technology.
This paper develops a mathematical modeling framework using fault trees and Poisson processes for analyzing the risks of inadvertent nuclear war arising from U.S. or Russian misinterpretation of false alarms in early warning systems, and for assessing the potential value of inadvertence risk reduction options. The model uses publicly available information on early-warning systems, near-miss incidents, and other factors to estimate the probability of a U.S.-Russia crisis, the rate of false alarms, and the probabilities that leaders will launch missiles in response to a false alarm. The paper discusses results, uncertainties, limitations, and policy implications.
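To make the structure of such a model concrete, here is a minimal sketch of how a fault-tree conjunction can be combined with a Poisson alarm process to yield an annual probability of inadvertent war. The parameter values below are purely hypothetical placeholders, and the three-factor structure is a simplification assumed for illustration; neither is taken from the paper itself.

```python
import math

# Hypothetical illustrative parameters -- NOT estimates from the paper.
false_alarm_rate = 0.5               # assumed false alarms per year (Poisson rate)
p_crisis = 0.02                      # assumed probability that a U.S.-Russia crisis is underway
p_launch_given_alarm_in_crisis = 0.01  # assumed probability leaders launch on a false alarm during a crisis

# Fault-tree style conjunction: an inadvertent launch requires a false alarm,
# an ongoing crisis, and a decision to launch in response to the alarm.
effective_rate = false_alarm_rate * p_crisis * p_launch_given_alarm_in_crisis

# With alarms modeled as a Poisson process, the probability of at least one
# inadvertent launch in a year is 1 - exp(-effective_rate).
p_inadvertent_war_per_year = 1 - math.exp(-effective_rate)

print(f"Illustrative annual probability of inadvertent war: {p_inadvertent_war_per_year:.2e}")
```

The point of the sketch is only the structure: the rare-event probability falls out of multiplying a false-alarm rate by conditional probabilities along the fault tree, and different risk reduction options can be compared by how much they shrink each factor.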
Perceived failure to reduce greenhouse gas emissions has prompted interest in avoiding the harms of climate change via geoengineering, that is, the intentional manipulation of Earth system processes. Perhaps the most promising geoengineering technique is stratospheric aerosol injection (SAI), which reflects incoming solar radiation back to space, thereby lowering surface temperatures.
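A simple zero-dimensional energy-balance calculation illustrates the cooling mechanism. The numbers below are standard textbook approximations used only for illustration, not values from any particular SAI study.

```python
# Minimal energy-balance sketch: SAI adds a negative radiative forcing that
# partially offsets greenhouse gas forcing, reducing equilibrium warming.

climate_sensitivity = 0.8   # K per (W/m^2), approximate equilibrium sensitivity parameter
forcing_ghg = 3.7           # W/m^2, roughly the forcing from doubled CO2
forcing_sai = -2.0          # W/m^2, assumed negative forcing from injected stratospheric aerosols

# Equilibrium temperature response to the combined forcing.
delta_t = climate_sensitivity * (forcing_ghg + forcing_sai)
print(f"Approximate equilibrium warming with SAI: {delta_t:.1f} K")
```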
Mankind has really popped the planet in the jaw over the last few centuries: six million hectares are lost to deforestation every year; the ocean is increasingly acidic and devoid of fish; the planet's sixth mass extinction seems to be underway; and human-caused climate change is already raising sea levels, aggravating droughts, and increasing the frequency and intensity of extreme weather events like Hurricane Sandy.