Institute for Ethics and Emerging Technologies








Introducing the Subfield of Agential Riskology


By Phil Torres
Ethical Technology

Posted: Feb 26, 2016

The field of Existential Risk Studies has, to date, focused largely on risk scenarios involving natural phenomena, anthropogenic phenomena, and a specific type of anthropogenic phenomenon that one could term “technogenic.” The first category includes asteroid/comet impacts, supervolcanoes, and pandemics. The second encompasses climate change and biodiversity loss. And the third deals with risks that arise from the misuse and abuse of advanced technologies, such as nuclear weapons, biotechnology, synthetic biology, nanotechnology, and artificial intelligence.

What I want to draw attention to is that any analysis of existential risks will be incomplete without a careful study of what sorts of people might want to use advanced technologies to initiate a catastrophe. The fact is that tools aren’t going to induce global disasters without some agents wielding them. (Although perhaps there are one or two exceptions to this rule.) So we really need a taxonomy of agents to fully understand the threat situation before us. One can’t accurately estimate the probability of an existential catastrophe — as Sir Martin Rees, Nick Bostrom, and others have done, all focusing on the risks posed by advanced dual-use technologies, which are seen as the most urgent and formidable — without a clear account of the “agential risks” involved in such scenarios. A world full of perfectly moral beings and highly dangerous technologies, for example, might very well survive to a posthuman state, whereas a world full of bellicose beings and, say, only a single “existential” technology could self-annihilate early in its career. The agents who occupy the world are just as important to consider as the technologies available to them.

Unfortunately, Bostrom’s seminal 2002 paper on existential risks exemplifies this scholarly bias toward focusing exclusively on technology: it makes only the vaguest references to the different types of malicious agents in the world. (It includes multiple taxonomies of all sorts of risks, but none of agential risks.)

So, let’s take a look at the types of agential risks in the world. To begin, note that the error/terror distinction made by Sir Martin Rees and other riskologists only makes sense in the context of an agent. With respect to the first category (error): imagine a world cluttered with doomsday machines, or technologies powerful enough to bring about an existential disaster. Now, what sort of agent could induce such a catastrophe through a mistake, lapse of judgment, accident, miscalculation, misunderstanding, or slip-up? The answer depends, in part, on how accessible certain future technologies are, most notably biotechnology, synthetic biology, nanotechnology, and possibly artificial intelligence. At least in theory, virtually any kind of agent could be a culprit. For example, a state could misinterpret an early-warning system as indicating that an enemy has launched a nuclear weapon, and therefore respond in kind, initiating a nuclear conflict that devastates the globe. Or well-intentioned “biohackers” could inadvertently release a pathogen that’s even more lethal and contagious than the Ebola virus, resulting in a worldwide pandemic. Indeed, bioterrorism experts have worried that as biotechnology becomes “democratized,” the risk of accidental contamination will likely skyrocket, for straightforward statistical reasons. The history of bio-mistakes made in highly regulated government or university laboratories is already quite dismal, as I detail in my book. What should we expect if millions and millions of people are experimenting in their basements or garages?
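To see those “straightforward statistical reasons” at work, consider a minimal sketch (mine, not the article’s, and with invented numbers rather than estimates from the literature). If each of n independent experimenters has even a tiny annual probability p of an accidental release, the chance of at least one release per year is 1 − (1 − p)^n, which climbs toward certainty as n grows:

    # A sketch, not an estimate: probability of at least one accidental release
    # per year, given n independent experimenters who each have a small annual
    # mishap probability p. The value p = 1e-5 is an invented, illustrative number.
    def p_any_accident(n, p=1e-5):
        return 1 - (1 - p) ** n

    for n in (100, 10_000, 1_000_000):
        print(n, p_any_accident(n))
    # -> roughly 0.1% for 100 experimenters, 9.5% for 10,000,
    #    and better than 99.99% for 1,000,000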

The more interesting situation, though, concerns the category of terror (intended harm). Here we need to distinguish between global catastrophic risks and existential risks. For the present purposes, we can say that the latter results in severe and irreversible consequences for humanity, whereas the former doesn’t. This distinction is important because the range of agents that could initiate a global catastrophic risk is much broader than the range of agents who might be interested in engaging in omnicide (the extermination of our species) or bringing about a permanent collapse of civilization. For example, North Korea might be motivated by a grandiose sense of nationalism to start a nuclear war with the United States, thereby resulting in a global catastrophe. Similarly, there appears to be a new Cold War heating up between the US and Russia, the nuclear nations of India and Pakistan continue to worry many observers, and the late Osama bin Laden called it his “religious duty” to use weapons of mass destruction, including nuclear weapons, against the West. The Islamic State has also pondered the possibility of weaponizing the bubonic plague, and even infecting its own members with Ebola and then putting them on an airplane. Furthermore, there are lone wolves like Anders Breivik who strive to bring about social change through the use of extreme violence. All of these agents could potentially induce a catastrophe of global proportions, especially if empowered by advanced dual-use technologies.

But the examples mentioned here have something special in common: they all care about what we might call post-conflict group preservation. That is to say, none of them are suicidal — at least not with respect to their intentions, although the consequences of their actions could inadvertently lead to their own demise. Breivik, for example, wanted to prevent Norway from succumbing to a takeover by Muslim immigrants, referring to himself as an “inordinately loving” person. Meanwhile, North Korea would presumably hope for world domination. And al-Qaeda’s explicit goal was to purge “Muslim lands” of Western forces, such as those left behind in Saudi Arabia after the 1990 Gulf War. For such entities, the use of advanced technology — and this applies to future artifacts as well, such as nanotechnology — is a means for accomplishing an end that involves its own survival, or at least the survival of the movement or ideology of which it’s a part. It follows that, unless terror slides into error, these entities aren’t likely to induce an existential catastrophe (although one could argue that North Korea taking over the world would be a catastrophe of near-existential proportions, depending on the long-term consequences for our species). The danger with respect to these agents is the possibility of a global disaster that kills large numbers of people and temporarily damages civilization in significant ways.

So, what sorts of agents might want to actually kill off our species or permanently disable civilization? I can think of at least four possibilities:

(1) Artificial intelligence. If we succeed in creating a machine with artificial general intelligence, it would have agency in the world no less than we do, even if it’s not conscious. On the one hand, it could prefer enmity over amity, and consequently destroy humanity for the same reason that we destroy cockroaches. On the other hand, it could be merely indifferent to our plight, and consequently destroy us because we’re made out of chemical elements that it could use for something else. Either of these possibilities could involve any number of advanced technologies, such as nuclear weapons, biotechnology, synthetic biology, and molecular manufacturing. For example, as Nick Bostrom notes in Superintelligence, if nanofactories don’t already exist, an artificial intelligence could create them and then employ these devices to produce “nerve gas or target-seeking mosquito-like robots [that] might then burgeon forth simultaneously from every square meter of the globe.” The result could be total human annihilation.

(2) Suicidal groups or individuals with a death wish for humanity, or a desire to engage in the ultimate mass suicide event. A Colorado man named Marvin Heemeyer provides an example of the mindset required here. Due to an ongoing dispute with his local town, Heemeyer secretly built a “futuristic tank” out of a bulldozer and then proceeded to demolish buildings across the town, killing himself inside the tank after it became stuck in the rubble of a collapsed building. In this case, Heemeyer wanted to engage in a kind of mass destruction, and he didn’t care to survive. It was the ultimate way of “going out with a bang,” so to speak. Other examples come from school and college shootings involving adolescents or young adults who want to kill as many human beings as possible before being shot dead themselves. There’s no ultimate aim here — unlike the case of suicide terrorism, which is typically done for the good of the group. Such individuals simply want to kill and then die. Now, imagine if such nutcases had access to advanced nanotech weapons (say, rapacious self-replicating nanobots) or engineered pathogens that could take out not merely the bullies at school or the damn bureaucrats in town, but the entire human race. Surely there are people out there with grudges against humanity. Perhaps one of these individuals will someday attempt to expunge everyone alive by going out with a bang — or whimper.

(3) Ecoterrorists who believe that Gaia would be better off without Homo sapiens sandwiched between the earth and the heavens. The fact is that our species is almost entirely responsible for the sixth mass extinction event in life’s 3.5-billion-year history. But rather than see this as a reason to support sustainable policies even more vigorously, some environmentalists believe that our species simply isn’t compatible with a healthy biosphere. We are exploitative creatures, and our actions have resulted in a loss of biological diversity around the world that rivals the past “Big Five” events. Those of this persuasion might thus attempt to exploit increasingly powerful future technologies to selectively annihilate Homo sapiens for the “greater good” of Earth-originating life (an engineered germ might be especially effective for this goal), or to permanently dismantle civilization, as Ted Kaczynski hoped. In her fascinating book Understanding Apocalyptic Terrorism, Frances Flannery writes that the ecoterrorism threat will probably grow more significant later this century, due to the ongoing, slow-motion catastrophes of climate change and biodiversity loss. In her words: “As the environmental situation becomes more dire, eco-terrorism will likely become a more serious threat in the future.”

(4) Finally, apocalyptic religious groups who believe that the world must be destroyed in order to be saved. History is overflowing with apocalyptic groups that believed the world was about to end. Few people — including existential risk scholars — are even remotely aware of this incredibly protracted and frightening history. Today, groups like the Islamic State, Aum Shinrikyo, the Eastern Lightning, and “dispensationalists” in America (including many in the US government) all believe that the end of the world is rapidly approaching. At the extreme, some groups — such as the Islamic State, Aum Shinrikyo, and the “Armageddon lobby” — have seen themselves as active participants in an apocalyptic narrative that’s unfolding in real time. For such groups, civilization as we know it must be obliterated before a new, paradisiacal world — Heaven on Earth — can be supernaturally implemented. As Sam Harris has observed, it’s genuinely the case that, for such believers, a mushroom-shaped cloud casting a shadow over the Middle East (or even New York City) would be a cause for eschatological elation, since it would mean that the Rapture, Tribulation, and Millennium are imminent. In the future, a group of religious fanatics who believe, with the unshakable firmness of faith, that the only way to make things right is to destroy this weary, sinful world will constitute a major agential risk to human survival. (I discuss this phenomenon in detail in my book.) Furthermore, as I argue in a forthcoming article for Skeptic magazine, there are specific historical, demographic, and technological reasons for thinking that the size and frequency of apocalyptic movements will actually increase as the twenty-first century unfolds.

In my view, Existential Risk Studies needs a subfield dedicated to studying the various agents most likely to induce global catastrophic risks and existential risks, since these agents have distinct properties and each could require its own set of prophylactic measures. For example, 2076 will likely see a spike in apocalyptic fervor in the Islamic world. (I won’t here explain why, but suffice it to say that 2076 corresponds to the year 1500 in the Islamic calendar, and the last turn of the century in the Islamic calendar was 1979, during which apocalyptic terrorists took over the Grand Mosque in Mecca — with some 100,000 people held hostage — and the Ayatollah Khomeini led an “apocalyptic” revolution in Iran. Who knows what destructive technologies will exist in 2076? We ought to be ready.) If scholars aren’t attuned to the particular features of each of these risk categories, we could leave ourselves vulnerable to an avoidable agential threat that results in unprecedented human suffering.
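For readers who want to check the calendar arithmetic, here is a rough conversion (my sketch, not the author’s; it assumes the tabular Hijri epoch and a mean lunar year of about 354.37 days, whereas real Islamic calendars are observation-based and can differ by a day or two):

    # Rough Gregorian date on which a given Hijri (Islamic) year begins,
    # assuming the tabular epoch (July 19, 622 CE, proleptic Gregorian) and a
    # mean lunar year of 354.36667 days.
    from datetime import date, timedelta

    HIJRI_EPOCH = date(622, 7, 19)
    MEAN_HIJRI_YEAR = 354.36667  # days

    def approx_hijri_new_year(hijri_year):
        return HIJRI_EPOCH + timedelta(days=(hijri_year - 1) * MEAN_HIJRI_YEAR)

    print(approx_hijri_new_year(1500))
    # -> approximately 2076-11-29: the Islamic century turns in late 2076,
    #    as the article says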

I would suggest that institutes dedicated to studying existential risks expand their focus to include not just understanding the possibility of superintelligence, but also exploring how the other agents above could evolve over time, employ future technologies to achieve their goals, and be effectively neutralized by defensive technologies or government regulations.


Phil Torres is an author and artist. His forthcoming book is called The End: What Science and Religion Tell Us About the Apocalypse (Pitchstone Publishing). You can contact him here: philosophytorres@gmail.com.


COMMENTS


My 2 cents:

It would be interesting to add a simple mathematical model that gives the probability of catastrophe as a function of the number of independent agents capable of creating it. It would show that if the number of agents is in the thousands or higher, even a small probability of craziness in any one agent would result in an existential catastrophe. Millions of people commit suicide every year, and hackers release millions of viruses. (A sketch of such a model appears after this comment.)

Another class of x-risk agents are rational agents who use an x-risk weapon for global blackmail. North Korea might do this, demanding that every other country surrender to it. But if two countries create doomsday weapons with contradictory conditions, we are doomed.

I would also add “arrogant scientists” who proceed with dangerous experiments because they personally benefit from a positive outcome. Examples include the LHC and the recent creation of a new strain of bird flu virus.

I would also cover the topic of the desire for the end of the world that most people have; that is why we see apocalyptic movies.
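A minimal sketch of the model the commenter asks for (illustrative numbers only, not estimates): with N independent agents each having annual probability p of triggering a catastrophe, the aggregate annual risk is 1 − (1 − p)^N, the same binomial logic as in the accident sketch above. Inverting that expression shows how quickly the required per-agent reliability tightens as N grows:

    # Sketch of the commenter's proposed model (illustrative numbers only).
    # Aggregate annual risk with N independent agents: 1 - (1 - p)**N.
    # Inverted: the largest per-agent p compatible with a total-risk target.
    def max_safe_p(n_agents, total_risk_target):
        return 1 - (1 - total_risk_target) ** (1 / n_agents)

    for n in (1_000, 1_000_000):
        print(n, max_safe_p(n, total_risk_target=0.01))
    # With a million capable agents, each must have roughly a
    # one-in-a-hundred-million annual chance of "going rogue"
    # to keep total annual risk under 1%.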





