

Why Steven Pinker’s Optimism About the Future of Humanity is Misguided


By Phil Torres
Ethical Technology

Posted: Dec 13, 2015

It’s easy to be seduced by the news headlines into thinking that the world is going to hell. The Syrian war is an international tangle of state and non-state actors, some of whom are genuinely motivated by apocalyptic narratives in which they see themselves as active participants. In fact, a growing number of observers have suggested that the Syrian conflict could be the beginning of a Third World War. Here in the US, there are daily mass shootings, campus rapes, racial discrimination, and police brutality, to name just a few causes for moral alarm. In Europe, the past month has seen multiple terrorist attacks in Paris and London and the worst refugee crisis since World War II. And so on.

Yet if one actually looks at the statistics, the world is steadily becoming more peaceful. This is the conclusion of Steven Pinker’s monumental 2011 book The Better Angels of Our Nature, as well as Michael Shermer’s excellent 2015 follow-up The Moral Arc (essentially a “sequel” to Pinker’s tome). The surprising, counterintuitive fact is that the global prevalence of genocide, homicide, infanticide, domestic violence, and violence against children is declining, while democratization, women’s rights, gay rights, and even animal rights are on the rise. The probability that any one of us dies at the hands of another human being rather than from natural causes is perhaps the lowest it’s ever been in human history, lower even than before the Neolithic Revolution. If that’s not Progress with a capital ‘P’, then I don’t know what is.

The ocean of evidence that Pinker and Shermer present is robust and cogent. Yet I think there’s another story to tell — one that hints at a possible future marked by unprecedented human suffering, global catastrophes, and even our extinction. The fact is that while the enterprise of human civilization has been making significant ethical strides forward in multiple domains, a range of emerging technologies are, by nearly all accounts, poised to introduce brand new existential risks never before encountered by our species (see Figure a).

Our species has of course always been haunted by a small number of improbable hazards, such as pandemics, supervolcanoes, and asteroid/comet impacts. In my forthcoming book on existential risks and apocalyptic terrorism, I refer to these as our cosmic risk background. But since 1945, the number of existential risk scenarios has increased (far) beyond historical norms. These new risks are anthropogenic in nature. Obvious examples include global warming and biodiversity loss, which scientists say could lead to the sixth mass extinction event in life’s entire 3.5-billion-year history on Earth, or even turn Earth into an uninhabitable cauldron like our planetary neighbor Venus (which succumbed to a runaway greenhouse effect).

But the most worrisome threats are not merely anthropogenic; they’re technogenic. They arise from the fact that advanced technologies are (a) dual-use in nature, meaning that they can be employed for both benevolent and nefarious purposes; (b) becoming more powerful, thereby enabling humans to manipulate and rearrange the physical world in new ways; and (c) in some cases, becoming more accessible to small groups, including, at the limit, single individuals. This is notable because just as there are many more terrorist groups than rogue nations in the world, there are far more deranged psychopaths than terrorist groups. Thus, the number of possible offenders armed with catastrophic weaponry is likely to increase significantly in the future.

It’s not clear how the trends that Pinker and Shermer identify could save us from this situation. Even if 99% of human beings in the year 2100 were peaceable, the remaining 1% could find themselves with enough technological power at their fingertips to initiate a disaster of global proportions. Or, forget 1% — what about a single individual with a death wish for humanity, or a single apocalyptic group hoping to engage in the ultimate mass suicide event? In a world cluttered with doomsday machines, exactly how long could we expect to survive?

The trends that Pinker and Shermer identify also won’t protect us against the looming threat of superintelligence, which I take to be the most significant (known) threat to our long-term future. (It’s important to include the word “known” because it appears highly likely that future artifacts currently hidden beneath the horizon of our technological imaginations will introduce brand new existential risk scenarios that we can’t currently anticipate.) Even if civilization were to become a moral utopia in which war, homicide, and other forms of violence are non-existent, we could still be destroyed by a superintelligent machine that prefers harvesting the atoms in our bodies over ensuring a prosperous future for our children. Indeed, the Oxford philosopher Nick Bostrom argues in his book Superintelligence that we should recognize the “default outcome” of a successfully engineered superintelligence to be “doom.”

It’s considerations like these that have led many riskologists to conclude that the probability of an existential disaster happening this century is shockingly high. For example, the 2006 Stern Review on the Economics of Climate Change, led by the economist Sir Nicholas Stern, assigns a 9.5% probability of human extinction before 2100.
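That figure is simply a small constant risk compounded over time: the Review works from an assumed 0.1% probability of extinction per year, which accumulates to roughly 9.5% over a century. A minimal sketch of the arithmetic in Python (the 0.1% annual rate is the Review’s assumption, not an independent estimate):

    # Compound a constant 0.1% annual extinction risk over a century.
    annual_risk = 0.001   # the Stern Review's assumed yearly probability
    years = 100           # roughly the rest of this century

    survival = (1 - annual_risk) ** years
    print(f"Cumulative extinction risk: {1 - survival:.1%}")  # prints ~9.5%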

Similarly, a survey taken during a Future of Humanity Institute (FHI) conference on global catastrophic risks places the likelihood of annihilation this century at 19%. Bostrom argues in a 2002 paper that it “would be misguided” to assign a probability of less than 25%, adding that “the best estimate may be considerably higher.” And the Astronomer Royal and cofounder of the Centre for the Study of Existential Risk (CSER), Sir Martin Rees, states in his 2003 book Our Final Hour that our species has a mere 50/50 chance of surviving into the next century — a pitiful coin toss! I myself would put the probability around 50%, mostly due to a phenomenon that I term “monsters” in my forthcoming book.

To put these figures in perspective, prior to the Atomic Age, the probability of human extinction from a natural catastrophe was extremely small — perhaps even negligible on a timescale of centuries. It follows that the past 70 years have witnessed a sudden and rapid increase in both the number of existential risk scenarios and the probability of an irreversible tragedy occurring.

Projecting such trends into the future, I don’t think it’s crazy to wonder whether the rate at which future technologies will introduce brand new existential risks might be exponential — perhaps tracking the Moorean trend of exponential growth found in fields like computer science, biotechnology, synthetic biology, and nanotechnology. If so, we might expect something like an existential risk singularity, in Ray Kurzweil’s sense of the term “Singularity,” at which point doom would become practically inescapable.
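To see what that would mean numerically, here is a deliberately toy model (every number in it is an illustrative assumption, not an estimate): suppose the count of independent risk scenarios doubles every twenty years, and each scenario carries a tiny fixed annual probability of catastrophe.

    # Toy model: exponentially multiplying risk scenarios erode survival odds.
    # All parameters are illustrative assumptions.
    per_scenario_risk = 1e-4   # assumed annual catastrophe probability per scenario
    scenarios_now = 10         # assumed current number of risk scenarios
    doubling_years = 20        # assumed Moore-like doubling period

    survival = 1.0
    for year in range(1, 201):
        n = scenarios_now * 2 ** (year / doubling_years)
        survival *= (1 - per_scenario_risk) ** n
        if year % 50 == 0:
            print(f"year {year:3d}: survival probability {survival:6.1%}")

Under these made-up numbers the odds of surviving the first fifty years are still about 87%, but they fall to roughly 40% after a century and to effectively zero after two: slow erosion followed by collapse, which is the shape an existential risk singularity would take.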

While the historical trajectory of moral progress revealed by Pinker’s and Shermer’s studies suggests a rather sanguine picture of the future, the fact is that population-level statistics are ultimately irrelevant to the longer-term prospects of human survival, given the increasing power and accessibility of dual-use technologies. Even if the fringe of lone psychopaths, apocalyptic cults, terrorist groups, and rogue states were to decrease considerably in the future, just one misstep could be enough to catapult us back into the Stone Age, or worse. (As Einstein once said, “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”) The current trends – both technological and sociological – suggest that the world is simultaneously becoming safer and far more dangerous.

There are a few reasons for hope. Perhaps future humans will manage to solve the “amity-enmity problem” and engineer a friendly superintelligence. As Stephen Hawking has suggested, if superintelligence isn’t the worst thing to happen to humanity, it could very well be the best. Or perhaps space colonization will significantly reduce the risk of an existential disaster, since the wider we spread out through space, the less chance there is that a single event will have species-wide consequences. It could also be the case that we successfully integrate technology into our cognitive wetware, or use a process like iterated embryo selection to create more intelligent, sagacious, and responsible posthumans. As I put it in my book, it might be the case that to survive we must go extinct — that is, through a techno-evolutionary process of anagenetic cyborgization, as transhumanists advocate. This may sound somewhat fantastical, but we may have no other choice.

The fact is that we’re no longer children playing with matches; we’re children playing with flamethrowers. We either need parents to watch over us, or to grow up ourselves.

In the absence of such futuristic solutions, it could be that the world really is going to hell – despite the hopeful noises made by smart people like Pinker and Shermer.

For more on my forthcoming book, The End: What Science and Religion Tell Us About the Apocalypse (Pitchstone Publishing), go here. Also check out the literature on the X-Risks Institute website, here.

Figure a: A typology of risks, organized according to the properties of spatial scope, temporal scope, and intensity. Existential risks are (a) all red-dot events, and (b) any black-dot event that’s sufficiently severe. (Cf. Bostrom’s 2002 typology.)

Figure b: A Bostrom-style graph showing the dual trends of (a) a decline in global violence, and (b) an increase in the capacity of state and, especially, nonstate actors to wreak unprecedented havoc on society.


Phil Torres is an author and artist. His forthcoming book is called The End: What Science and Religion Tell Us About the Apocalypse (Pitchstone Publishing). You can contact him here: philosophytorres@gmail.com.


COMMENTS


Phil, I agree with your analysis. We’re facing an increasingly powerful and increasingly decentralized threat to our survival: technology. If we survive, it will be because we either find a way to centralize the risk (a really bad idea, in my opinion, as it may introduce far worse risks than extinction) or to mitigate the risk in a decentralized way, which I believe will require unprecedented degrees of global cooperation, good will, friendliness, and even genuine compassion for each other.





I’ll add that I think technology also presents a substantial opportunity to assist with that global increase in decentralized compassion, as illustrated to some extent by the effects of social networking, which I believe has enhanced our collective capacity for empathy.





I didn’t like Pinker’s analysis because the trends he discusses are dependent upon energy. People are domesticated and completely at the mercy of a highly complex and fragile civilization system with heavy policing mechanisms. If part of this breaks down because of energy shortages or any number of climate-related issues, then we’ll see the violence latent within this configuration unleashed. Humans have not changed in a fundamental way. Energy-dependent stability has allowed us to constrain our behavior, and this probably also has a lot of basis in capitalistic exploitation. Well-behaved people are easier to profit off of in several ways. What happens if the constraints come off in a world of over 7 billion people?





On the other hand: for every techno-supervillain, there will be thousands, or even millions of techno-superheroes.  The superheroes will have the advantages of large-scale networking and cooperation, while the Transparent Society/Participatory Panopticon we’re creating (cameras and sensors everywhere) gives the emerging superhero-civilization ever-increasing abilities to spot supervillains before they attack.  The superheroes will dominate the major centers of research and development. 

So, if an evil biohacker uses CRISPR/Cas9 to develop a gene-drive super-bioplague, there will be many more white-hat biohackers, the CDC and its equivalents in other nations (or whatever replaces nations by then), etc. fighting to stop it. Gray Goo nanotechnology (assuming such is physically possible) will be countered by “Blue Goo” nanotechnology (very tiny cops taking down very tiny criminals). So far as I know, there’s no reason to think that a lone whackjob will ever be able to generate an Earth-Shattering Kaboom (instantaneous global destruction) due to physical limits on energy sources and energy density.

This means the most dangerous existential threats are still societal/systemic in nature, rather than originating from small groups or lone individuals: catastrophic climate change, ol’ fashion Nucular Combat Toe to Toe With the Russkies, anthropogenic collapse of the biosphere (mass extinction), and the like.  Well, except for the possibility that the first AGI could be created as a weapon of Full-Spectrum Dominance (who decides to keep the FSD to hirself rather than conferring it upon hir creators) or a high-frequency stock trading program that gobbles up the global economy/ecosystem and invests the whole shebang in paperclip production.  Hopefully we will tread very carefully and very transparently there.





KevinC: You make an excellent point. Your comment reminds me of a superb book called The Future of Violence, in which the authors write: “In one sense, the distribution of defensive capacity is a blessing for those worried about the distribution of offensive capacity. … Indeed, the proliferation of defensive capability is — at least in theory — exponentially greater than the proliferation of offensive capability because the good guys vastly outnumber the bad guys.”

A few very quick thoughts about this. First, I think it’s the best counterpoint to the argument that I present above. I’m glad you made it explicit. Second, it’s not clear to me that the defensive capability will be “exponentially” greater (to quote the authors above), although I do think it will certainly be greater. (If I have time later, I’ll try to elaborate on this point.) Third, it’s often been the case historically that defensive technologies have been invented *after* the relevant offensive technologies. It follows that, given the trends of exponential growth and power, even an incredibly short lag between gray goo and blue goo could nonetheless be long enough for an irreversible catastrophe to occur. But maybe we’ll be able to develop strategies in the future for ensuring that, say, nanoimmune systems are built before the relevant destructive technologies — a version of differential, or staggered, technological development.

And this leads to the final point: in the foreseeable future, it could quite plausibly be the case that even a *single* failure to prevent the supervillains from initiating a disaster will be all it takes for civilization to collapse. (As I’m sure you know, existential risks are by definition one-time, non-repeatable events in a species’ history.) Thus, even if we were to install ubiquitous sensors and cameras throughout society, unless this system were 100% effective at catching potential offenders before they perpetrated a crime, doom would be more or less certain over a long enough period. As mentioned in the article, advanced technologies will very likely increase the existential stakes immensely, and herein lies the core concern.
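To put rough numbers on that point (the figures below are purely illustrative assumptions): even a surveillance regime that intercepts 99.9% of serious attempts fails with near-certainty once attempts are allowed to accumulate over centuries.

    # Illustration with assumed numbers: an almost-perfect detection system
    # still fails eventually if attempts keep coming.
    detection_rate = 0.999   # assumed probability of stopping any one attempt
    attempts_per_year = 5    # assumed number of serious attempts per year

    for years in (10, 100, 1000):
        attempts = attempts_per_year * years
        p_doom = 1 - detection_rate ** attempts
        print(f"{years:4d} years: P(at least one success) = {p_doom:.1%}")

With these assumptions the chance of at least one successful attack is about 5% over a decade, about 39% over a century, and over 99% after a millennium.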

Just some hastily written thoughts…





Yes, there is a potential for catastrophe, but the one you selected for special mention—superintelligence—is NOT even remotely close to any of the others, either in terms of amount of danger, or likelihood of occurring.

The last two articles I wrote for IEET (as far as I know anyhow: my writings have turned up here without my knowledge) were about that very issue. The first (http://ieet.org/index.php/IEET/more/loosemore20110105) was called “Don’t Let the Bastards Get You From Behind,” and in that I warned that the “sexy” catastrophes were getting all the attention while the boring ones were being ignored ... and it is the boring ones that contain all the real danger.

My second article was “The Fallacy of Dumb Superintelligence” (http://ieet.org/index.php/IEET/more/loosemore20121128), and in that I discussed the reasons why Bostrom’s book (not yet published at that point) is based on a naive and completely erroneous conception of how AI systems work.





Great article Phil. I think our best hope, given the high probability of our extinction, is intellectual and moral enhancement. While intellectual augmentation and artificial intelligence have gotten most of the headlines, I think moral enhancement along the lines of the ideas of Ingmar Persson and Julian Savulescu is crucial to our survival.





Will keep it short, as the comments have already said it all.
Pinker is more or less correct: the trend is towards peace, though a trend, as you all know, can be reversed. One can only go by the outlook now.
What we are going through today is nothing next to WWII and even ‘Nam. What you have to do first off is provide hard evidence for climate change; until such evidence is forthcoming, I will be skeptical. You don’t know what the outcome will be: it might be the melting of the polar icecaps and a lowering of temperatures.





I find KevinC’s position close to my own, which I summarize thus:

Problem 1: There are bad guys aplenty in the world.
Problem 2: Superman can’t be everywhere at once.
Solution: Let everyone willing to abide by a certain code of behavior wield a Green Lantern ring.

I have also expressed this as “It’s hard for the Terminator to get going when he is immediately surrounded by 2 dozen Iron Men.”

Of course, this idea goes back to even older aphorisms. “An armed society is a polite society.” and “God may have created men, but Sam Colt made them equal.”





“An armed society is a polite society.” and “God may have created men, but Sam Colt made them equal.”

Law enforcement and criminals can defend themselves, yet civilians can only do so while taking on risk in court. That is, defending oneself in court is much more arduous than defending oneself on the street.

And Hollywood, which is alleged to be progressive, produces films stirring up base excitements in viewers.
No one who knows anything about freedom of expression would suggest mandatory policing of films; it is in fact up to the producers/directors of films to rein in their own interests in producing ultra-violent films.
Perhaps they could cut down on the tomato juice used in depicting blood. Say, a film could use 50 quarts of tomato juice rather than 100. Twenty-five actors being shot or stabbed to death instead of forty.





