Bostrom talk on X-Risk and AI transcribed

Sep 8, 2007

The People Database Project has transcribed Nick Bostrom’s talk “Existential Risks and Artificial Intelligence” from the 2006 Singularity Summit hosted by the Singularity Institute for Artificial Intelligence. Video and audio are also online.


Artificial Intelligence and Existential Risks

The title of my talk is “Existential Risks and Artificial Intelligence.”  It will mainly be on existential risks, which I think will be sort of my distinctive perspective on today’s issue.  I have quite a few slides here, so let’s get to them.  We can characterize risk in terms of three dimensions: scope, which would be the number of people affected; intensity, which would be how badly each affected person is harmed; and probability, the likelihood of its occurring given all the available information.  So if we plot this in a diagram, we can fill it in with a number of familiar types of phenomena.  Say your car is stolen, for instance.  That would be an ‘endurable’ personal risk: it affects one person, and it’s more than imperceptible, but you can still have a good life even after your car has been stolen, and you can recover from it.  A ‘terminal’ risk in this nomenclature would be one that could either be fatal or cause permanent harm: brain damage or lifetime imprisonment, something like that.  It dramatically and seriously curtails your potential for achieving a good life.

Generally, as we move upwards and to the right in this diagram, we get more serious risks that have larger scope and higher intensity.  If we take an imperceptible transgenerational risk, say the loss of one species of beetle, nobody would really notice, but it would be more than global in the sense that it would not only affect the current state of the world, it would be a permanent loss.  A thinning of the ozone layer might be an endurable global risk.  What can we put here in the terminal global category?  Maybe aging will kill us all if we don’t solve it, so it’s a terminal risk.  It’s not a transgenerational one, in that even if we fail to solve it now, there is still a chance that it could be fixed later.

So that leaves one thing up here in the right corner.  This cluster of four boxes here we can call ‘global catastrophic risks.’  These are really big risks.  What I have put in the boxes is just for illustration; these are not the biggest or most representative examples of each kind.  So that leaves us with this top box here, which I call ‘existential risks.’  These are transgenerational, terminal risks.  That means it is impossible to recover from them.  They will affect not just everyone alive today, but the whole future of humanity.  So this is the definition:

An existential risk is one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
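(For reference, the scope-and-intensity grid described above can be sketched roughly as follows, using only the examples mentioned in the talk; the probability dimension is omitted, and cells marked with a dash are ones the talk does not fill in.)

Scope \ Intensity     Imperceptible                 Endurable                Terminal
Transgenerational     loss of one beetle species    --                       existential risks
Global                --                            ozone-layer thinning     aging, if it is never solved
Personal              --                            car theft                death, brain damage, lifetime imprisonment

The cluster of four cells at the upper right (global or transgenerational scope, endurable or terminal intensity) is what the talk calls global catastrophic risks; the single top-right cell, transgenerational and terminal, is the existential risks.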
We have a lot of experience with other kinds of risks: dangerous animals, hostile tribes and individuals, toxic berries and mushrooms, automobile accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, droughts, tsunamis, wars, epidemics of different diseases, influenza, smallpox, AIDS.  Those kinds of disasters have occurred many, many times throughout human history, and our attitudes towards risk have been shaped by trial and error in dealing with them.  But even the worst of these catastrophes, as tragic as they might have been for the people immediately affected, and millions have been affected by some of these examples, would be mere ripples on the great sea of life if you zoom out and look at them from the point of view of humanity as a whole.  They have not significantly affected the total amount of happiness or suffering there will have been.  They have not determined or significantly shaped the future of humanity.

Global catastrophic risks are a different kettle of fish altogether.  Here are some recent opinions that suggest that some people, at least, believe that existential risk is worth taking seriously.  Sir Martin Rees might be the most well-known of the recent people to have written about this, in a book that in England was called Our Final Century but was adapted for what the publishers thought was the shorter attention span of a U.S. audience, so here it was called Our Final Hour.  Professor John Leslie wrote a book in the late ’90s, where he came up with a figure of 30% for the next 500 years.  Sir Martin Rees’s figure was 50% for the next 100 years.  Judge Richard Posner, one of the preeminent legal scholars of this country, wrote a book on this a couple of years ago where he doesn’t give a number, but he says that there is a significant risk.

I wrote a paper a few years ago where I said 25%, and all of these numbers are really grabbed from a hat, to some extent.  They are very subjective.  Nevertheless, these are pretty much the only four books that have been written on this topic in the last decade, and they all concur that the risk is alarmingly high.  Now, if you believe the risk is greater, then you are more likely to write and publish books about it.  But nevertheless, there is a consensus among all those who have studied it that the risk is very high.  A lot of other people have been concerned in various ways and to different degrees.  Bill Joy, of course.  Some of the people on the panel here have also been concerned with this.  Now, even if the risk were much smaller than this, say not 50% or 20% but just 2% or 1%, that might seem very little.  But in light of the enormous consequences, it would still be very much worth being concerned about.

There are different ways you can carve up existential risks into categories.  One distinction one can make is between anthropogenic risks, those that emerge in some way from human activity, and non-anthropogenic risks.  I believe that the real issue is the anthropogenic risks.  Without going into details, our species has survived the non-anthropogenic risks for hundreds of thousands of years.  They haven’t wiped out the human species yet, and this includes things like meteors, earthquakes and other things.  So, if they haven’t managed to do this in the last 100,000 years, then it’s probably not going to happen in the next hundred years.
The anthropogenic risks, on the other hand, those resulting from human activity, might very well pose a much greater threat over the next hundred years, because we are now doing a lot of things that we have never done before, and we will be doing a lot more of those things in the coming century, including in particular new technologies.  So this is what I think should be the primary focus of efforts to reduce existential risk.

Now there is good news and bad news in this.  On the one hand, since anthropogenic risks arise from human activity, they are also in principle within our power and ability to do something about.  On the other hand, it is often extremely complex and difficult to change the behavior from which these risks emerge.

Here is another way you can divide different types of existential risks.  Bangs: Earth-originating intelligent life goes extinct in a relatively sudden disaster.  This is what comes most immediately to mind when one thinks of extinction scenarios, but it is not the only type of existential risk; the point of using this terminology is that it draws attention to the other, perhaps subtler ways in which we could suffer an existential disaster.  Crunches: humanity’s potential to develop into post-humanity is permanently lost, although human life continues in some form.  Here it is just useful to have a term for the potential state of flourishing that human civilization might one day be able to attain if everything goes well; I use the term post-humanity for this, just to have a label for it.  Shrieks: a limited form of post-humanity is attained, but it is an extremely narrow band of what is possible and desirable.  Whimpers: a post-human civilization is attained, but it then evolves in a direction which leads gradually to either the complete disappearance of the things we value, or to their being realized only to a minuscule degree of what could have been achieved.

Now I will breeze through, in the next four slides, a couple of examples of each of these categories.  The bangs, so here we have: 1. Nanotechnological weapons systems, used either deliberately to destroy the world or maybe in some accident scenario, or in an arms race where an accident becomes more and more likely.  2. Badly programmed superintelligence.  If you have a very rapid take-off singularity scenario, then it might be easier to see how this could happen, but even with a softer take-off this could be a threat.  3. We are living in a computer simulation and it gets shut down.  There is a paper here that you can look at if you are interested in that.  It might seem weird to mix in all these bizarre risks with very concrete risks like a nuclear holocaust.  So maybe I should say that the criterion for listing something here is, first of all, to some extent arbitrary, and second, these are not necessarily the greatest risks right now.  For example, nanotechnological weapons systems cannot kill us now because we don’t have any nanotechnological weapons systems.  But if something were ever going to cause an existential disaster, things like superintelligence and nanotechnological weapons systems would rank high up in that list, at least in the bangs category.

Nuclear holocausts are probably not going to happen with the arsenals we have today.  But then again, there could be future nuclear arms races that could involve much larger arsenals than during the Cold War.  Biological weapons and non-weapon biological risks raise quite different issues.
Natural pandemics, runaway global warming, supervolcano eruptions, physics disasters.  I recently published a paper in Nature with Max Tegmark on that last one, where we managed to place a very reassuring upper bound on that risk.  Impact hazards from asteroids and comets: that is one of the risks that is real but knowably small.  It is one of the things we can rigorously calculate.  And space radiation, under which I have lumped together different kinds of radiation hazards from space, gamma ray bursts and other things like that.

Crunches: resource depletion or ecological destruction.  A misguided world government or some other form of stable social equilibrium that stops technological progress.  Again, in judging the possibility of something like that, don’t think of whether right now, at this moment, or during the last few years, the world has been moving toward global governance or away from it, but think from a much bigger time perspective about whether something like this could happen.  We started out as hunter-gatherer tribes, then came cities and city-states, then nation-states, and now regional forms of governance in different parts of the world, so something like this might not be such a far-fetched idea.  And again, listing a risk here does not mean that one should be either in favor of or opposed to international forms of governance.  A lot of the things that are listed here also have beneficial effects and could help reduce risks overall.  Dysgenic pressures: this would be evolutionary selection that could kick in at various points in our future and lead in the wrong direction.  It’s not true that evolution, through some natural law, must always lead to something better, more valuable, of higher complexity.  It can also take things in the other direction.

Shrieks: these would be things that limit post-humanity.  So, maybe some sort of flawed superintelligence.  A repressive totalitarian governing regime.  A take-over by a transcending upload: this would be a particular scenario where a superintelligence arises out of one particular human brain being uploaded and iteratively enhanced.  That would preserve one little element of humanity, perhaps, but it should have been all of us rather than just one.  And finally, whimpers: our potential, or even our core values, are eroded by evolutionary development and self-modification.  One aspect of this threat is that as we gain increasing abilities to modify ourselves, if we use them unwisely we might end up going down a path where, if we had seen the destination in advance, we would never have wanted to go there.  The end point might be totally dehumanized, while each little step might have seemed attractive.  Killed by an extraterrestrial civilization: again, not an imminent threat, I think.  But if you look at the scenario where we survive to spread through the galaxy, who knows, a billion years from now this might be one of the things that keep people awake at night.

In the last little part of my talk, let me just mention some of the challenges that confront us here.  There is the cognitive challenge: thinking about existential risks in a rational way appears to be very difficult for people.  The tendency, rather, is either to ignore existential risks altogether, or to be alarmist about them, or to use them as inspiration for selling science fiction movies.  It is very difficult for people to address them with at least the same rationality that you would apply to buying a car, say.
And this is reflected as well in the point that I made earlier: four books written for a general audience, not scientific treatises, are pretty much the sum total of the effort that humanity has managed to devote to thinking about its own survival.  I’m exaggerating slightly, but not by as much as one would like to think.

More specifically, there are various biases that affect human cognition and that are more severe when approaching existential risks than in other fields.  Let me just pick out one here, which is scope neglect.  Here is one experiment.  People were asked how much they would be willing to pay to save either 2,000 birds, 20,000 birds, or 200,000 birds from dying in uncovered oil ponds.  The result: to save 2,000 birds, on average people were willing to pay $80; to save 20,000 birds, $78; and to save 200,000 birds, $88.  Not a statistically significant difference.  Looking at these results, and comparing the complete neglect of existential risks with how much effort is devoted to reducing much smaller hazards, it is difficult to think that there is not an analogy there.

This type of finding is also something that the philanthropists in attendance here today would do well to keep in mind.  Reducing existential risk even by the tiniest, tiniest amount is still worth a lot.  It’s arguably more valuable than anything else you could do, at least if you have some sort of utilitarian perspective.  So, what should be done?  The primary finding is that we don’t really know.  It’s very complex to figure out what should be done.  It is easy to come up with some idea of how to change the world to reduce existential risk, but sometimes what seems to be a good idea will actually increase risk.  So, in light of this ignorance, it might be a good idea to do more research on the methodology of existential risk studies; more research into specific risk categories (like those from nanotech, AI, and so on); to build institutions and scientific and policy communities of scholars and policy makers concerned with existential risks; and to make specific efforts to reduce specific threats, like pandemic disease surveillance networks, near-Earth object surveillance to map out asteroids and meteors, nanotech safety, and Friendly AI research.  I’m not saying these are the best possible things to do, but they at least spring to mind as possible things to consider if we are concerned with existential risks.
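To make the expected-value reasoning behind this point explicit, here is a minimal illustrative sketch in Python. It is not from the talk: the population figure is a rough mid-2000s assumption, and the probabilities are simply the published estimates and the hypothetical 1-2% figures quoted above, whose differing time horizons are ignored for simplicity. It also deliberately counts only people alive today, the accounting that, on Bostrom's argument, still understates the stakes because it leaves out every future generation.

```python
# Illustrative only: expected deaths from an extinction-level catastrophe,
# counting just the people alive today.  The population figure is an
# assumption for the example, not a number from the talk.

WORLD_POPULATION = 6.5e9  # rough mid-2000s world population (assumption)

# Risk estimates quoted in the talk (Rees 50%, Leslie 30%, Bostrom 25%)
# plus the hypothetical "even just 2% or 1%" figures.  Their time horizons
# differ; that nuance is ignored here for the sake of the arithmetic.
for risk in (0.50, 0.30, 0.25, 0.02, 0.01):
    expected_deaths = risk * WORLD_POPULATION
    print(f"P(catastrophe) = {risk:>4.0%}  ->  expected deaths ~ {expected_deaths:,.0f}")

# Even at 1%, the expectation is on the order of 65 million deaths, and that
# is before counting the permanent loss of all future generations, which is
# what makes existential risk distinctive in this framing.
```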