Confront Existential Risks Now to Yield AI Enrichment, Not Destruction
Daniel Faggella
2015-10-17

In July 2014, Oxford philosopher Nick Bostrom published Superintelligence: Paths, Dangers, Strategies, which delves into the enormous potential of AI to enrich society, and the significant risk that accompanies it. This risk seems to command much of Bostrom’s attention, and he’s not alone. In July 2015, Bostrom joined figures like Elon Musk and Stephen Hawking in signing the Future of Life Institute’s open letter on the dangers of artificial intelligence in autonomous weapons.



Bostrom says that the concept of an existential risk “directs our attention to those things that could make a permanent difference to our long term future”, i.e. something that could lead to our extinction or the disruption of society’s future development. In short, these are not minor kinks that we can work out as we go along, but issues that must be considered now.



As the founder of the Future of Humanity Institute, Bostrom has spent the last decade examining distant-future outcomes for humanity. “It is,” Bostrom told me, “intrinsically, a multidisciplinary endeavor.” There’s no single approach one must take when considering so-called existential risks, as they’re unprecedented and difficult to quantify through mathematics or science.



There’s very little in the way of data we can use to predict where artificial intelligence will take us. Bostrom contrasts this with the example of asteroids, explaining how astronomical observations and the record of past impact craters help us understand the risk of humanity being wiped out by one of these speeding rocky bodies (fortunately, that risk is relatively small).



Likewise, climate issues, the spread of disease, and even military action generally have some historical data associated with them, giving us a basis for forming hypotheses about their future impact.



“The really big existential risks are not in any direct way susceptible to this rigorous quantification,” he says. Which makes sense, of course; the risks of AI are man-made, and they’re almost entirely new. “There’s just no way to calculate the probability of us being destroyed by super-intelligent machines.”



Admitting Biases, Finding Solutions



Bostrom suggests that one of the best chances we have lies in identifying biases we can avoid. He comments that reward biases on the part of researchers, such as a financial incentive like a grant, the pull of working within a particular discipline, or the desire to win an award or publish a study, can have a huge impact on the direction of research.



Another potential stumbling block is collaboration across countries. “Global coordination is a tricky problem,” says Bostrom. Uniting individuals of different nationalities, who may have different political agendas and work within different disciplines, behind a common goal is incredibly difficult; inevitably, different individuals have different priorities.



This type of collaboration is an admirable ambition, but it’s one that might feel far away in the current geopolitical climate. Bostrom states that “we’re [the Future of Humanity Institute at Oxford] trying to change global perceptions that have traditionally been neglected...even a very small impact on that would be extremely worthwhile to pursue.”



He suggests that, up until this point, “we’ve been very lucky with the technology that’s been developed, in that it’s basically created our current modern condition of prosperity.” Inventors, technicians and R&D departments can’t let themselves be brought to a halt by these existential risks, but it’s clear that we need to do as much as possible to safeguard against developments that may do more harm than good.



In the case of AI, that means sharing knowledge across disciplines and taking as many precautions as possible. The stakes are too high: we may well not get a second chance to right any mistakes made the first time around.