The future is going to be wonderful (If we don't get whacked by the existential risks)

2014-05-30 00:00:00

In this episode of Singularity 1 on 1 Socrates talks with Stuart Armstrong (a research fellow at the Future of Humanity Institute at Oxford) about existential risks to humanity and the earth.





Stuart Armstrong is a James Martin research fellow at the Future of Humanity Institute at Oxford, where he looks at issues such as existential risks in general and Artificial Intelligence in particular. Stuart is also the author of Smarter Than Us: The Rise of Machine Intelligence and, after participating in a fun futurist panel discussion with him, Terminator or Transcendence, I knew it was time to interview Armstrong on Singularity 1 on 1.

During our conversation with Stuart we cover issues such as: his transition from hard science into futurism; the major existential risks to our civilization; the mandate of the Future of Humanity Institute; how we can know whether AI is safe and what the best approaches to it are; why experts are all over the map; and humanity’s chances of survival.

My favorite quote from this interview with Stuart Armstrong is: “If we don’t get whacked by the existential risks, the future is probably going to be wonderful.”

(You can listen to/download the audio file above or watch the video interview in full. If you want to help me produce more episodes like this one, please make a donation!)

Who is Stuart Armstrong?

Stuart Armstrong was born in St Jerome, Quebec, Canada in 1979. His research at the Future of Humanity Institute centers on formal decision theory, the risks and possibilities of Artificial Intelligence, the long-term potential for intelligent life, and anthropic (self-locating) probability. Stuart is particularly interested in finding decision processes that give the “correct” answer under situations of anthropic ignorance and ignorance of one’s own utility function, ways of mapping humanity’s partially defined values onto an artificial entity, and the interaction between various existential risks. He aims to improve the understanding of the different types and natures of uncertainties surrounding human progress in the mid-to-far future.

Armstrong’s Oxford D.Phil was in parabolic geometry, calculating the holonomy of projective and conformal Cartan geometries. He later transitioned into computational biochemistry, designing several new ways to rapidly compare putative bioactive molecules for virtual screening of medicinal compounds.



Other Future of Humanity Institute Interviews





http://www.singularityweblog.com/stuart-armstrong-existential-risks/