with co-authors Anders Sandberg and Jason G. Matheny
In the early morning of September 10, the Large Hadron Collider will be tested for the first time amid concern that the device could create a black hole that will destroy the Earth. If you’re reading this afterwards, the Earth survived. Still, the event provides an opportunity to reflect on the possibility of human extinction. Since 1947, the Bulletin has maintained the Doomsday Clock, which “conveys how close humanity is to catastrophic destruction—the figurative midnight—and monitors the means humankind could use to obliterate itself.” The Clock may have been the first effort to educate the general public about the real possibility of human extinction.
A new pandemic is sweeping the planet. Police have fired on secessionist demonstrators in Oregon. The Chinese government is trying (unsuccessfully) to suppress news of eco-terrorists bombing multiple coal-fired power plants. We’re looking at climate refugees numbering in the tens of millions. The human race will go extinct by 2042.
So much to say about the current financial mess, so little time. I’ll leave investors to fend for themselves this week. I’ve given enough of that CNBC-style advice lately, contrarian though it may be. I’d rather spend these precious minutes explaining why the financial meltdown is not a bad thing for a lot of us.
You have my permission to slap the next futurist (foresight thinker, scenario strategist, or trend-spotter) who uses the expression “this changes everything” seriously. Slap them hard. Maybe a shin-kick, too, if you’re into it.
Topics include: artificial intelligence, the future of civilization, transhumanism, the singularity, mind uploading, human extinction risks including the Toba supervolcano, his simulation argument, and much more.
Hosted by Stephen Euin Cobb, this is the September 10, 2008 episode of The Future And You. [Running time: 74 minutes]
Doctor Nick Bostrom is a philosopher at Oxford University, and is the Director of the Oxford Future of Humanity Institute. In 1998, he co-founded (with David Pearce) the World Transhumanist Association, and in 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies (IEET).
In addition to his writing for academic and popular press, Nick Bostrom makes frequent media appearances in which he talks about transhumanism-related topics such as artificial intelligence, nanotechnology, cloning, mind uploading, cryonics, and the simulation argument.
He has a background in cosmology, computational neuroscience, mathematical logic, philosophy, and artificial intelligence, and is the author of the book Anthropic Bias: Observation Selection Effects in Science and Philosophy.
His research interests include the philosophy of science, probability theory, and the ethical and strategic implications of anticipated technologies. He has been a consultant for the Central Intelligence Agency in the U.S., and for the European Commission and the European Group on Ethics in Brussels. (MP3)
I felt a bit nauseous watching the Republican convention last night. I’m very much a give-the-benefit-of-the-doubt kind of guy, so I try to listen to the arguments people make even when they’re made in over-the-top or patronizing ways.
Fred Baumann, a professor of political science at Kenyon College, gave fourteen lectures, distributed by Recorded Books, on Plato’s Republic, More’s Utopia, Bacon’s New Atlantis, Rousseau, Marx, B.F. Skinner, and transhumanism. We talk about his lectures and his ideas about utopias.
Dr. Goertzel makes clear that the goal of Artificial General Intelligence (AGI) – a real thinking machine that can achieve a variety of complex goals, and can understand what it is, that it is, and that there are other beings it can interact with – differs significantly from typical narrow AI application systems. An AGI needs to be able to understand what it has learned in one context and transfer it to another.
How do you create an AGI? One possible path Goertzel considers is the use of virtual worlds as incubators for nascent artificial intelligence systems. This frees developers from the constant challenges presented by sensors and actuators in the physical world; more importantly, virtual worlds offer an environment for large-scale collaboration among AI researchers.
The Metaverse Roadmap Overview, an exploration of imminent 3D technologies, posited a number of different scenarios of what a future “metaverse” could look like. The four scenarios—augmented reality, life-logging, virtual worlds, and mirror worlds—each offered a different manifestation of an immersive 3D world. Of the four, I suspect that augmented reality is most likely to be widespread soon; moreover, when it hits, it’s going to have a surprisingly big impact. Not just in terms of “making the invisible visible”—showing us flows and information that we otherwise wouldn’t recognize—but also in terms of the opposite: making the visible invisible.
Dr. Phineas Waldolf Steel is a mentally twisted but awe-inspiring figure whose interests span the production of propaganda, the construction of chronically malfunctioning robots, puppet shows, and an ongoing attempt to become World Emperor for the purpose of turning this planet into a Utopian Playland.
Ben Goertzel, noted scientist, author, futurist and pioneer in the field of Artificial Intelligence, is today’s featured guest. Topics he discusses include: Artificial General Intelligence (AGI), the singularity, transhumanism, human immortality and how long he expects to live, and why (like your host) he is a founding member of the Order of Cosmic Engineers.
Highlights of the interview include: The mechanism of human empathy seems to have been identified, and so can be reproduced in AI—even AI whose thinking differs radically from that of human beings. Dr. Goertzel explains that this empathy is not based on emotion, and he emphasizes that he does not want to create an AI governed by its emotions.
He stresses that the human mind does not qualify as a completely ‘General Intelligence’ but lies somewhere on the spectrum between AGI on one end and ‘Narrow AI’ on the other. This is one of several reasons why he does not expect AGI to be achieved by mimicking the workings of the human brain.
He describes how our brains fool us into believing that we understand our actions and decisions when we don’t, and why modeling an AI too closely on the human brain might make it, too, vulnerable to false notions.
He also says, ‘I think virtual worlds are going to be absolutely critical to the development of Artificial General Intelligence,’ as well as, ‘Right now connecting AIs to virtual worlds is probably the best way to get an AI to have a general human-like embodied experience.’
Hosted by Stephen Euin Cobb, this is the August 13, 2008 episode of The Future And You. [Running time: 74 minutes]
Arthur Caplan discusses Is it Immoral to Want to Live Longer, Be Smarter and Look Better? The Ethics of Using Biomedicine to Enhance Ourselves and Our Children as a part of The Ethical Frontiers of Science during the 2008 Chautauqua Institution morning lecture series.
Vernor Vinge, science fiction author, computer scientist, and retired mathematics professor, is the inventor of the term 'singularity' as applied to a future point of unprecedented technological progress, caused in part by the ability of machines to improve themselves using artificial intelligence.
Nick Bostrom has been awarded the title of (full) Professor of Applied Ethics at Oxford University, effective October first, 2008, in recognition of research “of outstanding quality” and his “significant international reputation.”
During his talk, Jamais Cascio introduced four possible scenarios for where our world is heading with regard to climate change. Jamais: “The four boxes represent a variety of ‘response’ scenarios, each embracing elements of the prevention, mitigation, and remediation approaches to solving the climate crisis. Certain approaches may receive greater emphasis in a given scenario, but all three types of responses can be seen in each world.”