Jupiter is often referred to as a “failed star,” leading some futurists to wonder if our descendants might set it ablaze in a process called planetary stellification. A new study suggests this is indeed theoretically possible—and that we should be on the hunt for galactic aliens who have already converted their gas giants into stellar objects.
Phil Torres’ new book, The End: What Science and Religion Tell Us about the Apocalypse, is one of the most important books published in recent years. It offers a fascinating study of the many real threats to our existence, provides multiple insights into how we might avoid extinction, and is carefully and conscientiously crafted.
They’re big-eyed and slight of build. They’re the grim, greenish beings that every moviegoer recognizes as aliens—the inscrutable inhabitants of a distant world. Playing supporting roles in countless films and TV shows, these hairless homunculi have become iconic.
But we've never seen a real alien. Indeed, we don't even know if real aliens exist.
The field of Existential Risk Studies has, to date, focused largely on risk scenarios involving natural phenomena, anthropogenic phenomena, and a specific type of anthropogenic phenomenon that one could term “technogenic.” The first category includes asteroid/comet impacts, supervolcanoes, and pandemics. The second encompasses climate change and biodiversity loss. And the third deals with risks that arise from the misuse and abuse of advanced technologies, such as nuclear weapons, biotechnology, synthetic biology, nanotechnology, and artificial intelligence.
I want to elaborate briefly on an issue that I mentioned in a previous article for the IEET, in which I argue (among other things) that we may be systematically underestimating the overall probability of annihilation. The line of reasoning goes as follows:
Since the first species of Homo emerged in the grassy savanna of East Africa some 2 million years ago, humanity has been haunted by a small constellation of improbable existential risks from nature. We can call this our cosmic risk background. It includes threats posed by asteroid/comet impacts, supervolcanic eruptions, global pandemics, solar flares, black hole explosions or mergers, supernovae, galactic center outbursts, and gamma-ray bursts. While modern technology could potentially protect us against some of these risks — such as asteroids that could induce an “impact winter” — the background of existential dangers remains more or less unchanged up to the present.
IEET Affiliate Scholar Phil Torres has published a book on Existential Risks, titled The End: What Science and Religion Tell Us About the Apocalypse. The Foreword was written by IEET Fellow Russell Blackford.
Humanity faces a range of threats to its viability as a civilization and its very survival. These catastrophic threats include natural disasters, such as supervolcano eruptions and large asteroid collisions, as well as disasters caused by human activity, such as nuclear war and global warming. The threats are diverse, but their potential result is the same: the collapse of global human civilization or even human extinction.
It’s easy to be seduced by the news headlines into thinking that the world is going to hell. The Syrian war is an international tangle of state and non-state actors, some of whom are genuinely motivated by apocalyptic narratives in which they see themselves as active participants. In fact, a growing number of observers have suggested that the Syrian conflict could be the beginning of a Third World War. Here in the US, there are daily mass shootings, campus rapes, racial discrimination, and police brutality, to name just a few causes for moral alarm. In Europe, the past month has seen multiple terrorist attacks in Paris and London and the worst refugee crisis since World War II. And so on.
Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in the diagram above.
IEET Fellow David Brin has co-edited (with Matthew Woodring Stover) a book published on November 3, 2015, titled: Star Wars on Trial: The Force Awakens Edition: Science Fiction and Fantasy Writers Debate the Most Popular Science Fiction Films of All Time
What will the future look like? The further upwards one moves from the basement domain of physics, the harder it often gets to predict long-term trends. Nonetheless, we have some fairly good clues about what to expect moving forward.
The Doomsday Argument (DA) is a controversial claim that, on purely probabilistic grounds, we should assign a higher probability to humanity’s extinction than we otherwise would. The DA rests on the proposition that I should expect to find myself somewhere in the middle of humanity’s total time in existence, not near its beginning — a conclusion that cuts against the expectation that humanity may exist for a very long time on Earth.
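The core of the DA can be captured in a few lines of arithmetic. A minimal sketch follows, assuming the standard self-sampling setup in which my birth rank is uniformly distributed among all humans who will ever live; the figure of roughly 110 billion humans born to date is a common demographic estimate used here purely for illustration.

```python
def doomsday_bound(births_so_far: float, confidence: float) -> float:
    """Upper bound on the total number of humans N who will ever live,
    holding with the given confidence.

    If my birth rank r is uniform on (0, N], then P(r/N >= f) = 1 - f.
    Setting f = 1 - confidence gives: with probability `confidence`,
    N <= r / (1 - confidence).
    """
    return births_so_far / (1.0 - confidence)

# Rough estimate of humans born to date (an assumption, not a precise figure).
past_births = 1.1e11

# With 95% confidence, the DA caps total births at 20x the past count.
bound = doomsday_bound(past_births, 0.95)
print(f"95% upper bound on total humans ever born: {bound:.2e}")
```

Note the design choice: the argument needs no physics at all, only the assumption that the observer’s rank is a fair random draw — which is precisely the premise critics of the DA attack.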
The U.S. Department of Energy has green-lit the construction of a 3.2-gigapixel digital camera for the Large Synoptic Survey Telescope (LSST). Once complete, the instrument will be used by astronomers to study everything from the Big Bang to the motions of nearby asteroids.
Is interstellar travel by bio-humanity even possible? Not according to my dear bro and esteemed colleague Kim Stanley Robinson, whose new novel AURORA follows one of the first… and possibly last… efforts to send a generation starship to a neighboring star. Naturally, any KSR book is worth rushing out to purchase… though like many of his other works, there is a very strong sense that the author has a point to make.
We’re looking outward… toward the vast, vast majority of all there is. And after decades of doldrums, it seems we truly are regaining some momentum in space exploration. Have any of you been keeping track on a scorecard?
Hang on till the end, to read the news from NASA NIAC!
With the launch of Sputnik in 1957, humankind extended its presence from the Earth’s surface toward outer space. Since that time, thousands of other objects have been sent into Earth orbit, including weather satellites, communications equipment, and military hardware. Wherever people go, they tend to leave their mark, mostly harmful, on the natural environment, and space is no exception. There are many pieces of space junk – the remains of discarded, malfunctioning, or obsolete devices – that now whiz around the Earth and pose threats to current space projects.
We stand at the cusp of guaranteeing the survival of fundamental purpose in the universe, reality, and existence by insuring the continuation of consciousness. This is a far grander calling than merely enabling individual life extension. Existential metaphysical purpose is our foremost responsibility as conscious beings, and computer intelligence is the method of achieving it.
Excitement is building for the New Horizons Mission and its hurried swing past Pluto on July 14. What a terrific way to celebrate Bastille Day! Watch this excellent video - Fast and Light to Pluto - about New Horizons, created by the NY Times.
My plan below should be read with some irony, because it is almost irrelevant: we have only a very small chance of surviving the next 1,000 years. And if we do survive, we have numerous tasks to accomplish before the plan can become a reality.
Additionally, there’s the possibility that the “end of the universe” will arrive sooner, if our collider experiments lead to a vacuum phase transition, which begins at one point and spreads across the visible universe.
The dangers that face Earth and its inhabitants are diverse and intricate. The solutions, where any exist for a particular danger, are equally complex and nuanced. Below you will find a shortlist of threats that range from conventional to bizarre.
On this day 245 years ago – July 1, 1770 – humanity had its closest known encounter with extinction (with the possible exception of the Cuban Missile Crisis).
Two weeks before that date, the French astronomer Charles Messier had discovered a faint comet in the constellation Sagittarius, which thereafter rapidly brightened and began moving swiftly across the sky. At its peak it was visible to the naked eye, and its coma, according to various observers, appeared anywhere from 5 to 16 full moons across. Lexell’s Comet, so named after another astronomer who subsequently calculated its orbit, was then under one-and-a-half million miles from Earth, or less than six times the distance of the Moon, and thus the nearest a comet has ever approached us in recorded history. (Kronk n.d.)
We started to discuss Stevenson’s probe — a hypothetical vehicle that could reach the Earth’s core by melting its way through the mantle, carrying scientific instruments with it. It would take the form of a large drop of molten iron – at least 60,000 tons – theoretically feasible, but practically impossible.
Since their inception 60 years ago, satellites have gone on to become an indispensable component of our modern high-tech civilization. But because they’re reliable and practically invisible, we take their existence for granted. Here’s what would happen if all our satellites suddenly just disappeared.
The idea that all the satellites — or at least a good portion of them — could be rendered inoperable is not as outlandish as it might seem at first. There are at least three plausible scenarios in which this could happen.
Chapter 1 - The Origin and State of the First Intelligent Species
The following statement is something we all understand, but it bears repeating because it is perhaps the coolest, most interesting scientific fact that we know about our universe and human existence:
Hydrogen, given sufficient time, turns into people.
It is an amazing statement if you think about it. A collection of simple atoms swirling around in the early universe, combined with ordinary laws of nature like gravity, produced human beings living here on planet Earth over the course of billions of years.
Planetary Resources, founded by Peter Diamandis and Eric Anderson, aims to pave the way to humanity mining asteroids for vast wealth… as the B612 Foundation hopes to detect and track asteroids that threaten civilization’s survival… a real case of synergy of purpose. (I’ve been helping both.)
This article examines the risks posed by “unknown unknowns,” which I call monsters. It then introduces a taxonomy of the unknowable, and argues that one category of this taxonomy in particular should lead us to inflate our prior probability estimates of annihilation, whatever they happen to be. The lesson here is ultimately the same as the Doomsday Argument, except the reasoning is far more robust.
The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.
East Coast Contact: Executive Director, Dr. James J. Hughes,
56 Daleville School Rd., Willington CT 06279 USA
Email: director @ ieet.org phone:
West Coast Contact: Managing Director, Hank Pellissier
425 Moraga Avenue, Piedmont, CA 94611
Email: hank @ ieet.org