Existential risks

An existential risk is a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target). Nick Bostrom defines an existential risk as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” The term is frequently used in transhumanist and Singularitarian communities to describe disaster and doomsday scenarios caused by bioterrorism, unfriendly artificial general intelligence, misuse of molecular nanotechnology, or other sources of danger.

Among the grimmest warnings of existential risks from advanced technology are those of computer scientist Bill Joy, who envisages the possibility of global destruction as new technologies become increasingly powerful and uncontrollable, and Martin Rees, who has written about an extensive range of risks to human survival.

While transhumanism advocates the development of advanced technologies to enhance human physical and mental powers, transhumanist thinkers typically acknowledge that the same technologies could bring existential risks. Generally, transhumanism holds that the potential benefits are at least equal in scope and magnitude to the existential risks (or that the risky technology would be impossible to prevent regardless), and many transhumanists, including Bostrom, are actively engaged in consideration of how these risks might best be reduced or mitigated.

Emerging technologies present many new existential risks, but the universe already contains several of its own. Gamma-ray bursts, large asteroid impacts, supervolcanoes, the possibility of extraterrestrial intelligence, and other unanticipated catastrophes are all potential existential risks.

However, the risks posed by humans themselves are usually seen as a greater existential threat. These include the misuse of biotechnology or nanotechnology, a Singularity with negative results, some runaway global warming scenarios, and nuclear war.

Although they are still of concern, risks that would not affect all of humanity are not considered existential. For instance, natural influenza pandemics, global financial collapse, or less severe climate change scenarios do not qualify as existential risks, as only some of the human species would die.


“Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach – see what happens, limit damages, and learn from experience – is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.”
—Nick Bostrom


IEET Links:

Existential Risks List
Existential risks, AI, genetic engineering and space exploration

External links:

Center for Responsible Nanotechnology
The Lifeboat Foundation
Astronomical Waste: The Opportunity Cost of Delayed Technological Development – Nick Bostrom’s paper on the ethical importance of existential risk

Sources:

Existential Threats and Risks: We Can’t Escape Impermanence
Existential Risks in Journal of Evolution and Technology
Wikipedia on Existential Risk

Category: Encyclopedia
