
Alexey Turchin Topics
Turchin et al. Publish New Paper on the Simulation's Termination Risk on ArXiv

The main idea is that if we are in a simulation, it is most probably either a "Fermi simulation" – one created by aliens to solve the Fermi paradox by simulating possible global risks – or a "Singularity simulation" – one in which a future AI models its own origin. This implies that our simulation will be turned off soon, as soon as at least one of three conditions is reached:

- It will model a global catastrophe, which is subjectively equivalent to termination.
- It will reach unkn...

A map of currently available life extension methods by Alexey Turchin

Extremely large payoff from life extension

We live in a special period of time when radical life extension is not far off. We just need to survive until the moment when all the necessary technologies have been created.

The positive scenario suggests it could happen by 2050 (plus or minus 20 years), when humanity creates an advanced and powerful AI, highly developed nanotechnologies, and a cure for aging.

Simulations Map: what is the most probable type of the simulation in which we live? by Alexey Turchin

There is a chance that we may be living in a computer simulation created by an AI or a future super-civilization. The goal of the simulations map is to give an overview of all possible types of simulation. It will help us to estimate the distribution of the many possible simulations, along with their measure and probability. This, in turn, will help us to estimate the probability that we are in a simulation and – if we are – what kind of simulation it is and how it could end.
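One way to get an intuition for such probability estimates is the arithmetic behind Bostrom's simulation argument, which reasoning of this kind builds on. The sketch below is a toy model, not part of Turchin's map itself; the parameter names and values are illustrative assumptions.

```python
def simulated_fraction(f_posthuman, n_sims_per_civ):
    """Toy version of the simulation-argument arithmetic.

    If a fraction f_posthuman of civilizations reaches a posthuman stage,
    and each such civilization runs n_sims_per_civ ancestor simulations
    (each containing roughly as many observers as one real history),
    then for every real history there are f_posthuman * n_sims_per_civ
    simulated ones.  The returned value is the fraction of all observers
    with human-type experiences who live inside a simulation.
    """
    simulated_per_real = f_posthuman * n_sims_per_civ
    return simulated_per_real / (simulated_per_real + 1)

# Illustrative numbers: 10% of civilizations become posthuman,
# each running 1000 ancestor simulations.
p_sim = simulated_fraction(0.1, 1000)
```

Even modest assumptions drive the fraction close to 1, which is why the argument's force depends almost entirely on whether posthuman civilizations run such simulations at all.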

Digital Immortality Map: Reconstruction of the Personality Based on its Information Traces by Alexey Turchin

If someone has died, it doesn't mean that you should stop trying to return them to life. There is one clear thing you should do (after cryonics): collect as much information about the person as possible, store a sample of their DNA, and hope that a future AI will return them to life based on this information.


The Doomsday Argument by Alexey Turchin

The Doomsday argument (DA) is the controversial idea that humanity's probability of extinction is higher than we would otherwise expect, based purely on probabilistic reasoning. The DA rests on the proposition that I will most likely find myself somewhere in the middle of humanity's time in existence, rather than near its beginning – which is where I should expect to be if humanity were going to exist on Earth for a very long time.
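The arithmetic behind one well-known version of the argument – Gott's "delta-t" formulation – can be sketched in a few lines. This is an illustrative toy calculation, not Turchin's own treatment; the 200,000-year figure for the age of our species is a stand-in assumption.

```python
def future_duration_bounds(past_years, confidence=0.95):
    """Gott's delta-t argument: if we observe a phenomenon at a uniformly
    random point in its lifetime, the elapsed fraction f is uniform on
    (0, 1).  With the given confidence, f lies between (1-c)/2 and
    1-(1-c)/2, so the remaining duration past*(1-f)/f lies between the
    two bounds returned here."""
    tail = (1 - confidence) / 2          # e.g. 0.025 for 95% confidence
    f_low, f_high = tail, 1 - tail       # bounds on the elapsed fraction
    lower = past_years * (1 - f_high) / f_high
    upper = past_years * (1 - f_low) / f_low
    return lower, upper

# Assumed age of Homo sapiens: roughly 200,000 years.
lo, hi = future_duration_bounds(200_000)
```

At 95% confidence the remaining duration falls between 1/39 and 39 times the past duration, which is what gives the argument its characteristic "we are probably in the middle" flavor.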

Mapping Approaches to AI Safety by Alexey Turchin

When I started work on this map of AI safety solutions, I wanted to illustrate the excellent 2013 article "Responses to Catastrophic AGI Risk: A Survey" by Kaj Sotala and IEET Affiliate Scholar Roman V. Yampolskiy, which I strongly recommend. In the process, however, I had a number of ideas for expanding the classification of the proposed ways to create safe AI.

Human Extinction Risks due to Artificial Intelligence Development - 55 ways we can be obliterated by Alexey Turchin

This map shows that AI failure resulting in human extinction could happen on different levels of AI development, namely:

1. before it starts self-improvement (which is unlikely, but we can still envision several failure modes),
2. during its take-off, when it uses different instruments to break out of its initial confinement, and
3. after its successful takeover of the world, when it starts to implement its goal system, which could be unfriendly, or whose friendliness may be flawed.

How to Survive the End of the Universe by Alexey Turchin

My plan below should be taken with irony, because it is almost irrelevant: we have only a very small chance of surviving the next 1,000 years. And if we do survive, we will have numerous tasks to accomplish before my plan can become a reality.

Additionally, there is the possibility that the "end of the universe" will arrive much sooner, if our collider experiments trigger a vacuum phase transition that begins at one point and spreads across the visible universe.

How many X-Risks for Humanity? This Roadmap has 100 Doomsday Scenarios by Alexey Turchin

In 2008 I was working on a Russian-language book, "Structure of the Global Catastrophe". I showed it for review to the geologist Aranovich, an old friend of my late mother's husband.

We started to discuss Stevenson's probe – a hypothetical vehicle that could reach the Earth's core by melting its way through the mantle, carrying scientific instruments with it. It would take the form of a large drop of molten iron – at least 60,000 tons – theoretically feasible, but practically impossible....

I am an International Radical Life Extension Activist by Alexey Turchin

In the last three years, I've traveled the world performing street actions. My goal is to increase public awareness of the following issues:

1) fighting aging
2) elevating the possibility of radical life extension
3) saving the world from global catastrophes