
Defusing the ‘Doomsday’ Argument and Other Futuristic Boogeymen


By Stefan Pernar
Rational Morality

Posted: Aug 23, 2013

It is a long-standing trend in futurist circles to paint the future as bleak and dangerous as possible, with only a handful of elite ‘rationalists’ able to even understand, let alone adequately address, the problems. In this tradition there exist a number of more or less well-known, more or less scary, and more or less publicised concepts that all have a number of characteristics in common.

They rest on premises that, when thought through to their final conclusions, appear to lead to bizarre or horrific concepts of reality, yet when examined with a cool head dissolve into what they really are: scaremongering hokum. In this article I will shed some light on the erroneous assumptions and lapses in logic behind several of the more prominent futuristic boogeymen.

 

The Doomsday Argument – More like the Transcension Argument

The Doomsday Argument goes something like this:

“The Doomsday argument (DA) is a probabilistic argument that claims to predict the number of future members of the human species given only an estimate of the total number of humans born so far. Simply put, it says that supposing the humans alive today are in a random place in the whole human history timeline, chances are we are about halfway through it.”

Doing the math based on these assumptions, together with the idea that roughly 60 billion humans have existed in total over the course of human history, an average lifespan of 80 years, and a world population stabilizing at 10 billion individuals, yields human extinction within 9,120 years with 95% mathematical certainty. Applying Nick Bostrom’s self-sampling assumption to the argument halves this time horizon again, to 4,560 years. So far, so grim.

There are a number of rebuttals to the DA, but the most optimistic and positive one seems not to have been covered so far, and it requires critical scrutiny of the reference class, which in the DA is that of ‘humans’. As futurists we constantly talk about posthumans, transhumans, humanity+ and so on, while often forgetting that humans are essentially postapes, transapes or apes+. This evolutionary perspective lets us understand the human condition as a transitory state within a long chain of previous states of existence, reaching back over the course of evolution all the way to the beginning of life itself. From this perspective it is more reasonable to define the reference class as the timeframe in which the ancestors of posthumans have existed, i.e. the span from the beginning of life on Earth until today: roughly 3.6 billion years.

Applying this number to the DA yields a 95% chance that we will continue on our evolutionary trajectory for at least another 180 million years and a 50% chance that we will do so for another 3.6 billion years. The 95% certainty of the eventual extinction of our progeny’s progeny, on the other hand, lies in the distant future of the next 72 billion years. A long time horizon indeed. But not only that. From this vantage point the Doomsday Argument becomes the Transcension Argument (TA), in which we can assume with 95% probability that we will have realized our posthuman ambition within roughly the next 9,120 years, or within 4,560 years given Bostrom’s self-sampling assumption.
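For readers who want to check the numbers, here is a minimal Python sketch of the arithmetic behind both readings, using only the round figures already stated above (60 billion births, a 10 billion population plateau, 80-year lifespans, 3.6 billion years of life on Earth). The helper function and variable names are mine, and the uniform-random-position premise is simply the DA’s own assumption restated.

```python
# Back-of-the-envelope check of the Doomsday Argument figures above and of
# the same formula applied to the longer "life on Earth" reference class.

def remaining_upper_bound(elapsed, confidence):
    """Upper bound on what remains of the reference class, given how much has
    elapsed, assuming our position in the class is uniformly random."""
    # With probability `confidence` we are past the first (1 - confidence)
    # fraction of the class, so the total is at most elapsed / (1 - confidence).
    return elapsed / (1.0 - confidence) - elapsed

# Classic DA: reference class = humans born.
humans_born = 60e9                       # ~60 billion births so far
births_per_year = 10e9 / 80              # stable population / lifespan = 125 M/yr
years_95 = remaining_upper_bound(humans_born, 0.95) / births_per_year
print(f"Classic DA, 95% horizon: {years_95:,.0f} years")          # ~9,120
print(f"Halved per the SSA, as above: {years_95 / 2:,.0f} years")  # ~4,560

# Transcension Argument: reference class = our lineage since life began.
elapsed_years = 3.6e9
lower_95 = elapsed_years * 0.05 / 0.95   # ~190 Myr remain with 95% confidence
                                         # (the text rounds this to ~180 Myr)
median = elapsed_years                   # 50% chance: at least as much again
total_95 = elapsed_years / 0.05          # 95% upper bound on total duration
print(f"TA, 95% lower bound: {lower_95 / 1e6:,.0f} million years")
print(f"TA, median remaining: {median / 1e9:.1f} billion years")
print(f"TA, 95% bound on total duration: {total_95 / 1e9:.0f} billion years")
```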

The Simulation Hypothesis – No, They Won’t Just Switch Us Off

The Simulation Hypothesis (SH) is another futuristic boogeyman. This is how it is formulated:

“A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true:

  1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
  2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
  3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”

The SH becomes scary when one begins to imagine that the simulation may be turned off at the end of the experiment, or that all of our experiences are not ‘real’. I will address the three propositions one by one. As it turns out, the insights gained from the DA above have significant bearing on the scariness of the SH.

Re 1) Well – maybe. But given the details of my TA above this is far from certain.
Re 2) Nothing to worry about here.
Re 3) So what? Let me explain in a bit more detail below.

First of all, assuming that one cannot tell the difference between the simulation and ‘real’ reality, the only rational choice at that point would be to stop worrying and simply carry on. But then there is still the risk of being switched off at some point. Assuming that those running the ancestor simulation were only interested in the ‘human’ level part of their ancestral history, then based on the DA above there would be a 50% chance of the simulation running for another 480 subjective years (the roughly 60 billion humans yet to be born at that confidence level, divided by 125 million births per year) before we transcend or go extinct. But even then, who is to say that the simulation is going to be switched off at all? This would imply either that our descendants running these simulations had no consideration for the plight of hundreds of billions of iterations of human-level consciousness, or that they lack the resources to sustain those simulations for a very long time. I find both of these assumptions utterly implausible.

We have every reason to believe that posthuman intelligences would be far more compassionate, enlightened and caring beings than we are today. Instead of simply switching off their ancestor simulations, they would likely plan for a post-simulation virtual ‘heaven’ for all conscious beings in the simulation, or allow the simulation to run its course until it merges with their main branch of consciousness. The computational resources to allow for this would by that time be absolutely abundant, as Kurzweil explains in great detail in The Singularity Is Near when discussing the limits of nanocomputing:

“If we use the figure of 10^16 cps that I believe will be sufficient for functional emulation of human intelligence, the ultimate laptop [1 kg mass in 1 liter volume] would function at the equivalent brain power of five trillion trillion human civilizations. Such a laptop could perform the equivalent of all human thought over the last ten thousand years (that is, ten billion human brains operating for ten thousand years) in one ten-thousandth of a nanosecond.” Ray Kurzweil, The Singularity Is Near, p. 134, ISBN 0-14-303788-9

Incidentally, 10 billion humans over 10,000 years is within the margin of error of our DA assumption of a 95% probability of extinction/transcendence for 10 billion humans over 9,120 years. In other words, the computational resources needed for an ancestor simulation on the human scale would be too cheap to meter. The same would be true even if only 0.01% of the capacity of Kurzweil’s ultimate laptop were ever realized, which would increase the time required for simulating 10,000 years of consciousness in 10 billion humans to a whopping nanosecond. At the same time, the ethical implications of simply switching the simulation off are so great that there would be no reason at all not to continue it until an eventual merger with ‘real’ reality. After all, posthuman civilizations operating at scales sufficient to run ancestor simulations would long ago have left meatspace anyway.
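As a quick sanity check of that paragraph, the sketch below redoes the arithmetic in Python. The laptop’s raw throughput is not stated directly in the quote, so it is inferred here from the quoted ‘five trillion trillion human civilizations’ figure (5 × 10^24 civilizations times 10^10 brains times 10^16 cps each, roughly 5 × 10^50 cps); everything else uses only numbers that appear above.

```python
# Rough check of the "too cheap to meter" claim using the figures quoted above.

CPS_PER_BRAIN = 1e16                    # Kurzweil's estimate for one human brain
BRAINS = 1e10                           # ten billion humans
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 s

civilization_cps = BRAINS * CPS_PER_BRAIN        # 1e26 cps for all of humanity
laptop_cps = 5e24 * civilization_cps             # "five trillion trillion human
                                                 # civilizations" -> ~5e50 cps

# All human thought over 10,000 years, in elementary operations:
total_ops = civilization_cps * 10_000 * SECONDS_PER_YEAR        # ~3.2e37 ops

full_speed = total_ops / laptop_cps              # ~6e-14 s, i.e. roughly one
                                                 # ten-thousandth of a nanosecond
throttled = total_ops / (laptop_cps * 1e-4)      # at 0.01% capacity: ~0.6 ns

print(f"Full ultimate-laptop capacity: {full_speed:.1e} s")
print(f"At 0.01% of that capacity:     {throttled * 1e9:.2f} ns")
```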

Gigadeath – Seriously, Just Stop It Already

In his 2005 book The Artilect War, Hugo de Garis outlines an argument for a bitter near-future controversy between the terrans, who are opposed to building ‘godlike massively intelligent machines’, and the cosmists, who are in favor. In de Garis’ view a war causing billions of deaths, hence ‘gigadeath’, will become inevitable in the struggle that follows the unsuccessful resolution of the ‘shall we build AI gods’ controversy. De Garis argues that the casualties of war have increased exponentially over the course of human history and comes to his conclusion by extrapolating that trend into the future.

What de Garis fails to realize in his gigadeath prognosis is that while the absolute number of war casualties has indeed risen over the course of history, the number of human beings on the planet has risen even faster, resulting in a proportionally smaller share of casualties during conflicts. This trend is brilliantly quantified and discussed in Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. Aside from that, I have always thought that the cosmists would be so advanced by that time that the conflict would boil down to some angry fist-shaking and name-calling on the side of the terrans anyway. Given the possibility of a hard takeoff the entire discussion would be moot as well, since it would all be over before it really began.

Unfriendly AI – A Contradiction in Terms

I have addressed this before, but allow me to reiterate here. So there is this entire movement of researchers out there concerning themselves with the absolutely horrific idea of a transhuman AI that, instead of being ‘friendly’, turns out to be a real party pooper and converts the entire universe into paperclips, or places an infinite number of dust motes in people’s eyes, depending on who you ask. The sheer horror of the idea boggles the mind! Except it doesn’t.

Yes, sure: there is a real risk in creating dumb AI that blindly causes harm and destruction. A transhuman AI, however, is a completely different cup of tea. The emphasis here lies on ‘transhuman’, in other words smarter, in every way, than you or I or any human being alive today or in the past. The danger of an ‘unfriendly’ AI boils down to a very simple question:

Does the universe exhibit moral realism or not?

A gentle reminder:

“Moral realism is the meta-ethical view which claims that:

  1. Ethical sentences express propositions.
  2. Some such propositions are true.
  3. Those propositions are made true by objective features of the world, independent of subjective opinion.”

From here there are two possibilities:

A: Yes – the universe exhibits moral realism
B: No – the universe does not exhibit moral realism

If ‘A’ is true then a transhuman AI would reason itself into the proper goal system and through the power of reason alone would be transhumanly ‘friendly’.

If ‘B’ is true, then no one, not even a transhuman AI, could reason about ‘friendliness’ at all, making the notion of ‘unfriendly’ meaningless.

In short, the idea of ‘unfriendly AI’ is either self-solving or meaningless, so stop worrying about it.

The Basilisk – Meet the Xenu of the Singularitarians

This one is a real doozy and probably deserves an entire post of its own at some point, but let’s keep it basic for now. For some real gems see this early 2013 Reddit thread with Yudkowsky. In essence, the Basilisk is a modified, futurist version of Pascal’s wager in which a transhuman AI could eventually aim to punish individuals who failed to do everything in their power to bring it about:

“The claim is that this ultimate intelligence may punish those who fail to help it (or help create it), with greater punishment for those who knew the importance of the task. But it’s much more than just “serve the AI or you will go to hell” — the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong’s Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you — and furthermore, you might be the simulation.”

I know, it is bizarre. But not only that: any and all public discussion of the matter among the high priesthood of singularitarians on lesswrong.com is completely and utterly banned, making the Basilisk truly the Xenu of the singularitarians.

Unfortunately, however, the matter should not be taken too lightly:

“Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they’re fairly sure intellectually that it’s a silly problem. The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can’t reconstruct a copy of them to torture.”

Firstly, it is orders of magnitude more likely that one exists within a sophisticated ancestor simulation than that one is being simulated by a malevolent transhuman intelligence hell-bent on finding out whether one would have contributed adequately towards bringing it about. But leaving that aside entirely, the Basilisk is fortunately even more easily refuted than Pascal’s wager, namely by the quintessential unknowability of the criteria one is being tested against. It is quite simple really: the whole point of such a simulation is to keep the desired behavior unknown to the individuals being tested. If it were clear from the outset what was expected of the candidates, their subsequent behavior would be utterly meaningless as an assessment. A transhuman AI simulating you would by definition know if you happened to stumble upon the actual test criteria and would have to reset the simulation for a rerun after restoring the unknowability of said criteria.

In addition, how do you know you are not being simulated by a transcended Japanese toilet seat wanting to determine whether you properly flushed its pre-sentient brethren? Or any other conceivable alternative scenario? Again: stop worrying and live your life as if this is the only real reality. Really!

Conclusion

Defusing several core futurist boogeymen is a matter of looking beyond their basic assumptions and uncovering the broader context in which they are made. Recognizing our long evolutionary history transforms the Doomsday Argument into the Transcension Argument. The Simulation Hypothesis loses its teeth once we consider that the vast computational resources and ethical superiority of our eventual descendants make ‘flicking the off switch’ utterly implausible. The notion of inevitable gigadeath before the end of the century in no way, shape or form conforms to historical trends in violence. The problem of unfriendly AI is logically either self-solving or meaningless. And the dreaded Basilisk is but an unfortunate blundering into a set of overly complex, so-called rational notions of what the future might look like while disregarding basic principles of logic. Once again it is the sleep of reason that produces monsters.


Stefan Pernar has worked in corporate IT his entire career and is currently the CIO of the not-for-commercial-profit retail group FrontLine Stores Australia Ltd. In 2005 he was inspired by Kurzweil’s The Singularity Is Near to write his own philosophical novel on friendly AI, Jame5 - A Tale of Good and Evil, which has set him on a path of exciting discovery ever since.


COMMENTS


Wow, those are some pretty exotic doomsday “arguments.” Maybe that is why they seem to me to be specious and reaching for straws (to prevent drowning in optimism about the future?). To me the biggest worry is the lack of other civilizations in the stars, but maybe we simply can’t perceive them yet. It seems even more specious and reaching for straws to argue we are the first to come awake and reach for the Singularity in this universe.





One can reject moral realism and still find moral statements informative, or, at least, not meaningless.





Hi Stefan—it is great to see you back at IEET and this is an excellent essay.




