Survival of the Richest
John G. Messerly   Sep 29, 2018   Reason and Meaning  

Professor and media theorist Douglas Rushkoff recently penned an article that went viral, “Survival of the Richest.” It outlines how the super wealthy are preparing for doomsday. Here is a recap followed by a brief commentary.

Rushkoff was recently invited to deliver a speech for an unusually large fee, about half his academic salary, on “the future of technology.” He expected a large audience but, upon arrival, he was ushered into a small room with a table surrounded by five wealthy men. But they weren’t interested in the future of technological innovation. Instead, they wanted to know things like where they should move to avoid the coming climate crisis, whether mind uploading will work and, most prominently, how to “maintain authority over [their] security force after the event?”

The Event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, unstoppable virus, or Mr. Robot hack that takes everything down.

This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader? The billionaires considered using special combination locks on the food supply that only they knew. Or making guards wear disciplinary collars of some kind in return for their survival. Or maybe building robots to serve as guards and workers — if that technology could be developed in time.

That’s when it hit me: At least as far as these gentlemen were concerned, this was a talk about the future of technology. Taking their cue from Elon Musk colonizing Mars, Peter Thiel reversing the aging process, or Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had a whole lot less to do with making the world a better place than it did with transcending the human condition altogether and insulating themselves from a very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is really about just one thing: escape.

Rushkoff continues by expressing his disdain for transhumanism:

The more committed we are to this [transhuman] view of the world, the more we come to see human beings as the problem and technology as the solution. The very essence of what it means to be human is treated less as a feature than bug. No matter their embedded biases, technologies are declared neutral. Any bad behaviors they induce in us are just a reflection of our own corrupted core. It’s as if some innate human savagery is to blame for our troubles.

Ultimately, according to the technosolutionist orthodoxy, the human future climaxes by uploading our consciousness to a computer or, perhaps better, accepting that technology itself is our evolutionary successor. Like members of a gnostic cult, we long to enter the next transcendent phase of our development, shedding our bodies and leaving them behind, along with our sins and troubles.

The mental gymnastics required for such a profound role reversal between humans and machines all depend on the underlying assumption that humans suck. Let’s either change them or get away from them, forever.

It is such thinking that leads the tech billionaires to want to escape to Mars, or at least New Zealand. But “the result will be less a continuation of the human diaspora than a lifeboat for the elite.”

For his part, Rushkoff suggested to his small audience that the best way to survive and flourish after “the event” would be to treat other people well now. Better to act now to avoid social instability, environmental collapse, and all the rest than to figure out how to deal with them later. Their response?

They were amused by my optimism, but they didn’t really buy it. They were not interested in how to avoid a calamity; they’re convinced we are too far gone. For all their wealth and power, they don’t believe they can affect the future. They are simply accepting the darkest of all scenarios and then bringing whatever money and technology they can employ to insulate themselves — especially if they can’t get a seat on the rocket to Mars.

But for Rushkoff:

We don’t have to use technology in such antisocial, atomizing ways. We can become the individual consumers and profiles that our devices and platforms want us to be, or we can remember that the truly evolved human doesn’t go it alone.

Being human is not about individual survival or escape. It’s a team sport. Whatever future humans have, it will be together.

Reflections – I don’t doubt that many wealthy and powerful people would willingly leave the rest of us behind, or enslave or kill us all, a scenario anticipated by Ted Kaczynski in The Unabomber Manifesto: Industrial Society and Its Future. But notice that these tendencies toward evil have existed independently of technology or any transhumanist philosophy; history is replete with examples of cruelty and genocide.

So the question is whether we can create a better world without radically transforming human beings. I doubt it. As I’ve said many times, our apelike brains, characterized by territoriality, aggression, dominance hierarchies, irrationality, superstition, and cognitive biases, combined with 21st-century technology, are a lethal mix. That is why, in order to survive the many existential risks now confronting us and to have descendants who flourish, we should (probably) embrace transhumanism.

While there are obvious risks associated with the power that science and technology afford, they are our best hope as we approach these “events.” If we don’t want our planet to circle our sun lifeless for the next few billion years, and if we believe that conscious life is really worthwhile, then we must work quickly to transform both our moral and intellectual natures. Otherwise, at most only a few will survive.

John G. Messerly is an Affiliate Scholar of the IEET. He received his PhD in philosophy from St. Louis University in 1992. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of philosophy, evolution, futurism and the meaning of life at his website: reasonandmeaning.com.



COMMENTS

If you are reading this, then you are probably useless. I have just finished reading The Economic Singularity by Calum Chace. He describes a future where the Economic Singularity happens before the Technological Singularity. It is a world where we go from the current situation, in which 1% of the world’s population owns 50% of its wealth, to something worse. The few who own the AI will be able to afford the transhumanist advancements and the rest will be unemployed. Universal Basic Income (UBI) will come to the rescue, but the AIs will work out that the best way to stretch UBI is to create simulated worlds for the useless and employ a blocking agent to ensure they believe their simulated 2018 is real. This should keep the dogs of discontent quiet and away from the rich “Gods” as they enjoy a less congested world, with AI systems not only serving them but also keeping the rest out of the way while they start to clean up the environmental mess we have created. Put a layer of Bostrom’s Simulation Hypothesis on top and this has already happened. Hello, fellow useless being!

What changes all this is if the simulations are more meaningful than the “reality.” Then even the rich may want to live in the simulation. Maybe they will want privileged positions where they still have lots of money, know it is a simulation, and can build rockets to Mars, etc. The crux of the question of whether simulated life is valuable depends on whether you believe your soul is external to your simulation avatar (in this simulation and in the simulation layers above). If this scenario is accurate, then your soul does exist outside of this reality, and there is no reason why it should not exist outside of upper-level simulations. If the events and responses in the simulation define and hopefully refine your soul, then it does not matter where you are in these nested simulations, and there may be light at the end of the tunnel for us, the useless. For more detail on this scenario, please look up my book on Amazon – The Word of Bob – an AI Minecraft Villager.
