Survival of the Richest
John G. Messerly
2018-09-03

Douglas Rushkoff was recently invited to deliver a speech for an unusually large fee, about half his academic salary, on “the future of technology.” He expected a large audience but, upon arrival, was ushered into a small room with a table surrounded by five wealthy men. They weren’t interested in the future of technological innovation. Instead, they wanted to know things like where they should move to avoid the coming climate crisis, whether mind uploading will work and, most prominently, how to “maintain authority over [their] security force after the event.”

The Event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, unstoppable virus, or Mr. Robot hack that takes everything down.

This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader? The billionaires considered using special combination locks on the food supply that only they knew. Or making guards wear disciplinary collars of some kind in return for their survival. Or maybe building robots to serve as guards and workers — if that technology could be developed in time.

That’s when it hit me: At least as far as these gentlemen were concerned, this was a talk about the future of technology. Taking their cue from Elon Musk colonizing Mars, Peter Thiel reversing the aging process, or Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had a whole lot less to do with making the world a better place than it did with transcending the human condition altogether and insulating themselves from a very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is really about just one thing: escape.

Rushkoff continues by expressing his disdain for transhumanism:

The more committed we are to this [transhuman] view of the world, the more we come to see human beings as the problem and technology as the solution. The very essence of what it means to be human is treated less as a feature than a bug. No matter their embedded biases, technologies are declared neutral. Any bad behaviors they induce in us are just a reflection of our own corrupted core. It’s as if some innate human savagery is to blame for our troubles.

Ultimately, according to the technosolutionist orthodoxy, the human future climaxes by uploading our consciousness to a computer or, perhaps better, accepting that technology itself is our evolutionary successor. Like members of a gnostic cult, we long to enter the next transcendent phase of our development, shedding our bodies and leaving them behind, along with our sins and troubles.

The mental gymnastics required for such a profound role reversal between humans and machines all depend on the underlying assumption that humans suck. Let’s either change them or get away from them, forever.

It is such thinking that leads the tech billionaires to want to escape to Mars, or at least New Zealand. But “the result will be less a continuation of the human diaspora than a lifeboat for the elite.”

For his part, Rushkoff suggested to his small audience that the best way to survive and flourish after “the event” would be to treat other people well now. Better to act now to avoid social instability, environmental collapse, and all the rest than to figure out how to deal with them after the fact. Their response?

They were amused by my optimism, but they didn’t really buy it. They were not interested in how to avoid a calamity; they’re convinced we are too far gone. For all their wealth and power, they don’t believe they can affect the future. They are simply accepting the darkest of all scenarios and then bringing whatever money and technology they can employ to insulate themselves — especially if they can’t get a seat on the rocket to Mars.

But for Rushkoff:

We don’t have to use technology in such antisocial, atomizing ways. We can become the individual consumers and profiles that our devices and platforms want us to be, or we can remember that the truly evolved human doesn’t go it alone.

Being human is not about individual survival or escape. It’s a team sport. Whatever future humans have, it will be together.

Reflections – I don’t doubt that many wealthy and powerful people would willingly leave the rest of us behind, or enslave or kill us all; Ted Kaczynski envisioned just such a scenario in The Unabomber Manifesto: Industrial Society and Its Future. But notice that these tendencies toward evil have existed independent of technology or any transhumanist philosophy; history is replete with examples of cruelty and genocide.

So the question is whether we can create a better world without radically transforming human beings. I doubt it. As I’ve said many times, our apelike brains, characterized by territoriality, aggression, dominance hierarchies, irrationality, superstition, and cognitive biases, combine with 21st-century technology to form a lethal mix. That is why, in order to survive the many existential risks now confronting us and to have descendants who flourish, we should (probably) embrace transhumanism.

So while there are obvious risks associated with the power that science and technology afford, they are also our best hope as we approach many of these “events.” If we don’t want our planet to circle the sun lifeless for the next few billion years, and if we believe that conscious life is really worthwhile, then we must work quickly to transform both our moral and intellectual natures. Otherwise, only a few of us, at most, will survive.