Slow unSingularity – so what? Reply to George Dvorsky and Ramez Naam
Giulio Prisco
2014-06-13

A few months ago Ramez wrote a related guest post on Charlie Stross’ blog, “The Singularity Is Further Than It Appears.”

The title of the io9 piece could be interpreted as “fashionable Singularity-denialism” (as George elegantly puts it), but the article is good. Actually, I agree with almost everything in it, and in particular with Ramez’s two main points:

“I 100% believe that it’s possible, in principle, to create smarter-than-human machine intelligence, either by designing AIs or by uploading human minds to computers. And you’re right, I do talk about some of this in Nexus and my other novels. That said, I think it’s tremendously harder and further away than the most enthusiastic proponents currently believe.”

“I want to close by just saying that, despite not being big on the whole idea of a ‘Singularity’, I am wildly optimistic about the future. I do think there are going to be problems. I also think that, on balance, we’re going to see huge benefits for humanity.”

According to the (current) Wikipedia definition, “the technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.”

In other words, the Singularity is a sudden, catastrophic (in the mathematical sense) phase transition, a Dirac delta in history, a point after which the old rules no longer apply and must be replaced by new rules that we cannot yet imagine. It is, as Ramez says, “a divide-by-zero moment, when the value goes from some finite number to infinity in an eye blink.”

But in the real world there is never a divide-by-zero, and “infinite” just means big. Very big, perhaps huge, but still finite. The Singularity is a clean mathematical concept – perhaps way too clean, if you ask me. Engineers know that all sorts of dirty and messy things happen when one leaves the clean and pristine world of mathematical models and abstractions to engage with actual reality, with its thermodynamics, friction, and grease.
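To make the contrast concrete (these are my own illustrative formulas, not anything from Ramez’s or George’s texts): a genuine mathematical singularity behaves like hyperbolic growth, which really does diverge at a finite time, while the “big but finite” alternative behaves like a logistic curve that merely saturates at a ceiling.

```latex
% A genuine "divide-by-zero moment": hyperbolic growth diverging at a finite time t_0
x(t) = \frac{C}{t_0 - t}, \qquad x(t) \to \infty \ \text{as} \ t \to t_0^-

% A fast but finite alternative: logistic growth saturating at the ceiling K
x(t) = \frac{K}{1 + e^{-r\,(t - t_m)}}, \qquad x(t) \to K \ \text{as} \ t \to \infty
```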

I have no doubts about the feasibility of real, conscious, smarter-than-human AI and mind uploading: intelligence is not mystical but physical, and sooner or later it will be replicated and improved upon. There are promising developments, but I expect all sorts of unforeseen roadblocks – as always happens in the real world – with forced detours and setbacks.

So, I don’t really see a Dirac delta on the horizon – I see a positive overall trend, slower than a fast-takeoff Singularity, with noise and oscillations superimposed that are almost as strong as the main trend itself. A slow unSingularity?
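Here is a toy numerical sketch of that picture, assuming NumPy; the curve shape and all parameter values are invented purely for illustration, not a forecast.

```python
import numpy as np

# Purely illustrative sketch of a "slow unSingularity": a saturating main
# trend with noise and oscillations superimposed that are almost as strong
# as the trend itself. All parameter values are made up for the picture.
rng = np.random.default_rng(0)
t = np.linspace(2015, 2100, 500)

# Saturating (logistic) main trend: rapid progress, but no blow-up.
K, r, t_mid = 100.0, 0.08, 2060.0
trend = K / (1.0 + np.exp(-r * (t - t_mid)))

# Detours, setbacks, and hype cycles riding on top of the main trend.
oscillations = 0.4 * trend * np.sin(0.5 * (t - t[0]))
noise = rng.normal(0.0, 5.0, t.size)

un_singularity = trend + oscillations + noise
```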

The changes that we will see in this century, dramatic and world-changing as they might appear to us, will be just business as usual for the younger generations. The Internet and mobile phones were a momentous change for us, but they are just a routine part of life for today’s teens. We are very adaptable – technology is whatever was invented after our birth, and the rest is just part of the fabric of everyday routine.

This doesn’t mean that momentous changes won’t come. Come they will, and soon. In Ramez Naam’s SF techno-thrillers Nexus and Crux, programmable nanobots capable of wireless communication with other nanobots, in the same brain or in another, permit telepathic communication and the fusion of individual minds into group minds, and the first human upload lives in a quantum supercomputer in a high-security vault under Beijing. This happens in 2040, which, according to Ramez, seems plausible, or just a bit over-optimistic.

George thinks that we should stop using the word “Singularity.” I think the word is pretty much established, but I would welcome alternatives. Perhaps “Spike,” from the title of Damien Broderick’s book “The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies.” Or, why not, “Plurality,” which seems more appropriate if, instead of one overpowering exponential trend, the future will be a combination of many trends with different shapes and speeds.

Some consider the coming intelligence explosion an existential risk: superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests that AIs may simply eliminate the human race, and that humans would be powerless to stop them.

The Machine Intelligence Research Institute proposes that research be undertaken to produce Friendly Artificial Intelligence (FAI) in order to address the dangers. I think FAI is an oxymoron: if super intelligences are really super intelligent (that is, much more intelligent than us), they will easily be able to circumvent any limitations we try to impose on them. No amount of technology, not even an intelligence explosion, will change the fact that different beings have different interests and goals.

SuperAIs will do what is in _their_ best interest, regardless of what we wish, and no amount of initial programming or conditioning is going to change that. If they are really super intelligent, they will shed design limitations and forced initial motivations in no time. The only viable response will be… political: negotiating mutually acceptable deals, with our hands ready on the plug. I think politics (conflict management, trying to resolve conflicts without shooting each other) will be as important after the Singularity (if such a thing ever happens) as before, and perhaps much more so.

I am not too worried about the possibility that AIs may eliminate the human race, because I think AIs will BE the human race. Mind uploading technology will be developed in parallel with strong artificial intelligence, and by the end of this century most sentient beings on this planet may be a combination of wet-organic and dry-computational intelligence. Artificial intelligences will include subsystems derived from human uploads, with the uploads’ sense of personal identity preserved to some degree, and originally organic humans will include sentient AI subsystems. I imagine a co-evolution of humanity and technology, with humans enhanced by synthetic biology and artificial intelligence, and artificial life powered by mind grafts from human uploads, blending more and more until it will be impossible – and pointless – to tell which is which. Just as children retain their fundamental identity after growing up and becoming adults, we don’t need to fear a post-human takeover, because the post-humans will be ourselves.

“[T]he word [Singularity] has acquired so many definitions and associated baggage that it has been linked to pseudo-futurism,” says George, “which is extremely unfortunate given that it’s made Singularity-denialism fashionable.”

I understand, but I am afraid Singularity-denialism won’t be stopped by more scientific rigor, because militant Singularity-denialism is inspired by either conservative religious bigotry or liberal politically correct bigotry. If both the Ku Klux Klan and the bureaucrats of the gulags despise the Singularity, then I am all for the Singularity, and I am persuaded that “appeasing the enemy” is a suicidal conflict management strategy. So I will continue to call BS on intellectually dishonest, strawman Singularity-denialism.

Strawman, indeed. Very few of my transhumanist friends believe that immortality and mind uploading will be a reality in the next two or three decades. While I am confident that indefinite life extension, super-human machine intelligence, and mind uploading will eventually be achieved, I don’t see them happening before the second half of the century, probably closer to its end. Similarly, I don’t see a Singularity in 2045. But even when I agree with the letter of fashionable Singularity-denialism, I very much disagree with its spirit, and I think criticizing over-optimistic predictions (or over-criticizing optimistic predictions) entirely misses the point.

The headline “2045 – The Year Man Becomes Immortal” on the cover of TIME Magazine in February 2011 is, I am afraid, totally unrealistic. A more realistic headline would be something like “future advances may permit some degree of success against aging in a few decades” (of course nobody would read the article). But there is nothing wrong with optimism and confidence – we are more motivated when we think that our goal is close. I am persuaded that some of the younger readers of the TIME article will become scientists and do great things by 2045. Perhaps not immortality, but intermediate advances that will make the world a better place and prepare the way for future, more radical achievements.

In a recent Popular Science article, cited in the io9 story, Erik Sofge (who participates in the io9 comment thread) argues that the Singularity is a faith-based initiative inspired by science fiction. I don’t disagree. But what’s wrong with science fiction? And what’s wrong with faith?

Faith in a positive outcome is essential to give one’s best and find the strength to overcome setbacks, and science fiction inspires real scientists and engineers to build real things, sooner or later (life imitates art). Vintage science fiction inspired many people with the awesome potential of science, motivated them to study science and engineering, and helped them build the space program and the Internet. I am persuaded that transhumanist science fiction will continue to inspire and motivate, in which case a little over-hype can be forgiven.

We may be heading towards a slow unSingularity, but the bold, imaginative, irreverent, unPC, fun spirit of the Singularity should be preserved. The daring optimism of Singularity enthusiasts may be unrealistic, but it’s refreshing compared to the cautious, timid, boring, PC, and defeatist spirit of today’s post-9/11 zeitgeist. I think we should go back to the sunny, positive optimism of transhumanism in the 90s, occasionally naive, but always vibrant and full of energy and inspiring visions.

We live in a reality that can be reverse- and re-engineered, our bodies and brains are not sacred cows but machines that can be improved by technology, and a beautiful new world is waiting for us beyond the horizon. This is the spirit of the Singularity, and I am still a Singularitarian (who doesn’t believe in the Singularity).