If we do have something we can describe as a Singularity, what then?
My talk last weekend at the New York Future Salon explored the likelihood and the implications of the transformative event known as the “Singularity.” I tend to part ways with many Singularity enthusiasts over two small issues: what comes before a Singularity, and what comes after.
In terms of what comes before, I’m generally in the camp that machine-substrate intelligence is very likely possible, but is probably a much more complex problem than some of the more enthusiastic Singularitarians would have us think. We currently have a single model of a mind emerging from a physical structure—the human brain—and (as noted by one of the 2009 Singularity Summit speakers, David Chalmers) we’re not even sure how that happens. Add to that issues around learning, around complexity, around the very definition of intelligence, and you have the potential for a situation where—even if there are no physical laws preventing the emergence of artificial general intelligence—“real” AI remains the computer science version of nuclear fusion: perpetually just a couple of decades away (with plenty of dead-ends and showy hoaxes along the way).
I’ve noted elsewhere that I suspect “a stand-alone artificial mind will be more a tool of narrow utility than something especially apocalyptic.” Part of the reason is the sheer difficulty of the problem, but another part is the near-certainty that the technologies of human intelligence augmentation will continue apace. The technologies that may be dead-ends for efforts to construct a self-aware artificial mind could easily be of great value as non-conscious assistants to human minds.
Neither the notion that creating “real” AI may turn out to be extraordinarily difficult nor the idea that human intelligence augmentation could prove a more promising line of research gets much push-back from the more thoughtful Singularity proponents I’ve encountered. After all, both have been demonstrably true so far. A tougher sticking point, however, comes when I explore what could come afterwards.
If greater-than-human artificial intelligence emerges out of aggressively competitive projects, each seeking to be first, and is put to use without much thought to what might happen next, then the traditional Singularity scenario seems pretty likely. But that’s not the only one: