Hear that? It's the Singularity coming
George Dvorsky
2011-07-03
URL

Make no mistake. It's coming.

As I’ve discussed on this blog before, there are nearly as many definitions of the Singularity as there are individuals willing to talk about it. The whole concept is very much a sounding board for our various hopes and fears about radical technologies and where they may take our species and our civilization. It’s important to note, however, that at best the Singularity describes a social event horizon beyond which it becomes difficult, if not impossible, to predict the impact of recursively self-improving, greater-than-human artificial intelligence.

So, it’s more of a question than an answer. And in my own attempt to answer that question, I have gravitated toward the I.J. Good camp, in which the Singularity is characterized as an intelligence explosion. In 1965 Good wrote,
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
This perspective and phrasing sits well with me, mostly because I already see signs of this pending intelligence explosion happening all around us. It’s becoming glaringly obvious that humanity is offloading all of its capacities, albeit in a distributed way, to its technological artifacts. Eventually, these artifacts will supersede our capacities in every way imaginable, including the acquisition of new capacities altogether.

A common misconception about the Singularity and the idea of greater-than-human AI is that it will involve a conscious, self-reflective, and even morally accountable agent. This has led some people to believe that it will have deep and profound thoughts, quote Sartre, and consequently act in a quasi-human manner. This will not be the case. We are not talking about artificial consciousness or even human-like cognition. Rather, we are talking about super-expert systems that are capable of executing tasks that exceed human capacities. It will stem from a multiplicity of systems that are individually singular in purpose, or at the very least, very limited in functional scope. And in virtually all cases, these systems won't reflect on the consequences of their actions unless they are programmed to do so.

But just because they're highly specialized doesn’t mean they won’t be insanely powerful. These systems will have access to a myriad of resources around them, including the internet, factories, replicators, socially engineered humans, remotely controlled robots, and much more; this technological outreach will serve as their arms and legs.

Consequently, the great fear of the Singularity stems from the realization that these machine intelligences, which will have processing capacities orders of magnitude beyond those of humans, will be able to achieve their pre-programmed goals without difficulty, even if we try to intervene and stop them. This is what has led to the fear of poorly programmed or “malevolent” superintelligent AI (SAI). If our instructions to these super-expert systems are poorly articulated or under-developed, these machines could pull the old 'earth-into-paperclips' routine.

For those skeptics who don’t see this coming, I implore them to look around. We are witnessing the opening salvo of the intelligence explosion. We are already creating systems that exceed our capacities, and the trend is quickly accelerating. This is a process that started decades ago with the advent of computers and other calculating machines, but it’s only in recent years that we’ve witnessed more profound innovations. Humanity chuckled in collective nervousness back in 1997 when chess grandmaster Garry Kasparov was defeated by Deep Blue. From that moment on we knew the writing was on the wall, but we’ve since chosen to deny the implications; call that match a proof of concept, if you will, that a Singularity is coming.

More recently, we have developed a machine that can defeat the finest Jeopardy players, and now there’s an AI/robotic system that can play billiards at a high level. You see where this is going, right? We are systematically creating individual systems that will eventually and collectively exceed all human capacities. This can only be described as an intelligence explosion. While we are a long way off from creating a unified system that can defeat us well-rounded and highly multidisciplinary humans across all fields, it’s not unrealistic to suggest that such a day is coming.

But that’s beside the point. What’s of concern here is the advent of the super-expert system that works beyond human comprehension and control—the one that takes things a bit too far and with catastrophic results.

Or with good results.

Or with something that we can't even begin to imagine.

We don’t know, but we can be pretty darned sure it’ll be disruptive, if not paradigmatic in scope. This is why it’s called the Singularity. The skeptics and the critics can clench their fists and stamp their feet all they want about it, but that’s where we find ourselves.

We humans are already lagging behind many of our systems in terms of comprehension, especially in mathematics. Our artifacts will increasingly do things for reasons we can’t really understand. We’ll just have to stand back and watch, incredulous as to the how and why. And accompanying this will come the (likely) involuntary relinquishment of control.

So, we can nit-pick all we want about definitions, fantasize about creating a god from the machine, or poke fun at the rapture of the nerds.

Or we can start to take this potential more seriously and have a mature and fully engaged discussion on the matter.