Why I'm Not Afraid of the Singularity
Kyle Munkittrick
2011-01-21

I used to be all about the Singularity. I thought for certain that some sort of Terminator / HAL-9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We'd make smart robots, we'd treat them poorly, they'd rebel and slaughter humanity.

Now I'm not so sure. I have big, gloomy doubts about the Singularity.

Michael Anissimov tries to re-stoke the flames of fear over at Accelerating Future with his blog post "Yes, The Singularity is the Single Biggest Threat to Humanity":

Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like.

It's hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can't. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there's a lot of evidence that we aren't.



Anissimov continues:

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend.

There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other's preferences deeply into account, in ways we are just beginning to understand.

If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.


Oh my stars, that does sound threatening.

But again, those weird, nagging doubts linger in the back of my mind. For a while, I couldn't put my finger on the problem, until I re-read Anissimov's post and realized that my disbelief flared up every time I read a claim about AGI doing something.

AGI will remove the atmosphere. Really? How? This article - and, in fact, every argument about the danger of the Singularity - rests on a single presumption: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, it will not.
