Bruce Sterling Thinks Artificial Intelligence Has Jumped the Shark
George Dvorsky
2013-01-19 00:00:00
URL

Sterling made his remarks in the latest installment of Edge's annual Big Question. This year, editor John Brockman asked his coterie of experts to tell us what we should be most worried about. In response, Sterling penned a four-paragraph article arguing that we shouldn't fear the onset of super AI because a "Singularity has no business model." He writes:

This aging sci-fi notion has lost its conceptual teeth. Plus, its chief evangelist, visionary Ray Kurzweil, just got a straight engineering job with Google. Despite its weird fondness for AR goggles and self-driving cars, Google is not going to finance any eschatological cataclysm in which superhuman intelligence abruptly ends the human era. Google is a firmly commercial enterprise.

It's just not happening. All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We're no closer to "self-aware" machines than we were in the remote 1960s. Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s "minds on nonbiological substrates" that might allegedly have the "computational power of a human brain." A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there's no there there.

So, as a Pope once remarked, "Be not afraid." We're getting what Vinge predicted would happen without a Singularity, which is "a glut of technical riches never properly absorbed." There's all kinds of mayhem in that junkyard, but the AI Rapture isn't lurking in there. It's no more to be fretted about than a landing of Martian tripods.

In response, a number of commentators spoke up.

Tyler Cowen of Marginal Revolution reposted Sterling's article, prompting a healthy and heated discussion. Over at the New Yorker, Gary Marcus noted that Sterling's "optimism has little to do with reality." And Kevin Drum of Mother Jones wrote, "I'm genuinely stonkered by this. If we never achieve true AI, it will be because it's technologically beyond our reach for some reason. It sure won't be because nobody's interested and nobody sees any way to make money out of it."

Now, it's entirely possible that Sterling is trolling us, but I doubt it. Rather, his take on the Singularity, and on how it will come about, is badly skewed. As noted, there most certainly is a business model for something like this, and we're already starting to see the seeds sprout.

And indeed, one leading artificial intelligence researcher has estimated that there's roughly a trillion dollars to be made in the move from keyword search to genuine AI question-answering on the web alone.

Sterling's misconception about the Singularity is a frustratingly common one: the mistaken notion that it will arise from efforts to create "self-aware" machines that mimic the human brain. Such is hardly the case. Rather, it's about the development of highly specialized and efficient intelligence systems, systems that will eventually operate beyond human comprehension and control.

Already today, machines like IBM's Watson (which defeated the world's best Jeopardy players) and computers that trade stocks at millisecond speeds are precursors to this. And it's very much in the interests of private corporations to develop these technologies, whether it be to program kiosk machines at corner stores, create the next iteration of Apple's Siri, or program the first generation of domestic robots.

And indeed, it's not a coincidence that Google recently hired Ray Kurzweil, author of The Singularity Is Near, to help it build a rival to Siri.

Moreover, the U.S. military, as it continues to push its technologies forward, will most certainly be interested in creating AI systems that operate at speeds and computational scales far beyond human capability. The day is coming when human decision-making will be removed from the battlefield.

And does anyone seriously believe that the Pentagon will allow other countries to get a head start on any of this? The term "arms race" most certainly seems to apply, especially considering that AI can be used to develop other advanced forms of military technology.

Finally, there's the potential for non-business and non-military interests to spawn super AI. Neuroscientists, cognitive scientists, and computer scientists are all hacking away at the problem, and they may very well be the first to reach the finish line. Human cognition, and its relation to AI, remains an unsolved problem, and for that reason scientists will continue to push the envelope of what's technically possible.

I'll give the last word to Kevin Drum:

As for the Singularity, a hypothesized future of runaway technological advancement caused by better and better AI, who knows? It might be the end result of AI, or it might not. But if it happens, it will be a natural evolution of AI, not something that happens because someone came up with a business model for it.

Image: Bruce Sterling/OARN.