Singularities Enough, and Time
Jamais Cascio
2008-06-30

Karl argues that we can't count on super-intelligent AIs to save us from environmental disaster, since by the time they're possible (assuming they're possible at all), things will have gotten so bad that the AIs won't matter (and/or won't have any resources available to act, or even to persist). It's a pretty straightforward argument, and echoes pieces I've written on parallel themes. In short, my initial reaction was "yeah, of course."

But giving it a bit more thought, I see that Karl's argument has a couple of subtle, but important, flaws.

The first is that he makes the same assumption that nearly every casual discussion of the Singularity concept makes: he defines it as "...within about 25 years, computers will exceed human intelligence and rapidly bootstrap themselves to godlike status." But if you go back to Vinge's original piece, you'll see that he actually suggests four different pathways to a Singularity, only two of which arguably include super-intelligent AI. His four pathways are:

• There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
• Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
• Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
• Biological science may provide means to improve natural human intellect.


The first two depend upon computers gaining self-awareness and bootstrapping themselves into super-intelligence through some handwaved process. People don't talk much about the Internet "waking up" these days, but talk of artificially intelligent systems remains quite popular. And while the details of how we might get from here to a seemingly intelligent machine grow more sophisticated, there's still quite a bit of handwaving about how that bootstrapping to super-intelligence would actually take place.

The latter two -- computer/human interfaces and biological enhancement -- fall into the category of "intelligence augmentation," or IA. Here, the notion is that the human brain remains the smartest thing around, but has either cybernetic or biotechnological turbochargers. It's important to note that the cyber version of this concept does not require that the embedded/connected computer be anything other than a fancy dumb system -- you wouldn't necessarily have to put up with an AI in your head.

So when Karl says that the Singularity, if it's even possible, wouldn't arrive in nearly enough time to deal with global environmental disasters, he's really only talking about one kind of Singularity. It's this narrowing of terms that leads to the second flaw in his argument.

Karl seems to suggest that only super-intelligent AIs would be able to figure out what to do about an eco-pocalypse. But there's still quite a bit of advancement to be had between the present level of intelligence-related technologies and Singularity-scale technologies -- and that pathway of advancement will almost certainly be of tremendous value in figuring out how to avoid disaster.

This pathway is especially clear when it comes to the two non-AI versions of the Singularity concept. With bio-enhancement, it's easy to find stories about how Ritalin or Adderall or Provigil have become productivity tools in school and in the workplace. To the degree that our sense of "intelligence" depends on a capacity to learn and process new information, these drugs are simple intelligence boosters (ones with potential risks, as the linked articles suggest). While they're simple, they're also indicative of where things are going: our increasing understanding of how the brain functions will very likely lead to more powerful cognitive modifications.

The intelligence-boosting through human-computer connections is even easier to see -- just look in front of you. We're already offloading certain cognitive functions to our computing systems, functions such as memory, math, and increasingly, information analysis. Powerful simulations and petabyte-scale datasets allow us to do things with our brains that would once have been literally unimaginable. That the interface between our brains and our computers requires typing and/or pointing, rather than just thinking, is arguably a benefit rather than a drawback: upgrading is much simpler when there's no surgery involved.

You don't have to believe in godlike super-AIs to see that this kind of intelligence enhancement can lead to some pretty significant results as the systems get more complex, datasets get bigger, connections get faster, and interfaces become ever more usable.

So we have intelligence augmentation through both biochemistry and human-computer interfaces well underway and increasingly powerful, with artificial intelligence on some possible horizon. Let's cast aside the loaded term "Singularity" and just talk about getting smarter. This is happening now, and under nearly any plausible scenario it will keep happening for at least the next decade and a half. Enhanced intelligence alone won't solve global warming and other environmental threats, but it will almost certainly make the solutions we come up with more effective. We could deal with these crises without getting any smarter, to be sure, and we shouldn't depend on getting smarter later as a way of avoiding hard work today. But we should certainly take advantage of whatever new capacities or advantages may emerge.

I still say that the Singularity is not a sustainability strategy, and agree with Karl that it's ludicrous to consider future advances in technology as our only hope. But we should, at the same time, be ready to embrace such advances if they do, in fact, emerge. The situation we face, particularly with regard to climate disruption, is so potentially devastating that we have to be willing to adopt new strategies based on new conditions and opportunities. In the end, the best tool we have for dealing with potential catastrophe is our ability to innovate.