Singularities Enough, and Time


By Jamais Cascio
Open the Future

Posted: Jun 30, 2008

A few people have asked me what I thought of Karl Schroeder’s recent article at Worldchanging, “No Time for the Singularity.”

Karl argues that we can’t count on super-intelligent AIs to save us from environmental disaster, since by the time they’re possible (assuming that they’re possible), things will have gotten so bad that they won’t matter (and/or won’t have any resources available to act, or even persist). It’s a pretty straightforward argument, and echoes pieces I’ve written on parallel themes. In short, my initial reaction was “yeah, of course.”

But giving it a bit more thought, I see that Karl’s argument has a couple of subtle, but important, flaws.

The first is that he makes the assumption nearly every casual discussion of the Singularity concept makes: he defines it as “...within about 25 years, computers will exceed human intelligence and rapidly bootstrap themselves to godlike status.” But if you go back to Vinge’s original piece, you’ll see that he actually suggests four different pathways to a Singularity, only two of which arguably include super-intelligent AI. His four pathways are:

• There may be developed computers that are “awake” and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is “yes, we can”, then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
• Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
• Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
• Biological science may provide means to improve natural human intellect.

The first two depend upon computers gaining self-awareness and bootstrapping themselves into super-intelligence through some handwaved process. People don’t talk much about the Internet “waking up” these days, but talk of artificially intelligent systems remains quite popular. And while the details of how we might get from here to a seemingly intelligent machine grow more sophisticated, there’s still quite a bit of handwaving about how that bootstrapping to super-intelligence would actually take place.

The latter two—computer/human interfaces and biological enhancement—fall into the category of “intelligence augmentation,” or IA. Here, the notion is that the human brain remains the smartest thing around, but has either cybernetic or biotechnological turbochargers. It’s important to note that the cyber version of this concept does not require that the embedded/connected computer be anything other than a fancy dumb system—you wouldn’t necessarily have to put up with an AI in your head.

So when Karl says that the Singularity, if it’s even possible, wouldn’t arrive in nearly enough time to deal with global environmental disasters, he’s really only talking about one kind of Singularity. It’s this narrowing of terms that leads to the second flaw in his argument.

Karl seems to suggest that only super-intelligent AIs would be able to figure out what to do about an eco-pocalypse. But there’s still quite a bit of advancement to be had between the present level of intelligence-related technologies and Singularity-scale technologies—and that pathway of advancement will almost certainly be of tremendous value in figuring out how to avoid disaster.

This pathway is especially clear when it comes to the two non-AI versions of the Singularity concept. With bio-enhancement, it’s easy to find stories about how Ritalin or Adderall or Provigil have become productivity tools in school and in the workplace. To the degree that our sense of “intelligence” depends on a capacity to learn and process new information, these drugs are simple intelligence boosters (ones with potential risks, as the linked articles suggest). While they’re simple, they’re also indicative of where things are going: our increasing understanding of how the brain functions will very likely lead to more powerful cognitive modifications.

The intelligence-boosting through human-computer connections is even easier to see—just look in front of you. We’re already offloading certain cognitive functions to our computing systems, functions such as memory, math, and increasingly, information analysis. Powerful simulations and petabyte-scale datasets allow us to do things with our brains that would once have been literally unimaginable. That the interface between our brains and our computers requires typing and/or pointing, rather than just thinking, is arguably a benefit rather than a drawback: upgrading is much simpler when there’s no surgery involved.

You don’t have to believe in godlike super-AIs to see that this kind of intelligence enhancement can lead to some pretty significant results as the systems get more complex, datasets get bigger, connections get faster, and interfaces become ever more usable.

So we have intelligence augmentation through both biochemistry and human-computer interface well underway and increasingly powerful, with artificial intelligence on some possible horizon. Let’s cast aside the loaded term “Singularity” and just talk about getting smarter. This is happening now, and will under nearly any plausible scenario keep happening for at least the next decade and a half. Enhanced intelligence alone won’t solve global warming and other environmental threats, but it will almost certainly make the solutions we come up with more effective. We could deal with these crises without getting any smarter, to be sure, and we shouldn’t depend on getting smarter later as a way of avoiding hard work today. But we should certainly take advantage of whatever new capacities or advantages may emerge.

I still say that the Singularity is not a sustainability strategy, and agree with Karl that it’s ludicrous to consider future advances in technology as our only hope. But we should at the same time be ready to embrace such advances if they do, in fact, emerge. The situation we face, particularly with regard to climate disruption, is so potentially devastating that we have to be willing to accept new strategies based on new conditions and opportunities. In the end, the best tool we have for dealing with potential catastrophe is our ability to innovate.


Jamais Cascio is a Senior Fellow of the IEET, and a professional futurist. He writes the popular blog Open the Future.


COMMENTS


We can solve, and are solving, the current energy and environmental crisis with technology already available. For example, weak nanotech can manufacture PV whose installed cost yields an ROI greater than bonds or the average stock portfolio gains.

I should also note that those paths are not mutually exclusive.
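
As a rough illustration of the ROI comparison in the comment above, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder (installed cost, annual output, electricity price, bond yield), not a figure taken from the comment or the article:

    # Simple annual ROI of a solar PV installation vs. a bond yield.
    # All values below are hypothetical placeholders for illustration.
    INSTALLED_COST = 20_000.0   # hypothetical installed PV cost, USD
    ANNUAL_KWH = 8_000.0        # hypothetical annual output, kWh
    PRICE_PER_KWH = 0.15        # hypothetical electricity price, USD/kWh
    BOND_YIELD = 0.045          # hypothetical annual bond yield (4.5%)

    annual_savings = ANNUAL_KWH * PRICE_PER_KWH   # value of energy produced per year
    pv_roi = annual_savings / INSTALLED_COST      # simple annual return on installed cost

    print(f"PV simple annual ROI: {pv_roi:.1%}")      # 6.0% with these numbers
    print(f"Bond yield:           {BOND_YIELD:.1%}")  # 4.5%
    print("PV beats bonds" if pv_roi > BOND_YIELD else "Bonds beat PV")

With these placeholder values the PV installation returns 6.0% per year against a 4.5% bond yield; whether the comparison actually favors PV depends entirely on local installation costs, output, and electricity prices.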





“Karl argues that we can’t count on super-intelligent AIs to save us from environmental disaster, since by the time they’re possible (assuming that they’re possible), things will have gotten so bad that they won’t matter”

- I doubt that there is any environmental scenario that is beyond the ability of a true “superintelligence” to sort out - I suspect Karl Schroeder is simply underestimating what is possible with enough intelligence. I tend to think of the outcome of an AI hard takeoff as an agent who can arbitrarily (subject to the laws of physics) re-arrange the atoms of the solar system. This would include the task of reducing the concentration of CO2 in the atmosphere, for example.

But I would agree that we cannot *count* on a superintelligent AI or other form of superintelligence solving our problems: we have no really good way of predicting when such an advance will happen. Most intelligent people who have thought about the situation carefully tend to conclude that a superintelligence of some form is likely by 2100.




