Bruce Sterling Thinks Artificial Intelligence Has Jumped the Shark


By George Dvorsky
io9

Posted: Jan 19, 2013

Bruce Sterling wrote influential works like Schismatrix and Islands in the Net, plus he practically invented cyberpunk (with all due respect, of course, to William Gibson and Rudy Rucker). We are serious fans of his work. And if his recent comments about the potential risks of greater-than-human artificial intelligence — or lack thereof — are any indication, he's itching to start a giant fight among futurists.

Sterling made his remarks in this year's edition of Edge's annual Big Question, in which editor John Brockman asked his coterie of experts to tell us what we should be most worried about. In response, Sterling penned a four-paragraph piece arguing that we shouldn't fear the onset of super AI because a "Singularity has no business model." He writes:

This aging sci-fi notion has lost its conceptual teeth. Plus, its chief evangelist, visionary Ray Kurzweil, just got a straight engineering job with Google. Despite its weird fondness for AR goggles and self-driving cars, Google is not going to finance any eschatological cataclysm in which superhuman intelligence abruptly ends the human era. Google is a firmly commercial enterprise.

It's just not happening. All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We're no closer to "self-aware" machines than we were in the remote 1960s. Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s "minds on nonbiological substrates" that might allegedly have the "computational power of a human brain." A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there's no there there.

So, as a Pope once remarked, "Be not afraid." We're getting what Vinge predicted would happen without a Singularity, which is "a glut of technical riches never properly absorbed." There's all kinds of mayhem in that junkyard, but the AI Rapture isn't lurking in there. It's no more to be fretted about than a landing of Martian tripods.

In response, a number of commentators spoke up.

Tyler Cowen of Marginal Revolution reposted Sterling's article, prompting a healthy and heated discussion. Over at the New Yorker, Gary Marcus noted that Sterling's "optimism has little to do with reality." And Kevin Drum of Mother Jones wrote, "I'm genuinely stonkered by this. If we never achieve true AI, it will be because it's technologically beyond our reach for some reason. It sure won't be because nobody's interested and nobody sees any way to make money out of it."

Now, it's entirely possible that Sterling is trolling us, but I doubt it. Rather, his take on the Singularity, and how it will come about, is simply skewed. As noted, there most certainly is a business model for something like this, and we're already seeing the seeds begin to sprout.

And indeed, one leading artificial intelligence researcher has estimated that there's roughly a trillion dollars to be made from the shift from keyword search to genuine AI question-answering on the web alone.

Sterling's misconception about the Singularity is a frustratingly common one: the mistaken notion that it will arise as the result of efforts to create "self-aware" machines that mimic the human brain. Such is hardly the case. Rather, it's about the development of highly specialized and efficient intelligence systems — systems that will eventually operate outside of human comprehension and control.

Already today, machines like IBM's Watson (which defeated the world's best Jeopardy! players) and computers that trade stocks at millisecond speeds are precursors to this. And it's very much in the interests of private corporations to develop these technologies, whether to program kiosk machines at corner stores, create the next iteration of Apple's Siri, or build the first generation of domestic robots.

And indeed, it's no coincidence that Google recently hired Ray Kurzweil — author of The Singularity is Near — to help it build a rival to Siri.

Moreover, the U.S. military, as it continues to push its technologies forward, will most certainly be interested in creating AI systems that work at speeds and computational strengths far beyond what humans are capable of. The day is coming when human decision-making will be removed from the battlefield.

And does anyone seriously believe that the Pentagon will allow other countries to get a head start on any of this? The term ‘arms race' most certainly seems to apply — especially considering that AI can be used to develop other advanced forms of military technologies.

Finally, there's the potential for non-business and non-military interests to spawn super AI. Neuroscientists, cognitive scientists, and computer scientists are all hacking away at the problem — and they may very well be the first to reach the finish line. Human cognition and its relation to AI is still an unsolved problem for scientists, and for that reason they will continue to push the envelope of what's technically possible.

I'll give the last word to Kevin Drum:

As for the Singularity, a hypothesized future of runaway technological advancement caused by better and better AI, who knows? It might be the end result of AI, or it might not. But if it happens, it will be a natural evolution of AI, not something that happens because someone came up with a business model for it.

Image: Bruce Sterling/OARN.


George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9 — where he writes about science, culture, and futurism — and producer of the Sentient Developments blog and podcast. He served two terms at Humanity+ (formerly the World Transhumanist Association).


COMMENTS


Well, yes and no.

Here are the nuggets of truth at the heart of what Bruce Sterling says.

First, there is a lot of talk (which, George, you repeat) about how a general tide of evolution in AI will eventually yield systems that will be non-human-like and non-self-aware, but nevertheless so intelligent that they are beyond human ken and control.  This. Is. A. Myth!  It is a story that AI researchers tell themselves about progress, and it is the same story that has been told ever since the Minsky crowd waxed lyrical about how it would only take a decade or so to build a computer that was as intelligent as a human.

I say that from a position on the inside:  I have been doing AI research since the 80s, and I have clocked a serious amount of time as both an AI/software engineer and a cognitive scientist.  AI researchers have been hacking away, using basically the same stunted techniques, for decades without making any substantial progress.  And before you jump on that statement (!) let me add:  “substantial” progress means something more than mopping up the low-hanging fruit.  Siri and Watson are just 80s technology running on faster machines, and with a few more bells and whistles, but the core of what they do contains the same failure rates and the same failure modes that we already saw in the 80s.  Watson is not a supersonic jetliner, compared to the horse—it is just a horse on performance-enhancing drugs, with legs replaced by carbon-fiber springs.  AI as a field has been dead for decades:  it’s just that the corpse hasn’t stopped twitching yet.

As for the “no business model” idea:  I think Sterling nailed it.  Do you know what happens when researchers ask for a small investment to explore radical new ideas in AI?  Radical new ideas that might break the stalemate and unleash the future?  Nothing.  Nada.  Deadsville.  (Okay, I speak from bitter experience.  Indulge me, yeah?)  Everyone, but everyone, wants to support incremental progress and fashionable trends.  Even DARPA, which makes a point of specifically targeting “high-risk/high-payoff technologies,” does not really do that in practice:  instead, it looks for track-record people who work in a gold-plated rut carved out by their postdocs, who spin some ideas until they look crazy enough that DARPA will throw some money at them (again).

You ask a rhetorical question that I’d like to answer: “And does anyone seriously believe that the Pentagon will allow other countries to get a head start on any of this?”

Umm, yes.  The Pentagon is a Bear of Little Brain.  Sure, it would never in a thousand years *intend* to let other countries get a head start on any of this, but that won’t stop it trying. grin

Final word.  Actually, I think that progress can be had, and that Sterling can be wrong.  I think you are right to say that by pushing the envelope (especially in the human-cognition/AI area, which is a barren wasteland at the moment) things can be made to happen.  But, see, when I look at the landscape of the research that is actually going on out there, I don’t see how anyone could come to such a positive conclusion using *that* for evidence.  Me, I can see a silver lining if I look at my own research and that of a tiny handful of other renegades out there, but I don’t see how anyone else could possibly come to such a positive conclusion, when what they seem to be doing is staring directly at the main rump of what is happening in AI at the moment, without seeming to even know that there are some unfunded renegades on the loose.

But, hey ho.  Let’s wait another ten years and see if I was right.





@Richard Loosemore

Could you direct me to a paper (or papers) that you wrote that best represents the path you believe should be taken in AI?





@ Richard..

So what do you think about Kurzweil’s new corporate collaboration with Google?

Can superfast, internet-driven data mining and AI search/interaction/participation simulate an intelligent, responsive system that reaps real benefits for global society?

It's difficult to define intelligence. Holistically, is it really any more than the speed of data processing needed to "weed out" invalid and illogical/irrational responses, with creativity as the "swerve balls" that emerge in chaotic systems?

I would venture a guess that biological brains are no more and no less than complex neural machine processing.

How complex does the "derivative" algorithm need to be at the lowest level? Is it not complexity from which the phenomenon of intelligence emerges?





Hey, Sterling's bit sounds remarkably close to what I discussed much more extensively in the article "Artificious Intelligences", recently published, inter alia, here: http://hplusmagazine.com/2012/11/20/artificious-intelligences/





Sterling suffers from a problem common to many people in society.  They are confused by the illusion of dualism into a false belief that humans hold some sort of "god-like" power called "consciousness."

Humans hold no such magical powers.  The insignificant powers we do hold are already being duplicated by AI researchers.  Not only is Sterling dead wrong in his belief that we are getting "no closer" to self-aware systems, our systems are already fully self-aware.

It's this false belief, so common in society, that "real" humans have a magical power no one has duplicated in a computer that makes people like Sterling so unaware of what's happening at places like Google.

The problem with people like Sterling is not that he is unaware of what our computers have achieved.  It's that he is unaware of just how insignificant human mental powers really are.  Like most of us humans, he has a highly inflated ego when it comes to understanding our place in the universe.

Our machines have been replacing humans in the work force for a long time now, and as they continue their exponential climb in ability, while our physical and mental powers stand still, they will soon replace all of us.  This has been happening for a long time, but we are now reaching the point where humans are starting to lose the race against the machines.  The machines are advancing faster, and taking over more jobs, faster than humans can retrain and find new ones.  Anyone who doesn't understand this, and who suggests "there's no business model for AI," is going to be left in the dust.





Hah….

Richard Loosemore and I wrote an article a couple years ago called “Why an Intelligence Explosion is Probable”: http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/

We considered many counter-arguments there and refuted them ... but we didn't think of the counter-argument Sterling raises here, i.e. that advanced human-like AGI won't occur because there's no profit to be made in it…

With all due respect for Sterling’s literary prowess, I have to say that’s just kinda dumb….

No economic value in home service robots, elder care or nanny robots that can really do what humans do, and empathize with and understand humans?  No economic value in AGI scientists that can hold the totality of scientific datasets online in their minds and make hypotheses and analyses based thereupon?  No economic value in something Siri-like that really works?  Gimme a break!! ....  Sterling reminds me of some know-it-all biz consultant in 1975 pontificating that there's no business model in the Internet or mobile phones…

No one company, not even Google, is likely to build the Singularity—any more than one company built the Internet, or the world’s mobile phone infrastructure, or the Industrial Revolution, or modern mathematics, etc….  The Singularity is going to emerge from the inter-reinforcing activities of a host of different actors pursuing their own goals, some mainly economically motivated and some not….  Explicit AGI designs like, perhaps, my team’s OpenCog system, will be part of the picture…

People who whine that AGI is never going to happen because the AI field has been trying and failing since the 1960s really make me laugh.  Get some historical perspective, folks.  That's 50-60 years, which is a blink of the eye historically.  The hardware of the 1960s sucked by the standards of my iPhone….

Sterling is a very good SF writer—but remember, SF writers sell books by being interesting, not by being right…

—Ben Goertzel





@Michael Bone.  The paper “Complex Systems, Artificial Intelligence and Theoretical Psychology” is probably the one you want.  It can be found at http://richardloosemore.com/docs/2007_ComplexSystems_rpwl.pdf





Sterling may just be into provocation, but I think a more interesting interpretation of what he says is that there is no (great, obvious, high-priority) business case for the development of persuasively anthropomorphic AIs.

*Of course* there is a business case for increasingly intelligent computers. Or for increasingly intelligent humans, for that matter.

In fact, I maintain in the article indicated above that *this* is both more strategic and more interesting than manufacturing Turing-qualified machines - something that Wolfram, e.g., persuasively suggests could be done with any universal computing device at all, including the original PC, no matter how "stupid", if you allow for unlimited memory and unlimitedly slow responsiveness.

In fact, besides the intrinsic scientific interest of creating the latter systems "for the sake of it", the only immediate application I can think of for them is the emulation of existing humans with sufficient accuracy to be considered by the individual concerned as a metaphor of spiritual immortality - see under "mind-uploading".





Stefano Vaj - I think there actually are many very good business cases for the development of anthropomorphic AIs.  Though I do agree that there are far more business cases for smart AIs that replace our mental abilities without seeming to be human, there are also plenty of cases where human-like behaviors from our machines will be the source of profits.

To start with, it’s important to build some AIs with human bodies so they can operate in environments built for humans which means being able to walk through buildings, ride elevators, climb stairs and ladders, sit in our chairs, ride in our cars, drive our cars, operate all the tools made for humans, including operating construction equipment, etc.

Though many of the tools currently built to be operated by humans will be redesigned in the age of AI, there will always remain a need for AIs to operate within environments built for humans, to perform services, and to interact with humans.  So a humanoid body that can perform the tasks of a human will always be required - which means there will be a business model for it.

But in addition, humans like to interact with humans.  When I buy a cup of coffee, I like to have an idle chat with the person behind the counter about how their day is going.  I like it when they ask me how my life is going.  I like it when they notice I did something like get a haircut or buy a new jacket.  We will be able to make coffee-dispensing machines that have no human qualities at all - we just tell them what we want, and they give us the coffee.  But a machine that knows who we are, and can chat with us as a human would, will always be more valuable than a machine with no ability to understand us and say a kind thing or two to us.  There is a business model in having machines that understand us, and our needs, in very human-like ways.

And of course, there will be a huge market for highly anthropomorphic AI prostitutes.  The business model for that is obvious.

What Sterling got right is that there is no business model in making machines that will want to take control of the world away from us humans.  There is no business model for that.





Curt Welch

I don’t get why people would want to replace human employees with AIs for the qualities human employees already have, other than their bosses not having to pay AI employees.  I guess this is where the fear of machines taking our jobs comes into play.





Curt Welch: I am quite in agreement with what you say.

But by “anthropomorphic AIs” in my article I do not refer to human-shaped robots - yup, there are many conceivable uses for them, also given that many things around are made to be employed by humans… - but to computer programs emulating agency, Darwinian behaviours, consciousness, Turing-test interacting abilities, etc.

Now, such things are often an annoyance even in human employees or contractors. Why would I implement them in a robot or a computer program?




