How long until human-level AI?
Ben Goertzel and Seth Baum | Sep 19, 2010

Experts on artificial general intelligence provide estimates for the future of AGI.


Summary:

The development of human-level AI has been a core goal of the AI field since its inception, though at present it occupies only a fraction of the field’s efforts. To help understand the viability of this goal, this article presents an assessment of expert opinions regarding human-level AI research conducted at AGI-09, a conference for this AI specialty.

We found that the experts strongly disagree with each other on certain matters, such as the timing and ordering of key milestones. However, most experts expect human-level AI to be reached within the coming decades, and all assign at least some probability to key milestones being reached within that time. Furthermore, a majority of the experts surveyed favor an integrative approach to human-level AI rather than an approach centered on a single technique. Finally, experts are skeptical about the impact of massive research funding, especially if it is concentrated in relatively few approaches. These results suggest that the possibility of achieving human-level AI in the near term should be given serious consideration.

“How long until human-level AI? Results from an expert assessment,” by Seth D. Baum, Ben Goertzel, and Ted G. Goertzel. To be published in Technological Forecasting & Social Change.

Click here to view a full pre-print of the article (pdf).

Non-technical summary by Ben Goertzel, Seth Baum, and Ted Goertzel: “How long till human-level AI? What do the experts say?” h+ Magazine, February 5, 2010.

Blog discussion at Moral Machines.

Ben Goertzel, Ph.D., is a Fellow of the IEET, founder and CEO of two computer science firms, Novamente and Biomind, and founder of the non-profit Artificial General Intelligence Research Institute (agiri.org).



COMMENTS

First, I want to point out that the common consensus thirty years ago was that a computer would never beat the best human chess player.  WRONG.

Second, I would like to point out that the experts surveyed almost certainly have zero knowledge of top-secret government AI programs.

Third, various definitions of AI exist.  The one I prefer is that human-level AI means a computer that excels beyond human ability, which means AI already exists in many areas of human ability (such as chess).

Finally, I have found that even people who are familiar with AI and advanced programming techniques are “homo-centric” and are biased against computer-based superiority.  In fact, I have found that experts are generally biased against the superiority of other experts as well.  Therefore, until an AI (or another expert) is demonstrably eating their lunch, experts will generally exhibit skepticism toward anything that challenges their superiority.

Life is about pain, and living is how we manage the pain; I mean amnesia and memory. We select; we are ethical beings. Our body sends us a mass of messages beyond our capacity to interpret, and every nanosecond we must select what we perceive, what we prefer to feel or to know.
We would die in agony if we had to accept every single pain message from our individual cells.
Maybe a full human mind can be re-created in 10 years (as Kurzweil asserts), but not a full artificial human brain. There is a big difference between mind and human intelligence on the one hand and a simple sentient algorithm (human or not) on the other. This is my humble opinion. Marzio.

The above comment does not pass the Turing test.

