There’s More to Singularity Studies Than Kurzweil
George Dvorsky   Aug 27, 2010   Sentient Developments  

I’m finding myself a bit disturbed these days about how fashionable it has become to hate Ray Kurzweil - because it’s not all about Ray.

It wasn’t too long ago, with the publication of The Age of Spiritual Machines, that he was the cause célèbre of our time. I’m somewhat at a loss to explain what has happened in the public’s mind since then; his ideas certainly haven’t changed all that much. Perhaps it’s a collective impatience with his timelines; the fact that it isn’t 2049 yet has led to disillusionment. Or maybe it’s because people are afraid of buying into a set of predictions that may never come true, a kind of protection against disappointment or looking foolish.
What’s more likely, however, is that his ideas have reached a much wider audience since the release of Spiritual Machines and The Singularity is Near. In the early days his work was picked up by a community that was already primed to accept these sorts of wide-eyed speculations as a valid line of inquiry. These days, everybody and his brother knows about Kurzweil. This has naturally led to an increased chorus of criticism from those who take issue with his thesis, experts and non-experts alike.

As a consequence of this popularity and infamy, Ray has been given a kind of unwarranted ownership over the term ‘Singularity.’ This has proven problematic on several levels, including the fact that his particular definition and description of the technological singularity is probably not the best one. Kurzweil has essentially equated the Singularity with the steady, accelerating growth of all technologies, including intelligence. His definition, along with its rather ambiguous implications, is inconsistent with the going definition used by other Singularity scholars: that of an ‘intelligence explosion’ caused by the positive feedback of recursively improving machine intelligences.

Moreover, and more importantly, Ray Kurzweil is one voice among many in a community of thinkers who have been tackling this problem for over half a century. What’s particularly frustrating these days is that, because Kurzweil has become synonymous with the Singularity concept, and because so many people have been caught in the hate-Ray trend, people are throwing out the Singularity baby with the bathwater while drowning out all other voices. This is not only stupid and unfair, it’s potentially dangerous; Singularity studies may prove crucial to the creation of a survivable future.

Consequently, for those readers new to these ideas and this particular community, I have prepared a short list of key players whose work is worth deeper investigation. Their work extends and complements the work of Ray Kurzweil in many respects. And in some cases they present an entirely different vision altogether. But what matters here is that these are all credible academics and thinkers who have worked or who are working on this important subject.

Please note that this is not meant to be a comprehensive list, so if you or your favorite thinker is not on here just take a chill pill and add a post to the comments section along with some context.

  • John von Neumann: The brilliant Hungarian-American mathematician and computer scientist is regarded as the first person to use the term ‘Singularity’ in describing a future event. Speaking with Stanislaw Ulam in 1958, von Neumann made note of the accelerating progress of technology and constant changes to human life. He felt that this tendency was giving the appearance of our approaching some essential singularity beyond which human affairs, as we know them, could not continue. In this sense, von Neumann’s definition is more a declaration of an event horizon.
  • I. J. Good: One of the first and best definitions of the Singularity was put forth by mathematician I. J. Good. Back in 1965 he wrote of an “intelligence explosion”, suggesting that if machines could even slightly surpass human intellect, they might be able to improve their own designs in ways unforeseen by their designers and thus recursively augment themselves into far greater intelligences. He thought that, while the first set of improvements might be small, machines could quickly become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a Singularity).
  • Marvin Minsky: Inventor and author, Minsky is universally regarded as one of the world’s leading authorities in artificial intelligence. He has made fundamental contributions to the fields of robotics and computer-aided learning technologies. Some of his most notable books include The Society of Mind, Perceptrons, and The Emotion Machine. Ray Kurzweil calls him his most important mentor. Minsky argues that our increasing knowledge of the brain and increasing computer power will eventually intersect, likely leading to machine minds and a potential Singularity.
  • Vernor Vinge: In 1983, science fiction writer Vernor Vinge rekindled interest in Singularity studies by publishing an article about the subject in Omni magazine. Later, in 1993, he expanded on his thoughts in the article, “The Coming Technological Singularity: How to Survive in the Post-Human Era.” He (now famously) wrote, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Inspired by I. J. Good, he argued that superhuman intelligence would be able to enhance itself faster than the humans who created it. He noted that, “When greater-than-human intelligence drives progress, that progress will be much more rapid.” He speculated that this feedback loop of self-improving intelligence could cause large amounts of technological progress within a short period, and that the creation of smarter-than-human intelligence represented a breakdown in humans’ ability to model their future. Pre-dating Kurzweil, Vinge used Moore’s law in an attempt to predict the arrival of artificial intelligence.
  • Hans Moravec: Carnegie Mellon roboticist Hans Moravec is a visionary thinker who is best known for his 1988 book, Mind Children, where he outlines Moore’s law and his predictions about the future of artificial life. Moravec’s primary thesis is that humanity, through the development of robotics and AI, will eventually spawn its own successors (which he predicts will arrive around 2030-2040). He is also the author of Robot: Mere Machine to Transcendent Mind (1998), in which he further refined his ideas. Moravec writes, “It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half-century of development. Indeed, for that reason, many long-time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty.”
  • Robin Hanson: Associate professor of economics at George Mason University, Robin Hanson has taken the “Singularity” term to refer to sharp increases in the exponent of economic growth. He lists the agricultural and industrial revolutions as past “singularities.” Extrapolating from such past events, he proposes that the next economic singularity should increase economic growth between 60 and 250 times. Hanson contends that such an event could be triggered by an innovation that allows for the replacement of virtually all human labor, such as mind uploads and virtually limitless copying.
  • Nick Bostrom: The University of Oxford’s Nick Bostrom [co-founder of the IEET] has done seminal work in this field. In 1998 he published “How Long Before Superintelligence?”, in which he argued that superhuman artificial intelligence would likely emerge within the first third of the 21st century. He reached this conclusion by looking at various factors, including different estimates of the processing power of the human brain, trends in technological advancement, and how fast superintelligence might be developed once there is human-level artificial intelligence.
  • Eliezer Yudkowsky: Artificial intelligence researcher Eliezer Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI). He is the author of “Creating Friendly AI” (2001) and “Levels of Organization in General Intelligence” (2002). Primarily concerned with the Singularity as a potential human-extinction event, Yudkowsky has dedicated his work to advocacy and developing strategies towards creating survivable Singularities.
  • David Chalmers: An important figure in philosophy of mind studies and neuroscience, David Chalmers has a unique take on the Singularity where he argues that it will happen through self-amplifying intelligence. The only requirement, he claims, is that an intelligent machine be able to create an intelligence smarter than itself. The original intelligence itself need not be very smart. The most plausible way, he says, is simulated evolution. Chalmers feels that if we get to above-human intelligence it seems likely it will take place in a simulated world, not in a robot or in our own physical environment.

Like I said, this is a partial list, but it’s a good place to start. Other seminal thinkers include Alan Turing, Alvin Toffler, Eric Drexler, Ben Goertzel, Anders Sandberg, John Smart, Shane Legg, Martin Rees, Stephen Hawking and many, many others. I strongly encourage everyone, including skeptics, to take a deeper look into their work.

And as for all the anti-Kurzweil sentiment, all I can say is that I hope to see it pass. There is no good reason why he, and others, shouldn’t explore this important area. Sure, it may turn out that everyone was wrong and that the future isn’t at all what we expected. But as Enrico Fermi once said, “There’s two possible outcomes: if the result confirms the hypothesis, then you’ve made a discovery. If the result is contrary to the hypothesis, then you’ve made a discovery.”

Regardless of the outcome, let’s make a discovery.

George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9 — where he writes about science, culture, and futurism — and producer of the Sentient Developments blog and podcast. He served for two terms at Humanity+ (formerly the World Transhumanist Association).



COMMENTS

Excellent article, well-stated points.

    Two minor spelling corrections:

        Singuarlity -> Singularity
        Marin Rees -> Martin Rees

Forget it.

I can’t hate all those people.

Do people tend to hate Kurzweil - or just the Singularity idea, which he has investigated in most detail and most promoted? Anyway, only the idea is important.

I know little of Kurzweil’s personality, but he is certainly a genius. Nevertheless, I think he is misguided if he is betting his shirt on an imminent singularity, or even on getting close in his lifetime - surely not close enough to provide his hoped-for immortality, &c.

When computing is no bottleneck - it is already plenty for most of its originally envisaged purposes - other bottlenecks will surely emerge, as they always have in the past. Consider weather or climate prediction. Suppose that, for trivial cost, we can do almost perfect simulations, including repeating them, even billions of times, so that we have accurate estimates of errors. Given their chaotic nature, there will still be very large errors - maybe not much better than today - unless we can dramatically improve physical sampling, especially the number of spatial points. The bottleneck to progress will then be in sensors and their deployment. Even if they, too, improve at an exponential of an exponential rate, their rates may be many orders of magnitude slower than computing to date, which is surely exceptional, e.g., compared to transport.

If or when the bottleneck to progress is not sensors and their deployment, it would surely be something else. If not technological, then maybe social. How about political bottlenecks? Progress there seems little better than linear; maybe already close to its zenith! grin

It may be true that some anxieties have perhaps led to frustrations that progress appears to be slow and not as fast as we wish (although this is not really the case). Moreover, everyone expects Kurzweil to always have something new to offer, and so criticises his recent presentations as “heard it all before”? Yet a good salesman should maybe always have at least something a little new to offer. However, in reaching wider audiences, restating his ideas is absolutely necessary. He has been criticised most recently regarding his speculations on reverse engineering the human brain, which I feel has been rather harsh; yet I would like to hear more of his views on tapping solar energy and transforming these ideas into reality.

Great article and thanks for the info.

What a great post.  For the record, I am doing a documentary on the singularity and approached Ray, as best I could, only to be rebuffed through his staff.  His own new documentary, which I have not seen but am looking forward to seeing, may explain that, but the point is the same: are we talking about a description of a natural phenomenon, like a hurricane, or talking about the many strategies people have concocted to benefit from it?

Ray excels at self promotion and benefiting from the concept (I bought some vitamins from him myself) but that has nothing to do with the hurricane— that’s like selling flashlights because you know a hurricane is coming.  That is what I think most people resent about Ray.

And most of the others noted here are not in the game to promote themselves or make a dollar from it—they are scientists who are studying the hurricane coming.

Also— you are spot on regarding the varying “point” of the singularity— is it when computers are smarter than humans (I think Ray leans this way) or when our life expectancy exceeds the passage of time (life expectancy rises 1.1 years in a year) and we reach “immortality”?  Based on the stock market, one could say the singularity for planet earth is already happening in financial terms, certainly, we’re on the “knee”.

Anyway, won’t ramble on—great post, great comments, keep it coming.  I wish I knew how to drop my trailer on this post, but give it a search and you’ll find it—

Regardless of the outcome, let’s make a discovery.

Amen!


Now I wish people were more open-minded about all the possibilities that exist out there in the vast ocean of information.

I have always argued for defining the Singularity as the point at which predictive models fail because of the discovery or invention of something which alters the developmental path of humanity in ways that no-one “pre-singularity” could anticipate or predict.

Fire was one such Singularity, as was language, agriculture, writing, the printing press and the Industrial Revolution. We started a Singularity with the invention of computers, which is still ongoing, and we are heading into another one this decade as many other technologies begin maturing, such as VR, THz computers, bioprinting, and numerous other developments, all tied directly into steadily increasing computer power accelerating knowledge acquisition.

And this will lead to many of those other “Singularities” mentioned above.

Many people think Ray is too optimistic, but I don’t, mainly because I am not looking at a few “strong” developments, like AI, Robotics, Genetics or Nanotech; I am looking at ten million “weak” developments that are racing at us at breakneck speed, and will change the world in ways that we can barely begin to imagine.

For example, I recently wrote an article for H+ magazine on the potential use of Quantum Dots to create displays that are capable of being a display and a camera simultaneously, with individual pixels smaller than the human eye can discern, and how, with the “liquid ink” technologies DOW is using to “print” OLED screens, such Camera/Displays could be applied to nearly any surface, providing us the ability to dictate an object’s appearance as easily as you can in a VR world like SecondLife.

But such a display could also grant us such “superpowers” as invisibility, IR and UV vision, and Lidar vision, not to mention enabling fully immersive VR/AR displays.

Compared to “A.I.” it’s not as “big” but it’s going to make enormous changes to our day to day reality, and it’s just one of millions of small advances that add up to a tsunami of changes over the next decade.

So, while I respect Ray, I can’t call him an optimist, and to be honest, I think he’s focused so much attention on the “long range final results” that he’s made too many people ignore the “short term developmental steps” that lead to those long range final results.

The public is fickle, and always has been.  And when they get impatient or disillusioned, it’s always easier to find a single figure to place the blame.  Ray is one of the best-known transhumanist prognosticators, so he automatically gets the scorn.

I most certainly do not hate Kurzweil, but I did post some virulent criticisms of him. Ray is losing track in some key areas, and the problem is not the acuity of his vision (the guy is on track for a Nobel prize).

Rather, I am bothered by his annoying monotone style. This is an exercise in alienating people. It is frankly stupid and offensive. You can argue I am debating the choice of his tie, but I do take offense. Ray’s choices affect us all. What he does affects us all. If he acts like a buffoon on stage and sooner or later becomes a punchline drum-roll joke, over something he could have done something about ....

If he had hired a press agent and a somewhat competent stand-up comedian in time (about a year ago) and polished up his presentation, then I would be breathing easier by now. But this is an accident waiting to happen, mark my words.

Ray should be more… entertaining…. get more diverse material…

Though I quote Kurzweil often in my newspaper column, I think he does weak interviews and often speaks defensively even when he’s not being attacked.

Also, announcing that he gulps down over 200 pills daily in order to extend his life turns off many. At 79 years of age, I have a life extension program that with any luck, could keep me patched up until tomorrow’s technologies can give me a boost.

However, I respect the guy and wish him the best.

Ray’s personal lifestyle choices are NONE of my business. His choice in intake, his sexual preferences, his clothes style, even his presentation - seriously, it is none of my business.  He remains the main icon of this movement, the main spokesperson of this movement and one of the visionaries of this day and age. And yes, he is due for a Nobel prize, if he survives for 20 odd years.  So his chances are fair for fortune and glory.

But what is the practical reality? The practical reality is he alienates people. Ray is a perfectionist - and it isn’t just one person saying this - I have heard this from five people now. In one case I overheard a phone call I was NOT supposed to overhear between someone who worked with him and someone else, and OH boy - it may be private information, but people don’t like the guy close up. He has a serious likeability problem going on. Of course, again, this is none of my business. And I am being rude and invasive and I should keep my insensitive big mouth shut.

But I am just speculating creatively here. It’s a matter of time before someone imitates and parodies him. Look at the guy. He is an accident waiting to happen and he is precisely the kind of ivory tower intellectual that will respond with the kind of utter confused bewilderment we have seen so often from apparatchiks when taken apart publicly.  He’ll just stare and go ‘but ... but ... but’.

I certainly didn’t miss it when in the last installment of The Hulk they needed a ‘mad, manic, irresponsible scientist’ and they clearly were influenced by… someone…

http://www.squaremans.com/images/Hulk2.4.jpg

remember him? Mister blue… seriously…

http://machineslikeus.com/images/people/de-garis_hugo.jpg

There is something subconscious at work here. I hope. What happens if this becomes conscious? What happens if the stand-up comedians start doing this intentionally?  Ray has a bullseye painted on his head.

I wish more of us were as dedicated, smart, and productive as Raymond… but instead we’re just… inconsequential.

May he outlive us all.

Of course. Ray will one day prosper and be happy, a respected monarch on the Moon, Professor at the Kurzweil solar system university of Galactic progress and Humanity, at a ripe young age of 1680, before he retires from his seat to migrate to beta cassiopeia.

No, and that was not sarcastic. This man, if he lives, won’t go away. He is 100% right and worthy of love and adoration. But damn, a LOT can happen until then, and a lot can go wrong before then. Please wake up and be a bit proactive.

@Khannea

I wrote Ray not too long ago to discuss how VR should have been mentioned far more prominently, as it was going to be the “next major development” that would fuel the development of GN and R, and got an “I mentioned it here and here and here” reply.  When I wrote back to explain why it deserved far more than just an “oh yeah and there will be VR” mention, I got ignored.

I’ve also sought to ask Ray for his views on the development of THz computers and whether they affect his timeline predictions for computing power, as based on current graphene research it seems likely we will see THz computers this decade, a multi-order-of-magnitude jump that far exceeds the “Moore’s Law” curve, but was also ignored.

I am not proposing rebellion against Ray. Ray is not our King or Priest or Savior. He is a mortal, fallible human being doing a difficult split between verifiable science and speculative media. He is just vulnerable and exposed.

I am not proposing making him more exposed. If anything he needs support and hints and help.  I just hope he is smart enough to see his flaws, but I fear he is missing some of them.

And me, I am myself so riddled with flaws and am the LAST person to lead crusades or challenge or accuse. Only thing I can morally do is express concern. That’s all.  Maybe some distaste.

That is all I will say about it.

One more typo: “I. G. Good” -> “I. J. Good”

