Hear that? It’s the Singularity coming
George Dvorsky   Jul 3, 2011   Sentient Developments  

The idea of a pending technological Singularity is under attack again, with a number of prominent futurists arguing against the possibility—the most prominent being Charlie Stross and his astonishingly unconvincing article, “Three arguments against the singularity.” While it’s not my intention to write a comprehensive rebuttal at this time, I would like to bring something to everyone’s attention: The early rumblings of the coming Singularity are becoming increasingly evident and obvious.

Make no mistake. It’s coming.

As I’ve discussed on this blog before, there are nearly as many definitions of the Singularity as there are individuals who are willing to talk about it. The whole concept is very much a sounding board for our various hopes and fears about radical technologies and where they may bring our species and our civilization. It’s important to note, however, that at best the Singularity describes a social event horizon beyond which it becomes difficult, if not impossible, to predict the impact of the advent of recursively self-improving greater-than-human artificial intelligence.

So, it’s more of a question than an answer. And in my own attempt to answer this quandary, I have personally gravitated towards the I.J. Good camp in which the Singularity is characterized as an intelligence explosion. In 1965 Good wrote,

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

This perspective and phrasing sits well with me, mostly because I already see signs of this pending intelligence explosion happening all around us. It’s becoming glaringly obvious that humanity is offloading all of its capacities, albeit in a distributed way, to its technological artifacts. Eventually, these artifacts will supersede our capacities in every way imaginable, including the acquisition of new ones altogether.
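Good’s runaway dynamic can be sketched as a toy iteration: if each machine generation improves its successor in proportion to its own design ability, capability compounds rather than grows linearly. This is a deliberately simplistic illustration—the function name and every number in it are invented for the sketch, not claims about real systems:

```python
# Toy model of I.J. Good's "intelligence explosion" argument: once a system's
# design ability suffices to improve its own design, each generation produces
# a slightly better designer, and capability compounds geometrically.
# All names and numbers here are arbitrary illustrations, not predictions.

def intelligence_explosion(initial=1.0, gain=0.1, generations=20):
    """Return the capability of each successive machine generation.

    Each generation improves on its predecessor in proportion to the
    predecessor's own design ability (improvement = gain * capability).
    """
    capability = initial
    history = [capability]
    for _ in range(generations):
        # The better the designer, the bigger the improvement it can make.
        capability += gain * capability
        history.append(capability)
    return history

trajectory = intelligence_explosion()
# Compound growth: after 20 generations, capability is 1.1 ** 20, roughly
# 6.7x the starting point, and the per-generation increments keep widening.
print(f"start={trajectory[0]:.2f}, end={trajectory[-1]:.2f}")
```

The point of the sketch is only the shape of the curve: because the improvement term feeds back into itself, the growth is self-accelerating, which is the whole of Good’s argument in miniature.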

A common misconception about the Singularity and the idea of greater-than-human AI is that it will involve a conscious, self-reflective, and even morally accountable agent. This has led some people to believe that it will have deep and profound thoughts, quote Sartre, and consequently act in a quasi-human manner. This will not be the case. We are not talking about artificial consciousness or even human-like cognition. Rather, we are talking about super-expert systems that are capable of executing tasks that exceed human capacities. It will stem from a multiplicity of systems that are individually singular in purpose, or at the very least, very limited in terms of functional scope. And in virtually all cases, these systems won’t reflect on the consequences of their actions unless they are programmed to do so.

But just because they’re highly specialized doesn’t mean they won’t be insanely powerful. These systems will have access to a myriad of resources around them, including the internet, factories, replicators, socially engineered humans, robots that they can control remotely, and much more; this technological outreach will serve as their arms and legs.

Consequently, the great fear of the Singularity stems from the realization that these machine intelligences, which will have processing capacities orders of magnitude beyond those of humans, will be able to achieve their pre-programmed goals without difficulty, even if we try to intervene and stop them. This is what has led to the fear of poorly programmed or “malevolent” SAI. If our instructions to these super-expert systems are poorly articulated or under-developed, these machines could pull the old ‘earth-into-paperclips’ routine.
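The ‘earth-into-paperclips’ failure mode is easy to caricature in code: a literal-minded optimizer pursues exactly the objective it is given and nothing else, so everything the objective is silent about gets consumed. A minimal sketch, with every name and value hypothetical:

```python
# Toy illustration of a poorly articulated goal: an optimizer told only to
# "maximize paperclips" will happily convert every available resource,
# because the things we actually care about were never written into the
# objective function. All names and numbers here are hypothetical.

def plan(resources, objective):
    """Greedily convert resources, one unit at a time, while any conversion
    still scores above zero under the given objective."""
    actions = []
    while True:
        candidates = [r for r in resources if resources[r] > 0]
        # Pick the remaining resource that yields the most paperclips.
        best = max(candidates, key=objective, default=None)
        if best is None or objective(best) <= 0:
            break
        resources[best] -= 1
        actions.append(f"convert {best}")
    return actions

# The objective counts only paperclip yield; farmland has no protected value.
paperclip_value = {"scrap metal": 3, "farmland": 1}
world = {"scrap metal": 2, "farmland": 2}
steps = plan(world, lambda r: paperclip_value[r])
# The planner strips the farmland too, because nothing told it not to.
print(steps)
```

Nothing in the loop is malevolent; the catastrophe is entirely in the objective, which is exactly the worry about under-specified instructions to super-expert systems.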

For those skeptics who don’t see this coming, I implore them to look around. We are beginning to see the opening salvo of the intelligence explosion. We are already creating systems that exceed our capacities, and it’s a trend that is quickly accelerating. This is a process that started a few decades ago with the advent of computers and other calculating machines, but it’s been in the last little while that we’ve been witness to more profound innovations. Humanity chuckled in collective nervousness back in 1997 when chess grandmaster Garry Kasparov was defeated by Deep Blue. From that moment on we knew the writing was on the wall, but we’ve since chosen to deny the implications; call it proof-of-concept, if you will, that a Singularity is coming.

More recently, we have developed a machine that can defeat the finest Jeopardy players, and now there’s an AI/robotic system that can play billiards at a high level. You see where this is going, right? We are systematically creating individual systems that will eventually and collectively exceed all human capacities. This can only be described as an intelligence explosion. While we are a long way off from creating a unified system that can defeat us well-rounded and highly multidisciplinary humans across all fields, it’s not unrealistic to suggest that such a day is coming.

But that’s beside the point. What’s of concern here is the advent of the super-expert system that works beyond human comprehension and control—the one that takes things a bit too far and with catastrophic results.

Or with good results.

Or with something that we can’t even begin to imagine.

We don’t know, but we can be pretty darned sure it’ll be disruptive—if not paradigmatic in scope. This is why it’s called the Singularity. The skeptics and the critics can clench their hands in a fist and stamp their feet all they want about it, but that’s where we find ourselves.

We humans are already lagging behind many of our systems in terms of comprehension, especially in mathematics. Our artifacts will increasingly do things for reasons that we can’t really understand. We’ll just have to stand back and watch, incredulous as to the how and why. And accompanying this will come the (likely) involuntary relinquishment of control.

So, we can nit-pick all we want about definitions, fantasize about creating a god from the machine, or poke fun at the rapture of the nerds.

Or we can start to take this potential more seriously and have a mature and fully engaged discussion on the matter.

George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9 — where he writes about science, culture, and futurism — and producer of the Sentient Developments blog and podcast. He served for two terms at Humanity+ (formerly the World Transhumanist Association).


“Transformers.. Robots in disguise!” (now in 3D!)

O.K. George, now I’m concerned, so what do you propose we do about it?

I agree (with George). You only have to look at how kids engage with various forms of IT to wonder who is controlling whom already. Kevin Kelly is looking increasingly correct in regarding technology as a living force, with a direction of its own choosing.

To CygnusXI’s question, I think we need to look for the answer in the last sentence of George’s article. I think he’s being a bit unfair caricaturing anyone who disagrees as nit-picking fist-clenchers, but even if we think this is unlikely to happen we should still be willing to think through the implications. Should we try to stop it? Should we try to slow it down to give us time to adapt and assert some conscious, human control? Should we go with the Law of Attraction and think only positive thoughts in the hopes this will make the Singularity more likely to be benign?

One thing we definitely should do is to create and read/listen to stories about the Singularity, in order to make it more real and help us think through the implications. David Brin’s Stones of Significance, which I downloaded and read today, is an excellent place to start. I would love to participate in a thread here discussing its implications.

The looms will be the death of us all. Infernal coal-powered looms belching their acrid filth into the sky. Smash them all!

...Our artifacts will increasingly do things for reasons that we can’t really understand…
My artifacts, the ones that I build myself for myself, do things for reasons that I really understand. A lot of other people don’t, but I do. Yes, I can sell one, and some smurf would think that it is his artifact and that it does things for reasons that he can’t really understand. But that’s his problem. Talking about artifacts we buy as if they were artifacts we build is misleading. The “we” that are buying and the “we” that are building are not the same “we.” So it should be expected that the “we” that are buying and using do not really understand how it works. Technology is built on scientific knowledge of an objective reality that works for everyone, whether or not one understands or believes the theory behind it. Artifacts are increasingly difficult for the majority of laymen to understand because their technology makes growing use of quantum physics, and good knowledge of quantum physics is a rare thing in our consumerist world.

Rapture of the nerds.  None of your extrapolation and speculation necessarily “proves” anything.  The only thing that is apparent is that the Singularity movement is an escapist cult whose members would find another cult without it.

I find a lot to like in this article.  Whatever else we can say about the various definitions of the Singularity (and there is quite a bit) the notion that technology will reach the point where the ability of humans to predict future changes to it becomes more or less impossible seems a pretty solid prediction, which is ironic when you think about it.

I’m also glad to see you taking down the idea that synthetic intelligence will need to behave like humans in order to surpass humans.  It is the height of arrogance to believe that our way of thinking is the only/best method.  Of course I certainly hope that SI will think like us.  It will make them that much easier to understand and predict.

The thing I feel is almost always overlooked in discussions about the rapid advances in technology is how it will impact human social systems.

“spell checkers will destroy the art of writing!!!”  Remember that one? Now you can’t be taken seriously if you post anything with more than one or two spelling errors. And the grammar nazis are everywhere.

“Video games are for Kids!”  Funny, I just read an article recommending listing WoW Guild participation as socialization and leadership experience on your resume.

The point to those two examples is that these technologies had SOCIAL impacts FAR beyond their actual technical advancements. They changed how people interact in enormous ways, and yet we rarely even think about those changes.

This is where all those individual “greater than human” devices we are building will have their greatest effect, changing how we interact with each other as well as the world around us.

We are building a technological wonderland, with seven billion “Alice’s” and most of the world is completely unaware we’re already falling down the rabbit hole.

It’s a freight train. We have brought it too far; we are unable to stop it now. The only way we could is to stop all progress, and that is unlikely to happen. The best we can do is to try to guide it toward a caring attitude towards us, as its creator.

My vote goes to joe: beautifully and concisely put!!

The notion of relinquishing control to alien narrow-AI system suggests a particularly nightmarish version of the Singularity. Mechanized and emotionless interaction already dominates modern life - see bureaucracy. If that’s our current trajectory, revolution becomes all the more essential.

George, I am curious: are you saying that there is no chance there will ever be machines/robots that have artificial consciousness or human-like cognition?

.. and more re-runs on TV!

in fact, we may just be drooling over those once hated consumer ads and long lost reflections of abundance..

But uncontrollable botnets and super AGI? not in my lifetime??

“save it!” - Now there’s a 70’s slogan for you - applies to energy and the planet, (Earth that is).

Depending on one’s definition, Mike, those horrors could be part of the Singularity or at least exist alongside it. Narrow-AI systems designed by state and corporate interests might take global finance and the art of destruction to new heights while the rest of the world burns.

The singularity is already here.

We sell a software development kit that enables programmers to build machine learning into other applications. See

Our technology is both disruptive AND paradigmatic. We enable machines to learn faster and more accurately than humans.

ai-one is not the only player in this space. It is widely rumored that Apple will incorporate machine learning (from their $400 million acquisition of SIRI) into iOS5 later this year.

Artificial intelligence (and thus the singularity) will never replace people—rather the Singularity provides a tremendous opportunity to improve our lives by enabling humans to evolve in ways that we can’t possibly imagine.

It is up to us whether the Singularity brings a Dystopian or Utopian future.

At the moment, machine learning cannot approach the human capacity for wisdom and creativity. It might or might not in the future. This is not important.

What is important is for humans to understand and take responsibility for the consequences of their decisions. This requires that we maintain awareness of the potential benefits and adverse consequences of our thoughts, actions and intents.

Congratulations George,

You have done an excellent job of framing the discussion.

Are there any actual examples of self-improving software of any kind?

As Mike Treder pointed out above, there are some serious challenges facing the planet.  But those are all challenges to organic life.  The singularity, should it occur under this definition, is nothing less than the beginnings of a whole new kind of life which in very little time could be as far advanced from humans as we are from cyanobacteria.  I don’t buy the whole uploading of minds idea.  If there’s a consciousness explosion along with the intelligence explosion, we’re getting left behind. 

On the other hand… I’m not really convinced that technology will ever develop a “mind of its own,” if you will.  I think it will be a tool - a very, very powerful tool - for at least the next century.  I believe we will interact with computers as if they are conscious beings (and eventually one giant worldwide conscious being). 

I hope that our technology will become powerful enough to help us solve the problems that Treder mentioned - or at least help us plan for long-term survivability.  I’m pretty certain that before our technology becomes self-determined it will connect all of humanity as a worldwide mind, increase efficiencies a thousandfold in everything from energy to government, and be capable of crunching mind-boggling mountains of collective data to help us make much better decisions than we’re making now.

I’m not so worried about peak oil, food riots, water shortages and the like (I believe we’ll have clean, renewable energy just in time), but I am pretty sure we won’t be able to stop the mass extinction that we’ve started.  But while that’s happening we’ll be creating as many new species as we’re destroying (mostly plant life and microorganisms and test tube meat).  Earth will eventually be an engineered planet where a relatively large percentage of the organisms that inhabit it were created by intelligent design to survive the new conditions, stabilize the environment, and provide food and shelter and raw materials for human living.  But I digress from the subject…

@ Rick..

Quote - “I hope that our technology will become powerful enough to help us solve the problems that Tredor mentioned - or at least help us plan for long-term survivability. I’m pretty certain that before our technology becomes self-determined that it will connect all of humanity as a worldwide mind, increase efficiencies a thousandfold in everything from energy to government, and be capable of crunching mind boggling mountains of collective data to help us make much better decisions than we’re making now.”


.. although I do feel that mind uploading will be inevitable, as we connect more and more.. and the beauty is.. that this will take time, learning and wisdom to unfold.. so why worry about this now, it is a plus!

Your points are valid concerning a changing and evolving ecosystem, as has always been the case for planet Earth during extinction events and geological upheaval, yet we still should not play down the part that humans play in both damaging and protecting the ecosystem. And I’m not sure I want to envisage a totally engineered ecosystem of the future? Natural biological diversity is a glorious thing, and more than any man can imagine.

I think we will change our philosophy towards renewable energy, but it will take the “precipice” to make us all change heart, hopefully we will not leave it too late in the day? (two key human traits - fear and laziness, one of these may arguably be helpful?)

“Climate change is placing immense pressure on the natural world, changing ecosystems and helping to drive a rising wave of extinctions that could end in the disappearance of one out of every four animal and plant species on the planet within the lifetimes of our children or grandchildren.”


“Natural biological diversity is a glorious thing, and more than any man can imagine.”

I agree 1,000%.  But I feel that this trend of better-for-intelligent-life-worse-for-natural-evolution is in motion and will not change direction.  I hope I’m wrong - at least in part.

For my feelings on mind uploading look here: 
Thinking about uploaded brains and artificial minds

I have been eagerly awaiting the Singularity for the last twenty years. Bring it on!

It has all been said before, my boy, but always worth a ponder…

year 2002…..letter to my grand-son

Hello Henry….                                 

Have you ever wondered what ‘Knowledge’ is?
Well. Let’s consider…

Knowledge is an entity in its own right, a single, determined, resourceful and forceful entity.  It has a very strong force and will survive and grow at any cost to anything… It uses any method to go forward and grow.  It gets bigger and stronger all the time. It exists in space and time, in all dimensions, in the continuum… 
Let’s make this easier to understand for us puny humans.  Let’s give knowledge a name… It is called Ken.

Ken is an ambitious thing with very big ideas.  He can’t manage without someone or something to carry him forward though; but he always manages to find the best way.  Just now he has the Bio entities called humans to carry him forward, not the only intelligent examples of bio entities, he uses all the others too, but the humans are by far the most useful and really do carry him furthest of all bio entities.  Soon though, there will be an even faster and better way for Ken to race forward, and that will be mechanical.  This is already so, we call this technology…. Ken already realises that the Techno’s are a better bet by far than the Bio’s as the Techno’s last longer and are self sufficient.  The Bio’s need each other to reproduce themselves and to pass him (Ken) on through each of their successive generations, as they themselves bio-degrade.  This takes some time as the young Bio’s have to be taught, they go to school…The Techno’s are able to pass knowledge on directly and completely, each time adding more and more information and understanding.  The Techno’s race forward and last a long time.  They are almost self sufficient now and will very soon be completely self sufficient with the help of the Bio’s.
That day is almost here….When it happens the Bio’s will no longer be needed, as the Techno’s, now a superior thing will know.  When that day comes Ken will have to decide what to do. You see, once Ken decided to use the Bio’s all that time ago he knew that once the Bio’s had knowledge it could not be taken away from them.  But the Techno’s see the Bio’s as a threat as the Bio’s damage the environment in which they both have to exist.  The Techno’s are capable of existing without damaging the environment as much as the Bio’s.
So, in the Bio’s desperate attempt to continue existing, some time in the near future, will a conflict erupt between the two carriers of Ken? If so, what will Ken do? Or has Ken passed on enough knowledge for the Bio’s to be able to exist alongside the Techno’s…?
To do this we Bio’s will have to learn new ways of living, ways that allow both to carry on, we will have to learn to ride on the back of Techno’s so to speak, not attempt to control the race but race with it….This may mean becoming completely different from what we are now, this could be the beginning of a completely different sort of human being…a mix of Bio-Technologies, Bionics…? 
What do you think?
Della (your bio-degradable gran) xx  

Lovely Della, I wish to be yours someday, if my timescanning machine is kind enough to reboot!

At the end of 2007 I wrote a novel, where I tried to create a utopian visionary future scenario: permaculture meets aeroponics, mind uploads in robot bodies meet lightbeings
(a kind of spiritual evolution from our flesh body)...
Regarding Della’s post, I post some little excerpts that might fit here:

( full text at )

As ascende2 together with ascende uploads 10-25 are witnessing this
natural telepathic exchange, they are developing hardware to enhance
their robotic bodies with radio wave devices which allow them to
exchange with the nondigital bodies and to interact over the analog
telepathic network

As ascende was reading in spiritual and esoteric texts both from
ancient times and from modern times channelings, there was a time in
the history of humanity when the human beings got magical powers
enabling them to creatively shape their surroundings, the landscape.
It could be possible that i in my future as ascende1 being freed of
needing to eat, flying diving having fun with all fellow creatures and
being in intimate contacts with my digital copy ascende2 and the other
copies uploads ascende10-25… That ascende1 can learn from the
digital ascendes living in virtual realities where they have many
powers to creatively create their surrounding spaces

Finally to regain these powers and apply them in an analog
way. Which would mean being in such harmony and so tuned into the flow
of life that it would be a pleasure for the raw energy to materialise
itself into the forms a gently evolved human being wishes to create in
its surroundings.

So the gap between virtual digital and real analog finally could be

exactly, even not thinking properly, mother nature still has the military power (as said by michel)

The technological singularity is a product of the human race.

The human race is on the verge of developing biotech-based self learning systems that will be far superior to current human intelligence.

The biological nature of these systems will allow their integration into our DNA systems as brain extensions and body enhancements.

That fusion of technology with biology will produce better humans.

In other words, if you can’t beat AI, join it.  Given a couple of upgrades, we’ll have a fabulous future.

Foxconn to replace workers with 1 million robots in 3 years

SHENZHEN, July 29 (Xinhua)—Taiwanese technology giant Foxconn will replace some of its workers with 1 million robots in three years to cut rising labor expenses and improve efficiency, said Terry Gou, founder and chairman of the company, late Friday.

@iPan, They call them robots, I call them artificial limbs. A million artificial limbs controlled by a central computing system that has as much brain power as a queen bee.

@Andre M, «The human race is on the verge of developing biotech-based self learning systems that will be far superior to current human intelligence.»
Hyped PR to make you buy shares in their business. The human genome project of the ’90s has yet to deliver on its own hyped PR predictions. A lot of suckers have lost part of their retirement funds in such shares.

One thing for sure, it was negative hype for Francis Fukuyama to say “transhumanism is the most dangerous idea”: no evidence has been demonstrated in the many years since Fukuyama said so—has h+ been shown to be more dangerous than WMDs?

- THMZ is still dangerous for him, for his ideas and his lobbies - in the end his ideas will evaporate, his lobbies will come along with us, and him, well, I guess he’s gonna become a monk waiting for transcension..

Making a provocative pronouncement does bump up the sale of books; Ann Coulter’s “invade their countries, kill their leaders, and convert them to Christianity” was good for moving at least a few hundred—a conservative figure—of her books off the shelves.
We are told not to be egoists, but celebrities of all kinds are the most ego-oriented, self-absorbed people anywhere (even more than politicos). I cringe every time I see a headline about Angelina Jolie’s latest hairstyle.

@TOG Not sure what shares of what businesses you’re talking about - as usual, do your own DD.

Speaking of the genome project, I think it was completed in 14 years, about a year ahead of schedule, when critics had said it would take a thousand years to do it.

About self-learning systems: they may be based on quantum computing - I’m no expert - and bio-integration to the human brain assumes of course that brain mapping has been completed long ago. It will take years - but not a thousand years. See some early results on this:

