The AI Singularity is Dead. Long Live the Cybernetic Singularity!
Kyle Munkittrick   Jun 28, 2011   Science Not Fiction  

The nerd echo chamber is reverberating this week with the furious debate over Charlie Stross’ doubts about the possibility of an artificial “human-level intelligence” explosion – also known as the Singularity.

As currently defined, the Singularity is a future event in which artificial intelligence reaches human-level intelligence. At that point, the AI (i.e. AI n) will reflexively begin to improve itself, building AIs more intelligent than itself (i.e. AI n+1), resulting in an exponential explosion of intelligence toward near-deity levels of super-intelligent AI.
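The AI n → AI n+1 loop described above can be sketched as a toy model. To be clear, this is a cartoon, not a claim about real AI systems: "intelligence" here is an abstract scalar, and the improvement factor is an arbitrary assumption invented purely to illustrate the geometric-growth argument.

```python
# Toy model of the recursive self-improvement ("intelligence explosion") loop.
# All numbers are illustrative assumptions, not empirical claims.

def intelligence_explosion(level=1.0, human_level=1.0,
                           factor=1.2, generations=10):
    """Each generation AI_n builds AI_{n+1}, `factor` times smarter,
    but only once it has reached human-level intelligence."""
    history = [level]
    for _ in range(generations):
        if level >= human_level:   # below human level, no self-improvement
            level *= factor        # AI_n builds the smarter AI_{n+1}
        history.append(level)
    return history

# Growth is geometric once the human-level threshold is crossed.
print(intelligence_explosion())
```

The point of the sketch is that the whole scenario hinges on the threshold test and the constant factor, which is exactly what the skeptics below dispute.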

After reading over the debates, I’ve come to a conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate.

I’ve already made my case for why I’m not too concerned, but it’s always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross, who knows a thing or two about the Singularity and futuristic speculation. It’s the kind of thing I love to report on: a science fiction author tackling the rational scientific possibility of something about which he has written.

Stross argues in a post entitled “Three arguments against the singularity” that “In short: Santa Claus doesn’t exist.”

This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently.” But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.

We may eventually see mind uploading, but there’ll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick’s experience machine thought experiment for real, I’m not sure we’d be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.

I am thankful that many readers of my articles are avowed skeptics and raise a wary eyebrow to discussions of the Singularity. Given his stature in the science fiction and speculative science community, Stross’ comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian analysis one does) in the Rapture of the Nerds have two holy grails which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights.

In one corner, we have Singularitarians Michael Anissimov of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel. In the other, we have the excellent Alex Knapp of Forbes’ Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson. I’ll spare you all the back and forth and cut to the point being debated.

To paraphrase and summarize, the argument is as follows:

1. Stross’ point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.

2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.

3. Skeptic rebuttal: Hanson argues A) “Intelligence” is a nebulous catch-all like “betterness” that is ill-defined. The ambiguity of the word renders the claims of Singularitarians difficult/impossible to disprove (i.e. special pleading); Knapp argues B) Computers and AI are excellent at specific types of thinking and augmenting human thought (i.e. Kasparov’s Advanced Chess). Even if one grants that AI could reach human or beyond human level, the nature of that intelligence would be neither independent nor self-motivated nor sufficiently well-rounded and, as a result, “bootstrapping” intelligence explosions would not happen as the Singularitarians foresee.

In essence, the debate runs: “human intelligence is like this, AI is like that, never the twain shall meet. But can they parallel one another?” The premise is false, which makes the question useless. So what we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.


Kyle Munkittrick
Kyle Munkittrick, IEET Program Director: Envisioning the Future, is a recent graduate of New York University, where he received his Master's in bioethics and critical theory.


great article, Kyle - I am going to bookmark it for future reference.

@ Kyle..

Yeah baby! I’m with you all the way here!

However, the most amazing thing here is that you have managed to make this all sound like a new idea? It’s like you’ve taken the words “transhumanism” and “brain augmentation” and created this new term, “Cybernetic Singularity”?

These ideas must all be a part of mind-machine interfacing and eventually reverse engineering the brain and uploading the mind and its memories. All of your ethical and philosophical points are more than valid!

Hemispheres? Don’t even get me started! Yet the right/left brain split is rather a myth these days - check out articles concerning severe epilepsy and lobotomy (I’ve no links to offer at this time).

One small gripe - why can’t you include the whole article here? Is it a salary/copyright thing?

Biggest obstacle remains personality politics / BoaF dynamics. Ever try connecting with anyone on the @IBMWatson team?

Looks like inside baseball, talks like inside baseball ... HeyHo, so it goes, until things change.


Hey Kyle.. enjoyed the article.

I have to agree with CygnusX1 when he says that the whole ‘left’ ‘right’ brain thing is a problematic way of looking at it. Better to say, I think, that the history of AI research has demonstrated that some engineering projects having to do with human intelligence are more difficult than others.


I disagree with your thesis.  Your idea of singularity relies on humans as the central motive/directive force that is augmented by artificial constructs.  It isn’t immediately clear to me that a society of such cybernetic beings would behave in a way so radically inexplicable as to merit the term singularity. 

Your argument for why uploading is feasible is flawed as well.  You can’t extrapolate from any current neuroprosthetic (commercial or experimental) to piecemeal whole brain replacement.  The failure of a piecemeal approach to uploading is most clearly demonstrated in an attempt to create an artificial memory storage system.  While recent work has demonstrated neuroprosthetics for enhancing encoding and retrieval no such work has been performed in storage.  This is in part because no one knows where or how memories are encoded.  Most neuroscientists I have talked to suspect that memory is encoded holistically in the cortex, the whole brain, or even extending somatically outside the nervous system.  This aspect of holism means attempting to create a memory storage prosthetic can almost certainly not be done piecemeal.

@karl - (I don’t mean to bring up the hard question of consciousness here.) Your “no one knows where or how memories are encoded” is the tip of the ice-berg.

Ponder proprioception; that’s some elegant processing!!

@Cygnus, Nikki: the left/right break is figurative. I am aware that my ability to do math is not solely located in one hemisphere, nor is my ability to wax eloquent solely located in the other.

And Cygnus, you should know by now that Everything is a remix!

@Karl - I am having difficulty understanding your points. The first argument you make is tautological: you can’t define the singularity as you have because the singularity has a different definition!

As for your second point, I well recognize what we can’t do now. My point is that uploading, if it ever happens at all, will happen in a slow, gradual process in which the biological mind and digital system overlap, not by some sort of direct upload from wetware to hardware. Savvy?

“the left/right break is figurative” ... to say the least, Kyle. “Multi-dimensional” hardly covers it. I’m tempted to say fractal.
And of course it’s modifying code on the fly / in real time. (Feedback / feed-forward, anyone?)


First, we are going to increase the world’s population by 10x because of a new clean and cheap energy technology (LENR using nickel).  Next, we are going to plug everyone into the internet, and remove barriers to communicate internationally.  The technological explosion connecting all those minds together will enable us to increase the population 10x again!

Remember Ni+H (heated at pressure)=Cu + lots of heat in the form of gamma rays.  Google the Rossi E-Cat.

No residual radiation.  Less than 5x cheaper than any other form of energy production.  1 gram of nickel=1.7 billion calories.


I understand that the first argument is tautological; however, redefining a term so that the object is missing one of its most salient properties unnecessarily muddies language. The intelligence explosions were named singularities because they were supposed to represent a point in history no one before the shift could see past. Your new definition of singularity may lack this feature.

The second argument still refutes the idea that a full mind upload can be done by gradually replacing a brain with subunit machines. The brain likely stores memory distributed throughout its mass (multiplexing the memories on tissue that serves other functions); because of this, it seems implausible that you could slowly replace parts of the brain until it is all machine while still maintaining the underlying memory system.

@ Ben

This isn’t a hard problem issue, it is an engineering issue.  It is difficult to create artificial interfaces with systems that evolved holistically (this is also part of the reason why it is so difficult to engineer scalable complexity in genetic circuits). 

This argument doesn’t preclude uploading of course (technically it doesn’t even fully preclude piecemeal uploading), what it demonstrates is that it would probably be easier to just swap the whole brain at once (or copy it all at once). 


Another argument for why uploading by incremental brain replacement is unlikely to come before scan-and-duplicate style uploading can be derived from the individual differences in brain structure and function. The textbook version of brain function generally shows well-delineated areas of the cortex with defined functions. This idea is known to be wrong: there is substantial inter-individual variation in where/how the brain allocates tasks. This fact, combined with computational task multiplexing and brain holism, means that to replace a portion of brain from a specific person with a synthetic alternative would require a full functional mapping of that individual’s brain. Then you would have to build/program the prosthetic to fulfill the function it is supposed to replace.

At the point that you have to scan the brain fully just to replace a part of it, whole brain emulation begins to look more promising (no custom-crafted prosthetic, and possibly easier scaling to large numbers of people).

Paraphrasing the “Singularitarian response”, Kyle Munkittrick wrote the following:

“With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.”

Do you have a reference for SIAI folks claiming that Friendly AI would be *probable*?

I have difficulty believing that such a reference exists. SIAI folks may be found saying that they think it probable that *some* kind of superintelligent AI is eventually created, but they don’t tend to make big claims regarding the likelihood that they’ll succeed in creating a superintelligence of the Friendly kind (they do think it’s vital to *try*, though).

In general, when reading your article, I was disappointed with the tone you take towards those you seek to ridicule.

Hi Kyle, very nice post of yours.

I personally think you are right when you say that “the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.”

Please, have a look at this project we are working on:

I can only add that I agree with this post.

IA will most likely dominate AI.

At this point, we can only place bets.

I bet on IA.

Your scenario of intelligence augmentation does have ethical and practical advantages over the mad rush for Friendly AI. It’s the technical side where I’m unconvinced. AGI theoretically offers vast power to whoever manages to bottle the genie. Whether it will work or not isn’t something we can decide now. Only rolling the dice will say. I doubt the pursuit of AGI will stop without a considerable political campaign. Is that what you’re advocating?

You will like this, as will all Lawnmower Women/Men

$18.5 Million for Brain-Computer Interfacing

Another university is opening up a BCI lab, University of Washington. It makes sense because it’s near the Allen Institute for Brain Science, among other reasons. Did I mention that Christof Koch, the new Chief Science Officer of the Allen Institute, will be speaking at Singularity Summit?
