The Singularity Is Further Than It Appears
Ramez Naam   Mar 27, 2014  

Are we headed for a Singularity? Is it imminent? I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.

I think it's more complex than that, however, and depends in part on one's definition of the word. The word Singularity has gone through something of a shift in definition over the last few years, weakening its meaning. But regardless of which definition you use, there are good reasons to think that it's not on the immediate horizon.

My first experience with the term Singularity (outside of math or physics) comes from the classic essay by science fiction author, mathematician, and professor Vernor Vinge, The Coming Technological Singularity.

Vinge, influenced by the earlier work of I.J. Good, wrote this, in 1993:

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. 
The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.
When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale.

That last line is the key point. Vinge envisions a situation where the first smarter-than-human intelligence can make an even smarter entity in less time than it took to create itself. And that this keeps continuing, at each stage, with each iteration growing shorter, until we're down to AIs that are so hyper-intelligent that they make even smarter versions of themselves in less than a second, or less than a millisecond, or less than a microsecond, or whatever tiny fraction of time you want.
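The shrinking-interval assumption can be made concrete with a toy calculation (my illustration, not Vinge's math): if each generation takes half as long to build as the one before it, then even an unbounded number of generations fits inside a finite window, because the times form a convergent geometric series.

```python
# Toy model of the hard-takeoff intuition (an illustration, not Vinge's math):
# if generation n+1 arrives in half the time generation n took, the total time
# for arbitrarily many generations is a convergent geometric series.
first_generation_years = 10.0
total_years = sum(first_generation_years * 0.5 ** n for n in range(1000))
print(round(total_years, 2))  # approaches 20 years, no matter how many generations
```

That convergence is the whole of the "FOOM": infinitely many self-improvement cycles compressed into a finite, and soon vanishingly small, span of time. The question is whether each cycle really can be completed in less time than the one before.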


This is the so-called 'hard takeoff' scenario, also called the FOOM model by some in the singularity world. It's the scenario where in a blink of an AI, a 'godlike' intelligence bootstraps into being, either by upgrading itself or by being created by successive generations of ancestor AIs.

It's also, with due respect to Vernor Vinge, of whom I'm a great fan, almost certainly wrong.

It's wrong because most real-world problems don't scale linearly. In the real world, the interesting problems are much much harder than that.

Consider chemistry and biology. For decades we've been working on problems like protein folding, simulating drug behavior inside the body, and computationally creating new materials. Computational chemistry started in the 1950s. Today we have literally trillions of times more computing power available per dollar than was available at that time. But it's still hard. Why? Because the problem is incredibly non-linear. If you want to model atoms and molecules exactly you need to solve the Schrödinger equation, which is so computationally intractable for systems with more than a few electrons that no one bothers.

Instead, you can use an approximate method. This might, of course, give you an answer that's wrong (an important caveat for our AI trying to bootstrap itself) but at least it will run fast. How fast? The very fastest methods (and also, sadly, the most limited and least accurate) scale as N^2, which is still far worse than linear. By analogy, if designing intelligence is an N^2 problem, an AI that is 2x as intelligent as the entire team that built it (not just a single human) would be able to design a new AI that is only 70% as intelligent as itself. That's not escape velocity.
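That arithmetic can be sketched as a toy model (my assumptions, not a real complexity result): suppose designing an intelligence of level N costs N^2 units of effort, and a designer of intelligence I can muster I units of effort. Then each generation reaches sqrt of its predecessor's level, and the sequence shrinks instead of exploding.

```python
import math

# Toy model of the N^2 argument (my assumptions, not a real complexity result):
# designing intelligence N costs N^2 effort; a designer of intelligence I can
# muster I units of effort; so each generation reaches I_next = sqrt(I).
# Starting from an AI 2x as smart as its design team (team = 1.0), the
# sequence shrinks toward 1 rather than running away.
intelligence = 2.0
for generation in range(1, 6):
    intelligence = math.sqrt(intelligence)
    print(f"generation {generation}: {intelligence:.3f}")
```

The first step lands at sqrt(2) ≈ 1.41, about 70% of the parent AI's level, and every subsequent generation decays further toward the baseline. Under super-linear difficulty, recursive self-improvement converges instead of diverging.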

We can see this more directly. There are already entities with vastly greater than human intelligence working on the problem of augmenting their own intelligence. A great many, in fact. We call them corporations. And while we may have a variety of thoughts about them, not one has achieved transcendence.

Let's focus on one very particular example: the Intel Corporation. Intel is my favorite example because it uses the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs! (And also to create better software for designing CPUs.) Those better CPUs will run the better software to make the better next generation of CPUs. Yet that feedback loop has not led to a hard takeoff scenario. It has helped drive Moore's Law, which is impressive enough. But the time period for doublings seems to have remained roughly constant. Again, let's not underestimate how awesome that is. But it's not a sudden transcendence scenario. It's neither a FOOM nor an event horizon.

And, indeed, should Intel, or Google, or some other organization succeed in building a smarter-than-human AI, it won't immediately be smarter than the entire set of humans and computers that built it, particularly when you consider all the contributors to the hardware it runs on, the advances in photolithography techniques and metallurgy required to get there, and so on. Those efforts have taken tens of thousands of minds, if not hundreds of thousands. The first smarter-than-human AI won't come close to equaling them. And so, the first smarter-than-human mind won't take over the world. But it may find itself with good job offers to join one of those organizations.

Recently, the popular conception of what the 'Singularity' means seems to have shifted. Instead of a FOOM or an event horizon, the definitions I saw most commonly discussed a decade ago, now the talk is more focused on the creation of digital minds, period.

Much of this has come from the work of Ray Kurzweil, whose books and talks have done more to publicize the idea of a Singularity than probably anyone else, and who has come at it from a particular slant.

Now, even if digital minds don't have the ready ability to bootstrap themselves or their successors to greater and greater capabilities in shorter and shorter timeframes, eventually leading to a 'blink of the eye' transformation, I think it's fair to say that the arrival of sentient, self-aware, self-motivated, digital intelligences with human level or greater reasoning ability will be a pretty tremendous thing. I wouldn't give it the term Singularity. It's not a divide by zero moment. It's not an event horizon that it's impossible to peer over. It's not a vertical asymptote. But it is a big deal.

I fully believe that it's possible to build such minds. Nothing about neuroscience, computation, or philosophy prevents it. Thinking is an emergent property of activity in networks of matter. Minds are what brains - just matter - do. Mind can be done in other substrates.

But I think it's going to be harder than many project. Let's look at the two general ways to achieve this - by building a mind in software, or by 'uploading' the patterns of our brain networks into computers.

Building Minds
We're living in the golden age of AI right now. Or at least, it's the most golden age so far. But what those AIs look like should tell you a lot about the path AI has taken, and will likely continue to take.

The most successful and profitable AI in the world is almost certainly Google Search. In fact, in Search alone, Google uses a great many AI techniques. Some to rank documents, some to classify spam, some to classify adult content, some to match ads, and so on. In your daily life you interact with other 'AI' technologies (or technologies once considered AI) whenever you use an online map, when you play a video game, or any of a dozen other activities.

None of these is about to become sentient. None of these is built towards sentience. Sentience brings no advantage to the companies who build these software systems. Building it would entail an epic research project - indeed, one of unknown length involving uncapped expenditure for potentially decades - for no obvious outcome. So why would anyone do it?

Perhaps you've seen video of IBM's Watson trouncing Jeopardy champions. Watson isn't sentient. It isn't any closer to sentience than Deep Blue, the chess playing computer that beat Garry Kasparov. Watson isn't even particularly intelligent. Nor is it built anything like a human brain. It is very, very fast with the buzzer, generally able to parse Jeopardy-like clues, and loaded full of obscure facts about the world. Similarly, Google's self-driving car, while utterly amazing, is also no closer to sentience than Deep Blue, or than any online chess game you can log into now.

There are, in fact, three separate issues with designing sentient AIs:

1) No one's really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience. My friend Ben Goertzel has a very promising approach, in my opinion, but given the poor track record of past research in this area, I think it's fair to say that until we see his techniques working, we also won't know for sure about them.

2) There's a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn't feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that's extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Many of us want the semblance of sentience. There would be lots of demand for an AI secretary who could take complex instructions, execute on them, be a representative to interact with others, and so on. You may think such a system would need to be sentient. But once upon a time we imagined that a system that could play chess, or solve mathematical proofs, or answer phone calls, or recognize speech, would need to be sentient. It doesn't need to be. You can have your AI secretary or AI assistant and have it be all artifice. And frankly, we'll likely prefer it that way.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we'll suddenly be faced with very real ethical issues. Can we turn it off? Would that be murder? Can we experiment on it? Does it deserve privacy? What if it starts asking for privacy? Or freedom? Or the right to vote?

What investor or academic institution wants to deal with those issues? And if they do come up, how will they affect research? They'll slow it down, tremendously, that's how.

For all those reasons, I think the future of AI is extremely bright. But not sentient AI that has its own volition. More and smarter search engines. More software and hardware that understands what we want and that performs tasks for us. But not systems that truly think and feel.

Uploading Our Own Minds
The other approach is to forget about designing the mind. Instead, we can simply copy the design which we know works - our own mind, instantiated in our own brain. Then we can 'upload' this design by copying it into an extremely powerful computer and running the system there.

I wrote about this, and the limitations of it, in an essay at the back of my second Nexus novel, Crux. So let me just include a large chunk of that essay here:

The idea of uploading sounds far-fetched, yet real work is happening towards it today. IBM's 'Blue Brain' project has used one of the world's most powerful supercomputers (an IBM Blue Gene/P with 147,456 CPUs) to run a simulation of 1.6 billion neurons and almost 9 trillion synapses, roughly the size of a cat brain. The simulation ran around 600 times slower than real time - that is to say, it took 600 seconds to simulate 1 second of brain activity. Even so, it's quite impressive. A human brain, of course, with its hundred billion neurons and well over a hundred trillion synapses, is far more complex than a cat brain. Yet computers are also speeding up rapidly, roughly by a factor of 100 every 10 years. Do the math, and it appears that a super-computer capable of simulating an entire human brain, and doing so as fast as a human brain, should be on the market by roughly 2035 - 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human's.
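The "do the math" step above can be sketched as a back-of-envelope calculation. The inputs are the rough figures already quoted; the start year and the assumption that simulation cost scales linearly with neuron count are my own simplifications, not claims from the essay.

```python
import math

# Back-of-envelope version of the extrapolation above. All inputs are rough
# figures from the essay; assuming cost scales linearly with neuron count
# (ignoring the larger synapse ratio) is itself a big simplification.
cat_sim_slowdown = 600       # the cat-scale run was 600x slower than real time
cat_neurons = 1.6e9          # neurons in the cat-scale simulation
human_neurons = 100e9        # "a hundred billion neurons"
speedup_per_decade = 100     # computing power per dollar: ~100x per decade
start_year = 2012            # my assumption for the cat-scale run's date

required_speedup = cat_sim_slowdown * (human_neurons / cat_neurons)
decades_needed = math.log(required_speedup) / math.log(speedup_per_decade)
print(round(start_year + 10 * decades_needed))
```

The required speedup works out to roughly 37,500x, or about 2.3 decades of progress at 100x per decade, landing in the mid-2030s, consistent with the 2035 - 2040 estimate above.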

Now, it's one thing to be able to simulate a brain. It's another to actually have the exact wiring map of an individual's brain to actually simulate. How do we build such a map? Even the best non-invasive brain scanners around - a high-end functional MRI machine, for example - have a minimum resolution of around 10,000 neurons or 10 million synapses. They simply can't see detail beyond this level. And while resolution is improving, it's improving at a glacial pace. There's no indication of being able to non-invasively image a human brain down to the individual synapse level any time in the next century (or even the next few centuries at the current pace of progress in this field).

There are, however, ways to destructively image a brain at that resolution. At Harvard, my friend Kenneth Hayworth created a machine that uses a scanning electron microscope to produce an extremely high resolution map of a brain. When I last saw him, he had a poster on the wall of his lab showing a print-out of one of his brain scans. On that poster, a single neuron was magnified to the point that it was roughly two feet wide, and individual synapses connecting neurons could be clearly seen. Ken's map is sufficiently detailed that we could use it to draw a complete wiring diagram of a specific person's brain.
Unfortunately, doing so is guaranteed to be fatal.

The system Ken showed 'plastinates' a piece of a brain by replacing the blood with a plastic that stiffens the surrounding tissue. He then makes slices of that brain tissue that are 30 nanometers thick, or about 100,000 times thinner than a human hair. The scanning electron microscope then images these slices as pixels that are 5 nanometers on a side. But of course, what's left afterwards isn't a working brain - it's millions of incredibly thin slices of brain tissue. Ken's newest system, which he's built at the Howard Hughes Medical Institute, goes even further, using an ion beam to ablate away 5 nanometer thick layers of brain tissue at a time. That produces scans that are of fantastic resolution in all directions, but leaves behind no brain tissue to speak of.
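To get a feel for the scale involved, here is a back-of-envelope estimate of the data a whole-brain scan at that resolution would produce. The 5 nm and 30 nm figures come from the description above; the ~1.2 liter brain volume and one byte per voxel are my assumptions, not figures from the essay.

```python
# Back-of-envelope data volume for imaging an entire human brain at the
# resolution described (5 nm pixels, 30 nm slices). The ~1.2 liter brain
# volume and 1 byte per voxel are assumptions, not figures from the essay.
brain_volume_nm3 = 1.2e-3 * (1e9 ** 3)  # ~1.2 liters, in cubic nanometers
voxel_nm3 = 5 * 5 * 30                   # one 5 x 5 x 30 nm voxel
total_bytes = (brain_volume_nm3 / voxel_nm3) * 1  # 1 byte/voxel, uncompressed
print(f"{total_bytes / 1e21:.1f} zettabytes")
```

On the order of a zettabyte of raw imagery, before any tracing of the wiring diagram even begins, which gives some sense of why whole-brain mapping remains a long-term project.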

So the only way we see to 'upload' is for the flesh to die. Well, perhaps that is no great concern if, for instance, you're already dying, or if you've just died but technicians have reached your brain in time to prevent the decomposition that would destroy its structure.

In any case, the uploaded brain, now alive as a piece of software, will go on, and will remember being 'you'. And unlike a flesh-and-blood brain it can be backed up, copied, sped up as faster hardware comes along, and so on. Immortality is at hand, and with it, a life of continuous upgrades.
Unless, of course, the simulation isn't quite right.

How detailed does a simulation of a brain need to be in order to give rise to a healthy, functional consciousness? The answer is that we don't really know. We can guess. But at almost any level we guess, we find that there's a bit more detail just below that level that might be important, or not.

For instance, the IBM Blue Brain simulation uses neurons that accumulate inputs from other neurons and which then 'fire', like real neurons, to pass signals on down the line. But those neurons lack many features of actual flesh and blood neurons. They don't have real receptors that neurotransmitter molecules (the serotonin, dopamine, opiates, and so on that I talk about throughout the book) can dock to. Perhaps it's not important for the simulation to be that detailed. But consider: all sorts of drugs, from pain killers, to alcohol, to antidepressants, to recreational drugs work by docking (imperfectly, and differently from the body's own neurotransmitters) to those receptors. Can your simulation take an anti-depressant? Can your simulation become intoxicated from a virtual glass of wine? Does it become more awake from virtual caffeine? If not, does that give one pause?
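For a sense of how simple such simulated neurons are, here is a minimal 'leaky integrate-and-fire' model in the same spirit (my sketch, not the Blue Brain code): it accumulates inputs, fires when a threshold is crossed, and does nothing else. There is nowhere in it for a drug molecule to dock.

```python
def leaky_integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Accumulate inputs, 'fire' (emit 1) when the membrane potential
    crosses threshold, then reset. No receptors, no neurotransmitters,
    just arithmetic -- the level of abstraction discussed above."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # decay, then add new input
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(leaky_integrate_and_fire([0.5, 0.5, 0.5, 0.1, 0.9]))
```

Everything a real neuron does beyond this accumulate-and-fire behavior, receptor chemistry included, is simply absent from the model.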

Or consider another reason to believe that individual neurons are more complex than we believe. The IBM Blue Gene neurons are fairly simple in their mathematical function. They take in inputs and produce outputs. But an amoeba, which is both smaller and less complex than a human neuron, can do far more. Amoebae hunt. Amoebae remember the places they've found food. Amoebae choose which direction to propel themselves with their pseudopods. All of those suggest that amoebae do far more information processing than the simulated neurons used in current research.

If a single-celled micro-organism is more complex than our simulations of neurons, that makes me suspect that our simulations aren't yet right.

Or, finally, consider three more discoveries we've made in recent years about how the brain works, none of which are included in current brain simulations. 
First, there are glial cells. Glial cells outnumber neurons in the human brain. And traditionally we've thought of them as 'support' cells that just help keep neurons running. But new research has shown that they're also important for cognition. Yet the Blue Gene simulation contains none.

Second, very recent work has shown that, sometimes, neurons that don't have any synapses connecting them can actually communicate. The electrical activity of one neuron can cause a nearby neuron to fire (or not fire) just by affecting an electric field, and without any release of neurotransmitters between them. This too is not included in the Blue Brain model.

Third, and finally, other research has shown that the overall electrical activity of the brain also affects the firing behavior of individual neurons by changing the brain's electrical field. Again, this isn't included in any brain models today.

I'm not trying to knock down the idea of uploading human brains here. I fully believe that uploading is possible. And it's quite possible that every one of the problems I've raised will turn out to be unimportant. We can simulate bridges and cars and buildings quite accurately without simulating every single molecule inside them. The same may be true of the brain.

Even so, we're unlikely to know that for certain until we try. And it's quite likely that early uploads will be missing some key piece or have some other inaccuracy in their simulation that will cause them to behave not-quite-right. Perhaps it'll manifest as a mental deficit, personality disorder, or mental illness. Perhaps it will be too subtle to notice. Or perhaps it will show up in some other way entirely.

But I think I'll let someone else be the first person uploaded, and wait till the bugs are worked out.

In short, I think the near future will be one of quite a tremendous amount of technological advancement. I'm extremely excited about it. But I don't see a Singularity in our future for quite a long time to come.


Ramez Naam

Ramez Naam, a Fellow of the IEET, is a computer scientist and the author of four books, including the sci-fi thriller Nexus and the nonfiction More than Human: Embracing the Promise of Biological Enhancement and The Infinite Resource: The Power of Ideas on a Finite Planet.  He writes at


“Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.”

No, but I want a robot that can clean the whole house, put the wash in and fold the clothes, make the bed, dig ditches in places only a human can dig them now, hold one end of a board or a cabinet to move it, and generally help and do the work of a human. 

If we can get this without some form of sentience, fine, but if not then there is an enormous incentive to create a sentient or near-sentient AI.

In my definitions, a rapid takeoff isn’t necessary for a near-term Singularity.  We only need

1) Watson’s non-sentient successors.
2) Robots good enough to do most human work.
3) Immortality (anti-aging).
4) We probably need to augment our brains, with implants that boost our general cognitive abilities and integrate with our memory.

I’ve never believed we are anywhere near uploading our brains, and never believed that we would want to if we could.  Rather we would want a gradual takeover of our mental functions which in the end might allow an upload or a copy.  Gradualism allows the brain to train its own successor to perfection prior to any wholesale destruction.

So question: with this softer definition of the Singularity, do you think we’re on track in the next 30 or 40 years?

Hi Ramez,

Interesting article.  Putting aside the concept of singularity for the moment, I think you fall into the trap of many of the critics of machine sentience.

Defining sentience can be a tricky business and can lead to philosophical questions concerning the nature of mind and even life versus non-life, which have been tackled by many “heavy-duty” philosophers of science over generations and for which the jury is still out.

I would like to refer you to an article by Douglas Heaven titled “Not like us: Artificial minds we can’t understand,” from the August 2013 edition of New Scientist.

“As soon as we gave up the attempt to produce mental, psychological qualities we started finding success.” - Nello Cristianini at the University of Bristol, UK, who has written about the history and evolution of AI research.

As has been pointed out in previous debates on this issue, most of the architecture of the human brain relates to evolutionary biological imperatives, such as reproduction, which are irrelevant to a machine.

“When people dreamed of making AI in our image, they may have looked forward to meeting these thinking machines as equals. The AI we’ve ended up with is alien - a form of intelligence we’ve never encountered before.”

In other words, we may never know how successful we are at creating a new form of mind (the next step in evolution) until it’s too late.

Good article, except this statement:

“We can simulate bridges and cars and buildings quite accurately without simulating every single molecule inside them. The same may be true of the brain.”

Simulating bridges and cars does not require every molecule because it is the total outcome of those systems that is of value; the calculated value is more or less static.

Simulating a consciousness would require all the details; a consciousness observes itself and is not an external observer. The simulation is THE consciousness. As an analogy, the map is not the territory. A map of a consciousness is not the same thing as a consciousness. While maps are very accurate and helpful for navigation (car simulations), they are not the territory.





Two thoughts:

1) That is not worthy of a word like “Singularity” which invokes an ‘event horizon’ and derives from dividing by zero, an event in which a line on a graph approaches infinity.

2) No, I don’t expect we’ll see most of those things in the next 30-40 years. Specifically:

- I think we may see some advances in the retardation of aging, but nothing like a full halt to it, or the much-discussed ‘longevity escape velocity’.

- It’s quite possible that computers and machinery will replace most of *today’s* human-done jobs. (Indeed, the tractor is probably the single machine that will take away the largest number.) But for quite some time to come, I think it’s likely that new niches will continue to be created where humans can add progressively more value.


I quite agree, actually, that sentient machine intelligence is likely to be ‘not in our image’.

But that is a separate point from mine.

My point is that sentience, whether similar to ours or not, is likely to arise from one of only two courses of action:

1) Strong Darwinian pressure for it, over at least millions of generations of quite complex minds.

2) A very robust research effort aimed at specifically creating sentience (not just aimed at creating the ability to do useful work).

The notion that it will arise accidentally out of attempts to create, say, search engines and driverless cars, without any strong selection pressure for it, seems to me to be extremely unlikely.

That is independent of whether the sentience would at all resemble ours.


Well-reasoned article. I would argue for an “optimism” factor to account for the clever solutions we cannot predict, to set against the “pessimistic” unknown unknowns that you rightly mention. I’d read somewhere that higher expertise leads to worse predictions, generally erring to pessimism. That may be because experts are aware of the problems (unlike laypersons) but both neglect to add in the clever factor.

What about distributed intelligence? For me, what is likely to drive if not a “Singularity” exactly then at least a very fundamental shift in the nature of reality as “we” experience it (including questions concerning our own identity) is the ever-increasing bandwidth of connectivity between machines, and eventually between human brains. Already now we have Big Data, which seems fairly indistinguishable to me from the emergence of some kind of sentience, either in the “global brain” or in more fragmented forms of distributed intelligence.

“For all those reasons, I think the future of AI is extremely bright. But not sentient AI that has its own volition. More and smarter search engines. More software and hardware that understands what we want and that performs tasks for us. But not systems that truly think and feel.”

Yet we can still separate sentient AI from an AI which does possess volition, even if this volition is not its own but predominantly determined; its mechanism and appearance of free thinking would be supported by lightning-fast processing. (Hmm… it makes me now question the reality of my own free will and volition – is it relative?)

And such a subservient, beneficial and altruistic artificial intelligence may then naturally give the “impression”, (to us, and for us), of being sentient and even compassionate, commanding our respect due to our own perception of its powers of intelligence and reasoning, and of its pre-programmed beneficence?

We may then either willingly or wistfully anthropomorphize that such “intelligence” possesses genuine sentience, or at least perceived by us as real as real can be, and as much as it really matters at all – a line where intelligence and sentience becomes blurred – can intelligence in possession of “intellectual”, (programmed/learning), understanding of another’s predicament, and in being cognisant/sympathetic to our position enough to enact and cause effect, be described as sentient?

Is this really then a matter of “choice” for us to describe this evolved/evolving intelligence as sentient? What is a practicable description of “Intelligence” as an unaffiliated phenomenon?

Hmm… now I begin to question my own intelligence/conscious perception as a sentient entity as qualified and quantifiable?

Is it merely my physical brain and its complex neural network that supports my intelligence and capability, which then superimposes and substantiates my delusion of a singular “mind/Self/Ego”, and this singular mind then subscribing to this notion of “conscious perception” as “sentient”?

Does my “sentient” intellect really feel anything at all? Or rather, does my “mind” supervene, (Notaro), on physical brain states which then stimulates release of chemical hormones, (and vice versa)? My own volition appears to be “entirely” directed by my sub-conscious and is beyond my “intellectual” perception?

Physical Pain, as separate and distinct from perceived “intellectual” feeling, is after all, merely electrical stimulus that imposes further upon brain states?

Ergo.. Sentience relies upon consciousness – yet what is consciousness? Merely a construct/abstract of a highly intelligent yet deluded physical “process” of “mind”?

