How the Web Will Implode
Rick Searle   Mar 16, 2014   Utopia or Dystopia  

Jeff Stibel is either a genius when it comes to titles, or has one hell of an editor. The name of his recent book Breakpoint: Why the web will implode, search will be obsolete, and everything you need to know about technology is in your brain was about as intriguing a title as I had come across, at least since The Joys of X. In many ways, the book delivers on the promise of its title, making an incredibly compelling argument for how we should be looking at the trend lines in technology, and it is chock-full of surprising and original observations.

The problem is that the book then turns around to reach almost the opposite conclusions one would expect. It wasn’t the Internet that imploded but my head.

Stibel’s argument in Breakpoint is that throughout nature and neurology, economics and technology, we see a common pattern: slow growth rising quickly to an exponential pace, followed by a rapid plateau, a “breakpoint” at which the rate of increase collapses, or even a sharp decline occurs, and future growth slows to a snail’s pace. One might think such breakpoints were a bad thing for whatever is undergoing them, and when they are followed by a crash they usually are, but in many cases it just ain’t so. When ant colonies undergo a breakpoint they are keeping themselves within a size that their pheromonal communication systems can handle. The human brain grows rapidly in connections between birth and age five, after which it loses a great deal of those connections through pruning, a process that allows the brain to discard useless information and solidify the types of knowledge it needs, such as the common language spoken in its environment.

His thesis leads Stibel to all sorts of fascinating observations. Here are just a few: Precision takes a huge amount of energy, and human brains are error prone because they trade precision for efficiency. The example is mine, not Stibel’s, but it captures his point: if I did the math right, IBM’s Watson consumed about 4,000 times as much energy as its human opponents, and the machine, impressive as it was, couldn’t drive itself there, or get its kids to school that morning, or compose a love poem about Alex Trebek. It could only answer trivia questions.
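For what it’s worth, the math behind that 4,000x figure is easy to sanity-check. A minimal sketch, using round assumed figures of roughly 80 kW for Watson’s widely reported power draw against roughly 20 watts for a human brain:

```python
# Rough sanity check of the ~4,000x energy figure.
# Assumed inputs: Watson at ~80 kW, a human brain at ~20 W.
watson_watts = 80_000
brain_watts = 20
ratio = watson_watts / brain_watts
print(ratio)  # 4000.0
```

Both inputs are back-of-the-envelope assumptions, but the order of magnitude is hard to argue with.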

Stibel points out how the energy consumption of computers and the web is approaching what are likely hard energy ceilings. Continuing on its current trajectory, the Internet will in relatively short order consume 20% of the world’s energy, about the same share as the calories needed to run the human brain. That prospect makes the Internet’s growth rate under current conditions ultimately unsustainable, unless we really are determined to fry ourselves with global warming.

Indeed, this 20% mark seems to be a kind of boundary for intelligence, at least if the human brain is any indication. In what was, for me at least, a new and surprising observation, Stibel points out how the human brain has been steadily shrinking and losing connections over time. Pound for pound, our modern brain is actually “dumber” than our caveman ancestors’. (I am not sure how this gels with the Flynn effect.) Big brains are expensive for bodies to maintain, and their caloric ravenousness relative to other essential bodily functions must not be favored by evolution, otherwise we’d see more of our lopsided brain-to-body ratio in nature. As we’ve been able to offload functions to our tools and to our cultures, evolution has been shedding some of this cost in raw thinking prowess and slowly moving us back towards a more “natural” ratio.

If the Internet is going to survive, it is going to have to become more energy efficient as well. Stibel sees this already happening. Mobile has allowed targeted apps, rather than websites, to become the primary way we get information. Cloud computing allows computational prowess and memory to be distributed and brought together as needed. The need for increased efficiency, Stibel believes, will continue to change the nature of search too. Increasing personalization will allow for ever more targeted information, so that individuals can find just what they are looking for. This becoming “brainlike,” he speculates, may actually result in the emergence of something like consciousness from the web.

It is on these last two points, personalization and the emergence of consciousness from the Internet, that he lost me. Indeed, had Stibel held fast to his idea of the importance of breakpoints, he might have seen both in a much different light.

The quote below captures Stibel’s view of personalization:

We’re moving towards search becoming a kind of personal assistant that knows an awful lot about you. As a side note, some of you may be feeling quite uncomfortable at this point with your new virtual friend. My advice: get used to it. The benefits will be worth it. As Kevin Kelly has said: “Total personalization in this new world will require total transparency. That is going to be the price. If you want to have total personalization, you have to be totally transparent.” (93)

I suppose the question one should ask of Stibel is: transparent to whom and for what? The answer can be seen in the example he gives of transparency in action:

Imagine that the Internet can read your thoughts. Your personal computer, now a personal assistant, knows you skipped breakfast, just as your brain knows you skipped breakfast. She also knows that you have back to back meetings, but that your schedule just cleared. So she offers the suggestion “It’s 11:00am and you should really eat before your next meeting. D’Amore’s Pizza Express can deliver to you within 25 minutes. Shall I order your favorite, a large thin crust pizza, light on the cheese with extra red pepper flakes on the side?” (97)

The answer, as Stibel’s example makes apparent, is that one is transparent to advertisers and for them. In the example of D’Amore’s, what is presented as something that works for you is actually a device on loan to a restaurant: it is their “personal assistant”.

Transparent individuals become a kind of territory mined for resources by those capable of performing the data mining. For the individual being “mined,” such extraction can be good or bad, and part of our problem, now and in the future, will be to give individuals the ability to control this mining and refract it in directions that better suit our interests. To decide for ourselves when it is good and we want its benefits, and are therefore willing to pay its costs, and when it is bad and we are not.

Stibel thinks personalization is part of the coming “obsolescence of search,” a response of the web to the need for increased efficiency, a way to avoid, for a time, reaching its breakpoint. Yet looking at our digital data as a sort of contested territory gives us a different version of the web’s breakpoint than the one Stibel gives us, even if it flows naturally from his logic. The fact that corporations and other groups are attempting to court individuals on the basis of having gathered and analyzed a host of intimate and not-so-intimate details about them sparks all kinds of efforts to limit, protect, monopolize, subvert, or steal such information. This is the real “implosion” of the web.

We would do well to remember that the Internet really got its public start as a means of open exchange between scientists and academics, a community of common interest and mutual trust. Trust essentially entails the free flow of information, transparency, and as human beings most of us would agree that transparency exists along a spectrum, with more information provided to those closest to you and less the further out you go.

Reflecting its origins, the culture of the Internet in its initial years had this sense of widespread transparency and trust baked into our understanding of it. This period in Eden, even if it was just imagined, could not last forever. It has been a long time since the Internet was a community of trust, and it can’t be one; it’s just too damned big, even if it took a long time for us to realize this.

The scales have now fallen from our eyes, and we all know that the web has been a boon for all sorts of cyber-criminals and creeps and spooks, a theater of war between states. Recent events surrounding mass surveillance by state security services have amplified this cynicism and decline of trust. Trust, for humans, is like pheromones for Stibel’s ants: it sets the limits on how large a human community can grow before breaking off to form a new one, unless some other way of keeping a community together is applied. So far, human societies have discovered three means of keeping societies that have grown beyond the capacity of circles of trust intact: ethnicity, religion and law.

Signs that trust has unraveled are not hard to find. There has been an incredible spike in interest in anti-transparency technologies, with “crypto-parties” now a phenomenon in tech circles. A lot of this interest is coming from private citizens, and sometimes, yes, criminals. Technologies that offer a bubble of protection for individuals against government and corporate snooping seem to be all the rage. Yet even more interest is coming from governments and businesses themselves. Some now seem to want exclusive rights to a “mining territory”: to spy on, and sometimes protect, their own citizens and customers in a domain with established borders. There are, in other words, splintering pressures building against the Internet, or, as Steven Levy put it, there are increasing rumblings of:

… a movement to balkanize the Internet—a long-standing effort that would potentially destroy the web itself. The basic notion is that the personal data of a nation’s citizens should be stored on servers within its borders. For some proponents of the idea it’s a form of protectionism, a prod for nationals to use local IT services. For others it’s a way to make it easier for a country to snoop on its own citizens. The idea never posed much of a threat, until the NSA leaks—and the fears of foreign surveillance they sparked—caused some countries to seriously pursue it. After learning that the NSA had bugged her, Brazilian president Dilma Rousseff began pushing a law requiring that the personal data of Brazilians be stored inside the country. Malaysia recently enacted a similar law, and India is also pursuing data protectionism.

As John Schinasi points out in his paper Practicing Privacy Online: Examining Data Protection Regulations Through Google’s Global Expansion, even before the Snowden revelations sparked a widespread breakdown of public trust, or at least a heated public debate about it, there were huge differences between regimes of trust on the Internet, with the US being an area where information was exchanged most freely and privacy against corporations was considered contrary to the spirit of American capitalism.

Europe, on the other hand, on account of its history, had in the early years of the Internet taken a different stand, adhering to an EU directive that was deeply cognizant of the dangers of granting too much trust to corporations and the state. The problem is that this directive is so antiquated, dating from 1995, that it not only fails to reflect the Internet as it has evolved, but severely compromises the way the Internet in Europe now works. The way the directive was implemented turned Europe into a patchwork quilt of privacy laws, which was onerous for American companies, but which they were often able to circumvent, being largely self-policing in any case under the so-called Safe Harbor provisions.

Then there is the whole different ball game of China, which Schinasi characterizes as a place where the Internet is seen by officialdom, without apology or sense of limits, as a tool for monitoring its own citizens, placing huge restrictions on the extension of trust to entities beyond its borders. China under its current regime seems dedicated to carving out its own highly controlled space on the Internet, a partnership between its Internet giants and its control-freak government, something we can hope the desire of those companies to go global might eventually temper.

The US and Europe, in a process largely sparked by the Snowden revelations, appear to be drifting apart. Just last week, on March 12, 2014, the European Parliament, by an overwhelming majority of 621 to 10 (I didn’t forget a zero), passed a law that aims to bring some uniformity to the chaos of European privacy laws and that would severely restrict the way personal data is collected and used, essentially upending the American transparency model. (Snowden himself testified to the parliament by video link.) The Safe Harbor provisions have not yet been nixed, as that would take a decision of the European Council rather than the parliament, but given the broad support for the other changes they are clearly in jeopardy. If these trends continue they would constitute something of a breaking apart and consolidation of the Internet, a sad end to the utopian hopes of a global and transparent society that sprang from the Internet’s birth.

Yet, if Stibel’s thesis about breakpoints is correct, it may also be part of a “natural” process. Where Stibel was really good was when it came to, well… ants. As he repeatedly shows, ants have this amazing capacity to know when their colony, their network, has grown too large and when it’s time to split up and send out a new queen. Human beings are really good at this formation into separate groups too. In fact, as Mark Pagel points out in his Wired for Culture, it’s one of the two things human beings are naturally wired to do: to form groups which break up once they have exceeded the number of people that any one individual can know on a deep level, a number that remains, even in the era of Facebook “friends,” right around where it was when we were setting out from Africa 60,000 years ago: about 150.

If we go by the example of ants and human beings, the natural breakpoint(s) for the Internet lie where bonds of trust become too loose. Where trust is absent, as in large-scale human societies, we have, as mentioned, come up with three major solutions, of which only law, rather than ethnicity or religion, is applicable to the Internet.

What we are seeing is the Internet splitting into rival spheres of trust, deception, protection and control. The only thing that could keep it together as a universal entity would be the adoption of global international law, as opposed to mere law within and between a limited number of countries: law that regulated how the Internet is used, that limited states from using the tools of cyber-espionage and what often amounts to the same thing, cyber-war, along with international agreements on how corporations may use customer information and on how citizens should be informed regarding the use of their data by companies and the state. All of this would allow the universal promise of the Internet to survive. This would be the kind of “Magna Carta for the Internet” that Sir Tim Berners-Lee, the man who wrote the first proposal for what would become the World Wide Web, is calling for with his Web We Want initiative.

If we get to the destination proposed by Berners-Lee, our arrival might owe as much to the push of self-interest from multinational corporations as to the pull of noble efforts by defenders of the world’s civil liberties. For it is plausible that the desire of Internet giants to be global companies may help spur the adoption of stronger limits against government spying in the name of corporate protection against “industrial” espionage, protections that might intersect with the desire to protect global civil society seen in the efforts of Berners-Lee and others, and that would help establish firmer ground for the protection of political freedom for individual citizens everywhere. We’ll probably need both push and pull to stem, let alone roll back, the current regime of mass surveillance we have allowed to be built around us.

Thus, those interested in political freedom should throw their support behind Berners-Lee’s efforts. The erection of a “territory” in which higher standards of data protection prevail, as seen in the current moves of the EU, isn’t at this juncture contrary to a data regime such as the one Berners-Lee proposes, where a “Bill of Rights for the Internet” is adhered to; it helps that process along. By creating an alternative to the current transparency model, promoted by American corporations, abused by American security services, and embraced by Chinese state capitalism as a tool of the authoritarian state, the EU’s efforts, if successful, would offer a region where the privacy (including corporate privacy) necessary for political freedom continues to be held sacred and protected.

Even if efforts such as those of Berners-Lee to globalize these protections should fail, which sadly appears most likely, efforts such as those of the EU would create a bubble of protection, a 21st-century version of medieval fortress and city walls. We would do well to remember that underneath our definition of the law lies an understanding of law as a type of wall, hence the fact that we can speak of both being “breached”. Law, like the rules of a sports game, is simply a set of rules agreed to within a certain defined arena. The more bounded the arena, the easier it is to establish a set of clear, defined and adhered-to rules.

To return to Stibel, all this has implications for the other idea he explored, about which I also have doubts: the emergence of consciousness from the Internet. As he states:

It took millions of years for humans to gain intelligence, but it may only take a century for the Internet. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines.

I largely agree with Stibel, especially when he echoes Dan Dennett in saying that artificial intelligence will be about as much like human consciousness as the airplane is like a bird: some similarities in underlying principle, but huge differences in engineering and manifestation. Meaning the path to machine intelligence probably doesn’t lie in the brute computational force tried since the 1950s, or in the current obsession with reverse engineering the brain, but in networks. Thing is, I just wish he had said “internets,” plural, rather than “Internet,” singular, or just “networks,” again plural. For my taste, Stibel has a tone, when he’s talking about the emergence of intelligence from the Internet, that leans a little too close to Teilhard de Chardin and his Noosphere or Kevin Kelly and his Technium, all of which could have been avoided had Stibel just stuck with the logic of his breakpoints.

Indeed, given the amount of space he had devoted to showing how anomalous our human intelligence is, and how networks (ants and others) can show intelligent behavior without human-type consciousness at all, I was left to wonder why our networks would ever become conscious in the way we are in the first place. If intelligence could emerge from networked computers, as Stibel suggests, it seems more likely to emerge from well-bounded constellations of such computers than from the network as a global whole, as in our current Internet. If the emergence of AI resembles, which is not the same as replicates, the evolution and principles of the brain, it will probably require the same sorts of sharp boundaries we have, the pruning that takes place as we individualize, self-referential goals similar to our own, and some degree of opacity vis-à-vis other similar entities.

To be fair to Stibel, he admits that we may have already undergone the singularity in something like this sense. What he does not see is that ants or immune systems or economies give us alternative models of how something can be incredibly intelligent and complex but not conscious in the human sense; perhaps human-type consciousness is a very strange anomaly rather than an almost predetermined evolutionary path once the rising-complexity train gains enough momentum. AI in this understanding would merely entail truly purposeful coordinated action and goal-seeking by complex units, a dangerous situation indeed given that these large units will often be rivals, but not one existentially distinct from what human beings have known since we were advanced enough technologically to live in large cities, fight with massive armies, or produce and trade with continent- and world-straddling corporations.

Be all that as it may, Stibel’s Breakpoint was still a fascinating read. He not only left me with lots of cool and bizarre tidbits about the world I had not known before, he gave me a new way to think about the old problem of whither our society is headed, and whether we might in fact be approaching limits to the development of civilization from which the scientific and industrial revolutions had seemed to free us forever. Stibel’s breakpoints were another way for me to understand Joseph A. Tainter’s idea of how and why complex societies collapse, and why such collapse should not of necessity fill us with pessimism and doom. Here’s me on Tainter:

The only long-lasting solution Tainter sees for diminishing marginal returns is for a society to become less complex, that is, less integrated, more based on what can be provided locally than on sprawling networks and specialization. Tainter wanted to move us away from seeing the evolution of the Roman Empire into the feudal system as the “death” of a civilization. Rather, he sees the societies human beings have built as extremely adaptable and resilient. When the problem of increasing complexity becomes impossible to solve, societies move towards less complexity.

Exponential trends might not be leading us to a stark choice between a global society or singularity and our own destruction. We might just be approaching some of Stibel’s breakpoints, and as long as we keep our wits about us, and do not act out of a heightened fear of losing dwindling advantages for us and ours, breakpoints aren’t necessarily bad, and can sometimes even be good.


Rick Searle, an Affiliate Scholar of the IEET, is a writer and educator living in the very non-technological Amish country of central Pennsylvania along with his two young daughters. He is an adjunct professor of political science and history for Delaware Valley College and works for the PA Distance Learning Project.


I think this basically says it all in debunking the idea that we’re going to hit some limit due to power consumption:


Interesting, though it seems to have more to do with keeping Moore’s Law chugging along in the face of the fact that components are getting so small they are burning up, rather than with energy usage by the Web as a whole.

Even if it should increase overall efficiency, this would just stave off a future breakpoint for a while, but that can’t last forever. Or is it your view that the Web can grow indefinitely?

Take the efficiency of human brains and put that in computers, probably with a few improvements in efficiency.  That’s per the efficiencies mentioned in the article above, with no reduction in precision (and there are more: optical, quantum).  The human brain requires about 25 watts, if I recall.  If it took 25 watts per human brain’s worth of computing ability, a superintelligence wouldn’t require much power.  There are also probably technologies which would shrink the components and require even less power per unit of intelligence.  By the time we required even a significant portion of the solar power that lands on Earth (not to mention nuclear power or a few orbital power stations), we would be well beyond the point where we should try to predict the needs of that society.  We don’t know what kind of power source such a society might invent, and we have no clue whatsoever how much it would need.

Thinking that power consumption is a limiting factor for the internet is probably at best a misapplication of exponential thinking.  We should rather think about the history of computing: how much power did the original computers need?  How much power does a desktop or cell phone need in comparison?  The ENIAC generated 174,000 watts of heat.

The article linked above shows that efficiency gains per unit of processing power are going to continue indefinitely, or at least much longer than we should be trying to make any sort of prediction about.

Anyways, we don’t see stars and galaxies going dark.  That’s probably not because intelligence isn’t out there; it’s more likely because it doesn’t require loads of power.  The net could have all sorts of breakpoints on a social level like you discuss, but not on a power consumption level.


Would be interested in your views on this section from one of my prior posts discussing Lee Billings’ book 5 Billion Years of Solitude:

Billings finds other, related, possible explanations for our solitude as well. He discusses the thought experiment of UC San Diego’s Tom Murphy, who extrapolated the world’s increasing energy use into the future at a historical rate of 2.3 percent per year. To continue growing at that rate, which the United States has done since the middle of the seventeenth century, we would have to encase every star in the Milky Way galaxy within an energy-absorbing Dyson sphere within 2,500 years. At which point Billings concludes:

“If technological civilizations like ours are common in the universe, the fact that we have yet to see stars or entire galaxies dimming before our eyes beneath starlight-absorbing veneers of Dyson spheres suggests that our own present era of exponential growth may be anomalous, not only to our past, but also to our future.”
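Murphy’s extrapolation is easy to reproduce in rough form. A back-of-the-envelope sketch, assuming round figures of roughly 18 TW of current world power use, a Sun-like output of about 3.8e26 W per star, and on the order of 10^11 stars in the Milky Way (all assumed inputs, not Murphy’s exact numbers):

```python
import math

# How long until 2.3%/yr growth demands the whole galaxy's starlight?
current_power = 1.8e13      # watts: ~18 TW of world power use
star_output = 3.8e26        # watts: a Sun-like star
stars_in_galaxy = 1e11      # order-of-magnitude star count

galaxy_power = star_output * stars_in_galaxy
years = math.log(galaxy_power / current_power) / math.log(1.023)
print(round(years))  # ~2,460 years, in line with "within 2,500 years"
```

The striking thing is how insensitive the answer is to the inputs: being off by a factor of ten in any of them shifts the result by only about a century.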

Interesting review, and it looks like an interesting book. Instinctively I suspect that Stibel is probably right to “lean closely to Kevin Kelly and his Technium”, because while human brains emerged as individual organs within separate entities (aka human beings), the Internet - for want of a better word - has emerged from the outset as a single world wide web. This does not mean that fragmentation is impossible, but my guess is that it will be more like the different hemispheres and lobes in a single brain than actually separate brains connected only via the five senses.

I do agree that any global consciousness will look very different to human consciousness, and that reverse engineering the human brain is a bit of a red herring in this context, however as I’ve written before I tend to see big data as evidence that such a consciousness is already emerging.


“My guess is that it will be more like the different hemispheres and lobes in a single brain than actually separate brains connected only via the five senses.”

I think the brain analogy for the Internet is showing some stress. It’s a pretty weird or dysfunctional brain that attacks itself (cyber-war), or steals from itself (cyber-crime/cyber-espionage), or refuses to communicate fully with other sections of the brain to assert its own interests (state and corporate secrets). I think the analogy of an ecosystem with predator and prey would be more relevant. I mean, I get the “team of rivals” view of the self that psychology is showing us, but it is, after all, still a team.

@ Rick Searle

I read that previously, and was responding to it partly in my previous reply.  My personal idea on this is that, first, efficiency can allow us to create nearly any amount of intelligence with only the energy we already have easily available.

Say we can create a computer with intelligence equal to that of the human brain which requires 15 watts of power (10 less than the brain) and takes up 1 cubic centimeter.  I’ve heard that actually it might only require the volume of a grain of rice or something, so I’m giving it some leeway.  Now consider creating a being about double the size of a human, half of which was computing material.  If the half of the being that is computing material is 30 cm x 30 cm x 80 cm, you have intelligence 72,000 times as great as a human. 

However, it’s not just 72,000 times as smart as a human: intelligence has synergistic effects, so a mind with raw processing power 72,000 times that of a human is much more than 72,000 times as smart.
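The raw numbers in the thought experiment above check out; a quick sketch using the commenter’s assumed figures (one human-brain-equivalent per cubic centimeter, at 15 watts each):

```python
# Volume of the computing half of the hypothetical being, in cm^3.
volume_cm3 = 30 * 30 * 80
brain_equivalents = volume_cm3          # assumed: 1 cm^3 per brain-equivalent
power_watts = brain_equivalents * 15    # assumed: 15 W per brain-equivalent
print(brain_equivalents)  # 72000
print(power_watts)        # 1080000, i.e. about 1.1 megawatts
```

So on these assumptions the whole being would draw roughly a megawatt: a lot, but nothing near a planetary energy budget.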

I think such a being would be able to understand all fields of science even if each field were much more developed than it is today.  It would use most of its intelligence on creative endeavors, socialization, and fantasy, which we can’t imagine. 

Maybe such a being would want to become infinitely smarter.  On the other hand, maybe intelligence is like other commodities: when you have plenty, you don’t need more.

In my personal view, processing power beyond a certain point becomes irrelevant.  When you fully understand your environment, and have plenty of brain power for fantasy and creativity, it’s what you do with what you’ve got, not how much you’ve got that counts. 

Additionally, if you are immortal, it doesn’t matter whether a thought takes you a little more time.  Perhaps advanced beings are both faster (in their thoughts) and slower than we are (they don’t care if their thought process runs for long periods of time).

I think we will reach this level of intelligence (72,000 + times as intelligent as we are now) far sooner than we will reach the end of our energy supplies.  I doubt we’ll ever need more power than we can reap on a small part of the surface of the Earth with solar panels functioning at 95% efficiency.

Sure. I was referring more specifically to the degree of connectivity (more lobe/hemisphere-like than a collection of essentially separate entities) rather than how the different components behave towards each other. I certainly agree that the “global brain” is *highly* dysfunctional. I’ve written before that if the universe is a person then it has multiple personality disorder. That is certainly true of the “global brain”.

(My previous comment was a reply to Rick. Where did those “edit” buttons go again? I reckon we’ll solve the hard problem of consciousness before we figure out when they appear and when they don’t.)


I’ve found that if you go to the complete entry rather than just the comments the edit button can be found there.


I have problems with the idea of “takeoff” if and when human-level intelligence is achieved in machines. For one, we already have entities thousands of times smarter than an individual human being- they’re called research labs- and we don’t have takeoff. (The example is R. Naam’s.) Second, I think science will still need experiments to move forward. A machine, no matter how smart, will not be able to completely simulate reality, and would run into problems of non-computability- we can’t even perfectly predict the evolution of orbits for a solar system with three or more bodies. There are problems or questions that may lie outside our experimental reach.

The energy problem identified by Billings is that there are limits to exponential growth- it ends somewhere and might end much closer to our current level of technological development than we normally think. We will very likely achieve human level intelligence in machines before then but even the super intelligent will likely run into hard ceilings.

So they do! Thanks! Now you can solve the hard problem of consciousness for us…


I’m working on it ;>)

@ Rick Searle Yes, I don’t think it’s that easy to create intelligent machines.  But if it is, you’re correct that some experimentation would be necessary.  Still, science would advance many times faster than it does today.  I think we might have a lot more success enhancing our own brains by letting the brain train neural prostheses to do what the brain wants, even if we don’t strictly know how it all works.

There are probably some hard limits somewhere.  I did a few calculations relative to my “72,000 times as smart” idea.  We wouldn’t want to power all that with just sunlight; we would need some extra-planetary solar and some nuclear power.  But nothing like a Dyson Sphere.

All I can really say about hard limits is that I’ve never heard of any historically that we couldn’t go beyond with the right technology.  And every one I’ve ever heard anyone come up with speculatively didn’t really hold water when you think of it. 

Because I have never seen or thought of a hard limit that isn’t so far in the future as to be impossible to grasp, I think “hard limits” are a pre-singularity form of thinking: whatever we see as a hard limit probably isn’t- it’s just something we haven’t invented our way around yet.  That’s historically true, and I have no reason to believe it won’t be true in the future.

One other thing: brains are probably very inefficient.  They’ve been shrinking in recent evolutionary time, haven’t they?  Are we getting dumber, or are our brains getting more efficient?  So we really deeply don’t know the limits of efficiency relative to intelligence, and we might drastically increase our intelligence without increasing our power consumption all that much.

@ Rick Searle Oh, and is a research lab much smarter than a human being?  Only if you don’t think anything special goes on in our brains.  A research lab has a labor force that can do more experimenting drudgery than an individual researcher.  It is smarter than a human being because it has synergistic effects: people work together and their whole is greater than one individual brain working alone.  Nevertheless, there is something very special about an individual mind that no group has.  To say that a research lab is thousands of times smarter than an individual seems to me to overestimate it by far… I would say most are probably as smart as a 190 IQ person -a really good scientist- with a bunch of good slaves who can perform experiments but can’t do much with their results.  A few are a little smarter.  None rise to the level of twice as smart as a really smart person.  If this were NOT true, it wouldn’t matter so much that we have really smart people working on problems.  But we hear about discoveries and inventions from individual minds as often as from teams.  Teams probably allow multiple inferior minds to do the work of a single superior mind.  But a really smart person… I have no proof of course, but I bet a really smart person is as smart as most research labs.


“Still, science would advance many times faster than it does today. “

Probably, but I don’t think as quickly as some people assume. My point about experiments being necessary isn’t really about the development of super-intelligent machines themselves; it’s that real-world experiments will continue to be necessary for the progress of science and technology, which is one bottleneck to supercharged progress occurring shortly after super-intelligent machines are developed. For another, even if the pace of discovery were rapidly increased, it still takes human time to develop, market, and distribute products, and for social acceptance of and adjustment to new technologies to occur.

To return to energy: these legacy systems are difficult to fold up and replace overnight even if you do have new technologies waiting in the wings, and the same goes for telecommunications, transportation, and the like. I am not saying that super-intelligent machines will not bring innovation, only that the pace at which we can utilize this innovation is likely to be slower than AGI enthusiasts think, because greater intelligence doesn’t transform everything at a stroke- not everything is a problem of just finding better approaches.

“All I can really say about hard limits is that I’ve never heard of any historically that we couldn’t go beyond with the right technology.  And every one I’ve ever heard anyone come up with speculatively didn’t really hold water when you think of it. “

Again, here I think the problem isn’t so much hard limits (though the physicist Lawrence Krauss thinks they exist) as our capacity to get enough energy within a reasonable frame of time. If we had super-intelligence tomorrow, even if we assume it could crack difficult problems like cold fusion, it would take us/it some time to get this new capacity online. I think what the Billings observation suggests is that even for our current growth rate to continue, let alone vastly accelerate, we will require much more energy than we can reasonably be expected to capture within that time frame.
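To put rough numbers on the limits-to-exponential-growth argument (a back-of-envelope sketch with figures I’ve assumed for illustration- roughly 18 TW of current world power use, 2.3% annual growth, and standard estimates of the sunlight intercepted by Earth and the Sun’s total output- not anything from Billings directly):

```python
import math

# Assumed figures (rough, for illustration only):
current_power = 18e12    # world power use today, ~18 TW
growth = 0.023           # ~2.3% annual growth in energy use
earth_sunlight = 1.7e17  # total solar power intercepted by Earth, watts
sun_output = 3.8e26      # total power output of the Sun, watts

def years_until(target, start=current_power, rate=growth):
    """Years of compound growth before demand reaches `target` watts."""
    return math.log(target / start) / math.log(1.0 + rate)

print(round(years_until(earth_sunlight)))  # ~400 years to use ALL sunlight hitting Earth
print(round(years_until(sun_output)))      # ~1350 years to use the Sun's entire output
```

On these assumptions, a few centuries of ordinary compound growth already demands every watt of sunlight striking the planet- which is the sense in which the ceiling may be “much closer than we normally think,” whatever super-intelligence can invent.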

“To say that a research lab is thousands of times smarter than an individual seems to me to overestimate it by far… “

I think this view is far too individualistic. The intelligence of an individual human brain is irrelevant outside of its social/historical context. Had Einstein lived 60,000 years ago he would perhaps have been a really smart caveman, but we shouldn’t assume that he would have discovered fire or accelerated progress. Our prehistoric ancestors were as smart as we are, yet as a species we still spent the majority of our history not even knowing how to grow our own food. Perhaps super-intelligent machines need not be much smarter than us so much as better at processing, synthesizing, and deploying the cultural intelligence we have already built.

@ Rick Searle Yes, some people assume society will be ready as soon as the technology arrives- people and their misplaced priorities are more of a bottleneck than anything else.

For example re our energy supply, how many billions of dollars would it take to design a really good Thorium reactor, to augment solar panels?  A worldwide push to build out nuclear power and solar would no doubt keep ahead of our energy needs.  So it’s not the technology, it’s the society.

We get stuck a lot of times thinking that our psychological shortfalls are actually out there in the real world.  How much time does it really take us to do things?  When we really want to do a thing, we get it the heck done.

Will society become more reasonable if it’s immortal?

What if we have life extension within 30 years (by which I basically mean immortality)?  The Singularity might be slower, but then it doesn’t really matter. 

“even if you do have new technologies waiting in the wings, and the same goes for telecommunications, transportation, and the like.”

I keep hearing of incredible breakthroughs in telecommunications which would increase our bandwidth hugely.  These could be rolled out pretty quickly- they aren’t in need of a lot of development (I could get you articles, but I doubt you need me to).  It’s the corporate structures that are keeping these technologies from spreading as fast as they should.  Just for example, I researched satellite internet not long ago.  They have the technology to cheaply (given the economies of scale) produce and launch satellites that give a very large amount of bandwidth, and these would be fine for any application short of gaming or speech.  And this is but one technology which could do the trick, probably not the best one by far.

To me, a huge increase in the pace of all technological and scientific development across the board is not necessary for a Singularity.  Only the technologies most likely to undergo these advancements WITHOUT hitting bottlenecks are necessary for a soft Singularity.  These are the information technologies, including genetics (although the latter generates huge hysteria, it doesn’t require huge dedicated investment the way some things, such as power plants, do).  Non-conscious expert systems need to expand, but we’ll have a Singularity even without intelligent machines.  All we need is a few breakthroughs in life extension, and possibly a few in neural enhancements.  You have the population possessed of Watson’s descendants, with (or even without) neural implants and immortal bodies… that’s a Singularity.

So yes it will in my view be much slower than some like Kurzweil speculate (because of social factors and how hard it is to do AI), but it’s still going to happen in 30 years.

BTW, if cold fusion exists at all, it’s not a hard problem and they could have it online within a few years- all the AI would have to do is think up a theoretical basis for it and everyone would hop on the bandwagon.  You’d get your nuclear hot water heater at Home Depot.

“The intelligence of an individual human brain is irrelevant outside of its social/historical context…Perhaps super-intelligent machines need not be much smarter than us so much as better at processing, synthesizing, and deploying the cultural intelligence we have already built.”

Isn’t that my point about intelligence?  Because it’s about synthesizing, and the individual mind is good at that, but groups not so much.  Groups only make information more readily available to the individual minds that compose them.  The synthesis goes on within individuals.  Thus, increases in individual intelligence advance us much more than increases in organization.  This is especially true after a certain point when there is lots of data, but relatively less capacity to synthesize and analyze it.  That’s a point we’ve reached.

I’m going to get to the Lawrence Krauss video later- it’s an hour long. (:




A well-reasoned response. I think you’re right that the bottlenecks are more political and social than technological, but those things are hard to change short of a major war. I suppose it could happen like the industrial revolution, and maybe that’s the model singularitarians have sitting in their unconscious- I am not sure our society would survive such a disruption.

One thing I really like that you said was:

“Groups only make information more readily available to the individual minds that compose them.  The synthesis goes on within individuals.  Thus, increases in individual intelligence advance us much more than increases in organization.”

I don’t totally agree with this, but think there’s a great deal of truth in it, and it’s helpful on a totally different topic I’ve been thinking about- thanks.

Here’s hoping that both of us live long enough to see which one was wrong about the timing of the Singularity. We’ve got 30 years!

Cheers (:  Peace and long life.

