

Why an Intelligence Explosion is Probable


By Richard Loosemore
H+ Magazine

Posted: Mar 22, 2011

(Co-authored with IEET Fellow Ben Goertzel) There is currently no good reason to believe that once a human-level AGI capable of understanding its own design is achieved, an intelligence explosion will fail to ensue.  A thousand years of new science and technology could arrive in one year. An intelligence explosion of such magnitude would bring us into a domain that our current science, technology and conceptual framework are not equipped to deal with; so prediction beyond this stage is best done once the intelligence explosion has already progressed significantly.




One of the earliest incarnations of the contemporary Singularity concept was I.J. Good’s concept of the “intelligence explosion,” articulated in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.  Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

We consider Good’s vision quite plausible but, unsurprisingly, not all futurist thinkers agree.  Skeptics often cite limiting factors that could stop an intelligence explosion from happening, and in a recent post on the Extropy email discussion list, the futurist Anders Sandberg articulated some of those possible limiting factors, in a particularly clear way:

One of the things that struck me during our Winter Intelligence workshop on intelligence explosions was how confident some people were about the speed of recursive self-improvement of AIs, brain emulation collectives or economies. Some thought it was going to be fast in comparison to societal adaptation and development timescales (creating a winner takes all situation), some thought it would be slow enough for multiple superintelligent agents to emerge. This issue is at the root of many key questions about the singularity (One superintelligence or many? How much does friendliness matter?).

It would be interesting to hear this list’s take on it: what do you think is the key limiting factor for how fast intelligence can amplify itself?

  1. Economic growth rate
  2. Investment availability
  3. Gathering of empirical information (experimentation, interacting with an environment)
  4. Software complexity
  5. Hardware demands vs. available hardware
  6. Bandwidth
  7. Lightspeed lags

Clearly many more can be suggested. But which bottlenecks are the most limiting, and how can this be ascertained?

We are grateful to Sandberg for presenting this list of questions because it makes it especially straightforward for us to provide a clear refutation, in this article, of the case against the viability of an intelligence explosion.  We explain here why these bottlenecks (and some others commonly mentioned, such as the possible foundation of human-level intelligence in quantum mechanics) are unlikely to be significant issues, and thus why, as I.J. Good predicted, an intelligence explosion is indeed a very likely outcome.


The One Clear Prerequisite for an Intelligence Explosion

To begin, we need to delimit the scope and background assumptions of our argument.  In particular, it is important to specify what kind of intelligent system would be capable of generating an intelligence explosion.

According to our interpretation, there is one absolute prerequisite for an explosion to occur, and that is that an artificial general intelligence (AGI) must become smart enough to understand its own design.  In fact, by choosing to label it an “artificial general intelligence” we have already said, implicitly, that it will be capable of self-understanding, since the definition of an AGI is that it has a broad set of intellectual capabilities that include all the forms of intelligence that we humans possess, and at least some humans, at that point, would be able to understand AGI design.

But even among humans there are variations in skill level and knowledge, so the AGI that triggers the explosion must have a sufficiently advanced intelligence that it can think analytically and imaginatively about how to manipulate and improve the design of intelligent systems. It is possible that not all humans are able to do this, so an AGI that met the bare minimum requirements for AGI-hood (say, a system smart enough to be a general household factotum) would not necessarily have the ability to work in an AGI research laboratory. Without an advanced AGI of the latter sort, there would be no explosion, just growth as usual, because the rate-limiting step would still be the depth and speed at which humans can think.

The sort of fully-capable AGI we’re referring to might be called a “seed AGI”, but we prefer to use the less dramatic phrase “self-understanding, human-level AGI.”  This term, though accurate, is still rather cumbersome, so we will sometimes use the phrase “the first real AGI” or just “the first AGI” to denote the same idea.  In effect, we are taking the position that for something to be a proper artificial general intelligence it has to be capable of competing with the best that the human intellect can achieve, rather than being limited to a bare minimum.  So the “first AGI” would be capable of initiating an intelligence explosion.


Distinguishing the Explosion from the Preceding Build-Up

Given that the essential prerequisite for an explosion to begin would be the availability of the first self-understanding, human-level AGI, does it make sense to talk about the period leading up to that arrival (the period during which that first real AGI was being developed and trained) as part of the intelligence explosion proper?  We would argue that this is not appropriate, and that the true start of the explosion period should be considered to be the moment when a sufficiently well qualified AGI turns up for work at an AGI research laboratory. This may be different from the way some others use the term, but it seems consistent with I.J. Good’s original usage.  So our concern here is to argue for the high probability of an intelligence explosion, given the assumption that a self-understanding, human-level AGI has been created.

By enforcing this distinction, we are trying to avoid possible confusion with the parallel (and extensive!) debate about whether a self-understanding, human-level AGI can be built at all.  Questions about whether an AGI with “seed level capability” can plausibly be constructed, or how long it might take to arrive, are of course quite different.  A spectrum of opinions on this issue, from a survey of AGI researchers at a 2009 AGI conference, was presented in a 2010 H+ Magazine article.  In that survey, of an admittedly biased sample, a majority felt that an AGI with this capability could be achieved by the middle of this century, though a substantial minority felt it was likely to happen much further out.  Ray Kurzweil has also elaborated some well-known arguments in favor of the viability of AGI of this sort, based purely on extrapolating technology trends.  While we have no shortage of our own thoughts and arguments on this matter, we will leave them aside for the purpose of the present paper.

It is arguable that the “intelligence explosion” as we consider it here is merely a subset of a much larger intelligence explosion that has been happening for a long time. You could redefine terms so as to say, for example, that

  • Phase 1 of the intelligence explosion occurred before the evolution of humans
  • Phase 2 occurred during the evolution of human culture
  • Phase 3 is Good’s intelligence explosion, to occur after we have human-level AGIs

This would also be a meaningful usage of the term “intelligence explosion”, but here we are taking our cue from Good’s usage, and using the term “intelligence explosion” to refer to “Phase 3” only.
While acknowledging the value of understanding the historical underpinnings of our current and future situation, we also believe the coming Good-esque “Phase 3 intelligence explosion” is a qualitatively new and different phenomenon from a human perspective, and hence deserves distinguished terminology and treatment.


What Constitutes an “Explosion”?

How big and how long and how fast would the explosion have to be to count as an “explosion”?

Good’s original notion had more to do with the explosion’s beginning than its end, or its extent, or the speed of its middle or later phases.  His point was that in a short space of time a human-level AGI would probably explode into a significantly transhuman AGI, but he did not try to argue that subsequent improvements would continue without limit.  We, like Good, are primarily interested in the explosion from human-level AGI to an AGI with, very loosely speaking, a level of general intelligence 2-3 orders of magnitude greater than the human level (say, 100H or 1,000H, using 1H to denote human-level general intelligence). This is not because we are necessarily skeptical of the explosion continuing beyond such a point, but rather because pursuing the notion beyond that seems a stretch of humanity’s current intellectual framework.

Our reasoning here is that if an AGI were to increase its capacity to carry out scientific and technological research, to such a degree that it was discovering new knowledge and inventions at a rate 100 or 1,000 times the rate at which humans now do those things, we would find that kind of world unimaginably more intense than any future in which humans were doing the inventing.  In a 1,000H world, AGI scientists could go from high-school knowledge of physics to the invention of relativity in a single day (assuming, for the moment, that the factor of 1,000 was all in the speed of thought, an assumption we will examine in more detail later).  That kind of scenario is dramatically different from a world of purely human inventiveness: no matter how far humans might improve themselves in the future, without AGI it seems unlikely there will ever be a time when a future Einstein wakes up one morning with a child’s knowledge of science and goes on to conceive the theory of relativity by the following day.  So it seems safe to call that an “intelligence explosion.”
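To make the arithmetic behind a 1,000H world concrete, here is a minimal sketch of how a pure speed-of-thought multiplier compresses the calendar time needed for a fixed research programme. The "1,000 subjective years" figure is an illustrative assumption, not a number from the article.

```python
# Illustrative only: how a pure speed-of-thought multiplier compresses the
# calendar time needed for a fixed amount of intellectual work.  The
# "1,000 subjective years" figure is an assumption chosen for illustration.

def calendar_years(subjective_years: float, speedup: float) -> float:
    """Calendar years needed if thinking runs `speedup` times human speed."""
    return subjective_years / speedup

for speedup in (1, 100, 1_000):
    years = calendar_years(1_000, speedup)
    print(f"{speedup:>5}H: 1,000 subjective years of research "
          f"fit into {years:g} calendar year(s)")
```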

This still leaves the question of how fast it has to arrive, to be considered explosive.  Would it be enough for the first AGI to go from 1H to 1,000H in the course of a century, or does it have to happen much quicker, to qualify?

Perhaps there is no need to rush to judgment on this point.  Even a century-long climb up to the 1,000H level would mean that the world would be very different for the rest of history. The simplest position to take, we suggest, is that if the human species can get to the point where it is creating new types of intelligence that are themselves creating intelligences of greater power, then this is something new in the world (because at the moment all we can do is create human babies of power 1H), so even if this process happened rather slowly, it would still be an explosion of sorts.  It might not be a Big Bang, but it would at least be a period of Inflation, and both could eventually lead to a 1,000H world.


Defining Intelligence (Or Not)

To talk about an intelligence explosion, one has to know what one means by “intelligence” as well as by “explosion”.  So it’s worth reflecting that there are currently no measures of general intelligence that are precise, objectively defined and broadly extensible beyond the human scope.

However, since “intelligence explosion” is a qualitative concept, we believe the commonsense qualitative understanding of intelligence suffices.  We can address Sandberg’s potential bottlenecks in some detail without needing a precise measure, and we believe that little is lost by avoiding the issue.  We will say that an intelligence explosion is something with the potential to create AGI systems as far beyond humans as humans are beyond mice or cockroaches, but we will not try to pin down exactly how far away the mice and cockroaches really are.


Key Properties of the Intelligence Explosion

Before we get into a detailed analysis of the specific factors on Sandberg’s list, some general clarifications regarding the nature of the intelligence explosion will be helpful.  (Please bear with us!  These are subtle matters and it’s important to formulate them carefully….)

Inherent Uncertainty. Although we can try our best to understand how an intelligence explosion might happen, the truth is that there are too many interactions between the factors for any kind of reliable conclusion to be reached. This is a complex-system interaction in which even the tiniest, least-anticipated factor may turn out to be either the rate-limiting step or the spark that starts the fire.  So there is an irreducible uncertainty involved here, and we should be wary of promoting conclusions that seem too firm.

General versus Special Arguments for an Intelligence Explosion. There are two ways to address the question of whether or not an intelligence explosion is likely to occur.  One is based on quite general considerations.  The other involves looking at specific pathways to AGI.  An AGI researcher (such as either of the authors) might believe they understand a great deal of the technical work that needs to be done to create an intelligence explosion, so they may be confident of the plausibility of the idea for that reason alone.  We will restrict ourselves here to the first kind of argument, which is easier to make in a relatively non-controversial way, and leave aside any factors that might arise from our own understanding about how to build an AGI.

The “Bruce Wayne” Scenario. When the first self-understanding, human-level AGI system is built, it is unlikely to be the creation of a lone inventor working in a shed at the bottom of the garden, who manages to produce the finished product without telling anyone.  Very few of the “lone inventor” (or “Bruce Wayne”) scenarios seem plausible.  As communication technology advances and causes cultural shifts, technological progress is increasingly tied to rapid communication of information between various parties.  It is unlikely that a single inventor would be able to dramatically outpace multi-person teams working on similar projects; and also unlikely that a multi-person team would successfully keep such a difficult and time-consuming project secret, given the nature of modern technology culture.

Unrecognized Invention. It also seems quite implausible that the invention of a human-level, self-understanding AGI would be followed by a period in which the invention just sits on a shelf with nobody bothering to pick it up. The AGI situation would probably not resemble the early reception of inventions like the telephone or phonograph, where the full potential of the invention was largely unrecognized.  We live in an era in which practically-demonstrated technological advances are broadly and enthusiastically communicated, and receive ample investment of dollars and expertise.  AGI receives relatively little funding now, for a combination of reasons, but it is implausible to expect this situation to continue in the scenario where highly technically capable human-level AGI systems exist.  This pertains directly to the economic objections on Sandberg’s list, as we will elaborate below.

Hardware Requirements. When the first human-level AGI is developed, it will either require a supercomputer-level of hardware resources, or it will be achievable with much less. This is an important dichotomy to consider, because world-class supercomputer hardware is not something that can quickly be duplicated on a large scale.  We could make perhaps hundreds of such machines, with a massive effort, but probably not a million of them in a couple of years.

Smarter versus Faster. There are two possible types of intelligence speedup: one due to faster operation of an intelligent system (clock speed increase) and one due to an improvement in the type of mechanisms that implement the thought processes (“depth of thought” increase).  Obviously both could occur at once (and there may be significant synergies), but the latter is ostensibly more difficult to achieve, and may be subject to fundamental limits that we do not understand.  Speeding up the hardware, on the other hand, is something that has been going on for a long time and is more mundane and reliable.  Notice that both routes lead to greater “intelligence,” because even a human level of thinking and creativity would be more effective if it were happening a thousand times faster than it does now.

It seems quite possible that the general class of AGI systems can be architected to take better advantage of improved hardware than would be the case with intelligent systems very narrowly imitative of the human brain.  But even if this is not the case, brute hardware speedup can still yield dramatic improvements in intelligence.

Public Perception. The way an intelligence explosion presents itself to human society will depend strongly on the rate of the explosion in the period shortly after the development of the first self-understanding human-level AGI.  For instance, if the first such AGI takes five years to “double” its intelligence, this is a very different matter than if it takes two months.  A five-year time frame could easily arise, for example, if the first AGI required an extremely expensive supercomputer based on unusual hardware, and the owners of this hardware were to move slowly.  On the other hand, a two-month time frame could more easily arise if the initial AGI were created using open source software and commodity hardware, so that a doubling of intelligence only required addition of more hardware and a modest number of software changes.  In the former case, there would be more time for governments, corporations and individuals to adapt to the reality of the intelligence explosion before it reached dramatically transhuman levels of intelligence. In the latter case, the intelligence explosion would strike the human race more suddenly.  But this potentially large difference in human perception of the events would correspond to a fairly minor difference in terms of the underlying processes driving the intelligence explosion.

So now, finally, with all the preliminaries behind us, we will move on to deal with the specific factors on Sandberg’s list, one by one, explaining in simple terms why each is unlikely to be a significant bottleneck.  There is much more that could be said about each of these, but our aim here is to lay out the main points in a compact way.


Objection 1: Economic Growth Rate and Investment Availability

The arrival, or imminent arrival, of human-level, self-understanding AGI systems would clearly have dramatic implications for the world economy. It seems inevitable that these dramatic implications would be sufficient to offset any factors related to the economic growth rate at the time that AGI began to appear.  Assuming the continued existence of technologically advanced nations with operational technology R&D sectors, if self-understanding human-level AGI is created, then it will almost surely receive significant investment.  Japan’s economic growth rate, for example, is at the present time somewhat stagnant, but there can be no doubt that if any kind of powerful AGI were demonstrated, significant Japanese government and corporate funding would be put into its further development.

And even if it were not for the normal economic pressure to exploit the technology, international competitiveness would undoubtedly play a strong role. If a working AGI prototype were to approach the level at which an explosion seemed possible, governments around the world would recognize that this was a critically important technology, and no effort would be spared to produce the first fully-functional AGI “before the other side does.” Entire national economies might well be sublimated to the goal of developing the first superintelligent machine, in the manner of Project Apollo in the 1960s.  Far from influencing the intelligence explosion, economic growth rate would be defined by the various AGI projects taking place around the world.

Furthermore, it seems likely that once a human-level AGI has been achieved, it will have a substantial and immediate practical impact on multiple industries. If an AGI could understand its own design, it could also understand and improve other computer software, and so have a revolutionary impact on the software industry.  Since the majority of financial trading on the US markets is now driven by program trading systems, it is likely that such AGI technology would rapidly become indispensable to the finance industry (typically an early adopter of any software or AI innovations).  Military and espionage establishments would very likely also find a host of practical applications for such technology.  So, following the achievement of self-understanding, human-level AGI, and complementing the allocation of substantial research funding aimed at outpacing the competition in achieving ever-smarter AGI, there is a great likelihood of funding aimed at practical AGI applications, which would indirectly drive core AGI research along.

The details of how this development frenzy would play out are open to debate, but we can at least be sure that the economic growth rate and investment climate in the AGI development period would quickly become irrelevant.

However, there is one interesting question left open by these considerations.  At the time of writing, AGI investment around the world is noticeably weak, compared with other classes of scientific and technological investment.  Is it possible that this situation will continue indefinitely, causing so little progress to be made that no viable prototype systems are built, and no investors ever believe that a real AGI is feasible?

This is hard to gauge, but as AGI researchers ourselves, our (clearly biased) opinion is that a “permanent winter” scenario is too unstable to be believable.  Because of premature claims made by AI researchers in the past, a barrier to investment clearly exists in the minds of today’s investors and funding agencies, but the climate already seems to be changing.  And even if this apparent thaw turns out to be illusory, we still find it hard to believe that there will not eventually be an AGI investment episode comparable to the one that kicked the internet into high gear in the late 1990s.  Furthermore, due to technological advances in allied fields (computer science, programming languages, simulation environments, robotics, computer hardware, neuroscience, cognitive psychology, etc.), the amount of effort required to implement advanced AGI designs is steadily decreasing, so that as time goes on, the investment required to get AGI to the explosion-enabling level will keep shrinking.


Objection 2:  Inherent Slowness of Experiments and Environmental Interaction

This possible limiting factor stems from the fact that any AGI capable of starting the intelligence explosion would need to do some experimentation and interaction with the environment in order to improve itself.  For example, if it wanted to reimplement itself on faster hardware (most probably the quickest route to an intelligence increase) it would have to set up a hardware research laboratory and gather new scientific data by doing experiments, some of which might proceed slowly due to limitations of experimental technology.

The key question here is this: how much of the research can be sped up by throwing large amounts of intelligence at it? This is closely related to the problem of parallelizing a process (which is to say: you cannot make a baby nine times quicker by asking nine women to be pregnant for one month).  Certain algorithmic problems are not easily solved more rapidly simply by adding more processing power, and in much the same way there might be certain crucial physical experiments that cannot be hastened by doing a parallel set of shorter experiments.
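The standard way to quantify this intuition is Amdahl's law: if a fraction of a task is inherently serial (the pregnancy, in the analogy above), then no amount of added parallelism can push the overall speedup past the reciprocal of that serial fraction. The sketch below is a generic illustration of that law, not a model taken from the article; the fractions are arbitrary examples.

```python
# Amdahl's law: overall speedup when a fraction `p` of a task can be
# parallelized across `n` workers and the remaining (1 - p) is inherently
# serial.  Generic illustration; the fractions below are arbitrary examples.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"parallelizable fraction {p:.0%}: "
          f"1,000 workers give {amdahl_speedup(p, 1_000):.1f}x "
          f"(hard ceiling {1.0 / (1.0 - p):.0f}x)")
```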

This is not a factor that we can understand fully ahead of time, because some experiments that look as though they require fundamentally slow physical processes (like waiting for a silicon crystal to grow, so we can study a chip fabrication mechanism) may actually be dependent on the intelligence of the experimenter, in ways that we cannot anticipate.  It could be that instead of waiting for the chips to grow at their own speed, the AGI could do some clever micro-experiments that yield the same information faster.

The increasing amount of work being done on nanoscale engineering would seem to reinforce this point: many processes that are relatively slow today could be done radically faster using nanoscale solutions.  And it is certainly feasible that advanced AGI could accelerate nanotechnology research, thus initiating a “virtuous cycle” in which AGI and nanotech research push each other forward (as foreseen by nanotech pioneer Josh Hall).  Since current physics theory does not even rule out more outlandish possibilities like femtotechnology, it certainly does not suggest that absolute physical limits on experimentation speed exist anywhere near the realm of contemporary science.

Clearly, there is significant uncertainty in regards to this aspect of future AGI development. One observation, however, seems to cut through much of the uncertainty. Of all the ingredients that determine how fast empirical scientific research can be carried out, we know that in today’s world the intelligence and thinking speed of the scientists themselves must be one of the most important.  Anyone involved with science and technology R&D would probably agree that in our present state of technological sophistication, advanced research projects are strongly limited by the availability and cost of intelligent and experienced scientists.

But if research labs around the world have stopped throwing more scientists at problems they want to solve, because the latter are unobtainable or too expensive, would it be likely that those research labs are also, quite independently, at the limit for the physical rate at which experiments can be carried out?  It seems hard to believe that both of these limits would have been reached at the same time, because they do not seem to be independently optimizable.  If the two factors of experiment speed and scientist availability could be independently optimized, this would mean that even in a situation where there was a shortage of scientists, we could still be sure that we had discovered all of the fastest possible experimental techniques, with no room for inventing new, ingenious techniques that get over the physical-experiment-speed limits.  In fact, however, we have every reason to believe that if we were to double the number of scientists on the planet at the moment, some of them would discover new ways to conduct experiments, exceeding some of the current speed limits.  If that were not true, it would mean that we had quite coincidentally reached the limits of science talent and physical speed of data collecting at the same time, a coincidence that we do not find plausible.

This picture of the current situation seems consistent with anecdotal reports:  companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.  It seems generally accepted, in practice, that with the addition of more researchers to an area of inquiry, methods of speeding up and otherwise improving processes can be found.

So based on the actual practice of science and engineering today (as well as known physical theory), it seems most likely that any experiment-speed limits lie further up the road, out of sight.  We have not reached them yet, and we lack any solid basis for speculation about exactly where they might be.

Overall, it seems we do not have concrete reasons to believe that this will be a fundamental limit that stops the intelligence explosion from taking an AGI from 1H (human-level general intelligence) to (say) 1,000H.  Increases in speed within that range (for computer hardware, for example) are already expected, even without large numbers of AGI systems helping out, so it would seem that physical limits, by themselves, would be very unlikely to stop an explosion from 1H to 1,000H.


Objection 3:  Software Complexity

This factor is about the complexity of the software that an AGI must develop in order to explode its intelligence.  The premise behind this supposed bottleneck is that even an AGI with self-knowledge finds it hard to cope with the fabulous complexity of the problem of improving its own software.

This seems implausible as a limiting factor, because the AGI could always leave the software alone and develop faster hardware.  So long as the AGI can find a substrate that gives it a thousand-fold increase in clock speed, we have the possibility for a significant intelligence explosion.

Arguing that software complexity will stop the first self-understanding, human-level AGI from being built is a different matter.  It may stop an intelligence explosion from happening by stopping the precursor events, but we take that to be a different type of question.  As we explained earlier, one premise of the present analysis is that an AGI can actually be built.  It would take more space than is available here to properly address that question.

It furthermore seems likely that, if an AGI system is able to comprehend its own software as well as a human being can, it will be able to improve that software significantly beyond what humans have been able to do.  This is because in many ways, digital computer infrastructure is more suitable to software development than the human brain’s wetware.  And AGI software may be able to interface directly with programming language interpreters, formal verification systems and other programming-related software, in ways that the human brain cannot.  In that way the software complexity issues faced by human programmers would be significantly mitigated for human-level AGI systems.  However, this is not a 100% critical point for our arguments, because even if software complexity remains a severe difficulty for a self-understanding, human-level AGI system, we can always fall back to arguments based on clock speed.


Objection 4:  Hardware Requirements

We have already mentioned that much depends on whether the first AGI requires a large, world-class supercomputer, or whether it can be done on something much smaller.

This is something that could limit the initial speed of the explosion, because one of the critical factors would be the number of copies of the first AGI that can be created.  Why would this be critical?  Because the ability to copy the intelligence of a fully developed, experienced AGI is one of the most significant mechanisms at the core of an intelligence explosion.  We cannot do this copying of adult, skilled humans, so human geniuses have to be rebuilt from scratch every generation.  But if one AGI were to learn to be a world expert in some important field, it could be cloned any number of times to yield an instant community of collaborating experts.

However, if the first AGI had to be implemented on a supercomputer, that would make it hard to replicate the AGI on a huge scale, and the intelligence explosion would be slowed down because the replication rate would play a strong role in determining the intelligence-production rate.

As time went on, though, the rate of replication would grow as hardware costs declined.  This would mean that the rate of arrival of high-grade intelligence would increase in the years following the start of this process.  That intelligence would then be used to improve the design of the AGIs (at the very least, increasing the rate of new-and-faster-hardware production), which would have a positive feedback effect on the intelligence production rate.
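A toy simulation of that dynamic is sketched below. All of the numbers (hardware budget, initial machine cost, rate of cost decline, strength of the feedback) are hypothetical assumptions chosen only to show the shape of the positive feedback, not predictions.

```python
# Toy model, hypothetical numbers only: the number of AGI copies is limited
# by hardware cost, hardware cost falls each year, and accumulated AGI
# research effort feeds back into how quickly the cost falls.

budget = 1e9          # assumed annual hardware spend, dollars
cost_per_agi = 1e8    # assumed initial cost of one AGI-capable machine
base_decline = 0.30   # assumed baseline yearly fall in hardware cost
feedback = 0.001      # assumed extra decline per accumulated AGI-year of R&D
total_agi_years = 0.0

for year in range(1, 11):
    copies = budget / cost_per_agi
    total_agi_years += copies
    decline = min(0.90, base_decline + feedback * total_agi_years)
    cost_per_agi *= (1.0 - decline)
    print(f"year {year:2d}: {copies:12,.0f} copies running, "
          f"next-year cost per copy ${cost_per_agi:,.0f}")
```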

So if there was a supercomputer-hardware requirement for the first AGI, we would see this as something that would only dampen the initial stages of the explosion.  Positive feedback after that would eventually lead to an explosion anyway.

If, on the other hand, the initial hardware requirements turn out to be modest (as they could very well be), the explosion would come out of the gate at full speed.


Objection 5: Bandwidth

In addition to the aforementioned cloning of adult AGIs, which would allow the multiplication of knowledge in ways not currently available in humans, there is also the fact that AGIs could communicate with one another using high-bandwidth channels.  This is inter-AGI bandwidth, and it is one of the two types of bandwidth factors that could affect the intelligence explosion.

Quite apart from the communication speed between AGI systems, there might also be bandwidth limits inside a single AGI, which could make it difficult to augment the intelligence of a single system.  This is intra-AGI bandwidth.

The first one, inter-AGI bandwidth, is unlikely to have a strong impact on an intelligence explosion because there are so many research issues that can be split into separately addressable components.  Bandwidth limits between the AGIs would only become apparent if we started to notice AGIs sitting around with no work to do on the intelligence amplification project, because they had reached an unavoidable stopping point and were waiting for other AGIs to get a free channel to talk to them.  Given the number of different aspects of intelligence and computation that could be improved, this idea seems profoundly unlikely.

Intra-AGI bandwidth is another matter. One example of a situation in which internal bandwidth could be a limiting factor would be if the AGI’s working memory capacity were dependent on the need for total connectivity (everything connected to everything else) in a critical component of the system.  In that case, we might find that we could not boost working memory very much in an AGI because the bandwidth requirements would increase explosively.  This kind of restriction on the design of working memory might have a significant effect on the system’s depth of thought.

However, notice that such factors may not inhibit the initial phase of an explosion, because the clock speed, not the depth of thought, of the AGI may be improvable by several orders of magnitude before bandwidth limits kick in.  The main element of the reasoning behind this is the observation that neural signal speed is so slow.  If a brain-like AGI system (not necessarily a whole brain emulation, but just something that replicated the high-level functionality of the brain) could be built using components that kept the same type of processing demands, and the same signal speed as neurons, then we would be looking at a human-level AGI in which information packets were being exchanged once every millisecond.  In that kind of system there would then be plenty of room to develop faster signal speeds and increase the intelligence of the system.  The processing elements would also have to go faster, if they were not idling, but the point is that the bandwidth would not be the critical problem.
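As a rough back-of-the-envelope check on that headroom (the nanosecond figure below is a generic assumption about commodity GHz-class electronics, not a claim from the article), a brain-like design exchanging packets once per millisecond leaves roughly six orders of magnitude of clock-speed slack:

```python
# Rough headroom estimate: millisecond-scale packet exchange (as in the
# brain-like AGI described above) versus nanosecond-scale digital signaling.
# The 1 ns figure is a generic assumption about GHz-class electronics.

neural_exchange_s = 1e-3    # ~1 ms between information packets
digital_exchange_s = 1e-9   # ~1 ns, roughly one cycle of GHz-class logic

print(f"clock-speed headroom: ~{neural_exchange_s / digital_exchange_s:,.0f}x")
```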


Objection 6:  Lightspeed Lags

Here we need to consider the limits imposed by special relativity on the speed of information transmission in the physical universe.  However, its implications in the context of AGI are not much different than those of bandwidth limits.

Lightspeed lags could be a significant problem if the components of the machine were physically so far apart that massive amounts of data (by assumption) were delivered with a significant delay.  But they seem unlikely to be a problem in the initial few orders of magnitude of the explosion.  Again, this argument derives from what we know about the brain.  We know that the brain’s hardware was chosen due to biochemical constraints.  We are carbon-based, not silicon-and-copper-based, so there are no electronic chips in the head, only pipes filled with fluid and slow molecular gates in the walls of the pipes.  But if nature was forced to use the pipes-and-ion-channels approach, that leaves us with plenty of scope for speeding things up using silicon and copper (and this is quite apart from all the other more exotic computing substrates that are now on the horizon).  If we were simply to make a transition from membrane depolarization waves to silicon and copper, and if this produced a 1,000x speedup (a conservative estimate, given the intrinsic difference between the two forms of signaling), this would be an explosion worthy of the name.
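A worked example makes the scale of the light-speed issue clear; the device size and conduction speed below are illustrative assumptions (roughly brain-sized hardware and fast myelinated axons), not figures from the article.

```python
# Compare signal delays across a ~10 cm device (roughly the linear scale of
# a human brain).  Illustrative numbers only.

distance_m = 0.1        # assumed device size, metres
c = 3.0e8               # speed of light, m/s
neural_v = 100.0        # fast myelinated axon conduction speed, m/s (approx.)

light_delay_s = distance_m / c          # ~0.33 ns
neural_delay_s = distance_m / neural_v  # ~1 ms

print(f"light-speed lag across the device: {light_delay_s * 1e9:.2f} ns")
print(f"neural conduction over the same span: {neural_delay_s * 1e3:.1f} ms")
print(f"ratio: ~{neural_delay_s / light_delay_s:,.0f}x")
```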

The main circumstance under which this reasoning would break down would be if, for some reason, the brain is limited on two fronts simultaneously: both by the carbon implementation and by the fact that other implementations of the same basic design are limited by disruptive light-speed delays.  This would mean that all non-carbon implementations of the brain take us up close to the lightspeed limit before we get much of a speedup over the brain.  This would require a coincidence of limiting factors (two limiting factors just happening to kick in at exactly the same level) that we find quite implausible, because it would imply a rather bizarre situation in which evolution tried both the biological neuron design and a silicon implementation of the same design, and, after doing a side-by-side comparison of performance, chose the one that pushed the efficiency of all the information transmission mechanisms up to their end stops.


Objection 7: Human-Level Intelligence May Require Quantum (or more exotic) Computing

Finally we consider an objection not on Sandberg’s list, but raised from time to time in the popular and even scientific literature.  The working assumption of the vast majority of the contemporary AGI field is that human-level intelligence can eventually be implemented on digital computers, but the laws of physics as currently understood imply that simulating certain physical systems without dramatic slowdown requires special physical systems called “quantum computers” rather than ordinary digital computers.

There is currently no evidence that the human brain is a system of this nature.  Of course the brain has quantum mechanics at its underpinnings, but there is no evidence that it displays quantum coherence at the levels directly relevant to human intelligent behavior.  In fact our current understanding of physics implies that this is unlikely, since quantum coherence has not yet been observed in any similarly large and “wet” system.  Furthermore, even if the human brain were shown to rely to some extent on quantum computing, this wouldn’t imply that quantum computing is necessary for human-level intelligence — there are often many different ways to solve the same algorithmic problem.  And (the killer counterargument), even if quantum computing were necessary for human-level general intelligence, that would merely delay the intelligence explosion a little, while suitable quantum computing hardware was developed.  Already the development of such hardware is the subject of intensive R&D.

Roger Penrose, Stuart Hameroff and a few others have argued that human intelligence may even rely on some form of “quantum gravity computing”, going beyond what ordinary quantum computing is capable of.  But this is really a complete blue-sky speculation with no foundation in current science, so not worth discussing in detail in this context.  The simpler versions of this claim may be treated according to the same arguments as we’ve presented above regarding quantum computing.  The strongest versions of the claim include an argument that human-level intelligence relies on extremely powerful mathematical notions of “hyper-Turing computation” exceeding the scope of current (or maybe any possible) physics theories; but here we verge on mysticism, since it’s arguable that no set of scientific data could ever validate or refute such an hypothesis.


The Path from AGI to Intelligence Explosion Seems Clear

Summing up, then — the conclusion of our relatively detailed analysis of Sandberg’s objections is that there is currently no good reason to believe that once a human-level AGI capable of understanding its own design is achieved, an intelligence explosion will fail to ensue.

The operative definition of “intelligence explosion” that we have assumed here involves an increase of the speed of thought (and perhaps also the “depth of thought”) of about two or three orders of magnitude.  If someone were to insist that a real intelligence explosion had to involve million-fold or trillion-fold increases in intelligence, we think that no amount of analysis, at this stage, could yield sensible conclusions.  But since an AGI with intelligence = 1,000H might well cause the next thousand years of new science and technology to arrive in one year (assuming that the speed of physical experimentation did not become a significant factor within that range), it would be churlish, we suggest, not to call that an “explosion”.  An intelligence explosion of such magnitude would bring us into a domain that our current science, technology and conceptual framework are not equipped to deal with; so prediction beyond this stage is best done once the intelligence explosion has already progressed significantly.

Of course, even if the above analysis is correct, there is a great deal we do not understand about the intelligence explosion, and many of these particulars will remain opaque until we know precisely what sort of AGI system will launch the explosion.  But our view is that the likelihood of transition from a self-understanding human-level AGI to an intelligence explosion should not presently be a subject of serious doubt.  And we also feel that the creation of a self-understanding human-level AGI is a high-probability outcome, though this is a more commonplace assertion and we have not sought to repeat the arguments in its favor here.

Of course, if our analysis is correct, there are all sorts of dramatic implications for science, society and humanity (and beyond) — but many of these have been discussed elsewhere, and reviewing this body of thought is not our purpose here.  These implications are worth deeply considering — but the first thing is to very clearly understand that the intelligence explosion is very probably coming, just as I.J. Good foresaw.


Richard Loosemore is a professor in the Department of Mathematical and Physical Sciences at Wells College, Aurora, NY, USA. He graduated from University College London, and his background includes work in physics, artificial intelligence, cognitive science, software engineering, philosophy, parapsychology and archaeology.


COMMENTS


Re: “does it make sense to talk about the period leading up to that arrival—the period during which that first real AGI was being developed and trained—as part of the intelligence explosion proper? We would argue that this is not appropriate…”

Of course it’s appropriate.  An explosion is an explosion.  Call a spade a spade.





“Of course the brain has quantum mechanics at its underpinnings, but there is no evidence that it displays quantum coherence at the levels directly relevant to human intelligent behavior.  In fact our current understanding of physics implies that this is unlikely, since quantum coherence has not yet been observed in any similarly large and “wet” system.”

When you say “quantum coherence has not yet been observed in any similarly large and “wet” system” are you somehow excluding evidence of biologically useful coherence in photosynthesis (potentially acting as a more efficient “search engine” for energy paths)?

“Coherently wired light-harvesting in photosynthetic marine algae at ambient temperature.” By Elisabetta Collini, Cathy Y. Wong, Krystyna E. Wilk, Paul M. G. Curmi, Paul Brumer & Gregory D. Scholes. Nature, Vol. 463 No. 7281, Feb. 4, 2010.

Further, given that we are just starting to look for these effects in nature, that they are found in very primitive systems, and that they also seem to be operative in olfaction (technically just electron tunnelling) and magnetoreception, it seems unwise to place overly negative bets on whether quantum mechanics plays a non-trivial role in how brains work.

A final point: there is nothing like consensus amongst scientists on how the brain and consciousness are related (and hence human-like intelligence).  Chalmers’ hard problem still stands; we are no closer to solving it.





As far as quantum processing in the brain goes, I think it is important to remember a few things.

1. Cells have now been found to use quantum processes (photosynthesis and entanglement in the eyes of birds.)

2. Quantum computers seem to be coming around a lot faster than people thought possible (RezQu and others).

3. Single-cell organisms at times seem to be remarkably intelligent.  For example, they can hunt for prey or avoid predators.

4. Neurons have multiple cell-signaling pathways other than just synaptic ones.

All in all, the processing power of the brain remains unclear.  It could be that the processing power will be available relatively soon, or each neuron could turn out to be an incredibly powerful quantum processor which is in fact doing massive amounts of processing and is basically only sending and receiving the solutions to its computations.





@Tim.  Regarding the time when the “explosion” actually starts.  What we were trying to do in the comment you quote was make a distinction between the manufacture of the technology needed to effect the explosion, and the actual moment when the fuse is lit.  I am really not sure that it makes sense to speak of all the AGI development work as an explosion, because nothing happens until the moment when the intelligence is ready.  Does that not make sense?





@Karl.  About the quantum coherence issue.  This is definitely a big area (plenty of room for a very lengthy digression here), so let’s try to narrow things down to the aspects that bear on AGI feasibility.  When we talk about quantum coherence not being observed at the scale of an intelligence, what we are trying to imply is that there appears to be no evidence that very large scale coherence is a limiting factor, without which intelligence cannot occur.  “Very large scale coherence” really means more than the molecular processes that are found in photosynthesis, etc.—where, as far as we understand it, any quantum coherence effects are just acting to facilitate or enhance chemical reactions.  Although these may turn out to be important effects for chemical reactions, we believe that intelligence mechanisms are a result of informational systems at a much, much higher level.

Now, the only way I can see quantum coherence as being important (as a limiting factor) would be if it turned out that the brain was effectively doing massive amounts of quantum computation, in order to examine quasi-infinite regions of various search spaces in a way that would simply not be feasible if it tried to do the same thing with ordinary neural signaling.  I am happy to grant this as a logical possibility.  I see three major problems with this idea.

1)  There seems to be no evidence that this is really happening.  The existence of quantum coherence in photosynthesis is a far cry from concluding that it really is happening (and is vital) in the brain.

2)  The quantum effects really would have to be distributed across a large physical volume, because if they were confined to the tiny events happening at the molecular level, I find it hard to see how the necessary amounts of useful information could get into and out of those nano-level processes.  In the time needed to execute a single thought, it is difficult to get more than a few hundred bits into and out of a single neuron, and if the quantum event were happening in a molecule inside the neuron, the information from that event would somehow have to get through that bottleneck.  If the coherence were, on the other hand, to embrace thousands or millions of neurons, we are positing relatively large scale coherence of a sort that (as we say in the article) we find difficult to credit at the moment.

3)  We also believe that there is no pressing need for quantum coherence, to explain the mechanisms of cognition.  We seem to be approaching the level at which our systems show meaningful amounts of intelligence, without even a glimmer of a need for quantum coherence effects.

Lastly, your point about consciousness and the hard problem needs to be addressed.  In fact, I wrote a paper for the 2009 AGI conference that confronted this issue head-on (see http://richardloosemore.com/docs/2009a_ConsciousnessTheory_rpwl.pdf), and my conclusion was that the hard problem can actually be solved.  (In fact I believe that it *has* been solved, in that paper… but that is something that history will have to judge).  So, yes, the hard problem is important, but I do not think it will impact our ability to produce fully intelligent, conscious machines.





@Bob.  Most of the reply I would give to you is contained in the last comment, addressed to Karl.  I would add, though, that I do not think your four points add up to a pressing need for, or evidence of, quantum computing.

1)  Cells may use quantum processes, but that fact would not push us to say that, for example, some other complex technology cannot be achieved without quantum processes (try “Building a viable air-traffic control system cannot be done without using quantum coherence”; this strains credulity, I think).  If it does not push us to say that other complex technologies must use QC, why would we be especially compelled to say that AGI needs it?

2)  QC machines may well be coming along.  But again, the essay was about possible limits that might prevent the construction of an AGI (with subsequent explosion), so that raises the question of why QC would be both necessary for AGI and potentially a limiting factor on the explosion.  I don’t see the logical connection there.

3)  Single-cell intelligence does not, I believe, seem to require mechanisms beyond the normal chemical processes that we already understand.  Is there any concrete evidence that those processes are not enough to explain the behavior?  As far as I know, the answer to that question is “no” at the present time.

4)  Neurons may indeed have multiple signaling pathways, but so far the alternatives do not seem to show evidence that they rely on quantum coherence of a sort that would then be hard to achieve, or hard to duplicate, in an AGI.

I know we said in the paper that we would try to keep our own research out of the argument, but I guess I have to admit that at the moment I see an understanding of human cognition slowly emerging without any need for vast amounts of computation.





Nothing happens?  Nothing happens?!?  Something is definitely happening - e.g. see the Flynn Effect and the Global Brain.





The Flynn effect and the global brain are both relatively marginal, and there is no indication that they can continue.

These are just minor optimizations of the ability of human brains to (a) do certain problem-solving and pattern recognition tasks, and (b) get access to information and certain information tools.  They simply do not add up to much, compared with the changes we are talking about in an intelligence explosion, and there are good reasons to believe that these increases simply cannot be sustained.  The Flynn effect is already thought to be fading out in developed countries—exactly what you would expect if it is nothing more than access to better education that has brought more people up to the level that only elites could previously get access to.

For all sorts of reasons, these kinds of changes do not even begin to compare with what happens when an AGI can be built in such a way that its clock speed has the potential for a 1000x speedup over the human thought speed.  Is the Flynn effect likely to continue to the point where people start their schooling at the age of 4, and then one day later they finish their Ph.D.?  I somehow doubt it.  Also, the replicability of AGI systems means that one expert can, if needed, be duplicated a thousand or a million times over.  Impossible with humans.

And, as for the “global brain” idea:  just an empty metaphor.  Having Google and other information tools is good for technology—no argument from me on that one—but calling the sum total of those extra tools a “brain” is, in my opinion, a meaningless extrapolation.  It does not think, and it does not do anything that is more than just the sum of what the brains out of which it is made are doing.

No explosion right now, just improvement (in some quarters) as usual.





@ Richard..

Do you not see the Global Brain as phenomenological? At least, that is, until the supercomputers come online? I am implying that global interconnection and interfacing in realtime with the human collective may be a most efficient means for an AGI to emerge? An alternate option, and counter to building and boxing an AGI complete, (a vast endeavour for any organisation), may be to integrate open source intelligent algorithms via the web and use supercomputer processing to interconnect and combine and use them. In fact, such an enterprise would maximise international cooperation, interest, responsibility, innovation and standardisation. It also takes the long view towards continued innovation and relieves much financial strain on smaller corps, companies and individuals who may have much to contribute to the project as viewed as a “whole”?





@CygnusX1.  It is hard to know where to begin with this global brain idea.

If, today, we do not understand the structure and mechanisms that make up a mind, why should anyone believe that a huge collection of people and computers—none of whom understands that structure and those mechanisms—will somehow make the structure and mechanisms appear, as if by magic?

There seems to be an assumption that if only we had enough really dumb systems trying to do something a little bit smart, an AGI would somehow emerge from that mess.  But this is just voodoo, no?  grin  I see no reason to believe it.

From everything I know about how minds work, there is absolutely no reason to think that a bunch of humans doing some kind of crowdsourcing would constitute a mind.  It is hard for me to say much more against the idea, because no substantive suggestions have (as far as I know) ever been put forward to justify why something intelligent would emerge.  It would be just as reasonable to say that every large corporation tends to behave like an intelligent mind, with the intelligence being proportional to the size of the corporation.  If that were true, you should be able to give a corporation an IQ test.  Do you suppose that any corporation on the planet could take an IQ test?  That it could pass?  That it could do any better than the best of its employees?  WHY would you think that it could take such a test?

The questions are endless.

I think “global brain” is just a poetic phrase devoid of substance.





@ Richard Loosemore

I read the paper and was disappointed that you didn’t answer the numerous criticisms specifically.  You just put forth the conjecture that a careful analysis would show such criticisms to be false because humans’ internal analysis mechanisms have a fault which renders them incapable of explaining qualia.  It seems like this is a circular argument.  If I understand you, qualia are qualia because the analysis mechanism says they are unanalyzable, correct?  If this is true, then if you could experience something without analysis, would the experience have qualia?  Advanced meditators would likely argue that such an experience would still contain qualia.  These meditators can hold one concept (an atom, in your lexicon) in mind without wavering, excluding all other conscious thought (and hence analysis).  Even so, in this state there is still “something it is like” to be the meditator.

Another problem about qualia your paper didn’t answer for me was why qualia are different from each other.  Smells are different from colors even though they both are turned into electrical and chemical signals within the brain.  I think that you explain qualia as un-analyzable atoms (which explains why we can’t clearly describe them).  This makes them the most reduced level in your model and prevents an infinite regress of explanation.  Generally in science, reduction serves to decrease the number of ontologically distinct categories (there are many elements, but they are all made of protons, neutrons, and electrons; there are more words than letters).  Because any given quale can be recognized as distinct from most other qualia, they can’t all be placed into exactly the same ontological category (they would at least all need subcategories; each flavor, each color, each sound quale would need its own separate subcategory).  Because each quale is distinct, your model creates an almost infinite explosion of these putatively basic elements.  Your model replaces infinite regress with infinite non-interchangeable “atoms”.  That suggests to me that something is askew with the model.


Another concern is whether qualia can be considered un-analyzable/irreducible in the first place.  Sensory experiences are encoded through different combinations of neural activity.  In your model the qualia are also encoded in this population of neural activity.  Given that brains can be trained to excite and inhibit individual neurons (efferent and afferent, in both the CNS and PNS), it seems suspicious to say that qualia are irreducible, because you could just start subtracting or adding neurons to a given experience.

My criticism is that you assume (incorrectly) that all subjective experience is a result of processes within the individual’s body.  Forty years of research into anomalous correlations between cognition and the environment has shifted any competent and informed observer’s Bayesian priors such that your assumption isn’t a foregone conclusion.  While this doesn’t address qualia directly, it does complicate the problem by expanding the realm of causation beyond the immediate processing system.  Because most researchers in both neuroscience and AI are ignorant of (and have strong cognitive biases against) this body of research, it is unlikely that they will be able to recapitulate it in artificial systems.





@Karl.  First a technical point:  I was limited in the space available in that consciousness paper, so it was impossible to do a detailed analysis of all the counterarguments.  That will come in a future version.

Unfortunately, I think that your interpretation of the main thesis (as captured above) is going too fast through the fine details—and much of the real import is in those fine details.

So, let me address as many of your points as I can:

1)  I did not really “put forth the conjecture that a careful analysis would show such criticisms to be false because humans’ internal analysis mechanisms have a fault which renders them incapable of explaining qualia”.  That is a paraphrase that begs quite a lot of questions!  I claimed that criticisms of the idea, if presented explicitly, would each turn out to hinge on a USE of the analysis mechanism.  This means that any criticism of the theory can be examined, and when examined it will be found to eventually fall back on an APPEAL to the analysis mechanism; and if this is true, it would mean that those criticisms were being circular themselves, in that they simply appeal to the seeming impossibility of saying anything about the target qualia.  But since I am making this prediction (that all such critiques will fall back on that circularity), and since I have given an account of WHY there should be some concepts for which the analysis mechanism fails, I am making a substantive, rather than a circular, claim.  I am saying “The analysis mechanism must clearly break in these places”, and “All counterarguments of the type ‘this is an external explanation that does not actually address the direct question of why such a breakage of the analysis mechanism should be experienced the way it is’ are actually a usage of the very same breakage by the philosopher who makes that counterargument (and I can show it, given any details of such a counterargument)”.

That is very different from ordinary circularity, because I have given an outside reason for why the loop should occur.  If I had simply declared the explanation by fiat (i.e. “If you find my explanation unconvincing, it is because your comprehension of my explanation is not sufficient, and if your comprehension of my explanation is not sufficient, I would expect you to say that you find my explanation unconvincing”), that would be one thing.  But I do not:  I posit an inherent, EXPECTED point of failure, and I say that those counterarguments all end up appealing to a use of the broken mechanism.  If someone can find a counterargument that transparently does not appeal to the mechanism, that would suffice to undermine my argument.

In point of fact, all the counterarguments that I know about are very simple.  They consist of just that phrase and nothing deeper:  “Your explanation addresses the external, objective, observable features of the qualia experience (i.e. the breakage in the analysis mechanism), but it does not say why this should make the experience feel the way it does”.  This is clearly an appeal to the broken mechanism:  the objector is saying that when she internalizes the concepts involved in my explanation, she comes up with a concept of what qualia are, which when compared with her internal concept of any given quale, yields a result “not the same”.  But that non-correspondence is what we would expect.  That is what MUST happen, because the concept analysis mechanism is breaking my proposed concepts down, then trying to break the quale concept down, and finding that there is no way to set up a correspondence of the sort that normally occurs when one concept “explains” another.

In the end, then, this type of objection always ends up (when stripped of any superficial details) appealing to the lack of explanatory adequacy.

2)  Advanced meditators might claim that they were able to clear their mind in such a way that no analysis was taking place.  Understood.  But their experience during meditation is not the issue:  it is their retrospective analysis of that state that is the issue.  They look back and say “I remember having an ineffable experience during meditation, but I was not analyzing it at the time”.  Well, they may not have been then, but they are *now* analyzing something that is, in fact, a memory of an experience.  On the face of it, they are not presenting clear evidence that they are not simply analyzing the memory.

This is, I grant you, a very subtle area.  We do not, in fact, analyze experiences in the moment we are having them.  ALL philosophical analyses that purport to show that certain phenomenological things are ineffable are postmortems on previous experiences!  This is counterintuitive, but it does mean that all the inexplicability is in the retrospect, not in the moment.  Same for meditators and ordinary folks alike.

I think I had better stop here, or this comment will grow into an enormous paper just by itself.  I hope that at least I have said enough to show that I have thought about some of the issues you raise, and that there are answers (at least answers that convince me!).  A better venue for really getting to grips with this would be a book or a very long paper.  Both of those are in the works…..

I have to say thanks for bringing this up.  If you email me directly perhaps we can keep in touch about this from time to time.

Oh, and one quick PS.  Your last paragraph seems to be talking about anomalous information transmission...?  If so, be mindful of the fact that I did research in that area early in my career, and I am quite comfortable with it.  I see no incompatibility between that and AGI, and indeed I fully expect it to occur just as frequently there as in the human case.  My present opinion is that it does not impact the qualia argument at all, but that opinion is open to revision.





There *is* an explosion going on right now - and the article even says as much:

“It is arguable that the “intelligence explosion” as we consider it here is merely a subset of a much larger intelligence explosion that has been happening for a long time. You could redefine terms so as to say, for example, that Phase 1 of the intelligence explosion occurred before the evolution of humans.”

I think the idea that the intelligence explosion lies in the future is a very misleading one.  It divorces the concept from its historical precedents, making it more difficult to understand in the process.  Also, it misrepresents the current era, making it harder to understand the changes taking place in our own times.

The Flynn effect is only one component of the explosion, which also includes increases in collective intelligence via networked computers and humans, Moore’s law, progress in algorithms, and so on.  That broader framing is, in fact, very appropriate.  What is inappropriate is reserving the term “intelligence explosion” for the machine-intelligence component of the existing explosion.  That is not Good.





@ Richard..

I respect your position, but let me take this a little further and stick my neck out some ways.

First off, we are talking at cross purposes somewhat – yours, as per your article, concerns the emergence of intelligence and intelligent systems, whereas mine was responding to the hypothesis of the Global mind as a realisation (and, I believe, a reality) through phenomenological consciousness.

I say specifically phenomenological and subjective consciousness (as opposed to my further belief that “Consciousness” is a fundamental phenomenon, and an agent and arbiter at the quantum level). There is no contradiction here, because phenomenological consciousness is still reliant upon, and layered upon, phenomenal “Consciousness” interactions from the quantum level up? (Note, this is my philosophical position and not scientific fact.)

We know that the human brain is composed of neurons forming an efficient neural network with much redundancy built in (which also serves an important purpose – namely flexibility and contingency). If we extrapolate this model and topology onto the global information network via comms and the web, then we can envisage the emergence of a Global brain: a network topology connected to the human collective, with information exchange at the speed of electron flow (gateways, servers and hardware merely acting as limitations or bottlenecks, yet still efficient enough not to impede this information exchange).

Size matters..

If we extrapolate this idea of the human collective, with individual minds as neurons in a complex interconnected topology, then the Global mind becomes clearly visible? Because the individual minds of the human collective do not have access to the entire global interconnection of human minds, their information, responsibilities and actions are limited and restricted (just like individual neurons in the brain). And not merely limited – as I am connected to the collective, my subjective consciousness, thoughts and actions are reliant upon the Global mind itself. Social networks are a prime example of my submission to become receptive: my free will to think and act becomes restricted to my own subjectivity whilst connected to the Global mind.

Yet I am also acting as a semi-autonomous part of a feedback system to the Global mind, depending upon my actions and my own creativity to access and seek information, and to contribute direct feedback, information exchange and ideas. The holistic view is that the Global collective mind ebbs and flows with multiple directives and is able to directly affect the individual minds connected to it (the neurons), yet is also subject to feedback and causality from those individual minds. And this crowdsourced feedback can indeed be put to great use by a supercomputer? Trending of world opinions does in fact contribute to and affect global phenomenological consciousness, news and media, economics and business, and so on.

Can we not see that this once again models how the individual human brain and mind, with its reflexive consciousness, is both the arbiter of limited decision making and a subjective consciousness whose free will is affected by the feedback of its own neurons?

If a supercomputer or CEV is connected to the human Global collective mind, then it may be able to extrapolate large numbers of variables and responses as output from feedback input, and therefore contribute to and affect the subjective consciousness of the Global collective mind.

Now how can we transform this efficient feedback system and mutual information exchange into a model for an intelligent system? I think all the clues are here, as it proves we do not need to worry about either phenomenological consciousness or complex topologies and interconnected intelligent neural feedback mechanisms (human minds).

We just need to worry about defining “intelligence” by way of what we want from a system. And I don’t see this as a major problem given the future potential speed of computation, hardware and information processing. The CEV must surely be a reality?

I think we are somewhat in alignment that consciousness (both phenomenological and as a natural phenomenon) is not a “hard problem” to overcome, because it is a given? Therefore we need not worry about any supercomputing intelligent system becoming aware, as it will, by default, be connected to an already complex, self-aware and self-reflexive Global collective (of human minds)? Further simulation, assimilation and processing of how this Global collective mind reacts should ultimately result in a fair and efficient simulation of an autonomous, self-reflexive learning system?





It feels nice to dream about a world where an intelligence explosion is happening.  But I am unable to digest many of the possibilities here.

>>>“There is currently no good reason to believe that once a human-level AGI capable of understanding its own design is achieved, an intelligence explosion will fail to ensue.”

    I do not have any disagreements here.  This will definitely be possible in the near future.  However, by the time we want to create such an AGI, we might have solved most of our basic problems (except science).

———————————————
>>>1) “A level of general intelligence 2-3 orders of magnitude greater than the human level (say, 100H or 1,000H, using 1H to denote human-level general intelligence).”
    2) “A thousand years of new science and technology could arrive in one year.”
    3) “In a 1,000H world, AGI scientists could go from high-school knowledge of physics to the invention of relativity in a single day (assuming, for the moment, that the factor of 1,000 was all in the speed of thought—an assumption we will examine in more detail later).”
    4) “Notice that both routes lead to greater “intelligence,” because even a human level of thinking and creativity would be more effective if it were happening a thousand times faster than it does now.”
      5) “The operative definition of “intelligence explosion” that we have assumed here involves an increase of the speed of thought (and perhaps also the “depth of thought”) of about two or three orders of magnitude.”
———————————————

    The above five statements are not comprehensible to me.  We can compare a digital computer with another digital computer, but we cannot compare one neural network with another neural network.  We think subjectively to solve objective problems.  We solve problems with ideas.  An idea from one scientist inspires a thousand others.  A scientist’s network with other scientists is very important.  And a scientist’s path of thinking narrows as he ages.  All of these would apply to neural networks.  We could probably create more neural networks to work on science, but that is the best we could do.  I do not think a 10H makes any sense in this world, let alone a 1,000H.  Of course, I am assuming that an AGI cannot go far from a neural network design.  We only know of digital computing, quantum computing and neural computing, and I am assuming that there is no other way of computing in this world.  So getting a thousand years of science in a year is probably never possible.  I do not think a thousand-times-faster computation helps in a neural-network world; in fact it would result in chaos, and the system might not be stable.  I think our strength is in our depth of thought and not in our speed.  In a post-scarcity world, the only thing left for humans might be to do science.  And it is probably no different to leave it to an AGI or to do it ourselves.





@Richard—

Many thanks to you and Ben Goertzel for a fine and thought-provoking article.

At the end of your piece you discuss briefly whether we can, from our current seats, evaluate the impacts of an intelligence explosion.  You wrote:  “An intelligence explosion of such magnitude would bring us into a domain that our current science, technology and conceptual framework are not equipped to deal with; so prediction beyond this stage is best done once the intelligence explosion has already progressed significantly.”

I found this opting-out of practical considerations a bit disappointing.  Without worrying too much about precisely where we are at present on the explosion timeline, perhaps we can simply agree that we are already on the cusp—in the appraisal stage—of what you call Phase 3. Given that we are, why not begin to sketch out, to the best of our abilities,  the feasibility and risks involved in conjunction with the technical requirements?  I have questions regarding this (for you and for anyone in the IEET community who wishes to weigh in):

1. What efforts can be made to conceive and compile reliability measures for future AGI development? Examples might be: regression testing (to prevent inadvertent degradation of existing knowledge); mean time between failures studies (to prevent an error explosion); or, disaster recovery/version control (in the event that a sequence of machine-based changes multiplies malignant effects).  Or is it assumed that these functions will, as a matter of course, be delegated to the AGI itself?

2. You state in your article that “many of these particulars will remain opaque until we know precisely what sort of AGI system will launch the explosion.”  That is understandable, yet somewhat ambiguous.  Will AGI-builders prioritize between: a) moving forward on building AGI as rapidly as possible, and b) working to dispel the opacity that prevents us from seeing the cascading effects of AGI, possibly by means of some sort of modeling and scenario-testing *before* building? 

3. Do you think the formation of an international organization, somewhat like the IAEA only dedicated to the building of a benign AGI, would be a realistic opportunity to prevent an “AGI race”?

I know it would be foolish to conjure up facile answers at this juncture.  But if an intelligence explosion is probable then I believe the risks, however remote, need to be front and center in the discussion.  Remote inference is, after all, one of the hallmarks of human intelligence!  It strikes me that Professor Good’s concept, while valid on its face, is a bit too rosy and pat to stand completely unchallenged.





Thanks for the detailed and challenging questions.

As to the paragraph quoted above, our comment there was more about the long term consequences of the explosion:  I think with the best will in the world there are difficulties seeing clearly into any future, and what we were really saying was that things only get harder after such a dramatic change.

Having said that, I do have my own thoughts about some of the things that would happen after the explosion.  That would be more than I can squeeze into this comment so I will save it for another day.

The first of your three questions, about reliability measures, is difficult to answer in detail because of the different kinds of AGI under consideration now, and the enormous complexity of getting even one of them to work.  So, for example, it would be no mean feat to get MTBF numbers on such a thing:  what would count as a failure?  Should it be a global failure (machine goes crazy) or a mistake (machine forgets to take out the garbage) or a micro-error (some subroutine throws an exception)?  Having said that, though, my own approach to AGI involves a system design in which every part of the knowledge is always checking itself for consistency against all the rest—indeed, this is not “checking” because this constraint mechanism is at the heart of the normal operation of the system.  In that case the AGI would be continually enforcing a kind of stability on its structure.

And beyond that constraint mechanism, my own plan is to develop the systems in such a way that vast amounts of testing are done throughout the R&D process ... again, not just because I care about safety, but because the particular approach I use has to do with empirical discovery of mechanisms, so I have to build large numbers of different mechanisms and assess their performance by measurement.
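Purely to illustrate the flavor of the regression-testing idea raised in question 1 above (guarding against inadvertent degradation of existing knowledge), here is a minimal Python sketch.  The KnowledgeBase class, the probes, and the garbage-day fact are invented for illustration and do not correspond to any actual AGI design discussed here:

```python
# Toy sketch of a regression-style harness over a knowledge store.
# Everything here is hypothetical: a stand-in for the idea of re-running
# previously-passed probes after every change and flagging degradation.

from typing import Callable, Dict, List, Tuple

class KnowledgeBase:
    """A trivial stand-in for an AGI's knowledge: a set of named propositions."""
    def __init__(self) -> None:
        self.facts: Dict[str, bool] = {}

    def assert_fact(self, name: str, value: bool) -> None:
        self.facts[name] = value

    def query(self, name: str) -> bool:
        return self.facts.get(name, False)

# A "probe" is a named check that should keep passing after every change.
Probe = Tuple[str, Callable[[KnowledgeBase], bool]]

def run_regression(kb: KnowledgeBase, probes: List[Probe]) -> List[str]:
    """Return the names of probes that now fail, i.e. degraded knowledge."""
    return [name for name, check in probes if not check(kb)]

if __name__ == "__main__":
    kb = KnowledgeBase()
    kb.assert_fact("garbage_day_is_tuesday", True)

    probes: List[Probe] = [
        ("remembers garbage day", lambda k: k.query("garbage_day_is_tuesday")),
    ]

    # Simulate a later update that inadvertently clobbers earlier knowledge.
    kb.assert_fact("garbage_day_is_tuesday", False)

    failures = run_regression(kb, probes)
    print("Degraded knowledge:", failures or "none")
```

This only captures the external, test-harness side of the question; the internal constraint mechanism described above, in which the knowledge continually checks itself for consistency, is a different and much harder design problem.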

Later, after the explosion, these efforts will be delegated to the AGI itself.  That would only happen once we had got to the point where it was clear the AGI understood the issue, and the consequences of neglecting it.  The AGI would not even be commissioned unless it could be seen to understand that point.

So, I do have many ideas for how to address your question, even though I can only scratch the surface here.

This does to some extent address your second question, too.  Yes, there will be lots of model building and scenario testing in my own lab.  Will all AGI researchers try to do that?  At the moment I do not know what their attitude is.

Should an international organization be set up to monitor the situation?  Myself, I would have no problem with that.  But since the IAEA has such a hard time doing anything to stop anyone from building nuclear weapons, I would be a little gloomy about the chance of an IAGIA having more power.  Can’t see China volunteering to put its research in shackles, I am afraid.

Overall, I myself am extremely concerned about all the issues you mention.  Yes, they are certainly front and center.  Many AGI researchers agree.  I think the main obstruction to examining this question at the moment might be the enormous incompatibility between different approaches to AGI.  People find it hard to answer concrete questions about AGI safety when the very shape of the AGI is not yet clear (it feels a little like trying to invent airbags, rules of the road and seat belts when our most advanced form of transport is a horse-drawn cart, and we cannot even conceive of trains, cars, planes and spacecraft).

In particular, the long-term safety is hard to model.  We can try to understand the behavior of an AGI and try to ensure that it has human-empathic motivations, but I think that may be the best we can do.  Personally, I think that will be more than enough to ensure long-term safety (for reasons that are too complicated to go into), but part of the reason I feel that way is my own approach to AGI motivation, rather than confidence in the broader sphere of AGI.





I think we should have an agency, but first we need to convince people to take the issue seriously. Most people, and especially policy-makers (and others of their generation), really have no idea what’s (potentially) coming down the road. When they do get a glimpse they find it too weird, scary or far-fetched to think further about.
(At least they’re joking about it, though. By the way, this is an excellent reductio ad absurdum of negative utilitarianism:
http://www.youtube.com/watch?v=WGoi1MSGu64 )
We urgently need to communicate with these people in a way that they can relate to, and which doesn’t put them off.




