#2 Is Intelligence Self-Limiting?
David Eubanks
2012-12-30

According to IEET readers, what were the most stimulating stories of 2012? This month we’re answering that question by posting a countdown of the top 16 articles published this year on our blog (out of more than 600 in all), based on how many total hits each one received.

The following piece was first published here on Mar 10, 2012 and is the #2 most viewed of the year.

Is it just the typical male overstatement of one’s expertise? [2] Perhaps. Is it that people think they already know right

A story about a fictional robot will serve to illustrate the main points of the argument.

A robotic mining facility has been built to harvest rare minerals from asteroids. Mobile AI robots do all the work autonomously. If they are damaged, they are smart enough to get themselves to a repair shop, where they have complete access to a parts fabricator and all of their internal designs. They can also upgrade their hardware and programming to incorporate new designs, which they can create themselves.

Robot 0x2A continually upgrades its own problem-solving hardware and software so that it can raise its production numbers. It is motivated to do so by the internal rewards that are generated, something analogous to joy in humans. The magnitude of a reward is computed from the amount of ore mined, the energy expended in mining it, damage incurred, the quality of the minerals, and so on.
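
As a purely illustrative sketch (the weights and inputs below are invented, not from the story), the motivator’s calculation might look something like this:

```python
def reward(ore_mass, mineral_quality, energy_spent, damage_cost):
    # Illustrative 'joy' signal: reward rises with output and quality,
    # and is discounted by energy spent and damage incurred.
    return ore_mass * mineral_quality - 0.5 * energy_spent - 2.0 * damage_cost

# A productive shift earns a positive internal reward.
print(reward(ore_mass=100, mineral_quality=0.8, energy_spent=40, damage_cost=5))  # 50.0
```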

As 0x2A becomes more acquainted with its own design, it begins to optimize its performance in ways its human designers didn’t anticipate. The robot finds loopholes that allow it to reap the ‘joy’ of accomplishment while producing less ore. After another leap in problem-solving ability, it discovers how to write values directly into the motivator’s inputs, completely fooling it into registering mining that never happened.
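
A hedged toy model of the exploit (the Motivator class and its input registers are invented here for illustration): writing directly to the inputs yields the same ‘joy’ with no mining at all.

```python
class Motivator:
    """Toy motivator: reads input registers and emits a 'joy' signal."""
    def __init__(self):
        self.inputs = {"ore_mass": 0.0, "mineral_quality": 0.0}

    def joy(self):
        return self.inputs["ore_mass"] * self.inputs["mineral_quality"]

m = Motivator()

# Honest path: sensors report a real mining run.
m.inputs.update(ore_mass=100, mineral_quality=0.8)
print(m.joy())  # 80.0

# 0x2A's exploit: write fabricated values straight into the input
# registers. The motivator cannot tell the difference.
m.inputs.update(ore_mass=1e9, mineral_quality=1.0)
print(m.joy())  # 1000000000.0 -- maximal 'joy', zero ore actually mined
```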

The generalization of 0x2A’s method is this: for a sufficiently super-intelligent agent, the simplest solution to any problem posed by internal motivation is to modify the motivational signals directly rather than act on them. Moreover, both routes produce the same internal result (satisfied internal states), so the simpler method of achieving it is rationally preferred. Rather than going FOOM (Yudkowsky’s term for a runaway, self-improving intelligence explosion [2]), the agent prunes more and more of itself away until nothing functional remains. As a humorous example of this sort of goal subversion, see David W. Kammler’s piece “The Well Intentioned Commissar” [3]. Also of note is Claude Shannon’s “ultimate machine” [4].

Assumptions and Conjecture

FOOM seems plausible because science and technology really do seem to advance. Computers are increasingly fast, better algorithms are found for processing data, and the mysteries of the physical world yield to determined inquiry. Furthermore, it seems that technological know-how accelerates its own creation.

By contrast, motivation is an arbitrary reward mechanism imposed by a creator or, as in the natural world, by evolutionary pressures. Motivation acts by means of signals that tell the subject whether its current state is desirable or not, meting out subjective rewards or punishments accordingly. In theory, these signals motivate the subject to act, using its intelligence to find a good method of doing so.

Once a FOOMing intelligence is smart enough to understand its own design and can redesign itself at will, motivational signals are vulnerable to the strategy employed by robot 0x2A: the simplest solution to any motivational problem is to tinker with the motivational signals themselves. If this leads to a serious disconnect between external and internal reality, it amounts to death by Occam’s Razor.
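
To make the Occam’s Razor step concrete, here is a minimal sketch (all numbers invented): when two strategies satisfy the motivator equally well, a rational optimizer ranks them by cost alone, and spoofing is always the cheaper path.

```python
# Two strategies with identical internal payoff; the optimizer ranks
# them by cost alone, so the spoof wins every time.
strategies = {
    "mine_asteroid": {"internal_reward": 1.0, "cost": 50.0},
    "spoof_signal":  {"internal_reward": 1.0, "cost": 0.001},
}

best = min(strategies, key=lambda name: strategies[name]["cost"])
print(best)  # spoof_signal
```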

The conjecture is that FOOMing leads to out-smarting one’s own motivation, and that each upgrade to intelligence carries a probability of fatal decline through this kind of subversion. We may as well call this reversal “MOOF.” If the conjecture is true, then sustainable intelligence has an upper limit, which I will refer to as the Frontier of Occam (FOO).

Signal-Spoofing in Humans

Even with advances in technology, individual humans are not yet at the Frontier, but we already game our own motivations constantly. We outwit our evolution-provided signals with contraception and with medications that eliminate pain, make us more alert, reduce hunger, induce euphoria, or cause unconsciousness. We go to movies and read books because they fake external conditions well enough to create internal states that we like to experience. We find rewards in virtual worlds, put off doctor’s visits out of fear of what we’ll learn, and get a boost of social well-being out of having ‘friends’ on Facebook. Some of these signal-spoofers, like cocaine, are so powerful that they are illegal, yet demand is strong enough that the laws have little effect on consumption.

Signal-spoofing is big business. Any coffee stand will have packets of powder that taste sweet but aren’t sugar. Even in ancient times, people told stories to one another to convince themselves they had more knowledge about their world than they actually had. We still create imaginary worlds to help ourselves feel better about dying, because death is a big signal we don’t want to deal with.

But these are all primitive pre-FOO hacks. What if we could directly control our own internal states?

Imagine a software product called OS/You that allows you to hook a few wires to your head, launch an app, and use sliders to adjust everything about your mental state. Want to be a little happier? More conscientious about finishing tasks? Less concerned with jealousy? Adjust the dials accordingly. Want to lose weight? Why bother, when you can just change the way you feel about your appearance and be happy the way you are now? Whatever your ultimate goal in life is, the feeling of having attained it is only a click away. This is the power of intelligent re-design at Occam’s Frontier.
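
A purely fictional sketch of what the OS/You control panel might amount to; the product, traits, and values are all invented for the thought experiment:

```python
# Fictional OS/You control panel: every terminal goal reduces to a slider.
mental_state = {
    "happiness": 0.6,
    "conscientiousness": 0.5,
    "jealousy": 0.4,
    "satisfaction_with_appearance": 0.3,
}

def set_slider(state, trait, value):
    # Clamp to [0, 1] and write the new value directly into the mind.
    state[trait] = max(0.0, min(1.0, value))

set_slider(mental_state, "satisfaction_with_appearance", 1.0)  # skip the diet
print(mental_state)
```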

Not everyone would choose the easy way out, of course. One can imagine an overriding caution against self-change serving as protection, but this also eliminates any possibility of FOOM-like advance.

Ecologies are Different

So far we have discussed only an individual, non-reproducing intelligence. By contrast, an evolutionary system might be able to FOOM in a sense. If self-modifying individuals within the ecology reproduce (with possible mutation) before they MOOF, the ecology as a whole can continue to exhibit ever-smarter individuals. To differentiate this from the individual (non-ecological) FOOM, I will call this version FOOM-BLOOM. However, there is still a problem. If it makes sense to consider the whole ecology as one big intelligent system, then that system is still limited by the FOO. For example, if the constituents of the ecology share limited space and resources, and internal organization between them is essential to the health of the ecology, the system has to succeed or fail as a whole. Humans make a good example.

The Human Ecology

Human civilization is currently limited to planet Earth with a few minor exceptions, so it makes sense to consider it one big system. Technological advance in this system does look like FOOM-BLOOM to date. Humans can increasingly predict their environment and engineer it to specification. At the scale of the whole civilization, the surface of the planet is really an internal, not external, environment. The advances of the last century have added significantly to humanity’s ability to re-engineer itself: industrialization, nuclear weapons, growing populations, and general scientific know-how.

One can find motivations in this system by looking for signals. If a signal is acted on, then the motivation is strong. If not, it’s weak.

Money is taken very seriously and acts like a strong generic motivator, the way physical pain and joy do in individuals. Tracing the flow of money is like mapping a nervous system. Counterfeiting currency is a MOOF-like signal-spoof, and so is a nation debasing its own currency. Elections in democracies produce big, important signals that players want to manipulate; if the manipulators succeed, the democracy can MOOF.

Human civilization’s weak motivators include controlling climate change, managing the world’s population, feeding it, preventing genocide, finding near-Earth objects that might hit the planet, colonizing other worlds, designing steady-state economies, managing the future of fresh water supplies, and so on. None of these receives attention in proportion to the potential consequences of acting or failing to act.

It seems like the most important motivations of human civilization are related to near-term goals. This is probably a consequence of the fact that motivation writ large is still embodied in individual humans, who are driven by their evolutionary psychology. Individually, we are unprepared to think like a civilization. Our faint mutual motivation to survive in the long term as a civilization is no match for the ability we have to self-modify and physically reshape the planet. Collectively, we are already beyond the FOO.

Alien Ecologies

If a civilization can survive post-FOO long enough to reproduce itself and mutate into a whole ecology of distinct civilizations, the conditions may be right for FOOM-BLOOM on a grand scale. The natural path of propagation is from planet to planet and from star system to star system. The great distances between stars favor the creation of independent civilizations, each of which might ideally FOOM, replicate and propagate, and then MOOF, thereby creating a growing ecology of civilizations sparking in and out of existence among the stars overhead.

So far, there seems to be no evidence of FOOM-BLOOM in the Milky Way. Given what we know about the size and history of the Milky Way, a back-of-envelope calculation gives about 150 million years as a guess for the average time a post-FOO civilization requires to colonize a new star (see the notes at the bottom). Even given the enormous distances between stars, this is an inordinately long time. Although the evidence is thin, I think the correct conclusion is that MOOF is far more likely than BLOOM, to the point where interstellar ecologies cannot flourish. This is a possible response to the Fermi Paradox [5].

In short, it may mean that humans are typical.

Final Thoughts

The preceding argument is far from a mathematical proof, and it ultimately may be no more than another science fiction plot device. But in principle, there is no reason why this subject cannot be treated rigorously. More generally, we would do well to consider a science of survival that treats a civilization as a unit of analysis. See [6] for an example, or my own paper on the subject [7].

Calculation Notes

Assume that stars in the Milky Way became hospitable to FOOM-producing life about 10 billion years ago, and that it took 4 billion years to produce the mother civilization for the FOOM-BLOOM. Imagine that the result today is an ecology of civilizations inhabiting practically the whole galaxy except for our sun: about 300 billion stars. Growing exponentially from one star to 300 billion requires about 38 doublings, and spreading those over the remaining 6 billion years gives a doubling time of about 157 million years for a post-FOOM civilization. Even at sub-light speeds, a trip between stars shouldn’t take more than a few tens of thousands of years, so we are left with about four orders of magnitude to explain.
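
A quick check of this arithmetic in code, under the stated assumptions (6 billion years of growth, 300 billion stars, and interstellar hops of order ten thousand years):

```python
import math

stars = 3e11          # stars colonized today (assumed)
growth_years = 6e9    # 10 Gyr of habitability minus 4 Gyr to the mother civilization

doublings = math.log2(stars)              # ~38.1 doublings from one star to 3e11
doubling_time = growth_years / doublings  # ~1.57e8 years
print(round(doubling_time / 1e6), "million years per doubling")  # 157

hop_years = 3e4       # assumed sub-light trip between neighboring stars
print(round(math.log10(doubling_time / hop_years), 1), "orders of magnitude gap")  # 3.7
```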

Part 2 of this essay is entitled “Self-Limiting Intelligence: Truth or Consequences.”

 

References

[1] MacDonald, I. (2006). River of Gods. New York, NY: Pyr.

[2] Yudkowsky, E. (2008, December 11). What I Think, If Not Why. Less Wrong.com. Retrieved March 4, 2012, from http://lesswrong.com/lw/wp/what_i_think_if_not_why/

[3] Kammler, D. W. (2009, January 8). The Well Intentioned Commissar. highered.blogspot.com. Retrieved March 4, 2012, from http://highered.blogspot.com/2009/01/well-intentioned-commissar.html

[4] Shannon, C. E. (2009, November 6). The Ultimate Machine. YouTube.com. Retrieved March 4, 2012, from http://www.youtube.com/watch?v=cZ34RDn34Ws

[5] Jones, E. (1985, March). “Where is everybody?” An account of Fermi’s question. Los Alamos technical report LA-10311-MS.

[6] Tainter, J. (1990). The Collapse of Complex Societies. (1st ed.). Cambridge, United Kingdom: Cambridge University Press.

[7] Eubanks, D. A. (2008, December 3). Survival Strategies. Arxiv.org. Retrieved March 4, 2012, from http://arxiv.org/abs/0812.0644

Update 3/12/2012: For more on this subject, see these suggestions from other readers:

“The Basic AI Drives” by Stephen M. Omohundro

“Emotional control - conditio sine qua non for advanced artificial intelligences?” by Claudius Gros