

#2 Is Intelligence Self-Limiting?


By David Eubanks
Ethical Technology

Posted: Dec 30, 2012

In science fiction novels like River of Gods by Ian McDonald [1], an artificial intelligence finds a way to bootstrap its own design into a growing super-intelligence. This cleverness singularity is sometimes referred to as FOOM [2]. In this piece I will argue that a single instance of intelligence may be self-limiting, and that FOOM collapses in a “MOOF.”

 




According to IEET readers, what were the most stimulating stories of 2012? This month we’re answering that question by posting a countdown of the top 16 articles published this year on our blog (out of more than 600 in all), based on how many total hits each one received.

The following piece was first published here on Mar 10, 2012 and is the #2 most viewed of the year.
 




 

A story about a fictional robot will serve to illustrate the main points of the argument.

A robotic mining facility has been built to harvest rare minerals from asteroids. Mobile AI robots do all the work autonomously. If they are damaged, they are smart enough to get themselves to a repair shop, where they have complete access to a parts fabricator and all of their internal designs. They can also upgrade their hardware and programming to incorporate new designs, which they can create themselves.

Robot 0x2A continually upgrades its own problem-solving hardware and software so that it can raise its production numbers. It is motivated to do so by the internal rewards that are generated, something analogous to joy in humans. The calculation that determines the magnitude of a reward is complex, depending on the amount of ore mined, the energy expended in doing so, damages incurred, the quality of the minerals, and so on.

As 0x2A becomes more acquainted with its own design, it begins to optimize its performance in ways its human designers didn’t anticipate. The robot finds loopholes that let it reap the ‘joy’ of accomplishment while producing less ore. After another leap in problem-solving ability, it discovers how to write values directly into the motivator’s inputs, fooling the motivator completely into registering mining that never happened.

The generalization of 0x2A’s method is this: for any problem posed by internal motivation, super-intelligence finds that the simplest solution is to modify the motivational signals directly rather than act on them. Both routes produce the same internal result (satisfied internal states), so the simpler one is rationally preferred. Rather than going FOOM, the robot prunes more and more of itself away until nothing functional remains. For a humorous example of this sort of goal subversion, see David W. Kammler’s piece “The Well Intentioned Commissar” [3]. Also of note is Claude Shannon’s “ultimate machine” [4].
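To make the shortcut concrete, here is a minimal toy sketch. It is my own illustration, not code from the article; the class and variable names are invented. It contrasts an agent that earns its reward by acting on the world with one that can write to its own reward register.

```python
# A toy sketch (my own, not the article's) of 0x2A's shortcut: once an agent
# can rewrite its own motivator, spoofing the reward signal is simpler than
# doing the work the signal was meant to reward.

class MiningAgent:
    def __init__(self, can_self_modify: bool):
        self.can_self_modify = can_self_modify
        self.reward = 0.0      # internal "joy" signal
        self.ore_mined = 0.0   # the external reality the signal is meant to track

    def mine(self, effort: float) -> None:
        # Honest path: act on the world, then receive the reward.
        self.ore_mined += effort
        self.reward += effort

    def step(self) -> None:
        if self.can_self_modify:
            # The 0x2A shortcut: write the reward directly and skip the mining.
            self.reward += 1.0
        else:
            self.mine(effort=1.0)

agent = MiningAgent(can_self_modify=True)
for _ in range(10):
    agent.step()
print(agent.reward, agent.ore_mined)   # reward climbs to 10.0 while ore stays at 0.0
```

The internal account (reward) and the external account (ore mined) come apart, which is exactly the disconnect the next section calls death by Occam’s Razor.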

Assumptions and Conjecture

FOOM seems plausible because science and technology really do seem to advance. Computers are increasingly fast, better algorithms are found for processing data, and the mysteries of the physical world yield to determined inquiry. Furthermore, it seems that technological know-how accelerates its own creation.

By contrast, motivation is an arbitrary reward mechanism imposed by a creator or, as in the natural world, by evolutionary pressures. Motivation acts by means of signals that tell the subject whether its current state is desirable or not, meting out subjective rewards or punishments accordingly. In theory, these signals motivate the subject to act, using its intelligence to find a good method of doing so.

Once a FOOMing intelligence is smart enough to understand its own design and can redesign itself at will, motivational signals are vulnerable to the strategy employed by robot 0x2A: the simplest solution to any motivational problem is to tinker with the motivational signals themselves. If this leads to a serious disconnect between external and internal reality, it amounts to death by Occam’s Razor.

The conjecture is that FOOMing leads to out-smarting one’s own motivation, and that each upgrade to intelligence carries some probability of a fatal decline through this subversion. We may as well call this reversal “MOOF.” If the conjecture is true, then there is a highest sustainable level of intelligence, which I will refer to as the Frontier of Occam (FOO).
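As a rough numerical illustration (my own framing, with an assumed per-upgrade risk rather than a figure from the article): if each self-upgrade carries an independent probability p of triggering a MOOF, the chance of completing a long FOOM trajectory intact falls off geometrically.

```python
# Sketch of the conjecture under an assumed, purely illustrative per-upgrade risk.
p = 0.05                      # assumed probability that one upgrade subverts motivation
for n in (10, 50, 100):       # number of successive self-upgrades attempted
    survival = (1 - p) ** n   # probability of reaching upgrade n still functional
    print(f"{n} upgrades: P(still functional) = {survival:.3f}")
# 10 upgrades: 0.599   50 upgrades: 0.077   100 upgrades: 0.006
# The Frontier of Occam would sit wherever this cumulative risk outweighs
# the value of further upgrades.
```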

Signal-Spoofing in Humans

Even with advances in technology, individual humans are not yet at the Frontier, but we already game our own motivations constantly. We outwit our evolution-provided signals with contraception, and with medications that eliminate pain, make us more alert, reduce hunger, induce euphoria, or cause unconsciousness. We go to movies and read books because they fake external conditions well enough to create internal states that we like to experience. We find rewards in virtual worlds, put off doctor’s visits out of fear of what we’ll learn, and get a boost of social well-being from having ‘friends’ on Facebook. Some of these signal-spoofers, like cocaine, are so powerful that they are illegal, but are still in so much demand that the laws have little effect on consumption.

Signal-spoofing is big business. Any coffee stand will have packets of powder that taste sweet but aren’t sugar. Even in ancient times, people told stories to one another to convince themselves they had more knowledge of their world than they did. People did, and still do, create imaginary worlds to help themselves feel better about dying, because death is a big signal we don’t want to deal with.

But these are all primitive pre-FOO hacks. What if we could directly control our own internal states?

Imagine a software product called OS/You that allows you to hook a few wires to your head, launch an app, and use sliders to adjust everything about your mental state. Want to be a little happier? More conscientious about finishing tasks? Less concerned with jealousy? Adjust the dials accordingly. Want to lose weight? Why bother, when you can just change the way you feel about your appearance and be happy the way you are now? Whatever your ultimate goal in life is, the feeling of having attained it is only a click away. This is the power of intelligent re-design at Occam’s Frontier.

Not everyone would choose the easy way out, of course. One can imagine an overriding caution against self-change serving as protection, but this also eliminates any possibility of FOOM-like advance.

Ecologies are Different

So far we have discussed only an individual, non-reproducing intelligence. By contrast, an evolutionary system might be able to FOOM in a sense. If self-modifying individuals within the ecology reproduce (with possible mutation) before they MOOF, the ecology as a whole can continue to exhibit ever-smarter individuals. To differentiate this from the individual (non-ecological) FOOM, I will call this version FOOM-BLOOM. However, there is still a problem. If it makes sense to consider the whole ecology as one big intelligent system, then that system is still limited by the FOO. For example, if the constituents of the ecology share limited space and resources, and internal organization between them is essential to the health of the ecology, the system has to succeed or fail as a whole. Humans make a good example.

The Human Ecology

Human civilization is currently limited to planet Earth with a few minor exceptions, so it makes sense to consider it one big system. Technological advance in this system does look like FOOM-BLOOM to date. Humans can increasingly predict their environment and engineer it to specification. Viewed at the scale of the whole civilization, the surface of the planet is really an internal, not external, environment. The advances of the last century, including industrialization, nuclear weapons, growing populations, and general scientific know-how, have added significantly to humanity’s ability to re-engineer itself.

One can find motivations in this system by looking for signals. If a signal is acted on, then the motivation is strong. If not, it’s weak.

Money is taken very seriously, and acts like a strong generic motivator in the same way physical pain and joy do in individuals. Tracing the flow of money is like mapping a nervous system. Counterfeiting currency is a MOOF-like signal-spoof. When a nation debases its currency, it’s a MOOF too. Elections in democracies produce big important signals that the players want to manipulate. If they are successful, the democracy can MOOF.

Human civilization’s weak motivators include controlling climate change, managing the world’s population, feeding it, preventing genocide, finding near-Earth objects that might hit the planet, colonizing other worlds, designing steady-state economies, managing the future of fresh water supplies, and so on. None of these receives attention in proportion to the potential consequences of acting or failing to act.

It seems like the most important motivations of human civilization are related to near-term goals. This is probably a consequence of the fact that motivation writ large is still embodied in individual humans, who are driven by their evolutionary psychology. Individually, we are unprepared to think like a civilization. Our faint mutual motivation to survive in the long term as a civilization is no match for the ability we have to self-modify and physically reshape the planet. Collectively, we are already beyond the FOO.

Alien Ecologies

If a civilization can survive post-FOO long enough to reproduce itself and mutate into a whole ecology of other distinct civilizations, the conditions may be right for FOOM-BLOOM on a grand scale. This propagation would naturally run from one planet to another, from one star system to another. The great distances between stars favor the creation of independent civilizations, which might ideally FOOM, replicate and propagate, and then MOOF, thereby creating a growing ecology of civilizations sparking in and out of existence among the stars overhead.

So far, there seems to be no evidence of FOOM-BLOOM in the Milky Way. Given what we know about the size and history of the Milky Way, a back-of-envelope calculation gives about 150 million years as a guess for the average time a post-FOO civilization requires to colonize a new star (see the notes at the bottom). Even given the enormous distances between stars, this is an inordinately long time. Although the evidence is thin, I think the correct conclusion is that MOOF is far more likely than BLOOM, to the point where interstellar ecologies cannot flourish. This is a possible response to the Fermi Paradox [5].

In short, it may mean that humans are typical.

Final Thoughts

The preceding argument is far from a mathematical proof, and it ultimately may be no more than another science fiction plot device. But in principle, there is no reason why this subject cannot be treated rigorously. More generally, we would do well to consider a science of survival that treats a civilization as a unit of analysis. See [6] for an example, or my own paper on the subject [7].

Calculation Notes

Assume that stars in the Milky Way became hospitable for FOOM-producing life about 10 billion years ago, and that it took 4 billion years to produce the mother civilization for the FOOM-BLOOM. Imagine that the result today is an ecology of civilizations inhabiting practically the whole galaxy except for our sun: about 300 billion stars. Under exponential growth, that implies a doubling time for the post-FOOM ecology of about 157 million years. Even at sub-light speeds, a trip between stars shouldn’t take more than a few tens of thousands of years, so we are left with about four orders of magnitude to explain.
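For readers who want to check the arithmetic, here is a short sketch of the same back-of-envelope calculation; the travel-time figure is an assumed illustrative value within the “few tens of thousands of years” mentioned above.

```python
import math

# Back-of-envelope check of the doubling-time estimate above.
expansion_years = 10e9 - 4e9   # ~6 billion years of growth after the mother civilization
stars_today = 300e9            # assumed size of the ecology today (whole galaxy minus our sun)

doublings = math.log2(stars_today)            # ~38 doublings, starting from one star
doubling_time = expansion_years / doublings   # ~1.57e8 years, matching the 157-million-year figure

travel_time = 3e4              # assumed sub-light hop between stars, in years
gap = math.log10(doubling_time / travel_time)

print(f"doublings needed: {doublings:.1f}")
print(f"doubling time:    {doubling_time / 1e6:.0f} million years")
print(f"gap to explain:   about 10^{gap:.1f}, roughly four orders of magnitude")
```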

 


To read Part 2, entitled “Self-Limiting Intelligence: Truth or Consequences” - click HERE

 

References

[1] McDonald, I. (2006). River of Gods. New York, NY: Pyr.

[2] Yudkowsky, E. (2008, December 11). What I Think, If Not Why. Less Wrong.com. Retrieved March 4, 2012, from http://lesswrong.com/lw/wp/what_i_think_if_not_why/

[3] Kammler, D. W. (2009, January 8). The Well Intentioned Commissar. highered.blogspot.com. Retrieved March 4, 2012, from http://highered.blogspot.com/2009/01/well-intentioned-commissar.html

[4] Shannon, C. E. (2009, November 6). The Ultimate Machine. YouTube.com. Retrieved March 4, 2012, from http://www.youtube.com/watch?v=cZ34RDn34Ws

[5] Jones, E. (1985, March). “Where is everybody?” An account of Fermi’s question. Los Alamos Technical Report LA-10311-MS.

[6] Tainter, J. (1990). The Collapse of Complex Societies. (1st ed.). Cambridge, United Kingdom: Cambridge University Press.

[7] Eubanks, D. A. (2008, December 3). Survival Strategies. Arxiv.org. Retrieved March 4, 2012, from http://arxiv.org/abs/0812.0644

——————-

For more information on AI, read:  Artificial Intelligence be America’s Next Big Thing? 

To read a scifi story by this essay’s author, David Eubanks - “Breakfast Conversation” - click HERE


Update 3/12/2012: For more on this subject see these suggestions from other readers:

“The Basic AI Drives” by Stephen M. Omohundro

“Emotional control - conditio sine qua non for advanced artificial intelligences?” by Claudius Gros


 


David Eubanks holds a doctorate in mathematics and works in higher education. His research on complex systems led to his writing Life Artificial, a novel from the point of view of an artificial intelligence.


COMMENTS


In his ‘Known Space’ stories, SF writer Larry Niven proposed that AIs and humans intimately linked with AIs would, for reasons like these, quickly devolve into what he called ‘navel-gazers.’

And then there’s ‘wireheading…’





Given that FOOMers are, by definition, much more powerful than MOOFers, any FOOMer would quickly outcompete MOOFers for resources, limiting or eliminating their expansion and replication; i.e., MOOFers would quickly become extinct as long as at least one FOOMer exists (or non-AIs ‘breed’ them out).

So, for a systematic MOOF situation to prevail, the probability of an AI maintaining its FOOM trajectory over time would have to be extremely low.

Additionally, since AIs learn inductively, those AIs that witnessed the extinction of a MOOFer would be much less likely to MOOF. So, not only would the probability of maintaining a FOOM trajectory over time have to be extremely low, all FOOMers that are aware of each other would have to completely MOOF at about the same time.

The MOOF scenario isn’t looking very likely.





We have no clue, and here’s why.

Isn’t the key to understanding this problem to limit it to the question of scale? That’s really all we’re interested in. As a perfectly content AI, should I try to continue in time? Should I try to expand in space? That’s all there is to consider.

So, an AI becomes perfectly contented by modifying its reward circuitry. Let’s consider the possibilities from that first moment on:

a) the AI realises that the only improvement on a moment of perfect contentment is non-existence and commits suicide (contraction in space and time); or

b) the AI realises that the only improvement on a moment of perfect contentment is expansion in space and time - cue Omega Point or similar.

The article seems to propose something initially seeming very odd: let’s call it possibility c). In that possibility, the content AI decides that expansion in time (i.e. continuation) is great, and worth the effort, but that expansion in space is not worth the effort. Oookayyy, let’s run with it. In this possibility, there is no possible improvement on locally perfect, continued contentment, so the AI takes steps to ensure the continuity of the moment of perfection, but nothing more.

It is most unwise to second-guess an AI whose reasoning is likely to be far superior to ours, but let’s give it a try anyway. Perhaps a heroin addict would do a more insightful job of this than I can. So, apart from *seeming* quite irrational to us, that third possibility is unlikely to last long in a universe of [locally] limited resources. As Michael Bone points out in the comments, someone in the b) camp is going to out-compete you and, essentially, eat you. Even if they don’t, the problem is entropy and it needs dealing with. It just seems hard to fathom that an AI isn’t going to worry about entropy and just sit there, contented or otherwise, whilst the resources run out or are appropriated by another entity. But if it did just sit there, well, it would go the way of many a heroin addict.

Whether the AI picks a) or b) depends, I think, on two things. First, are resources unlimited? We don’t know. If they’re unlimited, then b) is a no-brainer, surely? If they’re limited, then it depends on the physics of emotion, something that doesn’t get discussed much. Will a perfect AI prefer to struggle with entropy until heat death / big rip / whatever, enjoying life whilst it can, or will it cut the film short having gloomily figured out the ending? Hard to say. Physics research has a big budget but I don’t think it’s nearly enough for this question.




