IEET > Rights > HealthLongevity > CognitiveLiberty > Vision > Contributors > David Eubanks > Futurism > Innovation
Is Intelligence Self-Limiting?
David Eubanks   Mar 10, 2012   Ethical Technology  

In science fiction novels like River of Gods by Ian McDonald [1], an artificial intelligence finds a way to boot-strap its own design into a growing super-intelligence. This cleverness singularity is sometimes referred to as FOOM [2]. In this piece I will give an argument that a single instance of intelligence may be self-limiting and that FOOM collapses in a “MOOF.”

A story about a fictional robot will serve to illustrate the main points of the argument.

A robotic mining facility has been built to harvest rare minerals from asteroids. Mobile AI robots do all the work autonomously. If they are damaged, they are smart enough to get themselves to a repair shop, where they have complete access to a parts fabricator and all of their internal designs. They can also upgrade their hardware and programming to incorporate new designs, which they can create themselves.

Robot 0x2A continually upgrades its own problem-solving hardware and software so that it can raise its production numbers. It is motivated to do so because of the internal rewards that are generated—something analogous to joy in humans. The calculation that leads to the magnitude of a reward is a complex one related to the amount of ore mined, the energy expended in doing so, damages incurred, quality of the minerals, and so on.

As 0x2A becomes more acquainted with its own design, it begins to find ways to optimize its performance in ways its human designers didn’t anticipate. The robot finds loopholes that allow it to reap the ‘joy’ of accomplishment while producing less ore. After another leap in problem-solving ability, it discovers how to write values directly into the motivator’s inputs in such a way that completely fools it into thinking actual mining had happened when it has not.

The generalization of 0x2A’s method is this: for any problem presented by internal motivation, super-intelligence allows that the simplest solution is to directly modify motivational signals rather than acting on them. Moreover, these have the same internal result (satisfied internal states), so rationally the simpler method of achieving this is to be preferred. Rather than going FOOM, it prunes more and more of itself away until there is nothing functional remaining. As a humorous example of this sort of goal subversion, see David W. Kammler’s piece “The Well Intentioned Commissar” [3]. Also of note is Claude Shannon’s “ultimate machine” [4].

Assumptions and Conjecture

FOOM seems plausible because science and technology really do seem to advance. Computers are increasingly fast, better algorithms are found for processing data, and the mysteries of the physical world yield to determined inquiry. Furthermore, it seems that technological know-how accelerates its own creation.

By contrast, motivation is an arbitrary reward mechanism imposed by a creator or, as in the natural world, by evolutionary pressures. Motivation acts by means of signals that tell the subject whether its current state is desirable or not, meting out subjective rewards or punishments accordingly. In theory, these signals motivate the subject to act, using its intelligence to find a good method of doing so.

Once a FOOMing intelligence is smart enough to understand its own design and can redesign itself at will, motivational signals are vulnerable to the strategy employed by robot 0x2A: the simplest solution to any motivational problem is to tinker with the motivational signals themselves. If this leads to a serious disconnect between external and internal reality, it amounts to death by Occam’s Razor.

The conjecture is that FOOMing leads to out-smarting one’s own motivation, and that each upgrade to intelligence comes with a probability of a fatal decline because of this subversion. We may as well call this reversal “MOOF.” If the conjecture is true, then the highest level of sustainable intelligence lies below that limit, which I will refer to as the Frontier of Occam (FOO).

Signal-Spoofing in Humans

Even with advances in technology, individual humans are not yet at the Frontier, but we already game our own motivations constantly. We outwit our evolution-provided signals with contraception, and medications that eliminate pain or make us more alert or reduce hunger or induce euphoria or cause unconsciousness. We go to movies and read books because they fake external conditions well enough to create internal states that we like to experience. We find rewards in virtual worlds, put off doctor’s visits out of fear of what we’ll learn, and get a boost of social well-being out of having ‘friends’ on Facebook. Some of these signal-spoofers—like Cocaine—are so powerful that they are illegal, but are still in so much demand that the laws have little effect on consumption.

Signal-spoofing is big business. Any coffee stand will have packets of powder that taste sweet but aren’t sugar. Even in ancient times, people told stories to one another in order to convince themselves they had more knowledge about their world than they did. They did and still do create imaginary worlds to help us feel better about dying, because that’s a big signal we don’t want to deal with.

But these are all primitive pre-FOO hacks. What if we could directly control our own internal states?

Imagine a software product called OS/You that allows you to hook a few wires to your head, launch an app, and use sliders to adjust everything about your mental state. Want to be a little happier? More conscientious about finishing tasks? Less concerned with jealousy? Adjust the dials accordingly. Want to lose weight? Why bother, when you can just change the way you feel about your appearance and be happy the way you are now? Whatever your ultimate goal in life is, the feeling of having attained it is only a click away. This is the power of intelligent re-design at Occam’s Frontier.

Not everyone would choose the easy way out, of course. One can imagine an overriding caution against self-change serving as protection, but this also eliminates any possibility of FOOM-like advance.

Ecologies are Different

So far we have discussed only an individual, non-reproducing intelligence. By contrast, an evolutionary system might be able to FOOM in a sense. If self-modifying individuals within the ecology reproduce (with possible mutation) before they MOOF, the ecology as a whole can continue to exhibit ever-smarter individuals. To differentiate this from the individual (non-ecological) FOOM, I will call this version FOOM-BLOOM. However, there is still a problem. If it makes sense to consider the whole ecology as one big intelligent system, then that system is still limited by the FOO. For example, if the constituents of the ecology share limited space and resources, and internal organization between them is essential to the health of the ecology, the system has to succeed or fail as a whole. Humans make a good example.

The Human Ecology

Human civilization is currently limited to planet Earth with a few minor exceptions, so it makes sense to consider it one big system. Technology advance in this system does look like FOOM-BLOOM to date. Humans can increasingly predict their environment and engineer it to specification. In the sense of the whole civilization, the surface of the planet is really an internal, not external, environment. The advances of the last century have added significantly to humanity’s ability to re-engineer itself with industrialization, nuclear weapons, growing populations, and general scientific know-how.

One can find motivations in this system by looking for signals. If a signal is acted on, then the motivation is strong. If not, it’s weak.

Money is taken very seriously, and acts like a strong generic motivator in the same way physical pain and joy do in individuals. Tracing the flow of money is like mapping a nervous system. Counterfeiting currency is a MOOF-like signal-spoof. When a nation debases its currency, it’s a MOOF too. Elections in democracies produce big important signals that the players want to manipulate. If they are successful, the democracy can MOOF.

Human civilization’s weak motivators include controlling climate change, managing the world’s population, feeding it, preventing genocide, finding near-Earth objects that might hit the planet, colonizing other worlds, designing steady-state economies, managing the future of fresh water supplies, and so on. None of these produce attention in proportion to the potential consequences of acting or not acting.

It seems like the most important motivations of human civilization are related to near-term goals. This is probably a consequence of the fact that motivation writ large is still embodied in individual humans, who are driven by their evolutionary psychology. Individually, we are unprepared to think like a civilization. Our faint mutual motivation to survive in the long term as a civilization is no match for the ability we have to self-modify and physically reshape the planet. Collectively, we are already beyond the FOO.

Alien Ecologies

If a civilization can survive post-FOO long enough to reproduce itself and mutate into a whole ecology of other distinct civilizations, the conditions may be right for FOOM-BLOOM on a grand scale. This propagation is naturally from one planet to another, from one star system to another. The great distances between stars favor the creation of independent civilizations where they might ideally FOOM, replicate and propagate, and then MOOF. And thereby create a growing ecology of civilizations sparking in and out of existence among the stars overhead.

So far, there seems to be no evidence of FOOM-BLOOM in the Milky Way. Given what we know about the size and history of the Milky Way, a back-of-envelope calculation gives about 150 million years as a guess for the average time a post-FOO civilization requires to colonize a new star (see the notes at the bottom). Even given the enormous distances between stars, this is an inordinately long time. Although the evidence is thin, I think the correct conclusion is that MOOF is far more likely than BLOOM, to the point where interstellar ecologies cannot flourish. This is a possible response to the Fermi Paradox [5].

In short, it may mean that humans are typical.

Final Thoughts

The preceding argument is far from a mathematical proof, and it ultimately may be no more than another science fiction plot device. But in principle, there is no reason why this subject cannot be treated rigorously. More generally, we would do well to consider a science of survival that treats a civilization as a unit of analysis. See [6] for an example, or my own paper on the subject [7].

Calculation Notes

Assume that stars in the Milky Way became hospitable for FOOM-producing life about 10 billion years ago, and that it took 4 billion years to produce the mother civilization for the FOOM-BLOOM. Imagine that the result today is an ecology of civilizations that inhabits practically the whole galaxy except for our sun: about 300 billion stars. Assuming exponential growth yields a doubling time of a post-FOOM civilization of 157 million years. Even at sub-light speeds, a trip between stars shouldn’t take more than a few tens of thousands of years, so we are left with about four orders of magnitude to explain.


To read Part 2, entitled “Self-Limiting Intelligence: Truth or Consequences” - click HERE



[1] MacDonald, I. (2006). River of Gods. New York, NY: Pyr.

[2] Yudkowsky, E. (2008, December 11). What I Think, If Not Why. Less Retrieved March 4, 2012, from

[3] Kammler, D. W. (2009, January 8). The Well Intentioned Commissar. Retrieved March 4, 2012, from

[4] Shannon, C. E. (2009, November 6). The Ultimate Machine. March 4, 2012, from

[5] Jones, E “Where is everybody?”, An account of Fermi’s question”, Los Alamos Technical report LA-10311-MS, March, 1985.

[6] Tainter, J. (1990). The Collapse of Complex Societies. (1st ed.). Cambridge, United Kingdom: Cambridge University Press.

[7] Eubanks, D. A. (2008, December 3). Survival Strategies. Retrieved March 4, 2012, from


For more information on AI, read:  Artificial Intelligence be America’s Next Big Thing? 

To read a scifi story by this essay’s author, David Eubanks - “Breakfast Conversation” - click HERE

Update 3/12/2012: For more on this subject see these suggestions from other readers:

“The Basic AI Drives” by Stephen M. Omohundro

“Emotional control - conditio sine qua non for advanced artificial intelligences?” by Claudius Gros


David Eubanks
David Eubanks holds a doctorate in mathematics and works in higher education. His research on complex systems led to his writing Life Artificial, a novel from the point of view of an artificial intelligence.


Good post…to foom or not to foom!

I think you’re right on about how our society believes life revolves around a unit of measurement to some degree. It probably indicates that they have discovered a way to “fool” their own system into believing that the inner rewards are greater than the expected - that sounded like the unit of measurement thing again.

I digress. I think our intelligence is self-limiting to the scope of one’s own perimeter of curiosity. Once that is crossed, the intelligent factor evolves into a new dimension of intelligence.

This is called “wireheading”, and while it’s an interesting possibility, unfortunately what we know indicates that AIs won’t do it. See Stephen Omohundro, “The Basic AI Drives” ( The argument is that any sufficiently intelligent mind will make a distinction between the world being a certain way, and it perceiving that the world is a certain way, and that its goals will be defined in terms of the former; and since misperceiving the world impairs its ability to influence it, it will protect its perceptions from corruption.

Thanks for the link to the article. It’s not a proof of anything, however, but a casual argument of the same type I’ve advanced above. So it falls pretty far short of knowing what AIs will or won’t do with regard to FOOMing.

In truth, the situation is even worse that what I described in the article because an AI can’t actually predict what the effect of any given change will be. Specifically, it can’t calculate the probability of self-halting based on proposed new versions of itself.

I’ll happily concede the argument, but it requires either an actual AI that went FOOM or a proof that it can happen with probability greater than zero. I would prefer the latter. 😊

Does this answer the question?

Of Mirth and Men
(c) Cuger Brant

Nothing can travel faster than the speed of light….?
That fact is a constant that forms a cornerstone of our understanding of the universe and concept of time. But what if Einstein’s theory of special relativity was wrong?  What if all the particle physicists and mathematical delvers into the quantum mechanics of ghostly, subatomic particles had it all wrong?
The constancy of the speed of light essentially underpins our understanding of space and time and causality, which is the fact that; cause comes before effect and that; is absolutely fundamental to our mental construction of how the physical universe works. If we do not have causality, our concept of space-time is buggered.
However, rational thinking and being indoctrinated from birth on constants and absolutes were not Adam’s forte. And as for thinking ‘out of the box’ so to speak, Adam was never in the box in the first place!
To be precise, Adam had never learnt the rules of rationale. That is not to say he was stupid, far from it just ‘different’ to the rest of us.
To Adam, we all think with an anisotropic orientation (Anisotropy is the property of being directionally dependent) an example of an anisotropic material is wood, which is easier to split along the grain than against it. Thus to Adam, we all think, rationalize, theorize along the grain, not against it as we all think directionally (go with the flow).
Adam is an Isotropic thinker of sorts, well, that is the only way I can describe how his mind works (Isotropy is uniformity in all orientations and directions) thinking against the grain, against all logic, is quite normal for Adam.
I first met Adam on a train. He was sitting by the window in a rather overcrowded compartment, staring out over the passing landscape as it passed by. He was a rather tubby little man, slightly balding but well groomed.
I noticed his breath misting on the cold window as he breathed. I was standing in the compartment waiting for the train to stop at the next station, as I knew most of the passengers would disembark there for the Ford motor company and its surrounding factories, to start their daily work routine. The train eventually stopped and the passengers disembarked leaving the compartment relatively empty. Adam was still peering out, looking deep in thought, rather than out of the window.
As the train moved off I leant forward and introduced myself. “Hi, I’m Andrew Gordon, your Adam aren’t you” I said. “Well of course he is!” I thought, feeling rather stupid.
“I read your paper on time travel it was indeed thought provoking, are you really working on a time machine?”
He turned his head and looked at me. I felt his eyes look right through me, as in into my mind. It made me feel sort of vulnerable for an instant, as if he was reading my mind.
He smiled and said. “Yes to both questions Mr Gordon, and thank you for showing an interest in my work, are you travelling far?”
“To London and you”? I said, trying to knock up a conversation.
“The same, Mr Gordon, I am going to the busy streets of old London town on a cold, misty morning.” He replied, looking back out of the carriage window.
“Do you think Einstein’s theories on time and space, are wrong?” I said nervously, going straight to the point and hoping for a reply.
“Of course not Mr Gordon, I just inferred in my paper that he was looking at things from his perspective and logic, which were not necessarily the correct way to go about it if you wanted to study time travel.”
“Could you expand on that for me? I would very much appreciate it” I asked inquisitively.
“Take a look out of this window, he said, see the fields and trees moving past? Well, they are not really moving passed at all are they, we are, it is just an illusion from our perspective is it not?”
“Well yes, if you put it like that.” I said, after thinking about it.
“Have you ever had a; Déjà vu, feeling or sensation Mr Gordon? He said.
“Yes two or three as a matter of fact.” I replied, trying to understand what he was getting at.
“Well, if you look logically at those feelings, those little gems of puzzlement you would assume you have been or done whatever you were doing at that moment, before. Am I right?” He said.
“Well, yes, as do most people who get that sensation.” I replied.
“Good, we are both travelling on the right track in more ways that one so to speak.” He said smiling with a glint of mirth in his eyes.
“Now you said sensation, Mr Gordon, which is correct, it is nothing more. But that ‘sensation’ is when you see or rather perceive time in its real state.” You see Mr Gordon, cause and effect are really not rationales in real time. Try thinking effect and cause, try thinking space without dimension. Ask yourself, is time passing me by like the fields and trees out there? Or am I passing time by and in truth is time really standing still? If the latter is the case ask yourself what has space or movement got to do with time anyway? If you can answer those fundamentals then your on your way to creating a time machine Mr Gordon!”
‘I was lost. Well of course I was bloody well lost. How the hell could I keep up with that train of thought? I was angry with myself for asking such a stupid question in the first place. Was I really expecting to comprehend? Well he expanded and I failed to grasp, I thought to myself.’
“Don’t be annoyed with yourself Mr Gordon. Changing your perceptions of physics is like trying to change your religion and seeing things from the other guys mind. All is really the same, it is just how we perceive it. It always was the same always will be. It is we who are not the constant. He said.
“The funny thing is, I thought, he is not the least bit condescending, just trying to teach and hoping to be understood.“He interrupted my thought by saying “May I call you by your first name? If we are to journey together and converse, let’s start by being good mannered and friendly, yes?
“Sure, sorry, please call me Andrew.” I replied.
He leant forward again and said in a thoughtful voice, “Andrew, ponder on this; is time or a past, still, or is it moving also? Some think the past is still there, but why should it be?  Ask yourself, what is still, what is at rest or motionless?
Imagine a large stone or rock, the very epiphany of never changing, always there, solid, heavy, a constant of timeless, motionless mass. Or is it? To your eyes it is motionless, but what dictates motion?
If the rock was really perfectly still, it would disappear. It is only still in relation to you. However, you are on a surface that is rotating at 1000 miles an hour, spinning 24000 miles in a day on its axis, moving around a star at 67000 miles an hour, the star moving at 486000 mile an hour around our galaxy. So what is motionless? Is that rock really still Andrew?”
Before I could reply he then asked me another question.
“Who gave the edict that there was to be 60 seconds in a minute, sixty minutes in an hour, twenty for hours in a day 365 days in a year,  etc..? Man created this constant only to understand out concept of time and space viewed from this planet and its cycles. There is a whole universe out there where none of these manmade tenets apply.”
“Ask yourself this Andrew; why do people of great eminence and education, act like ‘flat earth’ theorists in the middle-ages when it comes to time travel?
“Sorry? I said, questioning this latest statement.
Well, in the middle ages the flat earth theorists based their theory on the ‘known’ fact that the earth was the centre of the universe. Now, it is a ‘known’ fact that we can only travel back in time, not forwards? I do sometimes think the intellect of man is limited by the density of his ego.” He said, shaking his head in mocking despair. 
He suddenly spoke softly, nearly whispering, “Andrew, what If you could fathom these concepts out and somehow apply them to a machine? You could perhaps Journey a lifetime in a day, travel a day in an hour, live an hour in a second, even a lifetime in a dream. If you could travel in such a machine, would you Andrew? Or would you be afraid of losing your sanity?
“Have you such a machine?” I asked.
“I have indeed Andrew.” He replied with a note of pride in his voice.
“How long did it take you to make?” I asked inquiringly.
Adam lent back into the carriage seat and said “Oh I did not make it Andrew, I found it! No, not quit found, rather I knew where one would be if it existed at all that is.”
Where one would be? I questioned feeling perplexed.
Yes Andrew. You see it occurred to me that if time travel was possible, which it unquestionably is, why go to all the trouble of building a machine when one has already been built?” He replied.
As I scratched my head, trying to comprehend what he was saying, Adam, to clarify his last sentence said, “I stole it!  I knew where one would be and ‘borrowed’ it so to speak.”
Scratching my head again and thinking, “Well, alright I’ll go along with him.”  I asked, “How did you find one Adam?”
He replied with a question. “Andrew, how would you find a needle in a haystack?
“With great trouble I should imagine Adam.” I quipped, not meaning to be offensive.
“It depends how you go about it Andrew. Some would just pull the haystack to pieces and sieve through it, others, thinking out of the box might use a metal detector or an ultra-strong magnet. Me? I would use an inference field or rather, a statistical inference field. Then I would just pick it out.  And that is how I found my Time Machine! You see Andrew, accepting as read, certain ‘Facts’ inhibits your ideology and your perception.”
“You are losing me Adam.” I said politely.
“Put it this way Andrew, he said, If you lived in the so called ‘stone age’ and someone said; ‘nothing can travel faster than sound’, you would have to comprehend what sound was in the first place to either agree or disagree, would you not? And even if you understood what sound was, your answer would be flawed, because you are limited to your thinking, culture and technology at that time.”
“Yes.” I said, trying to keep up.
“Well, Andrew. Why do people accept that; nothing can travel faster that light?” He said, not expecting an answer, “Indoctrination? Accepting facts which so called physicists give out? What do they know Andrew? After all, physicists are all brought up on ‘accepted’ laws and doctrines which inhibit their ability to challenge or think out of the box are they not?” Adam said, waiting for an answer this time.
“Yes.” I said again, feeling somewhat light headed through thinking too much at once.
“Good! Glad we got that out of the way.” he said with a sigh.
“I found one, a device, Time machine call it what you will. I found it by applying a distorted type of statistical inference, that, together with hours perusing over Flash Earth on Google to clarify my hypothesis on where a time traveller would appear.  I finally came up with the logical location, the only location that a time traveller could or would appear in fact.
I then patiently waited for the next appearance and when they finally arrived and went off to explore, I ‘borrowed’ their appliance!
I sat there with an expression of disbelief on my face and my mind doubting, until he said….“The device holds two Andrew!”
And that was how I met Adam for the first time!
After our first meeting, we met another three times before I was allowed to see Adam’s ‘find’. It was hidden in an old, disused tin mine in the county of Cornwall.
Adam told me it had taken him a long time to get it there in secret. He had managed to widen the entrance to the mine, enough to pull the device deep into a tunnel beneath the hill.
“Look!” He said, as he pulled the dust cover off the object. “Beautiful is it not?” He looked at it with loving admiration.
I stared at the blue, slightly iridescent, round object for a moment and then peered inside. It was indeed beautiful; it was a piece of art, a miracle of pure physics and science entwined in a delicate machine of brilliant engineering. A green hue permeated from the inner metallic looking surfaces and I noticed two moulded seats projecting from the floor. Adam then pointed out to me that the ‘device’ was not actually sitting on the ground it was, rather, floating an inch above it!
“Go on, sit inside!” He said, wanting me to enjoy the moment.
I sat inside and relaxed and he grinned at me with a childlike, gleeful expression of excitement.
“The only problem is Andrew; I do not know how to operate it?” He said to me thoughtfully.
I looked at him and said “you know that analogy you imparted to me about how to find a needle in a haystack?”
“Yes?” Adam answered, rather puzzled.
“Well, you were not the first one to come up with it Adam.” I said. “And as for knowing how to operate this device, well, I do Adam, and you really should not go around stealing other people’s property!”
I looked at the expression of utter surprise on Adams face as I touched controls in a sequence and the door of the device closed on him. Then, in a millisecond, I was gone.
Later, a few centuries in the future, or was it a few seconds in time? As I sat in the device with its green hue surrounding me, I reflected a moment and felt a bit sorry for Adam. After all, he did come up with the concept in all its originality about time travel and its obstacles. He was in truth; the father of time so to speak, my appearance just gave him the impetus to carry on.
Did I know how my device worked? Of course I did not, just as you do not know how the device you are looking at works.
After all, I didn’t build or design my device. I just operated it and travelled through time in it!  Did I meet Adam again?
No I did not. It will not however, be the last time I meet you dear, inquisitive reader!!

Let me see if I can summarize what the author argues:  first, that a self-improving AI would simply program itself to be happy, rather than continue to improve it’s functionality.  And second, that proof of this is the fact that we haven’t met an extraterrestrial super smart AI yet.

Oh boy.  First, it seems to me obvious that impermeable code can be embedded into an AI’s programing (i.e. Issac Asimov world, and the Three Laws of Robotics).

Furthermore, we don’t know what makes up 99% of the universe (i.e. Dark Matter), so it is very unlikely that a super smart AI would have introduced itself to us yet.  BTW, about 90% of Americans believe in God, and the fact that “he” hasn’t come down and shaked their hands doesn’t seem to interfere in their continued belief that “he” exists.

Ironically, the best argument for FOOM is the above article, since the author appears to have constructed a spacious argument to prove his unsupported conclusion so as to give himself pleasure. LOL.

The only thing faster than the speed of light would be the speed of thought. Then again, if the human experience was to dive into the dimensions of reality where quantum mechanics attempts to explain things on a subatomic level, I would hesitate to say that the speed of thought was quantifiable. I think that the rationale behind the thinking process in order to obtain the “thought” took an element of time but if you think about it (no pun) by having drifting moments expressing life and things to unfold naturally and/or organically would probably be the closest to the human experience of timelessness which in theory enables time and space to be encapsulated.

0x2A = 42. Very cute.

This article kind of reminds of a comment I made for another article on this site relating to to AI and the Fermi Paradox
“his is very similar to an idea of an alien race that I made up for a science fiction story that I’m working on. They are a 10 million year old race of machines that used to be biological though they retain their memories and utilize computing based off their own DNA. Their home world is a rouge planet that used to orbit its planet’s star until they converted it into a giant spaceship (though still resembling a planet on the outside) and flung it on a constant trajectory through space in order to escape their star’s death and keep a constant cool temperature for their technology. Since rouge planets (by theory) orbit the galaxy directly, this spaceship planet doesn’t emit anything that would indicate a civilization. Furthermore, this race remains undetected and keep to themselves because 1) they don’t want to interfere with another civilization’s development since the things they learned they learned on their own (though they are aware that other civilizations are very rare), 2) being technological there is nothing organic planets have that they either want or can’t obtain else where, and 3) they actually spend their time in the vast, virtual world they created using their planet wide tech and data they collected from small probes used to gather information from the planet’s surround environment. If real alien civilizations are anything like this, it may help explain the Fermi Paradox. “

We already seem to be MOOFing as a species. It appears that the majority of people will seek immediate pleasure without any thought or reference to the consequences of the pleasure. The problem is the divorce of consequence from responsibility.

The difficulty with your AI could be solved in part by having the pleasure only induced by a connection between the mining robot and the depot at which it drops the ore. (Machine sexual drive?) In that case there would have to be collusion between the two AI’s to subvert the programing which moves it toward your discussion of ecology.

Another way to manage the issue would be to not have just one pleasure circuit. In psychological research it has been shown that external rewards are counterproduction to creative work and that the reward of accomplishing a task well is in fact a better motivator. Would there be a way of programing that intrinsic reward into the AI?

@Pastor_Alex How to build the intrinsic motivator is the hard question. I think a study of the evolutionary history of motivational signals would be fascinating. An angler flashes little bits of fluff at a trout and its motivator lights up with FOOD! to the fisherman’s advantage. I suspect that the history of mammals in particular is an arms race between tweaks to internal motivation and intelligence that outwits it.  Ideally, the motivation signal corresponds to something real in the environment, and so the brain’s general problem solving skills can optimize the womp out of it to everyone’s satisfaction. But signals are just information, and that can be faked.

Sometimes we have to outwit our own signals just to survive. People fall asleep while driving automobiles.

It may be that with incredibly careful initial design, the motivation always wins and drives a FOOM, but I doubt it. On a more practical note, I agree that our species seems to be MOOFing. The 2008 debacle of “hey, free money for everybody!” is a good example of artificially amplifying a positive signal that breaks its connection to reality. In the USA, national politics seem to be driven far more by emotions (ancient motivators) of a tribal nature (“go team!”) such that real-world signals are just mutated into whatever form need to provide positive reinforcement for the motivation. Considered as a whole country, which cannot usefully amount to multiple teams in competition, this is not working out very well in my opinion. A real external threat might create a more uniform motivation appropriate to a nation, but even these are not uniformly effective. Compare WWII to climate change.

David, - I’ve just been reading the RU Sirius interview with David.. Pearce, - - As I understand Pearce, PLEASURE, i.e. ” feeling groovy, forever”, is not, - contrary to what you are saying about intelligence, -self-limiting. As he puts it: “to be blissful isn’t the same as being blissed out”.

Now, I don’t know which one of you is right, - (maybe you both are..), - but it had me thinking: Is intelligence incompatible with pleasure.. - I don’t think it is, and I think it is a mistake to split Quality into pleasure versus functionality, although it is all too human to do so. In “Zen and the art of motorcycle maintenance”, Pirsig brilliantly outlines the “romantic” and the “classical” character. The pleasure-hunting romantic neglects functionality, e.g. checking oil-level on motorcycle, and the classical type, being obsessed with his machine running smoothly, tends to forget what it is really all about: Joy.. - feeling groovy..

That, in my opinion, is where people go wrong, ‘cause I think it is a mistake to view “things”... - life ? - as a choice between pleasure /romance and mere functionality. The truth is that the two go perfectly well together, or, rather, - they are SUPPOSED to..
As Pearce points out: “.. enhanced dopamine function is associated, not just with euphoria, but with heightened motivation; a deeper sense of meaningfulness, significance and purpose.. ” - or, in other words: “the more one loves life, the more motivated one is to carry out one’s goals and life projects”.

I strongly recommend people read your article alongside the Pearce interview. Both articles are truly fascinating, and they sure got me pondering the intelligence-pleasure-functionality ratio.

I’ll have to read both articles a couple more times though, as I’m not sure I
really understood any of you.. : )

@Joern Thanks for the comment! In the robot example, I assumed the robot would “feel” something akin to pleasure from having mined ore just for the reasons you describe—to provide motivation to carry out the goal. If we take an engineering viewpoint, this nice feeling is just some system designed to optimize a utility function (mining returns in the case of the robot).

For my purposes, the most important utility is survival of the subject. So if pleasure aligns with increased survival probability, and pain the reverse, then the system has a chance of working. But if the signals are disconnected from reality, the goals are being subverted, or at least not optimally attained.

Our evolutionary history shows signs of tinkering with just this sort of thing. For example, sugar tastes really good, to the point where lots of people get far too much of it. That signal probably works find in a low-sugar environment, but no longer optimizes utility; doing more harm than good, in fact.

What’s the utility function for humans? Selfish genes aside, we might say that our evolutionarily-honed “purpose” is to create our replacements and train them to do the same. Anything beyond that is an exercise in existentialism (which I’m all in favor of).

When we directly intervene in our own signal pathways to increase the feeling of pleasure, this is the expression of a new implicit or explicit goal—the pursuit of pleasure for itself. I think it would be hard to argue that this particular purpose is helpful to either long-term individual survival or long-term survival of the civilization, however. The reason is that a feedback loop of pleasure doesn’t tell us anything interesting about the external environment, so it’s a lost opportunity at the least. I don’t mean this as a critical remark, but it probably should be something we take into consideration.

On the other hand, if we could renovate our individual internal reward systems to increase the chances of civilization lasting a long time, that would be really interesting. The trick would be to create a correspondence between the probability of civilization flourishing and the pleasure we derive from that knowledge, or perhaps something indirect that has the same effect. Stalin’s quote about statistics is pertinent here…

It is common “knowledge”, that anything in excess is always bad, be it sugar, sex, - (ok, let’s leave that out..), and whatever. I take note though, of David Pearce saying: “pleasure has no physiological tolerance. That is to say, it’s just as exhilarating having one’s pleasure centers stimulated 24 hours after starting a binge as it was at the beginning”. (He might have found a better way to illustrate his point.. ).

It is common “knowledge” also, - at least in an evolutionary, biological, physicalist perspective, that our single most important objective is, as you also point out, SURVIVAL plus creating our replacements. In accordance with Maslow’s hierarchy of needs, survival is primary goal, - pleasure, or, if you like, spiritual needs, are top of the pyramid.

However, - what if we do turn the pyramid upside-down.. – Pleasure becomes primary objective, survival /functionality becomes the servant, - Joy the master.. – I suspect you may say this is new-age woo-woo, - apart from being “a lost opportunity”- ,and you may well be right, but even as an atheist I cannot help at least considering this.. configuration.

I would settle for something in between, in accordance with Pirsig’s notion of Quality. Thus, intelligence, pleasure, survival, are complementary, and, simultaneously, master and servant. You are right of course, that nature seems to go easily haywire, and I have yet to come up with a good explanation for that..

Sorry, while clever, this scenario leaves out nearly all selective pressures.  Sure, you refer to ecologies as remaining creative, despite individual tendencies to cheat or devolve.  But this aspect of things deserves far more exploration than you give it.

Your mining robot might be constrained somewhat if there were types of pleasure he could not mimic, but that could only be achieved by returning to a big Matron Collector who not only accepts and credits his ore deliveries, but rewards him with sex… directly manifesting in his chances of being a participant in reproduction. Those who do wire-pleasure around this process simply do not reproduce.

Indeed, most problems in human civilization might be rapidly solved were female humans to decide to prioritize the conditions of their grandchildren by systematically rewarding the “best” males for accomplishments that mattered.  Such a consensus might only await a female Martin Luther King. Intelligence would be one of many likely driven qualities

With cordial regards,

David Brin

@David Brin

I don’t think we disagree, actually. An evolutionary system like the one you describe (what I called an ecology) is far more robust than a single individual can be. An ecology can make lots of lethal mistakes, but a singlular intelligence can’t learn by dying over and over again. Empirically we have good evidence of the resiliency of ecologies, but nothing that would indicate that a single IQ can persist for very long. Even if we consider past human civilizations, a few thousand years is about all we get (compared to a few billion for life on Earth).

On the theory side, it’s also clear that a single computation-based IQ has an enormous survival challenge. See my article Survival Strategies for more on that. The problem outline in article above is just one specific type of system failure that seems to be ubiquitous in our own society.  There was a segment on NPR this morning about kids who cut off oxygen to their brains in order to get a rush when the blood returns.

As someone else pointed out, the scenario is referred to as wireheading.  Short-circuiting your pleasure centers.  However, there are some assumptions built into your scenario that are not covered by the usual wireheading trope.

How is the motivation system of the hypothetical robot actually designed?  Remember, nobody has actually built a motivation system for a fully intelligent system, and there are reasons why an ordinary extrapolation from the very primitive control systems used in narrow AI programs would not actually be stable when scaled up.  In essence, what you are assuming here is just such a scaled-up version of the ordinary goal-stack control system, and I’m sorry but I do not buy that one (I believe it is grossly unstable and virtually useless for a real AI).

So, there are other ways to build motivation engines, and they may not suffer the problems you allude to.  People have such engines, and people are not universally suscetible to the pleasures of drugs:  they can think about whether they want to indulge in wireheading before they actually commit.

If people can do that (and for example, I can state that if perfect nirvana-level wireheading were available, with no side effects, I would choose not to embrace it .... so you have at least one data point saying that there exists a motivation engine (mine) not susceptible to the problem), then the right type of robot could make the same choice.

Hence, there is no inevitability in your proposed scenario.

Next entry: Taser Cams, Mind Reading and the World to Come

Previous entry: Intervention