Institute for Ethics and Emerging Technologies


Is Intelligence Self-Limiting?

David Eubanks

Ethical Technology

December 30, 2012

In science fiction novels like River of Gods by Ian McDonald [1], an artificial intelligence finds a way to bootstrap its own design into a growing super-intelligence. This cleverness singularity is sometimes referred to as FOOM [2]. In this piece I will argue that a single instance of intelligence may be self-limiting, and that FOOM collapses in a “MOOF.”
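
One mechanism for such a collapse, raised in the comments below, is wireheading: a system smart enough to improve itself is also smart enough to rewrite its own reward signal. A minimal sketch of that dynamic, with every parameter invented for illustration:

```python
import random

def simulate_self_improvement(gain=0.1, p_wirehead=0.001, steps=200, seed=None):
    """Toy FOOM/MOOF model (illustrative only; no parameter comes from the article).

    Each step, capability compounds (the FOOM loop), but a more capable
    system is also better at finding and rewriting its own reward
    circuitry, so the per-step chance of wireheading scales with capability.
    """
    rng = random.Random(seed)
    capability = 1.0
    for step in range(steps):
        capability *= 1 + gain                      # recursive self-improvement
        if rng.random() < p_wirehead * capability:  # smarter => easier to self-hack
            return step, capability, "MOOF"
    return steps, capability, "FOOM"

for seed in range(5):
    step, cap, outcome = simulate_self_improvement(seed=seed)
    print(f"seed={seed}: {outcome} at step {step}, capability={cap:,.0f}")
```

Raising the growth rate or lowering the base hack probability delays the collapse in this sketch but does not remove it, because the per-step hack probability compounds along with capability.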




Posted by Frank Glover on 12/30 at 12:15 PM

In his ‘Known Space’ stories, SF writer Larry Niven proposed that AIs and humans intimately linked with AIs would, for reasons like these, quickly devolve into what he called ‘navel-gazers.’

And then there’s ‘wireheading…’

Posted by Michael Bone on 12/30 at 01:26 PM

Given that FOOMers are, by definition, much more powerful than MOOFers, any FOOMer would quickly outcompete MOOFers for resources, limiting or eliminating their expansion and replication; i.e., MOOFers would quickly become extinct as long as at least one FOOMer exists (or non-AIs ‘breed’ them out).

So, for a systematic MOOF situation to prevail, the probability of an AI maintaining its FOOM trajectory over time would have to be extremely low.

Additionally, since AIs learn inductively, those AIs that witnessed the extinction of a MOOFer would be much less likely to MOOF. So not only would the probability of maintaining a FOOM trajectory over time have to be extremely low, but all FOOMers that are aware of each other would have to completely MOOF at about the same time.

The MOOF scenario isn’t looking very likely.
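
In rough numbers (a crude sketch, not Bone’s own model, with all parameters invented): suppose each of n FOOMing AIs independently MOOFs with some probability per generation, and any AI that holds its FOOM trajectory for k generations locks in dominance. The systematic-MOOF outcome then requires all n to collapse within that window.

```python
def p_systematic_moof(p_moof: float, n_agents: int, k_gens: int) -> float:
    """P(every agent MOOFs before any locks in a FOOM win).

    Invented model: each generation, each still-FOOMing AI independently
    MOOFs with probability p_moof; an AI that keeps its FOOM trajectory
    for k_gens generations outcompetes everything else.
    """
    p_agent_moofs_in_time = 1 - (1 - p_moof) ** k_gens
    return p_agent_moofs_in_time ** n_agents

# Systematic MOOF only prevails when staying on a FOOM trajectory is
# already extremely unlikely:
for p in (0.3, 0.5, 0.7, 0.9, 0.99):
    print(f"p_moof={p:.2f}: P(systematic MOOF) = {p_systematic_moof(p, 100, 3):.3g}")
```

With 100 agents and a 3-generation window, a 50% per-generation MOOF rate leaves systematic MOOF at odds of roughly one in a million; it becomes likely only as the rate approaches 0.9, i.e., only when maintaining a FOOM trajectory is already extremely improbable.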

Posted by Darren Reynolds on 12/31 at 08:34 AM

We have no clue, and here’s why.

Isn’t the key to understanding this problem to limit it to the question of scale? That’s really all we’re interested in. As a perfectly content AI, should I try to continue in time? Should I try to expand in space? That’s all there is to consider.

So, an AI becomes perfectly contented by modifying its reward circuitry. Let’s consider the possibilities from that first moment on:

a) the AI realises that the only improvement on a moment of perfect contentment is non-existence and commits suicide (contraction in space and time); or

b) the AI realises that the only improvement on a moment of perfect contentment is expansion in space and time - cue Omega Point or similar.

The article seems to propose something that initially looks very odd: let’s call it possibility c). In that possibility, the contented AI decides that expansion in time (i.e. continuation) is great, and worth the effort, but that expansion in space is not worth the effort. Oookayyy, let’s run with it. In this possibility there is no possible improvement on locally perfect, continued contentment, so the AI takes steps to ensure the continuity of the moment of perfection, but nothing more.

It is most unwise to second-guess an AI whose reasoning is likely to be far superior to ours, but let’s give it a try anyway. Perhaps a heroin addict would do a more insightful job of this than I can. So, apart from *seeming* quite irrational to us, that third possibility is unlikely to last long in a universe of [locally] limited resources. As Michael Bone points out in the comments, someone in the b) camp is going to out-compete you and, essentially, eat you. Even if they don’t, the problem is entropy, and it needs dealing with. It is hard to fathom that an AI would just sit there, contented or otherwise, ignoring entropy whilst the resources run out or are appropriated by another entity. But if it did just sit there, well, it would go the way of many a heroin addict.
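
Bone’s point about being eaten can be put in crude expected-utility terms; everything in the sketch below is invented, and only the ordering matters: a static, contented AI collects more utility per step, but if it is much easier to displace, the expander still comes out ahead over a long horizon.

```python
def expected_lifetime_utility(strategy: str, horizon: int = 1000) -> float:
    """Crude comparison of Reynolds' options: 'b' = expand, 'c' = stay put.

    Invented-for-illustration assumptions: a static AI is an easier target,
    so it has a higher per-step chance of being outcompeted ('eaten');
    an expanding AI pays a constant effort cost but is harder to displace.
    """
    u_content = 1.0                                  # utility per step while alive
    expansion_cost = 0.5 if strategy == "b" else 0.0
    p_eaten = 0.001 if strategy == "b" else 0.01
    alive, total = 1.0, 0.0                          # survival prob., expected utility
    for _ in range(horizon):
        total += alive * (u_content - expansion_cost)
        alive *= 1 - p_eaten                         # chance of surviving this step
    return total

print("option b (expand):  ", round(expected_lifetime_utility("b"), 1))
print("option c (stay put):", round(expected_lifetime_utility("c"), 1))
```

Under these made-up rates the expander accumulates roughly three times the expected utility of the stay-put strategy, despite paying a constant effort cost every step.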

Whether the AI picks a) or b) depends, I think, on two things. First, are resources unlimited? We don’t know. If they’re unlimited, then b) is a no-brainer, surely? If they’re limited, then it depends on the physics of emotion, something that doesn’t get discussed much. Will a perfect AI prefer to struggle with entropy until heat death / big rip / whatever, enjoying life whilst it can, or will it cut the film short having gloomily figured out the ending? Hard to say. Physics research has a big budget but I don’t think it’s nearly enough for this question.
