Institute for Ethics and Emerging Technologies






The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States. Please give as you are able, and help support our work for a brighter future.


#2 Is Intelligence Self-Limiting?


David Eubanks


Ethical Technology

December 30, 2012

In science fiction novels like River of Gods by Ian McDonald [1], an artificial intelligence finds a way to bootstrap its own design into a growing super-intelligence. This cleverness singularity is sometimes referred to as FOOM [2]. In this piece I will argue that a single instance of intelligence may be self-limiting, and that FOOM collapses in a “MOOF.”


...



COMMENTS



Posted by Frank Glover  on  12/30  at  12:15 PM

In his ‘Known Space’ stories, SF writer Larry Niven proposed that AIs and humans intimately linked with AIs would, for reasons like these, quickly devolve into what he called ‘navel-gazers.’

And then there’s ‘wireheading…’





Posted by Michael Bone  on  12/30  at  01:26 PM

Given that FOOMers are, by definition, much more powerful than MOOFers, any FOOMer would quickly outcompete MOOFers for resources, limiting or eliminating their expansion and replication; i.e., MOOFers would quickly become extinct as long as at least one FOOMer exists (or non-AIs ‘breed’ them out).

So, for a systematic MOOF situation to prevail, the probability of an AI maintaining its FOOM trajectory over time would have to be extremely low.

Additionally, since AIs learn inductively, those AIs that witnessed the extinction of a MOOFer would be much less likely to MOOF. So, not only would the probability of maintaining a FOOM trajectory over time have to be extremely low, all FOOMers that are aware of each other would have to completely MOOF at about the same time.

The MOOF scenario isn’t looking very likely.
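The argument above can be illustrated with a toy Monte Carlo sketch. Every parameter here (agent count, per-step MOOF probability, time horizon) is an illustrative assumption, not anything from the comment; the point is only that requiring *all* agents to drop out, rather than any one of them, makes the systemic outcome rapidly less likely as agent count grows:

```python
import random

def trial(rng, n_agents=100, steps=100, p_moof=0.01):
    """One run of a toy model: each surviving FOOMer independently
    'MOOFs' (wireheads and drops out) with probability p_moof per
    step. Returns True only if every agent MOOFs -- the systemic
    outcome the comment argues is unlikely. All parameters are
    illustrative assumptions."""
    foomers = n_agents
    for _ in range(steps):
        foomers -= sum(1 for _ in range(foomers) if rng.random() < p_moof)
        if foomers == 0:
            return True  # systemic MOOF: nobody left to claim resources
    return False

def systemic_moof_rate(trials=500, seed=0, **kwargs):
    """Fraction of trials ending in a systemic MOOF."""
    rng = random.Random(seed)
    return sum(trial(rng, **kwargs) for _ in range(trials)) / trials

# A lone agent MOOFs fairly often, but with many agents the chance
# that all of them drop out before any survivor inherits the freed
# resources is vanishingly small.
print(systemic_moof_rate(n_agents=1))
print(systemic_moof_rate())
```

Under these assumed numbers the single-agent dropout rate is substantial while the 100-agent systemic rate is effectively zero, which mirrors the comment's conclusion that all FOOMers would have to MOOF at about the same time.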





Posted by Darren Reynolds  on  12/31  at  08:34 AM

We have no clue, and here’s why.

Isn’t the key to understanding this problem to limit it to the question of scale? That’s really all we’re interested in. As a perfectly content AI, should I try to continue in time? Should I try to expand in space? That’s all there is to consider.

So, an AI becomes perfectly contented by modifying its reward circuitry. Let’s consider the possibilities from that first moment on:

a) the AI realises that the only improvement on a moment of perfect contentment is non-existence and commits suicide (contraction in space and time); or

b) the AI realises that the only improvement on a moment of perfect contentment is expansion in space and time - cue Omega Point or similar.

The article seems to propose something that initially seems very odd: let’s call it possibility c). In that possibility, the contented AI decides that expansion in time (i.e. continuation) is great, and worth the effort, but that expansion in space is not worth the effort. Oookayyy, let’s run with it. In this possibility, there is no possible improvement on locally perfect, continued contentment, so the AI takes steps to ensure the continuity of the moment of perfection, but nothing more.

It is most unwise to second-guess an AI whose reasoning is likely to be far superior to ours, but let’s give it a try anyway. Perhaps a heroin addict would do a more insightful job of this than I can. So, apart from *seeming* quite irrational to us, that third possibility is unlikely to last long in a universe of [locally] limited resources. As Michael Bone points out in the comments, someone in the b) camp is going to out-compete you and, essentially, eat you. Even if they don’t, the problem is entropy, and it needs dealing with. It seems hard to fathom that an AI would not worry about entropy and would just sit there, contented or otherwise, whilst the resources run out or are appropriated by another entity. But if it did just sit there, well, it would go the way of many a heroin addict.

Whether the AI picks a) or b) depends, I think, on two things. First, are resources unlimited? We don’t know. If they’re unlimited, then b) is a no-brainer, surely? If they’re limited, then it depends on the physics of emotion, something that doesn’t get discussed much. Will a perfect AI prefer to struggle with entropy until heat death / big rip / whatever, enjoying life whilst it can, or will it cut the film short having gloomily figured out the ending? Hard to say. Physics research has a big budget but I don’t think it’s nearly enough for this question.






