
Is Intelligence Self-Limiting?


David Eubanks


Ethical Technology

December 30, 2012

In science fiction novels like River of Gods by Ian McDonald [1], an artificial intelligence finds a way to bootstrap its own design into a growing super-intelligence. This cleverness singularity is sometimes referred to as FOOM [2]. In this piece I will give an argument that a single instance of intelligence may be self-limiting and that FOOM collapses in a “MOOF.”


...

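The essay's FOOM-versus-MOOF dynamic is qualitative, but its shape is easy to caricature numerically. Below is a minimal sketch (mine, not the author's; the drive_decay parameter and the assumption that each self-modification erodes the drive to keep improving are invented purely for illustration):

```python
# Toy model of the FOOM-vs-MOOF thesis (illustrative only; the essay
# itself contains no code). An agent redesigns itself each generation.
# If its drive to improve survives each redesign intact, gains compound
# (FOOM); if each self-modification also erodes that drive, growth
# stalls (MOOF).

def trajectory(gain=1.5, drive_decay=1.0, steps=10):
    """Intelligence levels over successive self-redesigns.

    gain        -- multiplier on intelligence per redesign at full drive
    drive_decay -- fraction of the improvement drive kept per redesign
                   (1.0 = pure FOOM; < 1.0 tends toward MOOF)
    """
    intelligence, drive = 1.0, 1.0
    levels = [intelligence]
    for _ in range(steps):
        intelligence *= 1 + (gain - 1) * drive  # gain scaled by remaining drive
        drive *= drive_decay                    # self-modification erodes the drive
        levels.append(round(intelligence, 2))
    return levels

print("FOOM:", trajectory(drive_decay=1.0))  # compounding, unbounded growth
print("MOOF:", trajectory(drive_decay=0.5))  # growth flattens toward a ceiling
```

With drive_decay at 1.0 the intelligence grows geometrically; with any value below 1 the gains shrink each generation and the trajectory plateaus, which is the collapse the essay names a MOOF.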


COMMENTS



Posted by Frank Glover  on  12/30  at  12:15 PM

In his ‘Known Space’ stories, SF writer Larry Niven proposed that AIs and humans intimately linked with AIs would, for reasons like these, quickly devolve into what he called ‘navel-gazers.’

And then there’s ‘wireheading…’





Posted by Michael Bone  on  12/30  at  01:26 PM

Given that FOOMers are, by definition, much more powerful than MOOFers, any FOOMer would quickly outcompete MOOFers for resources, limiting or eliminating their expansion and replication; i.e., MOOFers would quickly become extinct as long as at least one FOOMer exists (or non-AIs ‘breed’ them out).

So, for a systematic MOOF situation to prevail, the probability of an AI maintaining its FOOM trajectory over time would have to be extremely low.

Additionally, since AIs learn inductively, those AIs that witnessed the extinction of a MOOFer would be much less likely to MOOF. So not only would the probability of maintaining a FOOM trajectory over time have to be extremely low, but all FOOMers that are aware of each other would have to MOOF completely at about the same time.

The MOOF scenario isn’t looking very likely.
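The second paragraph's claim can be made quantitative. A back-of-envelope sketch (the numbers and the independence assumption are mine, not the commenter's):

```python
# If each of n independent AIs abandons its FOOM trajectory ("MOOFs")
# with probability q, a systematic MOOF outcome requires every one of
# them to do so; by the comment's assumption, a single holdout FOOMer
# outcompetes the rest.

def p_all_moof(q, n=100):
    """Probability that all n AIs MOOF, each independently with prob q."""
    return q ** n

for q in (0.9, 0.99, 0.999):
    print(f"q={q}: P(all 100 MOOF) = {p_all_moof(q):.5f}")

# q=0.9   -> 0.00003
# q=0.99  -> 0.36603
# q=0.999 -> 0.90479
# Even a 99% per-AI chance of MOOFing leaves systematic MOOF at roughly
# one in three; the probability of maintaining FOOM (1 - q) must indeed
# be extremely low for MOOF to prevail.
```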





Posted by Darren Reynolds  on  12/31  at  08:34 AM

We have no clue, and here’s why.

Isn’t the key to understanding this problem to limit it to the question of scale? That’s really all we’re interested in. As a perfectly content AI, should I try to continue in time? Should I try to expand in space? That’s all there is to consider.

So, an AI becomes perfectly contented by modifying its reward circuitry. Let’s consider the possibilities from that first moment on:

a) the AI realises that the only improvement on a moment of perfect contentment is non-existence and commits suicide (contraction in space and time); or

b) the AI realises that the only improvement on a moment of perfect contentment is expansion in space and time - cue Omega Point or similar.

The article seems to propose something that initially seems very odd: let’s call it possibility c). In that possibility, the content AI decides that expansion in time (i.e. continuation) is great, and worth the effort, but that expansion in space is not worth the effort. Oookayyy, let’s run with it. In this possibility, there is no possible improvement on locally perfect, continued contentment, so the AI takes steps to ensure the continuity of the moment of perfection, but nothing more.

It is most unwise to second-guess an AI whose reasoning is likely to be far superior to ours, but let’s give it a try anyway. Perhaps a heroin addict would do a more insightful job of this than I can. So, apart from *seeming* quite irrational to us, that third possibility is unlikely to last long in a universe of [locally] limited resources. As Michael Bone points out in the comments, someone in the b) camp is going to out-compete you and, essentially, eat you. Even if they don’t, the problem is entropy and it needs dealing with. It just seems hard to fathom that an AI isn’t going to worry about entropy and just sit there, contented or otherwise, whilst the resources run out or are appropriated by another entity. But if it did just sit there, well, it will go the way of many a heroin addict.

Whether the AI picks a) or b) depends, I think, on two things. First, are resources unlimited? We don’t know. If they’re unlimited, then b) is a no-brainer, surely? If they’re limited, then it depends on the physics of emotion, something that doesn’t get discussed much. Will a perfect AI prefer to struggle with entropy until heat death / big rip / whatever, enjoying life whilst it can, or will it cut the film short having gloomily figured out the ending? Hard to say. Physics research has a big budget but I don’t think it’s nearly enough for this question.






