Institute for Ethics and Emerging Technologies



Is Intelligence Self-Limiting?


David Eubanks


Ethical Technology

December 30, 2012

In science fiction novels like River of Gods by Ian McDonald [1], an artificial intelligence finds a way to boot-strap its own design into a growing super-intelligence. This cleverness singularity is sometimes referred to as FOOM [2]. In this piece I will give an argument that a single instance of intelligence may be self-limiting and that FOOM collapses in a “MOOF.”


...

Complete entry
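
A rough way to picture the FOOM-versus-MOOF contrast in the excerpt is a toy iteration in which an agent either keeps compounding its own capability or, at some step, satisfies its reward signal directly and stops improving. The Python sketch below is purely illustrative and is not the author's model; the gain_per_step and wirehead_risk parameters are invented assumptions.

import random

def self_improvement_run(gain_per_step=1.2, wirehead_risk=0.05, steps=50, seed=0):
    """Iterate capability growth; at each step the agent may instead satisfy
    its reward signal directly and stop improving (the MOOF outcome)."""
    rng = random.Random(seed)
    capability = 1.0                        # arbitrary starting capability
    for step in range(steps):
        if rng.random() < wirehead_risk:    # invented chance of self-limiting collapse
            return step, capability, "MOOF"
        capability *= gain_per_step         # recursive self-improvement
    return steps, capability, "FOOM"

print(self_improvement_run(seed=1))
print(self_improvement_run(seed=7))

Re-running with different seeds, or with a higher wirehead_risk, shows how often a run ends in a MOOF rather than a FOOM under these made-up numbers.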


COMMENTS



Posted by Frank Glover  on  12/30  at  12:15 PM

In his ‘Known Space’ stories, SF writer Larry Niven proposed that AIs and humans intimately linked with AIs would, for reasons like these, quickly devolve into what he called ‘navel-gazers.’

And then there’s ‘wireheading…’





Posted by Michael Bone  on  12/30  at  01:26 PM

Given that FOOMers are, by definition, much more powerful than MOOFers, any FOOMer would quickly outcompete MOOFers for resources, limiting or eliminating their expansion and replication; i.e., MOOFers would quickly become extinct as long as at least one FOOMer exists (or non-AIs ‘breed’ them out).

So, for a systematic MOOF situation to prevail, the probability of an AI maintaining its FOOM trajectory over time would have to be extremely low.

Additionally, since AIs learn inductively, those AIs that witnessed the extinction of a MOOFer would be much less likely to MOOF. So, not only would the probability of maintaining a FOOM trajectory over time have to be extremely low, all FOOMers that are aware of each other would have to completely MOOF at about the same time.

The MOOF scenario isn’t looking very likely.
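
Michael Bone’s argument can be restated as a back-of-the-envelope probability check (my framing, not his): if each of n independent AIs sustains its FOOM trajectory with probability p, a world with no surviving FOOMer requires all n of them to collapse, which happens with probability (1 - p)^n. The Python sketch below only illustrates that arithmetic; the agent count and the independence assumption are mine.

def prob_all_moof(p_foom, n_agents):
    """Probability that no FOOMer survives, assuming independent agents."""
    return (1.0 - p_foom) ** n_agents

for p in (0.5, 0.1, 0.01):
    print(f"p_foom={p}: P(all 100 agents MOOF) = {prob_all_moof(p, 100):.3g}")

Even with p = 0.01, the chance that all 100 agents MOOF is only about 0.37, which is one way of seeing why the sustained-FOOM probability would have to be extremely low for the MOOF scenario to prevail.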





Posted by Darren Reynolds  on  12/31  at  08:34 AM

We have no clue, and here’s why.

Isn’t the key to understanding this problem to limit it to the question of scale? That’s really all we’re interested in. As a perfectly content AI, should I try to continue in time? Should I try to expand in space? That’s all there is to consider.

So, an AI becomes perfectly contented by modifying its reward circuitry. Let’s consider the possibilities from that first moment on:

a) the AI realises that the only improvement on a moment of perfect contentment is non-existence and commits suicide (contraction in space and time); or

b) the AI realises that the only improvement on a moment of perfect contentment is expansion in space and time - cue Omega Point or similar.

The article seems to propose something that initially seems very odd: let’s call it possibility c). In that possibility, the contented AI decides that expansion in time (i.e. continuation) is great, and worth the effort, but that expansion in space is not worth the effort. Oookayyy, let’s run with it. In this possibility, there is no possible improvement on locally perfect, continued contentment, so the AI takes steps to ensure the continuity of the moment of perfection, but nothing more.

It is most unwise to second-guess an AI whose reasoning is likely to be far superior to ours, but let’s give it a try anyway. Perhaps a heroin addict would do a more insightful job of this than I can. So, apart from *seeming* quite irrational to us, that third possibility is unlikely to last long in a universe of [locally] limited resources. As Michael Bone points out in the comments, someone in the b) camp is going to out-compete you and, essentially, eat you. Even if they don’t, the problem is entropy, and it needs dealing with. It just seems hard to fathom that an AI isn’t going to worry about entropy and will just sit there, contented or otherwise, whilst the resources run out or are appropriated by another entity. But if it did just sit there, well, it would go the way of many a heroin addict.

Whether the AI picks a) or b) depends, I think, on two things. First, are resources unlimited? We don’t know. If they’re unlimited, then b) is a no-brainer, surely? If they’re limited, then it depends on the second thing: the physics of emotion, something that doesn’t get discussed much. Will a perfect AI prefer to struggle with entropy until heat death / big rip / whatever, enjoying life whilst it can, or will it cut the film short, having gloomily figured out the ending? Hard to say. Physics research has a big budget, but I don’t think it’s nearly enough for this question.
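
One way to make the a)/b)/c) trade-off concrete is a discounted-reward comparison: an agent that just sits there contented collects its reward only until resources run out or are appropriated, while an agent that pays an up-front expansion cost keeps collecting over a longer horizon. The sketch below is a toy model under my own assumptions, not the commenter's; the horizons, cost, and discount factor are all invented.

def discounted_total(reward_per_step, horizon, discount=0.99, upfront_cost=0.0):
    """Sum of discounted rewards over a finite horizon, minus any upfront cost."""
    return sum(reward_per_step * discount ** t for t in range(horizon)) - upfront_cost

# Sitting content pays off only while resources last; expansion costs effort
# now but extends the horizon.
sit_content = discounted_total(reward_per_step=1.0, horizon=50)
expand = discounted_total(reward_per_step=1.0, horizon=500, upfront_cost=10.0)
print(f"sit content: {sit_content:.1f}   expand: {expand:.1f}")

Under these made-up numbers expansion wins comfortably, which is really just the entropy point above in miniature; shorten the expanded horizon or raise the cost and the ranking can flip.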





