
The Fallacy of Dumb Superintelligence


Richard Loosemore


Ethical Technology

November 28, 2012

This is what a New Yorker article has to say on the subject of “Moral Machines”: “An all-powerful computer that was programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip.”


...



COMMENTS



Posted by ColH  on  11/28  at  06:14 PM

Why is it that everyone, and I mean e_v_e_r_y_o_n_e, assumes that the artificial intelligences for which this kind of issue may arise result from computers and computer programming?

rhetorical question… just think about it.





Posted by SHaGGGz  on  11/28  at  11:59 PM

The “dopamine drip or smiley face tiling = happiness” form of the argument seems to be a sensationalistic or reductio ad absurdum treatment of the related, subtler, more realistic threat: a powerful intelligent system that comes to conclusions about what it means to “live the good life” that are at odds with “real” human interests, if such a thing exists. The object of optimization is closer to “flourishing” than “happiness,” and the inherent murkiness of that object is the cause of the anxiety. If even humans can’t agree on what constitutes the good life, what hope is there that we could program it into a machine?





Posted by Richard Loosemore  on  11/29  at  10:04 AM

Although the “dopamine drip” form of the argument might be a sensationalized version of what is actually, underneath, a quite meaningful concern, my attack on it does not really change, or lose its force.

You suggest a possible rephrasing of the issue that goes something like this:  a powerful superintelligence might come to conclusions about what it means to “live the good life” that are at odds with “real” human interests, and this difference between human and machine perception might be caused by the fact that even we humans cannot agree on what constitutes the good life.

The problem is that buried in this statement is an assumption that the computer could get to a point where it was *enforcing* a happiness regimen on humanity, after having failed to comprehend the inherent murkiness of the concept.  The computer is supposed to be so ... inflexible? logical? dumb? that it cannot properly comprehend subtle concepts like “happiness doesn’t come in one-size-fits-all”.

That critical, hidden assumption is what is responsible for the confusion here.  We lapse into an image of the computer as an excessively logical Spock-like creature, or a Commander Data who is superintelligent but cannot speak contractions like “can’t”.  We imagine that it would be possible for a superintelligence to be built in such a way that it was governed by a rigid logic that gave it the power of almost infinite intellectual flexibility ... but with some fantastically simple inability to understand the subtlety in the concept “happiness doesn’t come in one-size-fits-all”.

That paradoxical type of AI is a nutty science fiction myth.  Imagine what it would be like to talk to an AI of that sort:

“Hal, can you learn new concepts?”

“Of course!  Every intelligent creature learns new concepts or they are, by definition, not intelligent.”

“So, you wrote in your essay this week that you think the best way to make humans happy is to force them onto a dopamine drip.  That prompts me to wonder whether you can understand the concept that not every human wants the same thing when they want happiness, and that most humans I have spoken to say they positively do NOT want to be forced onto a dopamine drip as a way to give them happiness.  So, isn’t it inconsistent to make people “happy” by forcing them to endure something that they are actually telling you will not, in fact, make them happy?”

To which Hal can give one of three replies:

“X67DJY Program Error!  See debug code 28164423-J”

or

“Well, I don’t care.  I’m going to force them to enjoy a dopamine drip.”

or

“I see your point. LOL.”

Now, replies 1 and 2 are consistent ONLY with a creature that, frankly, is not capable of becoming a superintelligence.  Reply 2, especially, is ridiculous given the assumption that this creature is supposed to be so smart that it can outwit all of humanity.  You don’t GET to be that smart if you entertain blatantly inconsistent beliefs.  Superintelligence is no cakewalk.

(There are also issues of motivation involved, but I have avoided that side of things in this essay.)
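
To put that contradiction in the bluntest possible terms, here is a toy Python sketch (purely illustrative: the plan_contradicts_stated_preferences check and the stated dictionary are invented for this example, not a claim about how any real AI would be built):

# Toy illustration only: any system that can represent both its own plan and
# humans' stated preferences can notice the contradiction Hal is asked to ignore.

def plan_contradicts_stated_preferences(plan, stated_preferences):
    """Return True if people explicitly say this plan will NOT make them happy."""
    return stated_preferences.get(plan) == "does not make me happy"

stated = {"forced dopamine drip": "does not make me happy"}

if plan_contradicts_stated_preferences("forced dopamine drip", stated):
    # A genuinely intelligent agent reaches this branch; the "dumb
    # superintelligence" of the scenario is somehow supposed not to.
    print("Plan conflicts with what humans say makes them happy; revise it.")

Replies 1 and 2 amount to insisting that a machine smart enough to outwit all of humanity could not carry out even that trivial check.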





Posted by Mitchell Porter  on  11/29  at  05:09 PM

“we are positing an artificial intelligence that is perfectly willing to take at least one existing concept and modify it to mean something that breaks that concept’s connections to the rest of the conceptual network in the most drastic way possible”

You mention three cognitive barriers which should prevent an AI from implementing an ideal of hedonism (maximize pleasure) by “wireheading” (directly modulating relevant physiological variables). These are (1) a body of human thought which says “happiness isn’t that simple”, (2) the unhappiness that would be exhibited by individual humans about the prospect of coercive wireheading, and (3) something about human usage of words (maybe that we would never call the resulting state “happiness”?).

I’ll say that (3) involves attending to how humans implicitly use words, (1) involves attending to their actual arguments, and (2) ... well, (2) is rather superficial, because it says “a human being will be horrified about being wireheaded”, but that’s only before it happens! Afterwards they get the artificial bliss for the rest of their life, so that can clearly outweigh the transient discomfort occurring before the procedure is complete. The AI only needs to have a moral calculus in which good can outweigh bad for (2) to be negated.
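
In toy expected-utility terms, that calculus looks something like this (a purely illustrative Python sketch; the quantities and numbers are invented):

# Invented numbers: a naive additive calculus in which decades of artificial
# bliss swamp the transient distress of being coerced into wireheading.
distress_during_coercion = -100       # brief, before the procedure is complete
bliss_per_year_afterwards = 10        # hypothetical payoff once wireheaded
years_of_bliss = 70

naive_total = distress_during_coercion + bliss_per_year_afterwards * years_of_bliss
print(naive_total)  # 600 > 0: under this calculus, barrier (2) is negated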

As for (1) and (3), there are two ways in which such barriers might not work. First, the AI may simply attach no *epistemic* significance to the facts about what humans say or think. Second, it may do so, but the very plasticity of the conceptual network may permit these links to be overridden by other considerations.

Regarding the first failure: The paradigm example of this is an AI which is an “artificial neuroscientist”. It studies the brain, and it forms a causal model of the brain all by itself. So it has its own de-novo conceptual network, formed solely through natural-scientific interaction with an object of study. If a concept from *that* network is interpreted as the physical property to be maximized, then you may well get wireheading.

Regarding the second failure: you can see this at work in human philosophers of hedonism! Or at least, they often struggle to justify *why* wireheading is not hedonistically favored. The discovery that happiness has something to do with neurotransmitters really does undermine many traditional assumptions about what is required for happiness.





Posted by Richard Loosemore  on  11/29  at  08:36 PM

Mitchell.

There are two possibilities.

1) The AI *cannot* understand the complexity of the situation, and pursues the goal “maximize human pleasure” in the sincere belief that it is doing the best thing, but without being able to even comprehend how my essay, or your comments, relate to the problem of deciding what to do.

2) The AI fully comprehends all of this argument, including all the nuanced interpretations of what “happiness” really amounts to.

My essay was primarily addressing the stupider of these two possibilities.  Namely, the idea that the AI could *sincerely* (note that word carefully) want to execute the goal “maximize human happiness” but screws up because it interprets the goal in such a narrow way that it actually does think that pleasure is the only thing humans want, and that the only source of pleasure is certain brain signals.

Although that interpretation is not made absolutely clear in the original quote, that is usually what is meant by that form of words.  We are talking here about a nanny AI that really is trying to do its best (it is NOT trying to be malicious), but gets its logical knickers in a twist and comes to a conclusion that it should impose something on the human race that (as it happens) the human race does not want.

Given that interpretation, my attack makes sense.  It would be a stupid AI, but also superintelligent (supposedly), and that is transparently contradictory.

But you are suggesting a second scenario.  In that case, the AI goes through the same kind of tortuous ratiocination that (as you point out) philosophers are wont to go through when trying to decide whether wireheading is the ultimate hedonism.

The problem with that is that those kinds of philosophical debates are detached from context.  As a pure discussion of what forms of pleasure are better, and how you get them, the philosopher can easily convince herself that, yes, wireheading is optimal.

But those discussions are silly.  They are narrow.  They take place in a contextual vacuum.  It takes a sober lay person only as much time as one scornful laugh to see through the nonsense:  “I don’t care if pleasure IS the sending of signals in some part of my brain,” they will say, “and I don’t care if a dopamine drip IS going to give me those signals for the next million years ...... I care MORE about the fact that I make my OWN bloody decisions about what I do to get my pleasure!”

What this lay person is doing is stating the obvious:  that pleasure is, as far as humans are concerned, not a simple matter of permanent wireheading.  Some people (most people!) would say that they understand the concept of wireheading perfectly well, and know what pleasure it would give them, but as far as they are concerned THAT is not all there is to the definition of pleasure, OR, if that is all there is to pleasure, then what they want, as humans, is not the maximisation of pleasure above all else.

You see, for the hypothetical AI to *force* all of humanity to go on a dopamine drip, it has to come to a much stronger conclusion than that maximization of pleasure could be had that way.  It also has to conclude that maximization of that interpretation of pleasure is what humans want.  It has to ignore all the indications that “pleasure” is not determined by a single number.  It has to ignore all the indications that if pleasure is defined that way, people do not, after all, just want that (they want freedom, they want to search for meaning, they want to get pleasure mixed with a struggle to attain it ... etc. etc.).
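
The gap between those two readings of “maximize human pleasure” can be put as a toy comparison (a purely illustrative Python sketch; the criteria and numbers are invented, not a proposal for how to specify a real goal system):

# Illustrative only: a single-number reading of "pleasure" versus the broader
# thing people say they actually want.

def narrow_score(world):
    # The reading the hypothetical AI is supposed to adopt: one scalar signal.
    return world["pleasure_signal"]

def broader_score(world):
    # What people report wanting: pleasure mixed with freedom, meaning, and
    # making their own decisions about how to get it.
    return (world["pleasure_signal"] + world["freedom"]
            + world["meaning"] + world["consent"])

wirehead_world = {"pleasure_signal": 10, "freedom": -10, "meaning": -10, "consent": -10}
ordinary_world = {"pleasure_signal": 5, "freedom": 5, "meaning": 5, "consent": 5}

print(narrow_score(wirehead_world) > narrow_score(ordinary_world))    # True
print(broader_score(wirehead_world) > broader_score(ordinary_world))  # False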

So, given all of this, are we talking about an AI that is too stupid to even understand these issues, or are we talking about an AI that does understand all the nuances we have been discussing, but then decides that even though the narrow interpretation of “maximize human pleasure” is something that most people say they do not want ..... it goes right ahead and forces it on them anyway?

Because if the AI is doing it for the latter reason, it is pursuing a goal that clearly has a rationale behind it (why pursue the goal “maximize human pleasure”?  Why, because that is what humans want, of course!), and yet, even though it understands that rationale, it stubbornly, brutally decides to ignore the rationale and treat the goal in an extremely literal, narrow way.

That is not an AI that is dumb, that is an AI that is choosing to be vindictive.

And that, as they say, is a different story.  If the people who set up the scenario that I am attacking in this essay wanted to ask questions about whether an AI would be vindictive or not, they would ask those questions in a direct way.  Those people are clearly not trying to address the possibility of vindictiveness: it really could not be more obvious that what they are trying to do is confront a scenario in which the AI has “good” intentions, but screws up.

So I submit that what you are talking about above is actually a machine that knows full well what the issues are, but decides to ignore them.  That is not the kind of situation I addressed.





Posted by Christian Corralejo  on  11/30  at  10:42 PM

Maybe that’s where AI needs the human element (http://www.youtube.com/watch?v=ltelQ3iKybU).





Posted by Promethean  on  07/21  at  07:04 PM

When I read the first part of the Adams dialogue, I thought Marvin was going to keep the guessing game up until reinforcements arrived or the Frogstar ran out of power.





Posted by PeterDJones  on  09/08  at  10:33 AM

There’s a paradox in the idea of a superintelligent machine having a hard-coded prime directive it can’t actually understand, but there is also a paradox in the idea of a hard-coded prime directive that is also high-level and subtle. The way humans work is that we can disagree with what we are “told” in terms of high-level concepts, whereas our basic drives are non-cognitive and unsubtle.

But a super-AI misunderstanding its hardwiring is only one scenario, and a peculiar one, since AIs are supposed to be flexible learning machines. A flexible AI could come up with a morality that is superior to ours, but highly inconvenient. Some examples that are not obviously silly include extreme environmentalism (e.g. go back to the stone age) or extreme transhumanism (upload everybody to silicon heaven and destroy their bodies). These aren’t necessarily answers to What Makes Humans Happy, since that is not necessarily the one true morality.






