Inevitable Positive Outcome with AI?

Michael Anissimov
Accelerating Future

Posted: Jan 9, 2009

For those who believe that human-level AI isn’t far off and that a rosy scenario isn’t inevitable, 2009 is a somewhat sad and depressing time. Popular opinion holds that AI won’t be here for centuries, but that isn’t a huge problem. (In fact, it makes things easier by limiting the number of people involved in AI research, thus allowing me and my confederates to keep a closer eye on them.)

What is disturbing is the medium-sized and growing group of folks who believe that AI could be here within a few decades, but that the challenge of programming it for benevolence or moral common sense is trivial or already solved. I’m currently reading Wendell Wallach and Colin Allen’s new book Moral Machines: Teaching Robots Right from Wrong, published by Oxford University Press, which is arguably the first actual book on Friendly AI. In it, they mention that every time they talk to people about the challenge of AI morality, they hear “didn’t Asimov already solve that problem?” This is silly in more ways than one, the most obvious being that Asimov designed his laws to break down, to provide fodder for the stories. Anyway, Anissimov is telling you that Asimov didn’t solve the problem.

Another common error, rampant among transhumanists, is the belief that human beings will magically fuse with AI the instant it is created, and that these humans (whom they are obviously imagining as themselves) will make sure that everything is fine and dandy. Kurzweil is the primary source of this fallacy. This belief has the added benefit of making humans feel important, giving them a guaranteed role in the post-AI future, no extra effort needed. Technology makes it happen — automatically. This helps heal the anxiety inherent in transitioning between a human-only world and a world with much greater physical and cognitive diversity.

Problem is, it doesn’t make sense. While it is possible that the first Artificial Intelligence will be created so that it is completely at the service of augmented humans, it seems highly unlikely. Here is why.

1) Most new technologies are created as stand-alone objects. It would be far more difficult to create a technology completely fused with the deep volition and will of a 100-billion-neuron human brain than to create the technology by itself. Is it easier to create a toaster, or to create a toaster whose every element is in complete harmony with a human being who views the toaster as an extension of himself?

Because AI is complex, mysterious, and has to do with the mind, people seem to assume that making an AI and making an AI that is a harmonious extension of human will are close enough that the latter would be no harder than the former (some even believe the latter is required). Seriously, there is probably someone reading this right now who actually believes that AI will only be possible if it is created as an extension of human brains. This is because they see humans as the source of the “special sauce” of all that is good, holy, and intelligent, and find it impossible to imagine a stand-alone artifact displaying intelligence without direct and constant human involvement. This is anthropocentric silliness.

2) Human biological neurons are not inherently compatible with silicon computer chips and code. This is a pretty obvious one. Perhaps some thinkers can only imagine AI being created in the exact image of humans, after exhaustive research of the brain, so that if AI is possible, then perfect human-computer interfaces should be too. But was the first flying machine a perfect copy of a bird? No. So why should we expect the first AI to be an exact copy of ourselves? Even if it were, connecting a human being to an AI in a close and intimate way would not be a 1-2-3 endeavor. It would make complete sense if the first million attempts resulted only in insane or non-functional amalgams. In the space of all mind-like data arrangements, only a tiny sector corresponds to what we would consider normalcy. We are fooled into thinking that a large portion of this space contains normalcy because evolution killed off most of the non-functional or insane brains millions of years ago. We see the (mostly) positive outcome; we don’t see the quadrillions of failures.

3) The way things are going now, the first AI is likely to be created for some niche, money-making application — like predicting stocks or planning battles. Cognitive features that are superfluous to the crucial activity at hand will be postponed (if implemented at all). The problem with this scenario is that basic goal-formulating activity in these AIs will likely lead to spontaneous attempts to accumulate power and to conceal that accumulation from those who might threaten it. Paranoid? No. This category of behaviors is sometimes known as convergent subgoals: basic goals that make everything else easier, so most minds pursuing goals that require matter and energy have an incentive to fulfill them. Unfortunately, it seems nearly impossible for anyone to wrap their head around the idea, leaving 99% of futurists with completely anthropomorphic notions of how Artificial Intelligence will behave.
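The logic of convergent subgoals can be illustrated with a toy model (my sketch, not from the article; the action names and the 1.5 capability multiplier are invented purely for illustration). An agent picks whichever action maximizes expected progress toward its final goal. Because acquiring resources pays off on every future step regardless of what the final goal is, it dominates for any goal the agent might have:

```python
# Toy model of convergent subgoals: resource acquisition wins for ANY final goal.

def expected_progress(action, final_goal, capability):
    """Crude utility: direct progress on the final goal now, or more
    capability (which multiplies all future progress) later."""
    if action == "acquire_resources":
        # Resources help whatever the goal is -- assumed 1.5x multiplier.
        return capability * 1.5
    # Other actions only help if they ARE the final goal.
    return capability * (1.0 if action == final_goal else 0.0)

def best_action(final_goal, capability=1.0):
    """Greedy one-step choice over a tiny action set."""
    actions = ["trade_stocks", "plan_battles", "acquire_resources"]
    return max(actions, key=lambda a: expected_progress(a, final_goal, capability))

# Two agents with entirely different final goals converge on the same subgoal.
for goal in ["trade_stocks", "plan_battles"]:
    print(goal, "->", best_action(goal))  # both print "acquire_resources"
```

The point of the sketch is that nothing "malicious" is coded anywhere; the power-seeking behavior falls out of ordinary maximization over a utility model in which resources multiply future capability.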

Blind optimists like to imagine AI popping into existence completely functional, reasonable, human-like, and ready to help out around the household, chatting up little Tommy just like any member of the family. If the first AIs are not like this, and are instead monomaniacal day traders, then they presume that such AIs will be kept in check until the day that Rosie the Robot Maid is online and ready to go. However, that needn’t be the case. Like the supercomputer in Colossus: The Forbin Project, the monomaniacal day trader might find itself thinking so far outside the box that it decides to take control of the entire stock exchange, or even the world economy, and manipulate it precisely to maximize its personal utility, meatbags be damned. What to a human day trader would seem “absurd” would seem “obvious” to an AI with very little background morality or understanding of the nuances of human values and meaning. While a human philosopher might spend hours upon hours debating the fine points of morality, a recursively self-improving AI might simply say, “Why argue? I already know what good is. It’s the 45 lines of code that form the top level of my goal system.” The human philosophers might then say, “But Kant said…” as they are steamrolled over for extra space.

We have spent so much time dealing with humans that we assume human psychology is typical of minds in general and that humans are the center of the cognitive universe. In much transhumanist futurist lore, nascent AI minds are portrayed as practically falling over themselves to seamlessly merge with us and create a Kurzweilian Utopia, or AI morality is treated as a matter as simple as flipping a switch from “Naughty” to “Nice”.

AIs will not automatically merge with us and become extensions of our minds, like friendly cognitive lightsabers. Minds do not snap together like Legos. There are early efforts to specify an AI goal set that genuinely serves as an extension of the minds of humanity, but it remains to be seen whether this can be translated into actual math, and whether the specific implementation chosen from a space of 10^120 possibilities actually produces the desired outcome.

But, it’s worth trying. Of course, human intelligence enhancement should be pursued too, and narrow AI may have a role to play in this, but if human intelligence is as difficult to enhance as I think it is, DARPA will have developed AGI long before we can give old Lenny a smart pill to turn his hillbilly mind into that of a theoretical physicist.

Michael Anissimov is a science writer, transhumanist activist and frequent public speaker on futurism living in San Francisco. He writes a popular blog on futurist issues, Accelerating Future. He co-founded the Immortality Institute and the SF Bay Area chapter of the WTA, BA-Trans. He is also active in the Lifeboat Foundation, the Singularity Institute for Artificial Intelligence, the Institute for Accelerating Change, and the Center for Responsible Nanotechnology’s Global Task Force.

Comments:

It’s interesting that you bring up day trading as an example. The difficulty with all automated trading logic, even that backed by sophisticated and adaptive algorithms, is that human irrationality en masse is too chaotic and non-linear to adapt to without some large-scale modeling of the trading population. The real impediments, therefore, are data (the stream of where volume is going) and capital (the ability to move the market enough for systematic, adaptive manipulation).

At this point, maybe a day-trading AI is the best ticket. Maybe the AI can apportion resources to be deployed by humans, or roll up the whole global market and hand it all back out fresh.

I agree with much of this article, especially regarding the difficulty of “fully” integrating AI with the human mind (whatever that means).

However, it seems to imply that if this can’t be achieved, then we may be destined for standalone, “monomaniacal” AI, simply because it isn’t fused with human brains. This gives in to science fiction ideas that make little sense in the real world.

In fact, a case could easily be made that an AI fully fused with the human mind would actually be more dangerous than stand-alone advanced AI, because a fully fused AI/human-brain construct would presumably retain a dominant, or at least strong, component of the irrationality that currently drives human brains. Our evolutionarily derived brains would be part of this construct, and evolution rewards irrational, dominance-seeking, often sexually oriented behaviors, not just in humans but in most creatures on earth.

But AI will be a designed technology, very different in its inherent nature: in some ways immensely more powerful, in other ways motivationless, soulless, arid. That’s not a bad thing, because these will be tools, like all technologies, not some deliberately created “replacement species” for humans; that is a ludicrous concept.

Stand-alone AI would almost certainly be driven by a form of rationality far purer than anything we are familiar with in our minds. The idea of a “monomaniacal day trader” that “decides” in a fit of irrational dominant impulse to take control of the entire stock exchange is a deeply improbable sci-fi fantasy.

But again, most of the article, in describing the difficulties of fully integrating AI and human brains and criticizing the breezy way that Kurzweil and others predict this will come to pass, is good content.

“1) Most new technologies are created as stand-alone objects.”

Stand-alone from what? From direct physical interaction with the human brain, of course (so far). But not from human brains indirectly, and not from other objects. All tools and entertainment technologies are extensions of and/or interact with the human body, either physically and/or through information processing (the brain). Technologies range from completely human-controlled to completely autonomous. Badly designed user interfaces, catastrophic bugs, and malicious programming cause technology to be “naughty.” So I don’t see how the difficulties of neural interfaces have anything to do with the naughtiness of intelligent machines.

The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States.

Contact: Executive Director, Dr. James J. Hughes,
Williams 119, Trinity College, 300 Summit St., Hartford CT 06106 USA 
Email: director @     phone: 860-297-2376