

#17: Cerebral Imperialism


Richard Eskow
Ethical Technology

Posted: Dec 15, 2010

Could it be that there is no intelligence without a body? That there’s only computation? That cognition is the byproduct of biological processes, and never the driver of them?



According to IEET readers, what were the most stimulating stories of 2010? This month we’re answering that question by posting a countdown of the top 31 articles published this year on our blog (out of more than 600 in all), based on how many total hits each one received.

The following piece was first published here on June 3, and is the #17 most viewed of the year.



Cerebral imperialism is the culturally-biased perspective that says we are our cognition, rather than the sum total of our physical and mental inputs and outputs.  Computer scientists who practice it run the risk of creating “unfriendly artificial intelligences.”  And when it comes to “unfriendly AIs,” the mystics and the corporate CEOs got there first.

The present is where the future comes to die, or more accurately, where an infinite array of possible futures collapses into one.  We live in a present where artificial intelligence hasn’t been invented, despite a quarter century of optimistic predictions.

John Horgan, writing in Scientific American, suggests we’re still a long way from developing it (although when it does arrive, it may well come as a sudden leap into existence, a sudden achievement of critical mass).

However and whenever (or if ever) it arrives, it’s an idea worth discussing today.  But a question: does this line of research suffer from “cerebral imperialism”?

The idea of “cerebral imperialism” came up in an interview I did for the current issue of Tricycle, a Buddhist magazine, with transhumanist professor and writer James Hughes (Executive Director of the IEET).  One exchange went like this:

Eskow: There seems to be a kind of cognitive imperialism among some transhumanists that says the intellect alone is “self.” Doesn’t saying “mind” is who we are exclude elements like body, emotion, culture, and our environment? Buddhism and neuroscience both suggest that identity is a process in which many elements co-arise to create the individual experience on a moment-by-moment basis. The Transhumanists seem to say, “I am separate, like a data capsule that can be uploaded or moved here and there.”

Hughes: You’re right. A lot of our transhumanist subculture comes out of computer science—male computer science—so a lot of them have that traditional “intelligence is everything” view. As soon as you start thinking about the ability to embed a couple of million trillion nanobots in your brain and back up your personality and memory onto a chip, or about advanced artificial intelligence deeply wedded with your own mind, or sharing your thoughts and dreams and feelings with other people, you begin to see the breakdown of the notion of discrete and continuous self.

An intriguing answer—one of many Hughes offers in the interview—but I was going somewhere else:  toward the idea that cognition itself, that thing which we consider “mind,” is over-emphasized in our definition of self and therefore is projected onto our efforts to create something we call “artificial intelligence.”

Is the “society of mind” trying to colonize the societies of body and emotion?

Why “artificial intelligence,” after all, and not an “artificial identity” or “personality”? The name itself reveals a bias. Aren’t we confusing computation with cognition, and cognition with identity?  Neuroscience suggests that metabolic processes drive our actions and our thoughts to a far greater degree than we’ve realized until now.  Is there really a little being in our brains, or contiguous with our brains, driving the body?
To a large extent, isn’t it the other way around? Don’t our minds often build a framework around actions we’ve decided to take for other, more physical reasons?  When I drink too much coffee I become more aggressive.  I drive more aggressively, all the while thinking thoughts as I weave through traffic:  “I’m late.” “He’s slow.” “She’s in the left lane.” “This is a more efficient way to drive.”

Why do we assume that there is an intelligence independent of the body that produces it?  I’m well aware of the scientists who are challenging that assumption, so this is not a criticism of the entire artificial intelligence field.  There’s a whole discipline called “friendly AI” which recognizes the threat posed by the Skynet/Terminator “computers come alive and eliminate humanity” scenario.  A number of these researchers are looking for ways to make artificial “minds” more like artificial “personalities.”

Why not give them bodies?  Sure, you could create a computer simulation of a body, but wouldn’t they just override that?

Intelligence co-developed with other processes embedded in the body and shaped for evolutionary advantage: love, for example, and empathy.  A non-loving and non-empathetic humanlike intelligence is a terrifying thing.

In fact, we already have non-loving, non-empathetic autonomous creations that function by using humanlike intelligence.  They’re powerful and growing, and they operate along perfectly logical lines in order to ensure their own survival and well-being.  Here are two of them:  British Petroleum and Goldman Sachs.  Each of them is an artificially intelligent “being” (whose intelligence is borrowed from a number of human brains), designed by humans but now acting strictly in its own self-interest.

How’s that working out?

This isn’t a “science” vs. “religion” argument, either.  “Cerebral imperialism” in its present form is a computer science phenomenon, but religion runs the same risks—on a far greater or more immediate scale, in fact.  Religious fanaticism is selfless heroism when viewed through a certain lens of belief.  And the Eastern religions that so many of us hold in warm regard have the potential, if misused, to turn anybody into an “unfriendly AI.”  Buddhism and Hinduism revere life.  But by emphasizing the insubstantiality of life and the relative nature of human values, these religious philosophies run the risk of encouraging adherents toward amorality.

Aum Shinrikyo, the Japanese cult that conducted sarin gas attacks on Tokyo’s subways, blended some Christian iconography with a melange of Buddhist and other concepts.  They were able to lead their followers through a step-by-step process that stripped them of their attachment to transient existence and then removed their resistance to violence.  It’s a remarkable testament to the power of the Eastern spiritual tradition that there haven’t been dozens of such groups during its history.

The fourth-century Christian schismatics known as Donatists had a group called the “Lord’s Athletes,” or Agonistici, who attacked the “impure” Catholics and other believers, driving them from sacred sites the way the Taliban does to Sufis in Pakistan today.  And Sufism, the loving and gentle branch of Islam, is open to similar forms of abuse.  Hassan-i Sabbah was reportedly influenced by Sufism when he formed the hashashin (the origin of the word “assassin”) in the 11th century.  Sufis have been among the most gentle and loving of historical figures, and the Persian Sufi poet Rumi is the most popular poet in North America seven centuries after his death (although mostly in highly bowdlerized New Age translations).  Yet this popular quote is attributed to Rumi:  “Out beyond right and wrong there is a field.  I’ll meet you there.”

Um, no thanks.

When mystics like Rumi or the Buddhist masters discuss going “beyond right and wrong,” it’s after a rigorous framework of training and is based on a cosmology that inclines toward benevolence.  “Friendly AI” researchers may want to study these philosophies.  If “artificial intelligence” isn’t rooted in a body, it might be a good idea to make sure they’re Sufis or Buddhists.

I’ve written before about the Turing Test’s value and its cultural and religious roots.  Conversation is an output of mind, but that doesn’t mean conversation is impossible without mind.  The whole discussion seems to confuse “selfhood” with “mind,” and “mind” with the products of mind.  At best, it confuses output with structure or essence.

After all, the factory that produces synthetic leather isn’t an “artificial cow.”

Couldn’t this over-emphasis on cognition as the core part of identity really be an attempt to suppress unruly and unwelcome emotions?  That would be the same impulse that leads people to misuse the mystical experience the way the hashashin and Aum Shinrikyo did.  “Unfriendly AI” is a frightening prospect, but the most immediate danger is to live in a society where we are collectively detached from our emotions—one where we create a false ideal of cognition and then worship it to the exclusion of other values.  That’s how we got BP and Goldman Sachs, two far more immediate dangers, isn’t it?

Gehirn, Gehirn über alles!  Brain, brain above all ... we might want to give that a second thought.  Our current “unfriendly AIs,” the mega-corporations that control our world, have already given us as much disembodied, emotionless logic as we can stand.


Richard Eskow, an Affiliate Scholar of the IEET and Senior Fellow with the Campaign for America's Future, is CEO of Health Knowledge Systems (HKS) in Los Angeles.


COMMENTS


I was thinking about this the other day.  You don’t need to go beyond the masculine to find folks who value forms of information processing that aren’t like regular cognition.  Consider what Michael Jordan would value about himself, or for that matter a stereotypical redneck firearms enthusiast.  Neither of those men, for good or ill, would necessarily be in touch with the feminine, but both would assign a high value to physical skills.

Also, most people, male or female, value their ability to carry out their responsibilities more than they do their ability to perform abstract reasoning.  If I have to get dementia, I’d prefer Alzheimer’s disease to frontotemporal dementia, as it impairs social functioning less.  This is true even though early FTD can enhance artistic skills, something which I value.






