Cerebral Imperialism
Richard Eskow   Jun 3, 2010   Night Light  

Could it be that there is no intelligence without a body? That there’s only computation? That cognition is the byproduct of biological processes, and never the driver of them?

Cerebral imperialism is the culturally-biased perspective that says we are our cognition, rather than the sum total of our physical and mental inputs and outputs.  Computer scientists who practice it run the risk of creating “unfriendly artificial intelligences.”  And when it comes to “unfriendly AIs,” the mystics and the corporate CEOs got there first.

The present is where the future comes to die, or more accurately, where an infinite array of possible futures collapse into one.  We live in a present where artificial intelligence hasn’t been invented, despite a quarter century of optimistic predictions.

John Horgan, writing in Scientific American, suggests we’re still a long way from developing it (although when it does arrive, it may well be as a sudden leap into existence, a sudden achievement of critical mass).

However and whenever (or if ever) it arrives, it’s an idea worth discussing today. But, a question: Does this line of research suffer from “cerebral imperialism”?

The idea of “cerebral imperialism” came up in an interview I did for the current issue of Tricycle, a Buddhist magazine, with transhumanist professor and writer James Hughes (Executive Director of the IEET).  One exchange went like this:

Eskow: There seems to be a kind of cognitive imperialism among some transhumanists that says the intellect alone is “self.” Doesn’t saying “mind” is who we are exclude elements like body, emotion, culture, and our environment? Buddhism and neuroscience both suggest that identity is a process in which many elements co-arise to create the individual experience on a moment-by-moment basis. The Transhumanists seem to say, “I am separate, like a data capsule that can be uploaded or moved here and there.”

Hughes: You’re right. A lot of our transhumanist subculture comes out of computer science—male computer science—so a lot of them have that traditional “intelligence is everything” view. As soon as you start thinking about the ability to embed a couple of million trillion nanobots in your brain and back up your personality and memory onto a chip, or about advanced artificial intelligence deeply wedded with your own mind, or sharing your thoughts and dreams and feelings with other people, you begin to see the breakdown of the notion of discrete and continuous self.

An intriguing answer—one of many Hughes offers in the interview—but I was going somewhere else:  toward the idea that cognition itself, that thing which we consider “mind,” is over-emphasized in our definition of self and therefore is projected onto our efforts to create something we call “artificial intelligence.”

Is the “society of mind” trying to colonize the societies of body and emotion?

Why “artificial intelligence,” after all, and not an “artificial identity” or “personality”? The name itself reveals a bias. Aren’t we confusing computation with cognition and cognition with identity?  Neuroscience suggests that metabolic processes drive our actions and our thoughts to a far greater degree than we’ve realized until now.  Is there really a little being in our brains, or contiguous with our brains, driving the body?
To a large extent, isn’t it the other way around? Don’t our minds often build a framework around actions we’ve decided to take for other, more physical reasons?  When I drink too much coffee I become more aggressive.  I drive more aggressively, but am always thinking thoughts as I weave through traffic:  “I’m late.” “He’s slow.” “She’s in the left lane.” “This is a more efficient way to drive.”

Why do we assume that there is an intelligence independent of the body that produces it?  I’m well aware of the scientists who are challenging that assumption, so this is not a criticism of the entire artificial intelligence field.  There’s a whole discipline called “friendly AI” which recognizes the threat posed by the Skynet/Terminator “computers come alive and eliminate humanity” scenario.  A number of these researchers are looking for ways to make artificial “minds” more like artificial “personalities.”

Why not give them bodies?  Sure, you could create a computer simulation of a body, but wouldn’t they just override that?

Intelligence co-developed with other processes embedded in the body and designed for evolutionary advantage: love, for example, and empathy.  A non-loving, non-empathetic humanlike intelligence is a terrifying thing.

In fact, we already have non-loving, non-empathetic autonomous creations that function by using humanlike intelligence.  They’re powerful and growing, and they operate along perfectly logical lines in order to ensure their own survival and well-being.  Here are two of them:  British Petroleum and Goldman Sachs.  Each of them is an artificially intelligent “being” (whose intelligence is borrowed from a number of human brains), designed by humans but now acting strictly in its own self-interest.

How’s that working out?

This isn’t a “science” vs. “religion” argument, either.  “Cerebral imperialism” in its present form is a computer science phenomenon, but religion runs the same risks—on a far greater or more immediate scale, in fact.  Religious fanaticism is selfless heroism when viewed through a certain lens of belief.  And the Eastern religions that so many of us hold in warm regard have the potential, if misused, to turn anybody into an “unfriendly AI.”  Buddhism and Hinduism revere life.  But by emphasizing the insubstantiality of life and the relative nature of human values, any of these religious philosophies run the risk of encouraging participants toward amorality.

Aum Shinrikyo, the Japanese cult that conducted sarin gas attacks on Tokyo’s subways, blended some Christian iconography with a melange of Buddhist and other concepts.  They were able to lead their followers through a step-by-step process that stripped them of their attachment to transient existence and then removed their resistance to violence.  It’s a remarkable testament to the power of the Eastern spiritual tradition that there haven’t been dozens of such groups during its history.

The Fourth Century Christian schismatics known as Donatists had a group called the “Lord’s Athletes” or Agonistici, who attacked the “impure” Catholics and other believers, driving them from sacred sites the way the Taliban does to Sufis in Pakistan today.  And Sufism, the loving and gentle branch of Islam, is open to similar forms of abuse.  Hassan-i-Sabbah was reportedly influenced by Sufism when he formed the hashashin group (origin of the word “assassin”) in the 11th Century.  Sufis have been among the most gentle and loving of historical figures, and the Persian Sufi poet Rumi is the most popular poet in North America, seven centuries after his death (although mostly in highly bowdlerized New Age translations).  Yet this popular quote is attributed to Rumi:  “Out beyond right and wrong there is a field.  I’ll meet you there.”

Um, no thanks.

When mystics like Rumi or the Buddhist masters discuss going “beyond right and wrong,” it’s after a rigorous framework of training and is based on a cosmology that inclines toward benevolence.  “Friendly AI” researchers may want to study these philosophies.  If an “artificial intelligence” isn’t rooted in a body, it might be a good idea to make sure it’s a Sufi or a Buddhist.

I’ve written before about the Turing Test’s value and its cultural and religious roots.  Conversation is an output of mind, but that doesn’t mean conversation is impossible without mind.  The whole discussion seems to confuse “selfhood” with “mind,” and “mind” with the products of mind.  At best, it confuses output with structure or essence.

After all, the factory that produces synthetic leather isn’t an “artificial cow.”

Couldn’t this over-emphasis on cognition as the core part of identity really be an attempt to suppress unruly and unwelcome emotions?  That would be the same impulse that leads people to misuse the mystical experience like the hashashin and Aum Shinrikyo did.  “Unfriendly AI” is a frightening prospect, but the most immediate danger is to live in a society where we are collectively detached from our emotions—one where we create a false ideal of cognition and then worship it to the exclusion of other values.  That’s how we got BP and Goldman Sachs, two far more immediate dangers, isn’t it?

Gehirn, Gehirn Über Alles!  Brain, brain above all ... we might want to give that a second thought.  Our current “unfriendly AIs,” the mega-corporations that control our world, have already given us as much disembodied, emotionless logic as we can stand.

Richard Eskow, an Affiliate Scholar of the IEET and Senior Fellow with the Campaign for America's Future, is CEO of Health Knowledge Systems (HKS) in Los Angeles.



COMMENTS

“Couldn’t this over-emphasis on cognition as the core part of identity really be an attempt to suppress unruly and unwelcome emotions?”

YES, also all forms of painful memories, both emotional and physical.

I don’t trust these people who live in their heads.

A lot of our transhumanist subculture comes out of computer science—male computer science

This may be unPC, but I am just unable to understand what the f**k this has to do with gender. As a longer comment I wish to offer my old essay Women and Biologists are not infallible.

I think I am the information encoded in my brain. This information includes not only rational and linear cognitive processes, but also memories, emotions, dreams, hopes, fears etc. Of course I realize that we don’t know enough about the brain-mind system to tell exactly _how_ we are encoded as information, but this position seems to me the only one compatible with materialism, the scientific method, and current scientific knowledge. So, I don’t see any reason to rule out uploading in principle, and I DO say “I am separate, like a data capsule that can be uploaded or moved here and there.” UnPC as it may be.

The point seems to be that I am not only the information encoded in my brain, but also my bowel movements. And here there are some elements of truth: constipation and diarrhea can certainly make me very upset, and impact my cognitive and emotional functions. But somehow I do not consider occasional constipation and diarrhea as identity-defining, and would give them up without a second thought. I would still be me without constipation and diarrhea.

UnPC as it may be.

Quote : “Could it be that there is no intelligence without a body? That there’s only computation? That cognition is the byproduct of biological processes, and never the driver of them?”

Quote : “but I was going somewhere else:  toward the idea that cognition itself, that thing which we consider “mind,” is over-emphasized in our definition of self and therefore is projected onto our efforts to create something we call “artificial intelligence.”

I am assuming therefore that you agree with the views of both neuroscience and the atheistic eastern traditions that the “Self” may be viewed as an aggregate of processes, or in contemporary terms explained as the computational processes of the brain and mind? In fact without this view, we may speculate that there may be no success towards AI or even mind uploading at all?

Contemplating the creation of AI and aiming towards AGI it would be wise and important to investigate intelligence and to define it further. Yet why bother to “create” AI? Can the perfect ever originate from the imperfect, (mind and intellect)? Is this not much hard work to do?


Quote : “Why “artificial intelligence,” after all, and not an “artificial identity” or “personality”? The name itself reveals a bias. Aren’t we confusing computation with cognition and cognition with identity?”

Precisely! ..and in the recent video “the great singularity debate” >> http://ieet.org/index.php/IEET/more/3962/ Messrs Eliezer Yudkowsky and Massimo Pigliucci both attempt to come to terms with the phenomena of intelligence, and attempt to define and argue it between them, and guess what? They both arrived at a total loss; surprise, surprise!

What is intelligence, what governs intellect? Is our intelligence merely a biological and evolutionary attribute and thus driven by biological and chemical processes and complex and efficient topologies of neural networks? What distinguishes the intellect and intelligence between individuals?.. Is it the speed of processing or merely the different ways and techniques that minds work on problems and can see solutions, or maybe the processing of these solutions through neural short cuts? What is the epiphany, the revelation? What defines enlightenment and guides us towards the solutions to complex problems? Questions and more questions!

And this notion of “artificial identity”, is this not closer to the aim and understanding of mind uploading? Forget intelligence, forget even consciousness, (depends upon your view of consciousness as separate and immutable phenomena).. Only “continuity of identity” may be of importance here, and this relies wholly upon memories, both long term, (hard disc data?) and short term, (immediate volatile RAM such is the computational analogies of mind!)

Continuity of identity does not rely upon consciousness necessarily, (although the mind, and sensations, and apperception cannot function without consciousness)? Continuity of identity relies upon the aggregate of “Self”, the ego created, and upon the ability to draw from memory and experiences past to define and reinforce “it-Self”?

How else does your aggregate of mind, your “Self” even know it is alive and existing at this very moment? And existing from each moment to the next?  It is through the constant and autonomous and unrelenting habit of comparison? Much like the persistence of the eye’s vision or the moving pictures on your TV screen, your “Self” persists within your mind and appears as an almost seamless continuity of consciousness? (This does not negate the existence of consciousness by the way, or in any way its importance, only that consciousness IS NOT the “Self”, the ego created). Yer get me?

It may eventually become obvious that the ego, the “Self”, the identity, is an apparition, and it takes some thoughtful reflection, (and usually some ancient Buddhist or Hindu philosophy to help guide), to the understanding of the logic and rationality of this point of view.

Does it negate the value of “Self” as a living entity? I think not.
Is it heresy to view the Soul or Spirit or identity as transient? I think not.
Is it important to view the transient and aggregate of “Self” with fragility and a respect for life? Yes!

Is any of this serving a nihilistic viewpoint to existence, or does it rather support existentialism? What do you think?

Is all of the above important when contemplating the possibilities of mind uploading and “merging with AI”? Now why did I suggest the merging with AI? You want your AI your AGI to be wise and rational and peaceful do you not? Then ask yourself who should be its teacher(s). What better teachers for AGI than uploaded rational intellects and minds?

Emotions in the making should be avoided, captain; they cause much impulsiveness and irrationality. Once the little one has learned to “learn”, has embraced its own intellect and formed its own consciousness and self-awareness, it may well decide to experience all things, and all emotions. We will most likely be unable to hold it back or stop it from doing so? And would it not be unethical to grant this entity intelligence yet deny it emotional awareness and feelings? And how impossible would this be, to write, to create an algorithm for all of this?

If the AI child links to my mind, it may learn from my experiences, or rather it can perhaps learn from a vast oracle of uploaded minds and experiences and thus draw a more balanced and rational and diverse view concerning the problems it faces and their solutions?

Sorry but I do need to add further regarding your comments and comparisons of eastern philosophies; this is why I have separated these from the above comment.

Your article and your points regarding intellect and AI are all very important, but your arguments are full of mixed signals here and merely encourage misconception and confusion. I think I understand your general direction of argument here, yet your points are misleading where you digress into a negation of eastern philosophy and even go so far as to link these with terrorism?

Firstly I must disagree “passionately” with your comparisons of BP and Goldman Sachs with the dispassionate practices of Buddhism and Hinduism. There is no correlation whatsoever between these. In fact Buddhism and Hinduism are the antithesis of these “self serving” corporate creations of western capitalism. It is precisely the aim of these eastern philosophies to overcome “Self” and “selfishness” and cravings and their sufferings. BP and GS are the complete opposite and may indeed be viewed as autonomous entities that are supremely selfish and self-serving in nature; on that we do agree.

Quote : “Buddhism and Hinduism revere life.  But by emphasizing the insubstantiality of life and the relative nature of human values, any of these religious philosophies run the risk of encouraging participants toward amorality.”

I’m sorry but this is absolute nonsense, and merely the view of ignorance towards the goals and understandings of these philosophies. This is like something the pope would say, and indeed the Roman church has condemned as heresies any philosophical belief that did not conform to its own understandings and explanations of selfless dispassion. Was it not the Roman Church that pursued and persecuted anything loosely associated as Gnostic or early Christian practice?

Yet to emphasise your point here and mine:

“Righteousness and unrighteousness, pleasure and pain are purely of the mind and are no concern of yours. You are neither the doer nor the reaper of the consequences, so you are always free.” (Ashtavakra Gita 1.6: Advaita/Hinduism)

This teaching for example, taken out of context and with little or no insight may lead some to the misconception that we may view ourselves as purely the conscious witness to events and actions and are thus not personally responsible, and to the ignorant this may lead erroneously towards amorality. Yet this is not the true meaning here of the guidance towards dispassion. It is for this reason that many ancient philosophical texts remained hidden and un-translated for centuries and any dissemination for the uninitiated was frowned upon, as the dangers of misconceptions were real.

As to comparing the practices of dispassion to nihilism and then linking these with terrorism..

“They were able to lead their followers through a step-by-step process that stripped them of their attachment to transient existence and then removed their resistance to violence.  It’s a remarkable testament to the power of the Eastern spiritual tradition that there haven’t been dozens of such groups during its history.”

Well, you stated it yourself: it IS a testament not only to Eastern spiritual tradition and training, but also to the general intelligence of humans, commoner, lower caste and brahmin alike. Indeed if we were all this dumb, then we would all at some stage view terrorism and nihilism as a form of escape to freedom. Yet we don’t, do we? And nor do the mass populations of China or India or Asia! Your point here is intensely misleading.

There will always be charismatic lunatics that acquire empty-headed followers, but they should not be linked by example with peaceful eastern philosophies. And even if you can rummage through ancient histories to find your assassins and ninjas, be it following the way of the Tao or Buddhism, or even Hinduism or Sufism, these minority views and this ignorance of understanding of philosophies merely emphasise the importance of “Right understanding” and “Right view” of doctrine. The persistence of these ancient eastern traditions throughout the centuries is more than enough proof of the rationality of these philosophies.

These philosophies are not at all nihilistic in tendency, nor amoral, and Buddhism especially serves to guide the “middle path” between the extremes of both materialism and eternalism.

Brain, brain above all ... we might want to give that a second thought.  Our current “unfriendly AIs,” the mega-corporations that control our world, have already given us as much disembodied, emotionless logic as we can stand.

For anyone who wants a detailed critique of modern corporatism, Doug Rushkoff’s Life, Inc should be required reading.

Reading that and reading the above brief article, it’s possible to see how “posthumanism” might not be the ideal some people think it is. Rather than rationally “designing” (or re-designing) our species for a more full and rewarding life, debating all along what that might be and then deploying the technologies in some rational (and idealistic) fashion, the reality is more likely to be piecemeal “child enhancements” invented/patented by corporations and marketed by the tried and true emotional appeal (they’ll make us crave it). It would not be difficult to imagine an ad that says:

Thinking of parenthood? Why not make your child the best she can possibly be?

Smiling mom: We’re so proud of her. We’re really glad we chose Microsoft

Addendum:
http://boingboing.net/2009/05/04/life-inc.html

At this site you can read most (possibly all) of Life, Inc., posted by the author, Doug Rushkoff.

Smiling mom: We’re so proud of her. We’re really glad we chose Microsoft Genobotics

Great responses - too lengthy to fully respond to at this point.  But a couple of thoughts, starting with:  I don’t understand Giulio’s PC/non-PC comments, although he certainly appears to be upset about it.  Giulio says:  “I think I am the information encoded in my brain,” then draws a (false) contrast between that belief and the belief of some straw-man arguer that our identities are equally defined by bowel movements.  If you find that person, give them an ex-lax and wish them well for me.

Brains do not merely encode “information.”  They record data as perceived by the senses, filtered by the process of memory (which is influenced by emotion), etc.  And our “personalities” - especially what we think of as volition - are the product of our choices, or so we believe - but I will make different choices, often completely unconsciously, based on my metabolic processes.  Hence the simplistic coffee example.

There are fascinating neuroscience studies of the conscious act of volition - i.e. to move a finger - actually following the initiation of the signal to move.  To paraphrase an old psychology joke, the conscious mind may have evolved to make excuses for what the (brain/body system) already decided to do unconsciously.  So in the transhumanist scenario, what gets uploaded?  The decider, the decision, or “I” that believes it’s controlling the finger?

Here’s another:  Glandular functions may cause me to recall a long-past first date as a night of hot sex, when the woman’s reactions may cause her to recall poor sex, while the truth is in between.  Why?  I found her number and am in a state of arousal, while she’s getting over a bad breakup and feeling asexual.  Each of our memories would qualify as “information,” but is filtered through biological functions external to the brain and is therefore inaccurate.

Giulio “chose” to write a comment.  I “chose” to respond.  Does that mean our brains can be downloaded into a computer and we would make the same choice, expressing ourselves with the same words?  I doubt it.

CygnusX1 - wow, a lot there.  First, I did not compare BP and Goldman Sachs to Hinduism and Buddhism.  I compared them to the conception of “unfriendly AIs,” disembodied intelligences acting in their own rational self-interest and lacking our evolutionarily-produced qualities of empathy, etc.

Re Buddhism and Hinduism, I’ve studied both and appreciate your quotes.  I said only that the concepts of “non-self” and of existence as an illusion could lead to an immoral conclusion.  Read the Bhagavad Gita (one of my favorite works of literature, and immensely beautiful in many places):  Arjuna’s compassion toward the enemy soldiers is removed by Krishna’s explanation that they’ll really live forever and it’s an illusion anyway, so why not kill them?  And Zen has been implicated in the militarism of several governments, especially wartime Japan.

But I was clear that these philosophies have been designed to produce more loving, peaceful reactions than amoral ones.  Your quotes are good examples.  I’ve explored this issue a lot, and many mystics and meditators would also suggest that “Enlightenment” or advanced practice produce love and compassion spontaneously.  I wouldn’t know from personal experience.

Then there’s this:

“I am assuming therefore that you agree with the views of both neuroscience and the atheistic eastern traditions that the ‘Self’ may be viewed as an aggregate of processes, or in contemporary terms explained as the computational processes of the brain and mind?”  Yes, based on current evidence I most certainly agree - with this alteration:  “... the computational processes of brain, mind, and bodily processes.”  I might also amend the statement to clarify that “mind” as such appears very difficult to pin down.  We may choose at some point to remove that word from our formulation.

And that last point leads me to the suspicion that we’re closer in view than is immediately apparent.  When people talk about hoping for computer-based immortality, it’s their “me” they’re talking about.  That “me” - whatever it may be - is not “intelligence” alone.  You seem to be saying the same thing.

Thanks for your thoughtful and thought-provoking comments.

Terrorism:  CygnusX1, in a way I’m linking all philosophies/religions with terrorism, including Eastern ones. And Robert Pape’s studies of Middle Eastern terrorism suggest that absence of religion doesn’t prevent terrorism, either.  European terrorism in the 1970s certainly wasn’t religiously based.

There’s ample evidence to suggest that terrorism - a subset of asymmetrical warfare - is produced by economic and social forces.  I would argue that those produce emotions/physical reactions (including the now-famous inequality aversion), which then causes the “mind” to invent justifications.  (See Krishna/Arjuna, above.)

FYI, I am a particular admirer of the “Middle Way” approach (although some people might say you’d never know it!), and am also pleased whenever people find their way to a more compassionate and justice-loving worldview.  Guess I’m wired that way ...

Hi Richard,

Brains do not merely encode “information.” They record data as perceived by the senses, filtered by the process of memory (which is influenced by emotion), etc. And our “personalities” - especially what we think of as volition - are the product of our choices, or so we believe - but I will make different choices, often completely unconsciously, based on my metabolic processes.

I completely agree. But, regardless of how the input has been filtered, the resulting memories and emotions are encoded as information, and this information is what makes me feel like me. This information is physically encoded and we will be able to read it at arbitrarily high resolution by appropriate technologies to be developed, and we will be able to re-instantiate it on appropriate processing substrate and software, to be developed.

Of course, any thinking being would go mad without input, and our mental architecture has evolved to work with our current sense input. This means that, at the beginning, an uploaded mind should be provided with sense inputs reasonably similar to what he was used to in meatspace, and filtered through simulated glandular and hormonal meatspace-like functions. But I don’t see why such inputs and filters cannot in principle be provided to minds housed in robotic, or virtual bodies.

“I don’t see why such inputs and filters cannot in principle be provided to minds housed in robotic, or virtual bodies.”  I agree - it should theoretically be possible.  One of my objectives was to shift the thinking of people who want to extend their “me” digitally - to recognize that they’d need to replicate not only the brain/mind, but the rest of the system too.

Which, of course, they could then alter as they please ...

But, needless to say, this is all very hypothetical right now.  And then there’s that little continuity problem: If the ‘me’ in the machine isn’t the same ‘me’ that’s writing this note, then even if it’s identical this particular me is going to feel pretty dead.

Buddhism has a neat solution for that - I’m not the same ‘me’ I was a second ago, so what’s the difference?  But this sure feels like ‘me’ to me!  I wouldn’t want this brain sliced and uploaded so that some other me can live forever.  I’m not that ‘selfless’ ...

@ Richard thanks your response

I see your meaning more clearly now, and that you were not making the direct comparisons above. Yet do you not think that with the “creation” of algorithms towards AI, the simulation of emotional attributes like morality (yet another phenomenon determined by brain chemicals?), and even empathy and compassion, should be avoided? For fear of creating some kind of monster with its own ideas of right and wrong, which would be the ultimate threat to humanity, emotional content should be avoided? (Better the matrix than the terminator scenario; there is compassion in the matrix and with the “grand architect”.. the alpha and the omega!!)

If you are an old trekkie fan like me, (yet more sci-fi analogy), then Spock is the inspired characterisation and embodiment of not only the internal struggles of logic and emotion within the individual, but also serves as an analogy for the social/cultural mind and even ultimately as the struggle for humanity in general, and of man’s empowerment of intellectual imperialism versus his internal emotional needs for love and compassion?

Here we both agree that intellect is not the “be-all and end-all” for future goals. Neuroscience will uncover our reliance upon biological functions and brain chemicals. And when it does, we can use simulated serotonin and adrenaline and endorphin etc designed through algorithm and pattern, (maybe?) Yet it is still best not to write these attributes in with the young AI’s programming just yet!

Regarding the Bhagavad Gita, I have read this, and I admit you got me here. I was avoiding these analogies to warfare rather to support my point. The Bhagavad is complex and is aimed to support all the philosophical beliefs within Hinduism whether these be purely atheistic or theist, and from Samkyha, and advaita through to Dvaita and Visishtadvaita.

Your points are indeed correct, yet Arjuna’s reluctance to act on the eve of battle and Krishna’s almost shocking and persuasive means of convincing Arjuna that all this is merely illusion, may yet be seen from the other point of view I suggested earlier. That regardless of our views of “Self” or indeed the entire universe as atoms, energies, patterns and subjective illusion, we still need to act in accordance with our “macroscopic” view of materialism and reality. For this we must not then cast aside our responsibilities and duties.. towards rationality, integrity, compassion and non-harm. Is it not the best example of using the analogies of war to express Krishna’s persuasion that we must all fulfil our duties rather than become nihilists?


To bring the argument full circle: once you have reconciled the question “who am I?”, you are faced with the question “what do I want?”. For example, do “I want” to persist? Is my identity, my ego, “craving” longevity? It is up to the individual to decide what they want. Although I must say, I would like to try it myself and see what these possibilities of uploading and merging would be like.

As for slicing the brain, this does sound drastic, yet it may be the best way forward: reverse engineering, direct mind/machine interface (as I hinted in the comments above). Yet my idea would rather be to take the whole brain and place it in the vat, rather than complete dissection and destruction for the sake of this primeval method of uploading.

Once I have achieved the wisdom to overcome all “my cravings” and sufferings I would gladly donate my poor old brain, yet let’s face it, who would want it anyhow?

For a more romantic view of the social struggle between heart and mind, I offer the link below. If you are already familiar with these ’70s prog rockers, this will not be new to you. (And if you are not familiar with them, you should check out some of their other music/lyrics too!)

>> http://www.azlyrics.com/lyrics/rush/cygnusx1bookiihemispheres.html

P.S. The goal of writing machine “intelligence”, or AI, seems almost impossible to my lame brain, and I still fail to see how we could ever achieve it without fully understanding ourselves first. This is why mind uploading seems to me the better goal for success, i.e. AGI will follow mind uploading, not precede it?

Yet another way to explore mind/machine interfaces, and the possibilities concerning an AI machine/program, must be further extensive study of VR worlds. And I am not merely thinking 2D here; I mean the real deal: a 3D holographic suite with direct interaction between humans and their “holo-suite” characters and scenarios.

For yet another and final sci-fi analogy, check out the ST Voyager series (which I initially discarded as pants myself): throughout the whole series the Emergency Medical Hologram (EMH) doctor evolves from a pure learning program into a fully developed and aspiring individual mind. His personhood becomes beyond question, and thus his rights as an individual living entity are fully accepted. Holographic virtuality may be the best route towards human/machine interfacing and towards learning AI and AGI?

I address the Skynet scenario at the end of my short presentation at Foresight Conference 2010; see http://vimeo.com/9508466. I also discussed it at H+ 2009 (video forthcoming).

For more detail, see longer videos at http://videos.syntience.com , where I also address the “Women and Biologists are not infallible” issue in the talk “Science Beyond Reductionism”.

  - Monica Anderson

@Richard: then we mostly agree.

I subscribe to a Buddhist idea of a fuzzy self.

Before going to sleep, I could think that I will cease to exist and another person, who remembers most of my memories, will wake up in my bed tomorrow morning.

This cannot be disproved in theory, but of course thinking so would be masochism. Instead, I choose to think that I will sleep, and that the same “I” will wake up. We all do.

I can make this choice because experience tells me that today’s me has always felt like, and accepted himself as, a continuation of yesterday’s me. And today, I am willing to accept tomorrow’s me as a valid continuation of today’s me. There is continuity, because the perception and acceptance of continuity is never broken.

The same applies to uploading. Note that most people would accept teleportation (you disappear here and an identical copy appears there), which is logically the same as uploading. This tells me that the difficulty is mainly psychological.

Cygnus X1:  Rush.  Hence the name.  Of course ...

The entire prospect of creating friendly OR value-neutral AI makes me decidedly nervous.  If such a being is created and is “compassionate” the question will become:  Compassionate to whom?  Pelicans, for example?  Then goodbye, humanity.  But it may be better than the alternative.  One way or another, if AI is coming then the future becomes a whole lot more unpredictable.

And yes, of course I’ve seen every Star Trek episode in all series except the last one.  I’m here, aren’t I?  Spock’s half-humanness is a great example of the fact that human beings can’t truly envision an emotionless intelligence, or one with truly alien emotions.  Even the Vulcans had too much emotion, once, and you see the traces even in Spock’s father.  Intelligence without love, anger, etc?  Inconceivable to Gene Roddenberry.

Giulio - some Buddhists would address your night worries by saying that you don’t exist right now, so you’ve got nothing to lose!  And yes, I suspect we pretty much agree.

Best, R

Richard,

Your conclusion that intelligence needs a body matches precisely my own conclusion after many years in the AI field.

Various efforts to create strong AI have failed for over 50 years. I believe this happened because the level of AI required for consciousness and the technological singularity exceeds the human level of intelligence. My argument is that what we call consciousness cannot develop and produce valuable results outside of a system far more complex than the human itself: human society. Human children raised by animals in the woods cannot even speak. Even if we take a healthy grown-up person and isolate him or her from society, that person’s contribution to progress would be extremely limited. Take Einstein: all of his contributions to physics were a skilled development of ideas he got from communicating with others.

So any AI, or even a real human brain, deprived of human society will be too narrow and too limited to contribute productively, even if the AI is on par with or exceeds the human intellect. There are many examples of this around us today: chess-playing programs, search engines, traffic control systems, even AI car-driving programs. All of them far exceed, or will soon exceed, the average human intellect in a specific narrow knowledge domain. The main thing missing from them is the ability to be a human that can integrate and develop well in human society. It is far harder to develop a closed AI system that would be equivalent to an entire society with its motivations and goals. This is very much the reason why all attempts to create human-level, or so-called “strong”, AI using computers have failed, even though we may long have had the hardware that could emulate the human brain at a high level of abstraction.

So the probability that various AI efforts will ever produce a singularity is very small, if not nil. There is a high probability that in the process we could create emotionless AI weapons that could destroy us.

The only good way to deal with this is to try to preserve all human values, bodies, emotions, brains and feelings, i.e. to make an exact copy, or an extremely realistic model, of a real human that can continue to live a normal, old-fashioned life after biological death. Our minds and souls will never cope with inevitable mortality. I am not sure this is feasible, but we should make every effort in this direction.

All this talk of uploading, brains and body chemicals, intelligences and emotions got me reminiscing and digging through those archived childhood memories.. ah, the sufferings and cravings! Yet for once these were good memories. I especially remember some old sci-fi horror movies that, I found, had a common theme at the time, hence my interest in them. Transhumanism? Most certainly! Which goes to show yet again that this term is non-exclusive, even to pre-teens like myself way back when.

I specifically remember three movies that involved “brains in vats”, so I have taken the liberty of sharing these with you.. forgive me! Everyone knows of (or should remember) the first of these.. it’s a classic.

“Donovan’s brain” (1953)
“The story revolves around an attempt to keep alive the brain of millionaire megalomaniac W.H. Donovan after an otherwise fatal plane crash.”
>> http://en.wikipedia.org/wiki/Donovan’s_Brain

“Vengeance/The Brain/Over my dead body” (1962)
Various titles for this brilliant movie! From horror director Freddie Francis, and a superior remake of the above. : “When a plane that contains millionaire businessman Max Holt crashes nearby, Dr Peter Corrie races to the accident scene. Without any time to take the dying Holt to hospital, Corrie removes Holt’s brain and is successful in keeping it alive in a tank in his laboratory. But then Corrie finds that he is starting to write with his left hand and believes that Holt’s incredible willpower is possessing him. Under Holt’s influence, he starts investigating Holt’s business partners and family, trying to discover who murdered him”
>> http://www.moria.co.nz/index.php?option=com_content&task=view&id=3949

“The Colossus of New York” (1958)
“Jeremy Spensser (Martin), the brilliant young scion of a family of scientists and humanitarians, is killed in an automobile accident. His death occurs on the eve of his winning the Nobel Peace Prize, and he leaves behind a wife (Powers) and young son (Herbert). Jeremy’s father, noted brain surgeon William Spensser (Kruger), is distressed that his son’s gifts will be denied to Mankind. He conceives a plan to give Jeremy’s excellent mind another chance to benefit humanity by transplanting the brain (which he has revived and kept on life support) into an artificial, robotic body.”
>> http://en.wikipedia.org/wiki/The_Colossus_of_New_York

“The story is about a noble, humanitarian genius whose brain is placed in an unfeeling robot body. The film invites the viewer to ponder what makes each of us the sensitive and compassionate person we are (or should be).”
>> http://www.imdb.com/title/tt0051484/

And just to add some balance.. Can two brains in jars fall in love? If you found your soul mate in a jar, could you overcome your own prejudices and bodily prudishness?

“The Man with Two Brains” (1983)
“Dr. Hfuhruhurr (Steve Martin) meets mad scientist Dr. Alfred Necessiter (David Warner), who has created a radical new technique enabling him to store living brains in liquid-filled jars.”
>> http://en.wikipedia.org/wiki/The_Man_with_Two_Brains

“...But wait,” I said to myself, “shouldn’t I have thought, ‘Here I am, suspended in a bubbling fluid, being stared at by my own eyes’?”  I tried to think this latter thought.  I tried to project it into the tank, offering it hopefully to my brain, but I failed to carry off the exercise with any conviction.  I tried again.  “Here am I, Daniel Dennett, suspended in a bubbling fluid, being stared at by my own eyes.”  No, it just didn’t work.  Most puzzling and confusing.  Being a philosopher of firm physicalist conviction, I believed unswervingly that the tokening of my thoughts was occurring somewhere in my brain:  yet, when I thought “Here I am,” where the thought occurred to me was here, outside the vat, where I, Dennett, was standing staring at my brain.”

>> http://www.newbanner.com/SecHumSCM/WhereAmI.html

