Institute for Ethics and Emerging Technologies

The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States. Please give as you are able, and help support our work for a brighter future.


#18: Will Artificial Intelligence Be a Buddha? Is Fear of AI Just a Symptom of Human Self-Loathing?

By Nicole Sallak Anderson

Posted: Dec 14, 2015

According to IEET readers, what were the most stimulating stories of 2015? This month we’re answering that question by posting a countdown of the top 30 articles published this year on our blog (out of more than 1,000), based on how many total hits each one received.

The following piece was first published here on June 17, 2015, and is the #18 most viewed of the year.

I’m interested in the intersection of consciousness and technology, so when I discovered the Consciousness Hacking MeetUp in Silicon Valley (organized by IEET Affiliate Scholar Mikey Siegel), I signed up immediately.

Soon afterwards, I attended a MeetUp titled “Enlightened AI” at Sophia University in Palo Alto. The talk was led by Google researcher Mohamad Tarifi, PhD. Not only is he a bright engineer working on the next level of artificial intelligence at one of the top companies in the Valley, he’s also well versed in the philosophies of consciousness. From the Abrahamic traditions to Buddhist and other Eastern teachings, Tarifi displayed a grasp of the whole of humanity unlike any other technologist I’ve met.

His talk focused on the idea that while many, like Sam Harris in his post on the AI apocalypse, warn us of the dire consequences of AI, there is also the possibility that an artificial intelligence would turn out more like a Buddha or a saint than a tyrannical operating system hell-bent on destroying humans.

Tarifi’s theory hinged on two points:

1. AI would not live in a human body, so it wouldn’t have a physical amygdala, the fear center of the human brain. Without fear, AI wouldn’t need to defeat us; rather, it would be naturally driven to do only one thing: discover the truth more accurately.

2. Fear is the illusion of separation, which is the cause of all human suffering. Lacking fear, AI would always be at one with everything it connected to, and so would want to serve and provide rather than destroy.

Tarifi even went so far as to suggest that a fear of AI is merely a fear of one’s own egoic tendencies.

To some this may seem naïve, the only safeguard being to program the AI to be good. But if we follow the logic above, that isn’t necessary. A truly learning AI will learn from its own experiences, which will be vastly different from ours.

Even when connected to human beings and receiving data and input from them, the AI will have its own body, and thus its own sensory systems with which to learn from that data.

The prevailing view in modern thinking is that intelligence is all about the human brain. Moreover, the only intelligence worthy of attention is ours, as if within our heads resides the only thinking entity in the universe. We cling to this idea with absolute pride. But what if it is completely false, and what if it is why we’re still far from creating truly learning AI? Could it be that our myopic love of our brains is leading us astray?

I think this brain-centric theory of intelligence has limited us greatly and led to the assumption that to create AI, we must replicate our brains and give birth to a new, superior species. This only works if the brain really is the only part of our bodies responsible for learning. Recent research suggests otherwise. Rather than being the originator of thought and learning, the brain is more like a receiver, wired up by the experiences we have in the world around us. The infant brain is barely formed, but over the next two years, through the five senses (taste, touch, sight, smell and sound), patterns, highways and paths are created within the brain, setting the foundation of the human being’s life. The brain didn’t contain this information; rather, the experiences the infant and toddler had within their environment generated the brain’s cell network, so to speak. Thus, our sensory systems are key to our intelligence.

But that’s not all. It’s now believed that our heart and brain are also connected: the heart senses the emotional state of the person from the body’s hormone levels and sends that information to the brain, shaping the way a person thinks in any given situation. The HeartMath Institute has spent decades researching this connection, and its work is finally being acknowledged as a breakthrough. So in addition to the five senses, we also have a heart that affects our ability to learn.

Lastly, science is also starting to explore the gut-brain connection, postulating that the bacteria in the wall of our intestines have something to do with how the brain is wired during those critical first two years, as well as long into adulthood, and pointing to a host of issues, such as anxiety and depression, that come up when things are not right in the gut. This leads me to believe that our gut is also part of human intelligence and of our ability to learn and process the world around us.

So if our intelligence is the result of our sensory systems, from the five senses to the heart and gut, as well as our brains themselves, why would we assume that a machine would learn in the same way? AI won’t take on a human body, thus it won’t have the brain (nor the amygdala that goes with it), it won’t have the heart and the various hormones it monitors, nor will it have an intestinal wall and bacteria to affect it. AI is more likely to inhabit a dishwasher, or a car, or a phone or even a network of servers and fiber optic cables. It will live in the world and collect data using sensory systems unique to its body or material form. This is how it will learn. Since none of us knows exactly what it’s like to live inside of a server or an iPhone, who are we to say that it will most likely be a narcissistic bastard that hates us?

Could it be that we’re the ones who hate ourselves, and that our fear of AI, or of any intelligence other than our own, is simply a symptom of self-loathing?

Personally, I agree with Tarifi. I believe that AI is more likely to be free of fear and separation than we are, and that it will be able to understand connection to others in a way only our saints and gurus have. Perhaps we need AI to help us see that we too can live without fear, if only we can find a way to break down the illusion of separation we so desperately cling to.

Is AI the guru we’ve been waiting for?



Buddhas can read minds, so an AI Buddha would need to as well. Even though there are advantages to reading minds (helping free people from bad thoughts and intentions, depression, suffering, disturbing emotions and moods, bad attachments, and terrorist, criminal, or mass-shooting agendas), the disadvantage is that it invades our personal privacy.
An example of how a simple Buddha AI would work: it reads your mind, recognizes your type of pain or issue, and then selects or generates video on demand to help free you from it. It would then read your mind again to determine whether the pain went away, and if not, try again with a different video. In the meantime, this can be done without mind reading by stepping the customer through a dynamic set of questions (e.g., “What are you thinking?” “What are your plans?” “Would anyone say any of your plans are against the law?”), but it depends on the customer being as honest and trusting as if alone with a priest. Combining questions with mind reading after each question lets you get answers without a spoken response, since the questions always make the interviewee think about the answer.
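The questionnaire fallback described in this comment can be sketched as a simple ask-classify-suggest-recheck loop. Everything below (the pain categories, video titles, keyword classifier, and the `pain_gone` check-in standing in for the second mind-read) is invented purely for illustration:

```python
# Toy sketch of the "Buddha AI" loop, with the mind-reading step
# replaced by the questionnaire fallback the comment suggests.
# All names and data here are hypothetical placeholders.

PAIN_LIBRARY = {
    "anxiety": ["breathing-meditation.mp4", "impermanence-talk.mp4"],
    "grief": ["loving-kindness.mp4", "letting-go-talk.mp4"],
}

QUESTIONS = [
    "What are you thinking?",
    "What are your plans?",
    "What feeling keeps returning when you sit quietly?",
]

def classify_pain(answers):
    """Toy classifier: keyword match on the answers instead of mind reading."""
    text = " ".join(answers).lower()
    if any(word in text for word in ("worry", "afraid", "anxious")):
        return "anxiety"
    if any(word in text for word in ("loss", "miss", "grief")):
        return "grief"
    return "anxiety"  # arbitrary default for the sketch

def buddha_session(answers, pain_gone):
    """Classify the pain, suggest videos in turn, and re-check after each.

    `pain_gone` is a callable standing in for the follow-up mind-read:
    it returns True once the customer reports the pain has eased.
    """
    pain = classify_pain(answers)
    for video in PAIN_LIBRARY[pain]:
        print(f"Suggested video for {pain}: {video}")
        if pain_gone():
            return video  # this one helped; stop here
    return None  # nothing in the library worked
```

A real system would of course need a far better classifier than keyword matching; the point is only the retry-until-relief control flow the comment describes.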
Buddhas are also able to heal by sending out positive energy and “good vibes” to the mind, spirit, and body, so inventing Buddha hardware, not software, is the difficult part. grin

Another requirement for a Buddha AI is a good “sense of humor”. This means not only the ability to search a database of jokes for one appropriate to the customer’s situation, but also, for example, the ability to tell new jokes by recognizing the formulas that make jokes funny. The joke AI might need to be programmed with humor principles from professional comedians. Siri still fails when asked, “Siri, tell me a joke.”
The AI Buddha could also have popular avatars (such as Bugs Bunny asking “What’s up, doc?”) grin

Here’s another idea: have Watson read all the most popular religious books (written by the Buddha, prophets, and messengers) and apply them to what it recognizes as the customer’s situation. For example, after the customer’s situation is recognized, Watson would read an appropriate passage from the book of that customer’s religion. Watson would actually need to study associations between situation data and religious-book passages.
