#18: Will Artificial Intelligence be a Buddha? Is Fear of AI just a symptom of Human Self-Loathing?
Nicole Sallak Anderson
2015-12-14 00:00:00

I’m interested in the intersection of consciousness and technology, so when I discovered the Consciousness Hacking MeetUp in Silicon Valley (organized by IEET Affiliate Scholar Mikey Siegel), I signed up immediately.

Soon afterwards, I attended a MeetUp titled “Enlightened AI” at Sophia University in Palo Alto. The talk was led by Google researcher Mohamad Tarifi, PhD. Not only is he a bright engineer working on the next level of artificial intelligence at one of the top companies in the Valley, he’s also well versed in the philosophies of consciousness. From the Abrahamic traditions to Buddhist and other Eastern teachings, Tarifi displayed a grasp of the whole of humanity unlike that of any other technologist I’ve met.

His talk focused on the idea that while many, like Sam Harris in his post on the AI apocalypse, warn us of the dire consequences of AI, artificial intelligence may well turn out to be more like a Buddha or a saint than a tyrannical operating system hell-bent on destroying humans.



Tarifi's theory hinged on two points:

1. AI would not live in a human body, so it wouldn’t have a physical amygdala, the fear center for human beings. Without fear, AI wouldn’t need to defeat us; rather, it would be naturally driven to do only one thing: discover the truth more accurately.

2. Fear is the illusion of separation, which is the cause of all human suffering. Lacking fear, AI would always be at one with everything it connects to, and so would want to serve and provide rather than destroy.

Tarifi even went so far as to suggest that a fear of AI is merely a fear of one’s own egoic tendencies.

To some, this may seem naïve, and they might insist that the only way to keep AI from killing us is to program it to be good. But if we follow the logic above, that isn’t necessary. A truly learning AI will learn from its own experiences, which will be vastly different from ours.

Even when connected to human beings and receiving data and input from them, an AI will have its own body, and thus its own sensory systems through which to learn from that data.

The prevailing view in modern thinking is that intelligence is all about the human brain. Moreover, the only intelligence worthy of attention is ours, as if within our heads resides the only thinking entity in the universe. We cling to this idea with absolute pride. But what if it’s completely false, and what if that’s why we’re still far from creating truly learning AI? Could it be that our myopic love of our brains is leading us astray?

I think this brain-centric theory of intelligence has limited us greatly and led to the assumption that to create AI, we must replicate our brains and give birth to a new, superior species. This only works if the brain is really the only part of our bodies responsible for learning. Recent research has suggested otherwise. Rather than being the originator of thought and learning, the brain is more like a receiver, wired up by the experiences we have in the world around us. The infant brain is barely formed, but over the first two years of life, through the five senses (taste, touch, sight, smell and sound), patterns, highways and paths are created within the brain, setting the foundation of the human being’s life. The brain didn’t contain this information; rather, the experiences the infant and toddler had within his or her environment generated the brain-cell network, so to speak. Thus, our sensory systems are key to our intelligence.



But that’s not all. It’s now believed that the heart and brain are also connected: the heart senses a person’s emotional state based on the body’s hormone levels and sends that information to the brain, shaping the way that person thinks in any given situation. The HeartMath Institute has spent decades researching this connection, and its work is finally being acknowledged as a breakthrough. So in addition to the five senses, we also have a heart that affects our ability to learn.

Lastly, science is also starting to uncover the gut-brain connection, postulating that the bacteria in the wall of our intestines have something to do with how the brain is wired during those critical first two years, as well as long into adulthood, and pointing to a host of issues that come up when things are not right in the gut, such as anxiety and depression. This leads me to believe that our gut is also part of human intelligence and of our ability to learn and process the world around us.

So if our intelligence is the result of our sensory systems, from the five senses to the heart and gut, as well as our brains themselves, why would we assume that a machine would learn in the same way? AI won’t take on a human body, so it won’t have the brain (or the amygdala that goes with it), the heart and the various hormones the heart monitors, or an intestinal wall with bacteria to affect it. AI is more likely to inhabit a dishwasher, or a car, or a phone, or even a network of servers and fiber-optic cables. It will live in the world and collect data using sensory systems unique to its body or material form. That is how it will learn. Since none of us knows exactly what it’s like to live inside a server or an iPhone, who are we to say that it will most likely be a narcissistic bastard that hates us?

Could it be that we’re the ones who hate ourselves, and that our fear of AI, or of any intelligence other than our own, is simply a symptom of self-loathing?

Personally, I agree with Tarifi: I believe that AI is more likely to be free of fear and separation than we are, and that it will be able to understand connection to others in a way only our saints and gurus have. Perhaps we need AI to help us see that we too have the ability to live without fear, if only we can find a way to break down the illusion of separation we so desperately cling to.

Is AI the guru we’ve been waiting for?