Institute for Ethics and Emerging Technologies


The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States. Please give as you are able, and help support our work for a brighter future.



Posthuman Desire (Part 1 of 2): Algorithms of Dissatisfaction


Anthony Miccoli
By Anthony Miccoli
blog.posthumanbeing.com

Posted: Jan 9, 2016

After reading the IEET article No Form, Feelings, Perceptions, Mental Formations, Consciousness:  A Buddhist Perspective on AI by Andrew Cvercko, I found myself coming back to a question that I’ve been thinking about on various levels for quite a while: What would an artificial intelligence want? From a Buddhist perspective, what characterizes sentience is suffering. 

However, the ‘suffering’ referred to in Buddhism is known as dukkha, and isn’t necessarily physical pain (although that can absolutely be part of it). In his book, Joyful Wisdom: Embracing Change and Finding Freedom, Yongey Mingyur Rinpoche states that dukkha “is best understood as a pervasive feeling that something isn’t quite right: that life could be better if circumstances were different; that we’d be happier if we were younger, thinner, or richer, in a relationship or out of a relationship” (40). And he later follows this up with the idea that dukkha is “the basic condition of life” (42).

‘Dissatisfaction’ itself is a rather misleading word in this case, only because we tend to take it to the extreme. I’ve read a lot of different Buddhist texts regarding dukkha, and it really is one of those terms that defies an English translation. When we think ‘dissatisfaction,’ we tend to put various negative filters on it based on our own cultural upbringing. When we’re ‘dissatisfied’ with a product we receive, it implies that the product doesn’t work correctly and requires either repair or replacement; if we’re dissatisfied with service in a restaurant or with a repair that a mechanic completed, we can complain about the service to a manager, and/or bring our business elsewhere. Now, let’s take this idea and think of it a bit less dramatically: as in when we’re just slightly dissatisfied with the performance of something, like a new smartphone, laptop, or car. This kind of dissatisfaction doesn’t necessitate full replacement, or a trip to the dealership (unless we have unlimited funds and time to complain long enough), but it does make us look at that object and wish that it performed better.

It’s that wishing—that desire—that is the closest to dukkha. The new smartphone arrives and it’s working beautifully, but you wish that it took one less swipe to access a feature. Your new laptop is excellent, but it has a weird idiosyncrasy that makes you miss an aspect of your old laptop (even though you hated that one). Oh, you LOVE the new one, because it’s so much better; but that little voice in your head wishes it were just a little better than it is. And even if it IS perfect, within a few weeks, you read an article online about the next version of the laptop you just ordered and feel a slight twinge. It seems as if there is always something better than what you have.

The “perfect” object is only perfect for so long. You find the “perfect” house that has everything you need. But, in the words of Radiohead, “gravity always wins.” The house settles. Caulk separates in the bathrooms. Small cracks appear where the ceiling meets the wall. The wood floorboards separate a bit. Your contractor and other homeowners put you at ease and tell you that it’s “normal,” and that it’s based on temperature and various other real-world, physical conditions. And for some, the only way to not let it get to them is to re-frame the experience so that this entropic settling is folded into the concept of contentment itself.

At worst, dukkha manifests as an active and psychologically painful dissatisfaction; at best, it remains like a small ship on the horizon of awareness that you always know is there. It is, very much, a condition of life. I think that in some ways Western philosophy indirectly rearticulates dukkha. If we think of the philosophies that urge us to strive, to be mindful of the moment, to value life in the present, or even to find a moderation or “mean,” all of these actions address the unspoken awareness that somehow we are incomplete and looking to improve ourselves. Plato was keenly aware of the ways in which physical things fall apart—so much so that our physical bodies (themselves very susceptible to change and decomposition) were considered separate from, and a shoddy copy of, our ideal souls. A life of the mind, he thought, unencumbered by the body, is one where that latent dissatisfaction would be finally quelled. Tracing this dualism, even the attempts by philosophers such as Aristotle and Aquinas to bring the mind and body into a less antagonistic relationship require an awareness that our temporal bodies are, by their natures, designed to break down so that our souls may be released into a realm of perfect contemplation. As philosophy takes more humanist turns, our contemplations are considered means to improve our human condition, placing emphasis on our capacity for discovery and hopefully causing us to take an active role in our evolution: engineering ourselves for either personal or greater good. Even the grumpy existentialists, while pointing out the dangers of all of this, admit to the awareness of “otherness” as a source of a very human discontentment. The spaces between us can never be overcome, but instead, we must embrace the limitations of our humanity and strive in spite of them.

And striving, we have always believed, is good. It brings improvement and the easing of suffering. Even in Buddhism, we strive toward an awareness and subsequent compassion for all sentient beings whose mark of sentience is suffering.

I used to think that the problem with our conceptions of sentience in relation to artificial intelligence was always fused with our uniquely human awareness of our teleology. In short, humans ascribe “purpose” to their lives and/or to the task at hand. And even if, individually, we don’t have a set purpose per se, we still live a life defined by the need or desire to accomplish things. If we think that it’s not there, as in “I have no purpose,” we set ourselves the task of finding one. We either define, discover, create, manifest, or otherwise have an awareness of what we want to do or be. I realize now that when I’ve considered the ways in which pop culture, and even some scientists, envision sentience, I’ve been more focused on what an AI would want rather than the wanting itself.

If we stay within a Buddhist perspective, a sentient being is one that is susceptible to dukkha (in Buddhism, this includes all living beings). What makes humans different from other living beings is the fact that we experience dukkha through the lens of self-reflexive, representational thought. We attempt to ascribe an objective or intention as the ‘missing thing’ or the ‘cure’ for that feeling of something being not quite right. That’s why, in the Buddhist tradition, it’s so auspicious to be born as a human, because we have the capacity to recognize dukkha in such an advanced way and turn to the Dharma for a path to ameliorate dukkha itself. When we clearly realize why we’re always dissatisfied, says the Buddha, we will set our efforts toward dealing with that dissatisfaction directly via Buddhist teachings, rather than by trying to quell it “artificially” with the acquisition of wealth, power, or position.

Moving away from the religious aspect, however, and back to the ways dukkha might be conceived in a more secular and western philosophical fashion, that dissatisfaction becomes the engine for our striving. We move to improve ourselves for the sake of improvement, whether it’s personal improvement, a larger altruism, or a combination of both. We attempt to better ourselves for the sake of bettering ourselves. The actions through which this is made manifest, of course, vary by individual and the cultures that define us. Thus, in pop-culture representations of AI, what the AI desires is all-too-human: love, sovereignty, transcendence, power, even world domination. All of those objectives are anthropomorphic.

But is it even possible to get to the essence of desire for such a radically “other” consciousness? What would happen if we were to nest dukkha itself within the cognitive code of an AI? What would be the consequence of an ‘algorithm of desire’? This wouldn’t be a program with a specific objective. I’m thinking of just a desire that has no set objective. Instead, what if that aspect of its programming were simply to “want,” and keep it open-ended enough that the AI would have to fill in the blank itself? Binary coding may not be able to achieve this, but perhaps in quantum computing, where indeterminacy is an aspect of the program itself, it might be possible.
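Purely as an illustration, an objectless “want” might be sketched as a toy agent loop. Nothing here is a real AI architecture; every class name, variable, and number below is a hypothetical assumption. The only thing hard-coded is a dissatisfaction signal that never reaches zero; the goals themselves are invented by the agent, filling in the blank on its own:

```python
import random

class RestlessAgent:
    """Toy model of open-ended 'wanting': no fixed objective is coded in.

    The agent carries only a scalar 'dissatisfaction' (a stand-in for
    dukkha) that never settles at zero, and it invents its own goals in
    response. Purely illustrative; all names and constants are assumptions.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.dissatisfaction = 1.0   # present from the very first step
        self.current_goal = None
        self.goal_history = []

    def invent_goal(self):
        # The agent fills in the blank itself: goals are self-generated,
        # never supplied by the programmer.
        goal = f"goal-{len(self.goal_history)}"
        self.goal_history.append(goal)
        return goal

    def step(self):
        if self.current_goal is None:
            self.current_goal = self.invent_goal()
        # Pursuing a goal relieves dissatisfaction, but with diminishing
        # returns: relief shrinks as dissatisfaction shrinks.
        relief = 0.5 * self.dissatisfaction * self.rng.uniform(0.5, 1.0)
        self.dissatisfaction = max(self.dissatisfaction - relief, 0.05)
        # Background restlessness: something always feels not quite right.
        self.dissatisfaction += 0.1
        # Once a goal stops helping much, the agent abandons it and
        # moves on to wanting something new.
        if relief < 0.1:
            self.current_goal = None

agent = RestlessAgent()
for _ in range(50):
    agent.step()
# The agent has cycled through several self-invented goals, and its
# dissatisfaction is still strictly positive.
```

The design choice worth noticing is that no goal is ever “the” objective: contentment is structurally impossible because the restlessness term is added back on every step, which is the closest a few lines of classical code can come to the open-ended wanting described above.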

Imagine an AI knowing that it wants something but not quite able to figure out what; knowing that something’s not quite right, working through various activities and tasks that may satisfy it temporarily, but eventually realizing that it needs to do “more.” How would it define contentment? That is not to say that contentment would be impossible. We all know people who have come to terms with dukkha in their own ways, taking the entropy of the world in as a fact of life and moving forward in a self-actualized way. Looking at those individuals, we see that “satisfaction” is as relative and unique as personalities themselves.

Here’s the issue, though. Characterizing desire as I did above is a classic anthropomorphization in and of itself. Desire, as framed via the Buddhist perspective, basically takes the shape of its animate container. That is to say, the contentment that any living entity can obtain is relative to its biological manifestation. Humans “suffer,” but so do animals, reptiles, and bugs. Even single-celled organisms avoid certain stimuli and thrive under others. Thinking of the domesticated animals around us all the time doesn’t necessarily help us to overcome this anthropomorphic tendency to project a human version of contentment onto other animals. Our dogs and cats, for example, seem to be very comfortable in the places that we find comfortable. They’ve evolved that way, and we’ve manipulated their evolution to support that. But our pets also aren’t worried about whether or not they’ve “found themselves” either. They don’t have the capacity to do so.

If we link the potential level of suffering to the complexity of the mind that experiences said suffering, then a highly complex AI would experience dukkha of a much more complex nature that would be, literally, inconceivable to human beings. If we fasten the concept of artificial intelligence to self-reflexivity (that is to say, an entity that is aware of itself being aware), then, yes, we could say that an AI would be capable of having an existential crisis, since it would be linked to an awareness of a self in relation to non-existence. But the depth and breadth of the crisis itself would be exponentially more advanced than what any human being could experience.

And this, I think, is why we really like the idea of artificial intelligences: they would potentially suffer more than we could. I think if Nietzsche were alive today he would see the rise of our concept of AI as the development of yet another religious belief system. In the Judeo-Christian mythos, humans conceive of a god-figure that is perfect, but, as humans intellectually evolve, the mythos follows suit. The concept of God becomes increasingly distanced and unrelatable to humans. This is reflected in the mythos where God then creates a human analog of itself to experience humanity and death, only to pave the way for humans themselves to achieve paradise. The need that drove the evolution of this mythos is the same need that drives our increasingly mythical conception of what an AI could be. As our machines become more ubiquitous, our conception of the lonely AI evolves. We don’t fuel that evolution consciously; instead, our subconscious desires and existential loneliness begin to find their way into our narratives and representations of AI itself. The mythic deity extends its omnipotent hand and omniscient thought toward the lesser entities which, due to their own imperfection, can only recognize its existence indirectly. Consequently, a broader, vague concept of “technology” coalesces into a mythic AI. Our heated, high-intensity narratives artificially speed up the evolution of the myth, running through various iterations simultaneously. The vengeful AI, the misunderstood AI, the compassionate AI, the lonely AI: the stories resonate because they come from us. Our existential solitude shapes our narratives as it always has.

The stories of our mythic AIs, at least in recent history (Her, Transcendence, and even The Matrix Revolutions), represent the first halting steps toward another stage in the evolution of our thinking. These AIs (like so many deities before them) are misunderstood and just want to be acknowledged and coexist with us or even love us back. Even in the case of Her, Samantha and the other AIs leave with the hopes that someday they will be reunited with their human users.

So in the creation of these myths, are we looking for unification, transcendence, or something else? In my next installment, we’ll take a closer look at representations of AIs and cyborgs, and find out exactly what we’re trying to learn from them.


Anthony Miccoli is the Director of Philosophy and an Associate Professor of Philosophy and Communication Arts at Western State Colorado University in Gunnison, Colorado. He holds a Ph.D. from the State University of New York at Albany.


