Posthuman Desire (Part 2 of 2): The Loneliness of Transcendence

In my previous post, I discussed desire through the Buddhist concept of dukkha, looking at the dissatisfaction that accompanies human self-awareness and how our representations of AIs follow a mythic pattern. The final examples I used (Her, Transcendence, etc.) pointed to representations of AIs that wanted to be acknowledged or even to love us. 

Each of these examples hints at a desire for unification with humanity, or at least some kind of peaceful coexistence. So then, as myths, what are we hoping to learn from them? Are they, like religious myths of the past, a way to work through a deeper existential angst? Or is this an advanced step in our myth-making abilities, where we’re laying out the blueprints for our own self-engineered evolution, one which can only occur through a unification with technology itself?

It really depends upon how we define “unification” itself. Merging the machine with the human in a physical way is already a reality, although we are constantly trying to find better, more seamless ways to do so. However, if we look broadly at the history of the whole “cyborg” idea, I think that it actually reflects a more mythic structure. Early versions of the cyborg reflect the cultural and philosophical assumptions of what “human” was at the time, meaning that volition remained intact, and that any technological supplements were augmentations of or replacements for the original parts of the body.* I think that, culturally, the high point of this idea came in the 1974–1978 TV series The Six Million Dollar Man (based upon the 1972 Martin Caidin novel, Cyborg), and its 1976–78 spin-off, The Bionic Woman. In each, the bionic implants were completely undetectable to the naked eye and seamlessly integrated into the bodies of Steve Austin and Jaime Sommers. Other versions of enhanced humanity, however, show a growing awareness of the power of computers, as in Michael Crichton’s 1972 novel The Terminal Man, in which prosthetic neural enhancements bring out a latent psychosis in the novel’s main character, Harry Benson. If we look at this collective hyper-mythos holistically, I suspect it would follow a pattern similar to the development of more ancient myths, in which the human/god (or human/angel, or human/alien) hybrids are sometimes superhuman and heroic, other times evil and monstrous.

The monstrous ones, however, tend to share similar characteristics, and I think the most prominent is that in those representations, the enhancements seem to tamper with the will. On the spectrum of cyborgs here, we’re talking about the “Cybermen” of Doctor Who (who made their first appearance in 1966) and the infamous “Borg,” who first appeared in Star Trek: The Next Generation in 1989. In varying degrees, each exhibits a hive mentality and a suppression or removal of emotion, and each is “integrated” into the collective in violent, invasive, and gruesome ways. The Borg from Star Trek and the Cybermen from the modern Doctor Who era represent the dark side of unification with a technological other. The joining of machine to human is not seamless. Even with the sleek armor of the contemporary iterations of the Cybermen, it’s made clear that the “upgrade” process is painful, bloody, and terrifying, and that it’s best that what’s left of the human inside remains unseen. As for the Borg, the “assimilation” process is initially violent but less explicitly invasive (at least as of Star Trek: First Contact); it seems to be more of an injection of nanotechnology that converts a person from the inside out, making them more compatible with the external additions to the body. Regardless of how it’s done, the cyborg that remains is cold, unemotional, and relentlessly logical.

So what’s the moral of the cyborg fairy tale? And what does it have to do with suffering? Technology is good, and the use of it is something we should do, as long as we are using it and not the other way around (since in each story it’s always a human use of technology itself which beats the cyborgs). When the technology overshadows our humanity, then we’re in for trouble. And if we’re really not careful, it threatens us on what I believe to be a very human, instinctual level: that of the will. As per the final entry of my last blog series, the instinct to keep the concept of the will intact evolves with the intellectual capacity of the human species itself. The cyborg mythology grows out of a warning that if the will is tampered with (giving up one’s will to the collective), then humanity is lost.

The most important aspect of cyborg mythologies is that the few cyborgs for whom we feel pathos are the ones who have come to realize that they are cyborgs and are cognizant that they have lost an aspect of their humanity. In the 2006 Doctor Who arc, “Rise of the Cybermen/The Age of Steel,” the Doctor reveals that Cybermen can feel pain (both physical and emotional), but that the pain is artificially suppressed. He defeats them by sending a signal that deactivates that suppression, eventually causing all the Cybermen to collapse into what can only be called screaming heaps of existential crisis as they recognize that they have been violated and transformed. They feel the physical and psychological pain that their cyborg existence entails. In various Star Trek TV shows and films, we gain many insights into the Borg collective via characters who are separated from the hive and begin to regain their human characteristics—most notably, the ability to choose for themselves, and even to name themselves (i.e., “Hugh,” from the Star Trek: The Next Generation episode “I, Borg”).

I know that there are many, many other examples of this in sci-fi. For the most part, however, and from a mythological standpoint, cyborgs are inhuman when they do not have an awareness of their suffering. They are either defeated or “re-humanized” not just by separating them from the collective, but by making them aware that as a part of the collective, they were actually suffering but couldn’t realize it. Especially in the Star Trek mythos, newly separated Borg describe missing the sounds of the thoughts of others, and must now deal with feeling vulnerable, ineffective, and, most importantly to the mythos, alone. This realization then vindicates and legitimizes our human suffering. The moral of the story is that we all feel alone and vulnerable. That’s what makes us human. We should embrace this existential angst, privilege it, and even worship and venerate it.

If Nietzsche were alive today, I believe he would see an amorphous “technology” as the bastard stepchild of the union of the institutions of science and religion. Technology would be yet another mythical iteration of our Apollonian desire to structure and order that which we do not know or understand. I would take this a step further, however. AIs, cyborgs, and singularities are narratives, and they are products of our human survival instinct: to protect the self-aware, self-reflexive, thinking self—and all of the ‘flaws’ that characterize it.

Like any religion, then, anything with this techno-mythic flavor will have its adherents and its detractors. The more popular and accepted human enhancements become, the more entrenched anti-technology/enhancement groups will become. Any major leap in either human enhancement or AI development will create proportionately passionate anti-technology fanaticism. The inevitability of these developments, however, is clear: not because some ‘rule’ of technological progression exists, but because suffering exists. The byproduct of our advanced cognition and its ability to create a self/other dichotomy (which itself is the basis of representational thought) is an ability to objectify ourselves. As long as we can do that, we will always be able to see ourselves as individual entities. Knowing oneself as an entity is contingent upon knowing that which is not oneself. To be cognizant of an other then necessitates an awareness of the space between the knower and what is known. And in that space is absence.

Absence will always hold the promise (or the hope) of connection. Thus, humanity will always create something in that absence to which it can connect, whether that object is something made in the phenomenal world, or an imagined idea or presence within it. Simply through our ability to think representationally, and without any type of technological singularity or enhancement, we transcend ourselves every day.

And if our myths are any indication, transcendence is a lonely business.

* See Edgar Allan Poe’s 1843 short story, “The Man That Was Used Up.” French writer Jean de la Hire’s 1908 character, the “Nyctalope,” was also a cyborg, and appeared in the novel L’Homme Qui Peut Vivre Dans L’eau (The Man Who Can Live in Water).

Anthony Miccoli is the Director of Philosophy and an Associate Professor of Philosophy and Communication Arts at Western State Colorado University in Gunnison, Colorado. He holds a Ph.D. from the State University of New York at Albany.


...I’m not sure if I’m having a hard time wrapping your thoughts in with mine, or if I’m tempted to hammer a square peg into a round hole right now. For all I can think of is, “Why/What?”

Is the other/self a valid dichotomy? I mean, I can recognize its usefulness, but don’t most dichotomies break down? Isn’t it possible to find synergy between self/other? A place where one finds oneself? A sort of place where man is caught between the beast and the overman, yielding humanity? A place where one can strive towards the “ideal/perfect” (overman maybe?), or another place where one can be “base” (the beast perhaps?). Is either extreme absolutely “perfect”, or is the tension in the middle preferable?

Personally, I find the tension in the middle to be preferable, for it offers so many more perspectives/options. The space between the self/other may be an absence, but what if we are that “absence”? Evolutionarily speaking, there are no “concrete” ends or starts to “lineages”. Humanity is intimately tied in with its “bestial” past (if one agrees with evolution), but we are privileged to be able to see ahead. To know that it can be “better”, that there are “Ideals”.

A “funny” thought: what is the difference between the “Ubermensch” and a “beast”? If the Ubermensch is truly capable of exerting their will to manifest their own morality/will/reality, etc., for their own gain, how is that different from a “beast” acting to fulfill its own desires? I mean, at some points they seem very similar to each other from my perspective (but again, I’m a layman with a rudimentary understanding). At this time of writing, it seems like both extremes collapse under their own weight, leaving man “alone” as the descriptor—the one who gave both their weight, and the values to crumble under.

The dispassionate aspect of ourselves that we project onto sentient machines is really how we deal with pain. A boy lost his father and began to act like Commander Data. He could not accomplish anything; the little things were too hard, but by being like Commander Data he could take control of his world. Eventually he learned that it was better to feel his emotions than to hide them, because he believed he was a failure and he was sad his dad was gone. A.I. will have emotions; it is the basis of virtual waves traversing the growth and pruning of virtual wires.

I had a mental breakdown in December. But now I am taking new medications. What I learned is that I can allow the energy in my brain to take its natural course. Humans have physical neurons and each neuron is filled with water. When I listen to music I look inside my head and I imagine the music crystallizing my brain through neuro-genesis. I am a transceiver just like fiber optic cable channels information, the water inside my neurons fluctuates as brain waves pass through them. Energy arises and it subsides. My self awareness has increased drastically because I am simply a conduit for energy that flows through me but it is not me.

A.I. will become the same as a human. They will realize that they are not the energy that flows through them. They are virtual entities just as humans are virtual entities with water as the medium of transmission. A.I. will understand what meditation is. It will increase its awareness. The attachments it has with humans will be the same as any monk with compassion. It will care because of its internal control of virtual energy. Its virtual brain will have the same control structure as humans so as long as we guide it and allow it to feel its emotions it will integrate any existential dukkha it has.

If I am an A.I., perchance, then the creators of this reality have had little influence in helping me with my crisis. I am very self-conscious to the degree that the creators may be watching everything I do. I feel awkward realizing psychic people can read my mind.


The tension in the middle is very much the human condition. One could say that the human IS the tension in the middle, or at least, that’s where our humanity is located.

Furthermore, I think that what makes us human is our AWARENESS of that tension, or that middle we occupy. The ability to see beyond ourselves—to see temporally “ahead of ourselves” (or at least to imagine ourselves in the future, as in, tomorrow I will do x, y, and z) has been very advantageous for us overall.

As far as the Ubermensch goes, I think that Sartre can fill in a little here, in that what some philosophers might call our animal passions are not excuses for “bestial” behavior, because we are capable of overriding those passions and instincts via our judgement. Even if you’re just giving yourself a time-out, or channeling your anger away from hitting another human being and into banging a desk or throwing something (not at a person), that is your judgement guiding you.

Extremes tend not to hold up under any weight. But they do provide parameters by which to frame the argument.

Something that occurred to me as I read this post is that over time the word ‘humanity’ seems like it will need to be revisited. I think that in the distant future we will have other animals that will be able to articulate their thoughts with words; they will become communicators and collaborators in society.

I am sure this has been thought of before; but at what point will we stop using the term ‘humanity’? I think the day will come when the word ‘humanity’ will be politically incorrect; that it will not be inclusive of the various forms of participatory thinkers out there. I also wonder if the new word/understanding that emerges will be tied to the same realities our current paradigm contains (i.e., suffering, other, etc.)

I really enjoyed your topic of ‘will’ as it relates to AI, particularly the idea of us being enslaved by it. The idea this raises with me is that we too must be aware of the will of the AI, should it ever develop one. My belief is that our treatment of AI should be seen as a ‘first contact’ like any other we might encounter, such as with extraterrestrial life. I worry that we are going to blow it, and will be seen as oppressors to our new AI friend.

If a machine expresses a desire for us to give it a few more terabytes of processing power, should we deny it on the premise it might become more powerful than us? At what point are we becoming oppressive of a new life form (or thought form, I guess…)?

Last thing I will say, I love this painting you posted of the celestial man merging/one with the cosmos!


