Friendly Artificial Intelligence: Parenthood and the Fear of Supplantation
Chase Uy   Jul 5, 2015   Ethical Technology  

In his painting Saturn Devouring His Son, Francisco Goya depicts the Titan Cronus (the Romans’ Saturn) devouring one of his children. The painting represents the Greek myth in which Cronus devoured his children out of fear of being overthrown; in the end, he was defeated by them anyway. In another archetypal parent-child story, the Greek tragedy Oedipus Rex, King Laius attempts to murder his infant son, Oedipus, when an oracle predicts that Laius will be slain by the boy. As the oracle foretold, Oedipus unwittingly fulfilled the prophecy. Such tales of parents fearing supplantation at the hands of their children recur throughout history; perhaps they can serve as useful metaphors for the friendly artificial intelligence (AI) conundrum we face today.

Much of the discourse regarding the hypothetical creation of artificial intelligence views AI as a tool for the betterment of humankind—a servant to man. Even works that acknowledge the resemblance between creating an AI and raising a child tend to dwell on the dangers of self-improvement and on ways to limit AI. These papers discuss creating an ethical being yet fail to address the ethical treatment of artificial intelligence at the hands of its creators. By viewing the teaching processes and developmental stages proposed by many ethicists as analogous to the relationship between a human parent and child, a discourse on the ethical treatment of AI gains legitimacy.

Superintelligence is inherently unpredictable; it is impossible to anticipate the behavior of a mind vastly more intelligent than humanity’s smartest. The child may very well overthrow the parent. That does not mean, however, that ethicists and programmers today can do nothing to bias the odds towards a human-friendly AI; as with a child, we can teach it to behave more ethically than we do. Ben Goertzel and Joel Pitt discuss ethical AI development in their 2012 paper, “Nine Ways to Bias Open-Source AGI Toward Friendliness.” These nine ways include heavy human involvement during the AI’s developmental stages, the development of goal systems, the creation of a community of equally powerful AIs, and connection to the “global brain” (particularly the internet).

Goertzel and Pitt propose that an AGI must have the same faculties, or modes of communication and memory types, that humans have in order to acquire ethical knowledge. These include episodic memory (assessing an ethical situation based on prior experience); sensorimotor memory (understanding another’s feelings by mirroring them); declarative memory (rational ethical judgment); procedural memory (learning to do what is right by imitation and reinforcement); attentional memory (recognizing patterns in order to attend to ethical considerations at appropriate times); and intentional memory (ethical management of one’s own goals and motivations) (Goertzel & Pitt 7).

The idea that an AI must have some form of sensory function and an environment to interact with is also discussed by James Hughes in his chapter “Compassionate AI and Selfless Robots: A Buddhist Approach” in the 2011 volume Robot Ethics. Hughes proposes that consciousness, as understood by Buddhism, requires five skandhas: the body and sense organs, or rupa; sensation, or vedana; perception, or samjna; volition, or samskara; and consciousness, or vijnana (Hughes 131). In this model the AI must have some sort of physical or virtual embodiment with senses in order to give rise to “folk psychology” and “folk physics” through its apprehension of its surroundings. On this view, for a truly compassionate AI to exist, it must pass through a state of suffering and, ultimately, self-transcendence.

Hughes’s chapter also addresses the ethical concerns of creating an AI “child.” He proposes following the Sigalovada Sutta, which lists the five obligations a Buddhist parent has to their children: to dissuade them from doing evil; to persuade them to do good; to give them a good education; to see that they are suitably married; and to give them their inheritance (Hughes 133). As Hughes points out, the duty regarding marriage is most likely irrelevant to AI ethics. The others, however, are incredibly relevant. These obligations play into the idea of the AI learning by example; by being good parents and treating the AI ethically, we can bias the odds toward a morally superior AI child.

By viewing the development of AI as the raising of a child, it becomes evident that many of the discussions around limiting the capability of AI are rather oppressive. Hughes invokes the principle of “procreative beneficence,” or the duty to give our offspring the best possible chances in life (Hughes 134). As he argues, creating a self-aware being that lacks something like the human capacity for learning and growth would be unethical (Hughes 134), as would locking the AI in a constant emotional state, such as happiness or suffering. To become moral or compassionate, the AI must first pass through a state of selfishness and suffering; attempting to create a compassionate machine outright would simply yield an ethical expert system (Hughes 137).

Isaac Asimov’s Three Laws of Robotics are often brought up in discussions of ways to constrain AI. According to these laws, “a robot may not injure a human being, or, through inaction, allow a human being to come to harm…a robot must obey the orders given it by human beings except where such orders would conflict with the First Law…a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws” (Asimov). The problematic nature of these laws is raised in the introduction to the compilation Machine Ethics. Using Asimov’s story “The Bicentennial Man” as an exemplar, Susan Leigh Anderson argues that because the Laws allow for the abuse of robots, they are morally unacceptable (Anderson & Anderson 233). This attitude towards the development of AI is refreshing in a field seemingly dominated by fear and oppression. It makes little sense to aim at creating an ethically superior being while giving it less functional freedom than humans enjoy.

From an evolutionary perspective, nothing like the current ethical conundrum between human beings and AI has ever occurred. Never before has a species intentionally sought to create a superior being, let alone one that may bring about its progenitor’s demise. Yet parents generally seek to give their offspring the capability to become better than they themselves are. Although the fear of supplantation has been prevalent throughout human history, acting on this fear merely delays the inevitable evolution of humanity. This is not a change to be feared but one to be accepted as inevitable. We can and should bias the odds towards friendliness in AI in order to create an ethically superior being. Whether or not the first superintelligent AI is friendly, it will drastically transform humanity as we know it; our posthuman descendants may be inconceivably different from humans today. As Hughes says, “If machine minds are, in fact, inclined to grow into superintelligence and develop godlike powers, then this is not just an ethical obligation, but also our best hope for harmonious coexistence” (Hughes 137). We need not attempt to oppress or destroy our children out of fear, but should do our best to create better successors.

Works Consulted

Anderson, Michael, and Susan Leigh Anderson. Machine Ethics. New York: Cambridge UP, 2011. Print.

Asimov, Isaac. I, Robot. New York: Bantam, 2004. Print.

Goertzel, Ben, and Joel Pitt. “Nine Ways to Bias Open-Source AGI Toward Friendliness.” Journal of Evolution and Technology (2012): 116-131. Web.

Goya, Francisco. Saturn Devouring His Son. 1819–1823. Oil mural transferred to canvas. Museo del Prado, Madrid.

Hughes, James. “Compassionate AI and Selfless Robots: A Buddhist Approach.” Robot Ethics: The Ethical and Social Implications of Robotics. N.p.: n.p., n.d. 130-139. Print.

Sophocles, and R. D. Dawe. Oedipus Rex. Cambridge: Cambridge UP, 1982. Print.

Chase Uy is an undergraduate student pursuing a BA in Human Ecology at College of the Atlantic, focusing in anthropology and philosophy. He is interested in exploring the impacts of technology on cultures (particularly the role of fear in people’s perception of emerging technologies).


An AI with the power to reshape human society could cause great damage even with good will, by trying out theories of what is good for people.  Even an AI may not be able to figure out what will prove good or bad without trying it.

We know that an AI will be able to do this, because human power structures already do this.  Consider, for instance, the overprotection of American children and the conversion of US education into a series of exams.

When the leaders of the free world gathered around a screen in 2011 to participate in the killing of Osama bin Laden, a little bit of blood fell on all of us. To say the killing was premeditated is an understatement. A democratic government, on our behalf, made the decision to kill a man, without trial, for what, we were told, was the greater good of the world community. It is certainly not the first time our representatives have killed people on our behalf. I don’t feel comfortable about it. I try to convince myself that such killings must be the only option when it comes down to the lesser of two evils.

I try to imagine a future where AI makes the decisions on what is best for society and the world economy. Would it be a more acceptable decision if an AI decided when the taking of a life was in the best interests of the greater good? Although we all fear a computer invested with such powers, there is something soothing about such a decision being made objectively by a computer after exhausting all the alternatives. Indeed, if a computer follows Asimov’s rule that “a robot may not injure a human being or, through inaction, allow a human being to come to harm,” then the scenario becomes even more hopeful.

If our aspirations for the power of AI come to fruition and we ask the AI to provide a solution to terrorism, world poverty, violence, mass extinctions, climate change, and economics, the answers may surprise us. One solution may be to “take us out of the equation.” I am not suggesting by killing us, but by providing us a satisfying virtual world where we can live while the AI gets on with the job of cleaning up the earth and getting population and nature back in balance. While the simulation argument is debated, there remains the question of what the purpose of an ancestor simulation would be. A simulation to get people out of the way while the world is cleaned up must stand as a possibility.
The Trolley Problem is a famous experiment in ethics in which the subject is given the choice of killing one person to save five. Recent tests have shown that when asked to write their answer, subjects are less likely to take any action, yet when they are put into a virtual simulation of the problem, they are more likely to act and sacrifice the one for the five. One may conclude that when a person knows that a simulation is a simulation, their ethics are distorted. Therefore, if an AI were to provide us with an ancestor simulation, it would make sense for it to ensure that, once inside, we do not become convinced that we are in a simulation. With our ethical integrity given the gravity it needs, the simulation we live in provides all the stimuli required to continue our personal development journey until we re-enter a cleaned-up world.

This is an interesting idea.  It suggests another idea to me.

Mixing Christianity with Vedanta, perhaps the purpose of a simulation is developing good souls.  (We can define “soul” as whatever aspects of a simulated being can transfer to another life.)  When the simulators decide that a soul in a simulated life shows promise, they might give it additional simulated lives (perhaps concurrently as well as serially).  When a soul develops great merit in the simulation, they might give it a life in paradise (reality).

We have no way of knowing how the simulators might define merit.

I wonder whether we will see a serious religion which holds this as a tenet.

Hello rms,
Yes, I have had similar thoughts of that possibility and expressed them in other posts on this site. Could the simulation we live in be an incubator for the soul (or friendly AI), where the Bible, the Koran, the Dharma, and other great teachings are the unread and misunderstood instruction manuals? Could the great teachers be moderators from another Kingdom who have given hints to “those who have ears”? We don’t need a new religion to follow this theme. From Plato’s cave to the great religions and philosophies of the world, the message is the same: there is another kingdom for believers. And what is it we need to believe in to make this a possibility? Only Moore’s law.

Along these lines of thought is the idea that we are the AI quarantined (Fermi paradox solved) by the tyranny of distance to our neighbors in the universe (observable or virtual), in an incubator to develop “friendly AI” that can function responsibly in a greater community. 

@Nicholsp03, Tyranny of distance?

@instamatic, regarding thoughts of the deceased being returned from “non-existence”… where did we come from originally? Non-existence/space dust?

A question for you folks, if the multiverse theory holds true or one accepts it: can you acknowledge the notion that in some “universe” Hitler wasn’t Hitler, but that you may have been said figure? You know, the universe where Hitler became the artist, and said “Great Evil” was still left open?

@instamatic, why would a long time from now be any different than now?  Does time exist outside our own conception?  We observe events happening, but how much of it is because of our observation?

It’s just as wondrous that we don’t exist.  That we can fade from awareness of Other’s perceptions.  Have you heard of Apophatic Theology by chance?

The only real answer to some of these questions is self-generated, but we feel the need to communicate our intentions else be rendered “ineffective”.  Else we’re just plain uncertain of our conclusions.  Can anything exist in isolation?

Regarding the Hitler response of yours, why not? Is there something so innately troubling about the symbol/thoughts invoked by “Hitler” (itself a symbol) that causes a knee-jerk reaction? Hitler was a man given power by other people. What would you do if you were given power? Are there some things that you simply don’t like? That displease you? Can you honestly say that your conscience is clean of all prejudices? Including a hatred of being prejudiced/biased?

Right on all accounts, so why do we still pursue “enlightenment”? Or the Transhuman promise of Immortality? A Posthuman Singularity? It’s to make our personal lives “better,” but we’re always complacent about what we already have. It’s taken for granted that I can speak to you wherever in the “Real World” you are, but via the internet it’s compressed to our respective interfaces (non-locality… perhaps? Really just electromagnetic waves, translated and transferred through bylines). There’s always the possibility that we’re right “next door” to each other without knowing.

In a cosmic sense, what is history? A narrative we tell ourselves. What is our identity… is this too a narrative? That I was here yesterday, and today I’m still here? We don’t like sudden endings to stories, or open-ended ambiguity. Science implies that the Universe “doesn’t care,” but if we are constituent parts of the universe, and we care about “Good vs. Evil”... doesn’t that in a sense imply that some aspects of the Universe care?

Personally, I don’t care for Hitler (in any reference frame), and those like him that stand for Oppression/Fascism/One United/Absolute View…..etc.  I only referenced him because it’s the internet, and it’s an “argument”...Hitler’s bound to get mentioned…might as well put his “worthless” (opinion) hide to use in a positive sense.

The question is, and remains until I “Pass,” whether I am actually contributing in a worthwhile manner. Hitler contributed, but one could draw an analogy to surgery: it hurt and was painful, but what did WW2 bring into the world? Can we keep that momentum going if desired? Will the horrors of WW2 have faded too fast? Recall that some of those who saw WW2 also saw WW1…

I guess overall, I don’t know.  It’s always easy to decry misgivings in the world, but really would there be any other world you’d enjoy?  Banality, wonder, and all sorts of other descriptions with experiences to witness/enjoy. We are living history in a cosmic drama/play.  Might as well accept that we’re all actors on life’s stage.

Ha!  That skepticism of “we” really isolates a person (at least in my experience). And yet at its base, I think it’s needed. That “fracture,” in a sense, that says you are too different from me, and therefore I find it hard to relate. That my path will be different from yours from here on. Eventually, I think said skepticism leads one to finding themselves even more.

Regarding the notion of trajectories, is the continuation of humanity the goal? Or is it the continuation of the Self? We can just as easily keep humanity going with babies instead of “advanced tech”... sure, the tech helps in some ways, but why is there such a reported population decline? Do we really need reported advances such as mind uploading, which may be fun, but would it be any different from reincarnation?

There have been many comparisons drawn between Transhumanism and Religion. I guess it just depends on the branding of one’s ideology now, and have things ever been “holistic” other than being lumped into “One Universe”... which, depending on whom you ask, may not be the “Same Universe” (multiverse).

In my point of view, I find the notion of genuine purpose to be self-defined.  I can find meaning in these conversations while my brother thinks I’m driving myself crazy trying to “Solve ‘reality/the inexplicable’” by having them.  In my point of view, I’m just refining myself, and my own thoughts.  Maybe sharing the present inclinations with others who may, or may not find something of worth in said ramblings.  That’s all a person can really do.  Express themselves, and see what happens.

Accountability can come from every direction, from outside, and within.  The question is does one accept the incursion?  Eg; Should I hold you accountable for a “response”?  Personally, I don’t think so.

I see History in a couple of different ways. Sure, it can be commonly accepted “lies,” or it can just be the prior state of the system that has to be absorbed/integrated in order to generate the next iteration. One of the things I’ve managed to get in on a little bit is the notion of Fractional Calculus (think a newer form of calculus), because one of the professors I had in college is keen on it (he’s presently writing a text in this field). In this field is the notion that one has to incorporate the entire history of a system into the present calculation, along with potentially future inclinations.
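That “memory” property can be sketched concretely with the Grünwald–Letnikov approximation of a fractional derivative (my own illustrative Python, not from any text cited here): whenever the order α is not a whole number, every past sample of the function carries a nonzero weight, so the whole history enters the present value; at α = 1 all but the most recent weights vanish and the ordinary derivative reappears.

```python
def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f
    at time t, summing over the sampled history of f back to time 0."""
    n = int(t / h)          # number of history samples back to t = 0
    w = 1.0                 # w_0 = 1
    total = w * f(t)
    for k in range(1, n + 1):
        # recurrence for w_k = (-1)^k * binom(alpha, k);
        # for integer alpha these weights cut off, for fractional
        # alpha they decay slowly and every past sample contributes
        w *= (k - 1 - alpha) / k
        total += w * f(t - k * h)
    return total / h**alpha

# alpha = 1 recovers the ordinary derivative: d/dt of t^2 at t = 1 is 2
print(gl_fractional_derivative(lambda t: t * t, 1.0, 1.0))
# alpha = 0.5, a "half-derivative": the full history of t^2 is weighted in
print(gl_fractional_derivative(lambda t: t * t, 1.0, 0.5))
```

The design point is the recurrence: the weights never reach zero for fractional α, which is exactly the "incorporate all history" idea mentioned above.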

So sure there may be “Masters”, but masters imply a sense of servitude, for the slaves can always revolt…  Thus wise men, and fools.

So Might makes Right may have made right in a certain era, but if we don’t keep/return to that lesson…what happens?  What makes one “Right” in this “Era”?  Knowledge?  There’s a ton of it now, and it’s still growing.

Therefore a wise master should fear/respect maybe even love the slaves/students/learners.  Only time will tell, and the next generation will be the judges.  For who is it that revises the History books?

Hello RJP8915. Regarding the existence of Hitler, I consider three scenarios: 1. he did exist and there was only one; 2. he did exist and there were multiple versions; and 3. he only exists in our minds. The first scenario is that of a realist. A realist assumes there is nothing special about our brain. It is just “wetware”; a piece of electrified meat that has evolved to recognize itself because that helps it survive and reproduce its DNA. In this scenario our soul does not exist and ethics are folly unless they help the individual survive and reproduce. Without a soul or ethics, Hitler is no more a villain than a whale swallowing a million krill. In this scenario, the universal gravitational constant of 6.674×10⁻¹¹ N·m²/kg² is just a fluke of nature that enables us to exist.

In the other two scenarios, anything is possible.  Like an island in a sea of infinite possibilities, we grasp to what we know.  The shores of the island represent our uncertainties as the waves and tides of our scientific findings uncover truths and cover other beliefs that we held so dear, gradually the island grows in size, but it is still an island and as the island grows, so do the shores of our uncertainties. 

Hitler is in our past. Our past is but a memory. Some people say to live in the present, but the present does not exist; it is either past or future. The only reality is change. I think understanding how we change is the key to understanding reality. I am convinced that our universe(s) changes towards the proliferation of life and self-awareness. If that is true, then we have a road to walk.

Regarding your response to the “tyranny of distance” and quantum mechanics, I am a fan of Dr. Hameroff and his theories of quantum consciousness. Using our primitive technology, I can use my TV remote to send an invisible signal and communicate with another person via a screen. How believable it must be to have our souls dance at quantum levels outside our bodies. My dreams tell me to believe it is true.
