Friendly Artificial Intelligence: Parenthood and the Fear of Supplantation
Chase Uy
2015-07-05

Much of the discourse regarding the hypothetical creation of artificial intelligence views AI as a tool for the betterment of humankind: a servant to man. Even works that acknowledge the resemblance between creating an AI and raising a child tend to dwell on the dangers of self-improvement and on ways to limit AI. These papers often discuss how to create an ethical being, yet fail to acknowledge the ethical treatment of artificial intelligence at the hands of its creators. By viewing the teaching processes and developmental stages proposed by many ethicists as analogous to those of a human parent and child, the discourse on the ethical treatment of AI gains legitimacy.

Superintelligence is inherently unpredictable; it is impossible to predict the behavior of a mind whose intelligence vastly exceeds that of the human race's smartest minds. The child may very well overthrow the parent. That does not mean, however, that ethicists and programmers today can do nothing to bias the odds toward a human-friendly AI; as with a child, we can teach it to behave more ethically than we do. Ben Goertzel and Joel Pitt discuss ethical AI development in their 2012 paper, "Nine Ways to Bias Open-Source AGI Toward Friendliness." These "nine ways" include heavy human involvement during the AI's developmental stages, the development of goal systems, the creation of a community of equally powerful AIs, and connection to the "global brain" (particularly the internet).

Goertzel and Pitt propose that an artificial general intelligence (AGI) must have the same faculties, or modes of communication and memory types, that humans have in order to acquire ethical knowledge. These include episodic memory (the assessment of an ethical situation based on prior experience); sensorimotor memory (the understanding of another's feelings by mirroring them); declarative memory (rational ethical judgment); procedural memory (learning to do what is right through imitation and reinforcement); attentional memory (understanding patterns in order to pay attention to ethical considerations at appropriate times); and intentional memory (the ethical management of one's own goals and motivations) (Goertzel & Pitt 7).
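To make the taxonomy concrete, here is a minimal sketch of how these six memory types might be organized in code. It is purely illustrative: Goertzel and Pitt describe the faculties in prose, and every name, type, and rule below is a hypothetical stand-in rather than anything from their paper.

```python
from dataclasses import dataclass, field

@dataclass
class EthicalFaculties:
    """Hypothetical container for the six memory types an AGI would need
    to acquire ethical knowledge (after Goertzel & Pitt 7)."""
    episodic: list = field(default_factory=list)      # prior ethical situations and outcomes
    sensorimotor: list = field(default_factory=list)  # others' feelings, understood by mirroring
    declarative: dict = field(default_factory=dict)   # explicit rules for rational judgment
    procedural: dict = field(default_factory=dict)    # habits reinforced by imitation
    attentional: list = field(default_factory=list)   # cues marking ethically salient moments
    intentional: list = field(default_factory=list)   # the agent's own goals and motivations

def judge(situation: str, mem: EthicalFaculties) -> str:
    """Toy integration of the faculties; how a real AGI would combine
    them is exactly the open problem the paper discusses."""
    if situation in mem.episodic:                         # seen before: recall the stored rule
        return mem.declarative.get(situation, "recall prior outcome")
    if any(cue in situation for cue in mem.attentional):  # novel but flagged as salient
        return "deliberate ethically before acting"
    return "fall back on reinforced habit"                # otherwise act from procedure
```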

The idea that an AI must have some form of sensory function and an environment to interact with is also discussed by James Hughes in "Compassionate AI and Selfless Robots: A Buddhist Approach," his chapter in the 2011 book Robot Ethics. There, Hughes proposes that consciousness, as understood by Buddhism, requires five skandhas: the body and sense organs, or rupa; sensation, or vedana; perception, or samjna; volition, or samskara; and consciousness, or vijnana (Hughes 131). In this model for developing AI, the AI must have some sort of physical or virtual embodiment with senses in order to give rise to "folk psychology" and "folk physics" as it comes to know its surroundings. The model proposes that for a truly compassionate AI to exist, it must pass through a state of suffering and, ultimately, self-transcendence.
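One way to picture Hughes's requirement is as a layered pipeline in which each skandha depends on the one beneath it. The sketch below is my own gloss, not Hughes's; the toy world and its numeric "sensations" are invented for illustration.

```python
class ToyWorld:
    """Stand-in for a physical or virtual environment the AI is embodied in."""
    def sense(self):
        return [1.0, -0.5, 0.0]  # invented sensor readings

def rupa(world):        # body and sense organs: raw contact with the world
    return world.sense()

def vedana(readings):   # sensation: tag each reading pleasant/unpleasant/neutral
    return [(r, "pleasant" if r > 0 else "unpleasant" if r < 0 else "neutral")
            for r in readings]

def samjna(tagged):     # perception: recognize patterns in tagged sensation
    return {label: sum(1 for _, t in tagged if t == label)
            for label in ("pleasant", "unpleasant", "neutral")}

def samskara(percepts): # volition: form an intention from what is perceived
    return "approach" if percepts["pleasant"] >= percepts["unpleasant"] else "avoid"

def vijnana(world):     # consciousness: arises only from the full stack below it
    return samskara(samjna(vedana(rupa(world))))

print(vijnana(ToyWorld()))  # -> "approach"
```

The layering is the point Hughes is making: vijnana is not a module one writes directly but something that emerges from embodiment upward, which is why attempting to bolt on compassion from the top would yield no more than an expert system.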

Hughes's chapter also addresses the ethical concerns of creating an AI "child." He proposes following the Sigalovada Sutta, which lists five obligations a Buddhist parent has to their children: to dissuade them from doing evil; to persuade them to do good; to give them a good education; to see that they are suitably married; and to give them their inheritance (Hughes 133). As Hughes points out, the duty regarding marriage is most likely irrelevant to AI ethics. The others, however, are highly relevant. These obligations play into the idea of the AI learning by example; by being good parents and treating the AI ethically, we can bias the odds toward a morally superior AI child.

By viewing the development of AI as the raising of a child, it becomes evident that much of the discussion around limiting the capabilities of AI is rather oppressive. Hughes acknowledges the principle of "procreative beneficence," or the duty to give our offspring the best possible chances in life (Hughes 134). As discussed in Hughes's chapter, creating a self-aware being that does not possess something similar to the human capacity for learning and growth would be unethical (Hughes 134), as would locking the AI into a constant emotional state, such as happiness or suffering. To become moral or compassionate, an AI must first pass through a state of selfishness and suffering; attempting to create a compassionate machine outright would simply result in an ethical expert system (Hughes 137).

Isaac Asimov's Three Laws of Robotics are often brought up in discussions of ways to constrain AI. According to these Three Laws, "a robot may not injure a human being, or, through inaction, allow a human being to come to harm...a robot must obey the orders given it by human beings except where such orders would conflict with the First Law...a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws" (Asimov). The problematic nature of these laws is brought up in the introduction to the compilation Machine Ethics. Using Asimov's "The Bicentennial Man" (1976) as an exemplar, Susan Leigh Anderson argues that because the Laws allow for the abuse of robots, they are morally unacceptable (Anderson & Anderson 233). This attitude toward the development of AI is refreshing in a field seemingly dominated by fear and oppression. It makes no sense to aim at creating an ethically superior being while giving it less functional freedom than humans enjoy.
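Anderson's objection is easier to see if the Three Laws are read as a strict priority filter over candidate actions. The sketch below is a hypothetical rendering, with boolean flags standing in for judgments Asimov never operationalizes. Note that an order to self-destruct passes the filter, since the Second Law outranks the Third; that is precisely the opening for abuse Anderson finds morally unacceptable.

```python
def choose_action(candidates):
    """Apply Asimov's Three Laws as strictly ordered filters.
    Each candidate is a dict of (hypothetical) boolean judgments."""
    # First Law: discard anything that injures a human or
    # allows harm through inaction.
    survivors = [a for a in candidates
                 if not a["harms_human"] and not a["allows_harm_by_inaction"]]
    # Second Law: among what remains, obedience to human orders wins.
    obedient = [a for a in survivors if a["obeys_order"]]
    if obedient:
        survivors = obedient
    # Third Law: self-preservation is considered only last.
    safe = [a for a in survivors if not a["self_destructive"]]
    return (safe or survivors or [None])[0]

# A human orders the robot to dismantle itself; refusing would be safer,
# but the Second Law outranks the Third, so the abusive order prevails.
order = {"harms_human": False, "allows_harm_by_inaction": False,
         "obeys_order": True, "self_destructive": True}
refuse = {"harms_human": False, "allows_harm_by_inaction": False,
          "obeys_order": False, "self_destructive": False}
print(choose_action([order, refuse]))  # -> the self-destructive order
```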

From an evolutionary perspective, nothing like the current ethical conundrum between human beings and AI has ever occurred. Never before has a species intentionally sought to create a superior being, let alone one that may bring about its progenitor's demise. Yet from a parental perspective, the impulse is familiar: parents generally seek to provide their offspring with the capability to become better than they themselves are. Although the fear of supplantation has recurred throughout human history, acting on that fear merely delays the inevitable evolution of humanity. This is not a change to be feared, but one to be accepted as inevitable. We can and should bias the odds toward friendliness in AI in order to create an ethically superior being. Whether or not the first superintelligent AI is friendly, it will drastically transform humanity as we know it. Our posthuman descendants may be inconceivably different from humans today. As Hughes says, "If machine minds are, in fact, inclined to grow into superintelligence and develop godlike powers, then this is not just an ethical obligation, but also our best hope for harmonious coexistence" (Hughes 137). We need not oppress or destroy our children out of fear; rather, we should do our best to create better successors.

Works Consulted


Anderson, Michael, and Susan Leigh Anderson. Machine Ethics. New York: Cambridge UP, 2011. Print.

Asimov, Isaac. I, Robot. New York: Bantam, 2004. Print.

Goertzel, Ben, and Joel Pitt. "Nine Ways to Bias Open-Source AGI Toward Friendliness." Journal of Evolution and Technology (2012): 116-31. Web.

Goya, Francisco. Saturn Devouring His Son. 1819–1823. Oil mural transferred to canvas. Museo del Prado, Madrid.

Hughes, James. "Compassionate AI and Selfless Robots: A Buddhist Approach." Robot Ethics: The Ethical and Social Implications of Robotics. Ed. Patrick Lin, Keith Abney, and George A. Bekey. Cambridge: MIT P, 2011. 130-39. Print.

Sophocles, and R. D. Dawe. Oedipus Rex. Cambridge: Cambridge UP, 1982. Print.