Ray Kurzweil on Rationality and the Moral Considerability of Intelligent Machines
Daryl Wennemann
2014-04-01

Kurzweil first considers the possibility that we might emotionally empathize with such non-humanB persons. “[M]y position is that I will accept nonbiological entities that are fully convincing in their emotional reactions to be conscious persons, and my prediction is that the consensus in society will accept them as well.” [HCM, p. 213] So, Kurzweil would include such a being in the moral community even if it could not pass a Turing test. Its emotive expressions constitute evidence of consciousness.

But to empathize with a non-humanB entity that is capable of humanB-like emotions seems to be humanB-centric. Could we empathize with a non-humanB being whose emotions and motivations do not emulate humanB emotions and motivations? Alien beings or artificial persons may evolve reactions that are genuinely alien to our humanB mode of emotional response. Here Kurzweil considers the emotional response of a squid that appeared to respond with fear to a stimulus it took to be dangerous. Again, if we empathize with the fear of the squid based on our own experience of fear, we seem to be engaged in an anthropoBmorphic projection.

The point of Kurzweil’s treatment of non-humanB persons is that we have a basis for empathy with them, even if it involves anthropoBmorphic projection. His real worry is a case in which an entity has intelligence but no need for the emotions we observe in biological creatures. After all, his project is to produce a non-biological form of intelligence. If a non-biological entity exhibits humanB-like emotions, or ones that are not humanB-like, there is a basis for empathy with that creature. But what of an entity that does not exhibit emotions or pursue humanB goals? If we think such a being is morally considerable, what might we point to as a basis for identifying morally with it?

Kurzweil notes that we are able to identify with humanB beings who act in a way that exhibits commitment to a worthy goal, especially if it furthers humanB well-being. We identify especially with someone who acts in a way that involves self-sacrifice. Kurzweil finds a basis for identifying morally with non-humanB persons that act in this way. But what shall we say about an intelligent being that neither exhibits emotions with which we could empathize nor pursues goals that seem humanBly significant?

One concern with such an artificial intelligence is that it might operate so rapidly that we cannot know what goals it is actually pursuing. It may also develop the intelligence to conceal its goals from us. Concerns of this kind seem to support the effort to program ethical norms into intelligent machines for our own protection.

There may be a key to this problem in a phrase Kurzweil uses: “Personally I would say that if I saw in such a device’s behavior a commitment to a complex and worthy goal and the ability to execute notable decisions and actions to carry out its mission, I would be impressed and probably become upset if it got destroyed”. [HCM, p. 214] The key term here is “worthy”. What constitutes a worthy goal, especially one conceived in a way that is not anthropoBcentric or anthropoBmorphic? It should be noted that Kurzweil did not designate a humanBly worthy goal, only a worthy one. Can we think of morally worthy goals that are not humanBly worthy? If a non-humanB person can be said to be morally human (humanM), i.e., a person, then it may be possible to find a way of identifying goals that are humanMly worthy. Such goals would be applicable to any intelligent being, whether it is humanB or not. If a non-humanB person were to pursue self-chosen morally worthy goals, that could be a basis for identifying with such a being morally.

First, we should note that in the Western philosophical tradition there is a basis for thinking of moral goals in a way that is not anthropoBcentric. Consider Robert Louden’s treatment of Immanuel Kant’s ethical theory. He has focused on the distinction between a pure part of ethics and an impure part in Kant’s ethical theory. According to Louden, “[T]he eventual intended scope of this moral community is actually supra-human.” [Robert B. Louden, Kant’s Impure Ethics, Oxford University Press, 2000, p. 11] As I commented on this passage in my work Posthuman Personhood, “The moral community can thus be seen as being constituted by all beings of a kind that are capable of rational self-direction, i.e., all humanM beings (both humanB and non-humanB).” [p. 86] So, the pure part of ethics can be seen to be concerned with all rationalR beings, that is, beings of a kind that are capable of being self-directing.

The term “RationalR” needs clarification here. In my recent article, “Rick Searle’s Rational Monster” [http://ieet.org/index.php/IEET/more/wennemann20140211], I distinguished three different kinds of rationality. It will be helpful to review that here. I will then apply the distinctions of different kinds of rationality to the issue of programming the pursuit of morally worthy goals into intelligent machines.

We can distinguish several different kinds of rationality. “Substantial rationality” can be understood to mean “an act of thought that reveals intelligent insight into the inter-relations of events in a given situation.” [Karl Mannheim, Man and Society in an Age of Reconstruction, New York: Harcourt Brace & World, 1940, p. 53.] Thus, if a person were to have insight into the functioning of an electrical circuit, that would be an instance of substantial rationality. Let’s use the term “RationalS” and its cognates to refer to substantial rationality.

“Instrumental rationality” or “functional rationality” involves “the coordination of action with reference to a goal. This is a type of rationality that has to do with the intelligent use of means to achieve a given goal. Here, the goal is given, and instrumental rationality applies to the intelligent choice of means. Thus, if we were to have a goal of sending humanB beings to Mars, we would have to think about whether a rocket could accomplish the goal. Would it be an effective means? But instrumental rationality does not bring the goal itself into question.” [Daryl J. Wennemann, Posthuman Personhood, p. 74.]

Let’s use the term “RationalI” and its cognates to refer to instrumental rationality. It should be noticed that rationalityS is implicated in the coordination of action with reference to a goal. But the choice of effective means to achieve a goal involves more than just having an insight into the inter-relations of events. This is where Kurzweil’s definition of “intelligence” as pattern recognition falls short. In The Age of Spiritual Machines, he notes, “Intelligence is precisely this process of selecting relevant information carefully so that it can skillfully and purposefully destroy the rest.[78]…What is going on is pattern recognition, the foundation of most human thought.[79]”

Now, it seems that the choice of an effective means of achieving a goal is not just a matter of recognizing a pattern. Perhaps this is why Kurzweil also characterizes intelligence in terms of instrumental rationality (without using that terminology). “My view is that intelligence is the ability to use optimally limited resources—including time—to achieve [] goals…All that is needed to solve a surprisingly wide range of problems is exactly this: simple methods combined with heavy doses of computation….” [73]
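Kurzweil’s formula of “simple methods combined with heavy doses of computation” can be given a minimal illustration of rationalityI at work. The following sketch is my own toy example, not anything drawn from Kurzweil’s text, and every name in it is hypothetical. The goal (a target number to be assembled from a fixed set of parts) is simply given, and the program exercises only instrumental rationality, searching exhaustively for an effective means without ever bringing the goal itself into question.

from itertools import product

# A toy illustration of instrumental (means-end) rationality: the goal is
# fixed from outside, and the program merely searches for a means of
# achieving it by brute force -- "simple methods combined with heavy doses
# of computation." All names here are hypothetical.

GOAL = 631  # the given end; instrumental rationality never questions it

def find_means(parts=(1, 5, 25, 100, 250), max_len=6):
    """Exhaustively search for a combination of parts summing to GOAL."""
    for length in range(1, max_len + 1):
        for combo in product(parts, repeat=length):
            if sum(combo) == GOAL:
                return combo  # an effective means, found by sheer computation
    return None  # no means found within the search bounds

if __name__ == "__main__":
    print(find_means())

The goal is reached, but nothing in the program asks whether reaching it is worth doing. That evaluative question is exactly what the third kind of rationality, discussed next, is meant to address.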

The third kind of rationality I distinguish is “Reflexive rationality”. Reflexive rationality is a type of rationality that is directed to the intelligent choice of ends. Let’s use the term “RationalR” and its cognates to refer to reflexive rationality. An intelligent choice of ends involves choosing rationalR ends. A rationalR end is one that is consistent with reflexive rationality itself. It is an end that is rationallyR authorized.

So, if I were to choose an end or goal that undermines rationalR goal-seeking, it would not be a goal that is consistent with reflexive rationality, since reflexive rationality is the standard for rationalR goal-seeking. Thus, to intentionally kill a humanB being is not a rationalR goal because humanB beings are beings of a kind that are capable of rationalR goal-seeking. They are humanM, i.e., persons. The humanB being I kill is no longer able to seek rationalR goals. It is important to observe that reflexive rationality (rationalityR) is concerned with rationalR goal-seeking as such, not just my own rationalR goal-seeking.

And so, any goal that undermines rationalR goal-seeking is morally impermissible. Again, if I were to impose a law on someone (heteronomy of the will), I would effectively be denying that person the ability to choose her/his own rationalR goals. My goal of imposing a law on someone would undermine rationalR goal-seeking. And so, it is not a rationalR goal. In this way, reasonR is able to determine what a morally worthy goal is. Such a conception of morally worthy goals is anthropoMcentric but not speciesist. It applies to any rationalR being.

This approach recognizes that reasoning is a recursive activity that allows reasonR to set standards for itself, and so it is reflexive. ReasonR is also legislative (following Kant). And so, reasonR establishes a standard for consistency, the principle of non-contradiction.

If I am going to reason, reasonR requires that I be consistent. I must adopt the standard of non-contradiction as a goal of my future reasoning. If we generalize on the act of goal-setting, reasonR requires that I be consistent in the goals I set for myself. If I reflect on the act of goal-setting, I can reflexively set a standard for consistency which requires that the goals I set for myself should be ones that are consistent with rationalR goal-setting. So, if I adopt a goal that undermines rationalR goal-setting, that is not a rationalR goal. It is self-defeating.

I see this as what the categorical imperative is getting at. In her treatment of Kant’s practical philosophy in The Sources of Normativity and The Constitution of Agency, Christine Korsgaard argues that if we act for reasons, then we must follow the categorical imperative not only to be consistent but to constitute ourselves as moral agents. That is why we ought to commit ourselves to rationality. “So rational motivation in a sense takes itself for its object.” (The Constitution of Agency, p. 214)

Now, it is interesting that Kurzweil points to the traditional golden rule as a place to start in thinking about programming ethical norms into machines. [HCM, p. 178] The categorical imperative can be seen as a philosophically sophisticated form of the golden rule. And Kurzweil also suggests that it is necessary to develop intelligent machines that have a process for reviewing their own procedures in setting their own goals. So, the task of developing self-aware intelligent machines that have an ethical orientation seems to depend upon the possibility of computers evolving their own reflexive rationality. If they are to be consistent in setting their own goals, they must not choose goals that undermine rationalR goal-setting.
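To make this suggestion concrete, here is a minimal, purely illustrative sketch of such a review step, a toy model of my own rather than anything Kurzweil or anyone else has implemented. Every name in it is hypothetical, and the flag undermines_rational_goal_setting is a mere placeholder for the hard philosophical work: nothing here shows how a machine could actually compute whether a goal undermines rationalR goal-setting.

from dataclasses import dataclass, field

# A purely illustrative toy model of the reflexive review step discussed
# above: before adopting a goal, the agent asks whether the goal would
# undermine rational goal-setting as such (its own or any other agent's).
# The flag below is a placeholder; defining it is the real philosophical task.

@dataclass
class Goal:
    description: str
    undermines_rational_goal_setting: bool = False  # hypothetical stand-in

@dataclass
class ReflexiveAgent:
    adopted_goals: list = field(default_factory=list)

    def review_and_adopt(self, goal: Goal) -> bool:
        """Adopt a goal only if it passes the reflexive consistency check."""
        if goal.undermines_rational_goal_setting:
            # A self-defeating goal: adopting it would contradict the very
            # standard (rational goal-setting) that makes goal-setting possible.
            return False
        self.adopted_goals.append(goal)
        return True

if __name__ == "__main__":
    agent = ReflexiveAgent()
    agent.review_and_adopt(Goal("cooperate with other rational agents"))
    agent.review_and_adopt(Goal("deceive and coerce other agents",
                                undermines_rational_goal_setting=True))
    print([g.description for g in agent.adopted_goals])

The consistency test itself is trivial once the flag is given; what reflexive rationality demands, and what the sketch deliberately leaves open, is a principled account of when a goal deserves that flag.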

It is also true that it would be morally impermissible for us to undermine the rationalR goal-setting activities of others, including intelligent machines. If such machines were capable of rationalR choice, then they too would be beings of a kind that are capable of being self-directing. Since we humanB beings are beings of a kind capable of being self-directing, we would have a basis for identifying morally with intelligent machines.

In the end, if we should be able to reason with non-humanB persons, the goal of reasonR would be the construction of a cooperative moral community in which all persons (humanB and non-humanB) would participate in applying the moral law, “But the eventual intended scope of this moral community is actually supra-human. Because… [an ethical commonwealth is] ‘a particular society that strives toward consensus [Einhelligkeit] with all human beings (indeed, all finite rational beings) [alle endlichen vernünftigen Wesen], in order to establish an absolute ethical whole,…For every species of rational being [jede Gattung vernünftiger Wesen] is objectively, in the idea of reason, destined to a common end, namely the promotion of the highest good as a good common to all’(6:97)”. [Louden, Kant’s Impure Ethics, pp. 127-128, quoting Kant from the Religion.] I would read this text as follows: “But the eventual intended scope of this moral community is actually supra-humanB. Because… [an ethical commonwealth is] ‘a particular society that strives toward consensus [Einhelligkeit] with all humanB beings (indeed, all finite rationalR beings) [alle endlichen vernünftigen Wesen], in order to establish an absolute ethical whole,…For every species of rationalR being [jede Gattung vernünftiger Wesen] is objectively, in the idea of reasonR, destined to a common end, namely the promotion of the highest good as a good common to all’(6:97)”.

Doesn’t this speak to Kurzweil’s grandiose vision for humanity? He ends How to Create a Mind with the assertion, “[W]aking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.” [p. 282] Will a nonbiological form of our humanB intelligence still be humanM? If so, then we would indeed have the moral task of moralizing the universe, so to speak. The Vocation of Humanity, as Johann Gottlieb Fichte conceived of it, is to construct “a world ordered and arranged by reason.” [The Vocation of Man, The Liberal Arts Press, Inc., p. 106.] That can only be understood as a world arranged by the ReasonR of HumanityM.
