Institute for Ethics and Emerging Technologies

The IEET is a 501(c)3 non-profit, tax-exempt organization registered in the State of Connecticut in the United States. Please give as you are able, and help support our work for a brighter future.

Should we stop killer robots? (2) - The Right Reason Objection

By John Danaher
Philosophical Disquisitions

Posted: Jan 18, 2016


This post is the second in a short series looking at the arguments against the use of fully autonomous weapons systems (AWSs). As I noted at the start of the previous entry, there is a well-publicised campaign that seeks to pre-emptively ban the use of such systems on the grounds that they cross a fundamental moral line and fail to comply with the laws of war. I’m interested in this because it intersects with some of my own research on the ethics of robotic systems. And while I’m certainly not a fan of AWSs (I’m not a fan of any weapons systems), I’m not sure how strong the arguments of the campaigners really are.

That’s why I’m taking a look at Purves, Jenkins and Strawser’s (‘Purves et al’) paper ‘Autonomous Machines, Moral Judgment, and Acting for the Right Reasons’. This paper claims to provide more robust arguments against the use of AWSs than have traditionally been offered. As Purves et al point out, the pre-existing arguments suffer from two major defects: (i) they are contingent upon the empirical realities of current AWSs (e.g. concerns about targeting mechanisms and dynamic adaptability) that could, in principle, be addressed; and (ii) they fail to distinguish between AWSs and other forms of autonomous technology like self-driving cars.

To overcome these deficiencies, the authors try to offer two ‘in principle’ objections to the use of AWSs. I covered the first of those in the last post. It was called the ‘anti-codifiability objection’. It argued against the use of AWSs on the grounds that AWSs could not exercise moral judgment and the exercise of moral judgment was necessary in order to comply with the requirements for just war. The reason that AWSs could not exercise moral judgment was because moral judgment could not be codified, i.e. reduced to an exhaustive set of principles that could be programmed into the AI.

I had several criticisms of this objection. First, I thought the anticodifiability thesis about moral judgment was too controversial and uncertain to provide a useful basis for a campaign against AWSs. Second, I thought that it was unwise to make claims about what is ‘in principle’ impossible for artificially intelligent systems, particularly when you ignore techniques for developing AIs that could circumvent the codifiability issue. And third, I thought that the objection ignored the fact that AWSs could be better at conforming with moral requirements even if they themselves didn’t exercise true moral judgment.

To be fair to them, Purves et al recognise and address the last of these criticisms in their paper. In doing so, they introduce a second objection to the use of AWSs. This one claims that mere moral conformity is not enough for an actor in a just war. The actor must also act for the right reason. Let’s take a look at this second objection now.

1. The Right Reason Objection

The second objection has a straightforward logical structure (I’m inferring this from the article; it doesn’t appear in this form in the original text):

  • (1) An actor in a just war cannot simply conform with moral requirements; they must also act for the right moral reasons (‘Right Reason Requirement’).
  • (2) AWSs cannot act for the right moral reasons.
  • (3) Therefore, the use of AWSs is not in compliance with the requirements of just war theory.
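The validity of this reconstruction can be checked mechanically. Here is a minimal sketch in Lean; the predicate names `ActsForRightReasons` and `CompliesWithJustWar` are my own labels for the premises, not terminology from the paper:

```lean
-- Agents and two predicates over them, standing in for the
-- argument's key notions.
variable (Agent : Type)
variable (ActsForRightReasons CompliesWithJustWar : Agent → Prop)

-- Premise 1 (Right Reason Requirement): complying with just war
--   theory requires acting for the right moral reasons.
-- Premise 2: a given AWS does not act for the right moral reasons.
-- Conclusion: that AWS does not comply with just war theory.
example
    (p1 : ∀ a, CompliesWithJustWar a → ActsForRightReasons a)
    (aws : Agent)
    (p2 : ¬ ActsForRightReasons aws) :
    ¬ CompliesWithJustWar aws :=
  fun h => p2 (p1 aws h)  -- modus tollens on premise 1
```

The argument is thus deductively valid; everything turns on whether the two premises are true, which is what the rest of the paper (and this post) is about.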

This is a controversial argument. I’ll have a number of criticisms to offer in a moment. For the time being, I will focus on how Purves et al defend its two main premises.

They spend most of their time on the first premise. As best I can tell, they adduce three main lines of argument in favour of it. The first is based around a thought experiment called ‘Racist Soldier’:

Racist Soldier: “Imagine a racist man who viscerally hates all people of a certain ethnicity and longs to murder them, but he knows he would not be able to get away with this under normal conditions. It then comes about that the nation-state of which this man is a citizen has a just cause for war: they are defending themselves from invasion by an aggressive, neighboring state. It so happens that this invading state’s population is primarily composed of the ethnicity that the racist man hates. The racist man joins the army and eagerly goes to war, where he proceeds to kill scores of enemy soldiers of the ethnicity he so hates. Assume that he abides by the jus in bello rules of combatant distinction and proportionality, yet not for moral reasons. Rather, the reason for every enemy soldier he kills is his vile, racist intent.”

(Purves et al 2015, 860)

It is important to understand what is going on in this thought experiment. The imagined soldier is conforming with all the necessary requirements of just war. He is not killing anybody whom it is impermissible to kill. It just so happens that he kills to satisfy his racist desires. The intuition they are trying to pump with this thought experiment is that you wouldn’t want to allow such racist soldiers to fight in a war. It is not enough that they simply conform with moral requirements. They also need to act for the right moral reasons.

The second line of argument is a careful reading and analysis of the leading writers on just war theory. These range from classical sources such as Augustine to more modern writers such as Michael Walzer and Jeff McMahan. There is something of a dispute in this literature as to whether individual soldiers need to act for the right moral reasons or whether it is only the states who direct the war that need to act for the right moral reasons (the jus ad bellum versus jus in bello distinction). Obviously, Purves et al favour the former view and they cite a number of authors in support of this position. In effect then, this second argument is an argument from authority. Some people think that arguments from authority are informally fallacious. But this is only true some of the time: often an argument from authority is perfectly sound. It simply needs to be weighted accordingly.

The third line of argument expands the analysis to consider non-war situations. Purves et al argue that the right reason requirement is persuasive because it also conforms with our beliefs in more mundane moral contexts. Many theorists argue that motivation makes a moral difference to an act. Gift-giving is an illustration of this. Imagine two scenarios. One in which I give flowers to my girlfriend in order to make her feel better and another in which I give her flowers in order to make a rival for her affections jealous. The moral evaluation of the scenarios varies as a function of my reasons for action. Or so the argument goes.

So much for the first premise. What about the second? It claims that AWSs cannot act for moral reasons. Unsurprisingly, Purves et al’s defence of this claim follows very much along the lines of their defence of the anticodifiability objection. They argue that artificially intelligent agents cannot, in principle, act for moral reasons. There are two leading accounts of what it means to act for a reason (the belief-desire model and the taking as a reason model) and on neither of those is it possible for AWSs to act for a reason:

Each of these models ultimately requires that an agent possess an attitude of belief or desire (or some further propositional attitude) in order to act for a reason. AI possesses neither of these features of ordinary human agents. AI mimics human moral behavior, but cannot take a moral consideration such as a child’s suffering to be a reason for acting. AI cannot be motivated to act morally; it simply manifests an automated response which is entirely determined by the list of rules that it is programmed to follow. Therefore, AI cannot act for reasons, in this sense. Because AI cannot act for reasons, it cannot act for the right reasons.  (Purves et al 2015, 861)

This completes the initial defence of the right reason objection.

2. Evaluating the Right Reason Objection

Is this objection any good? I want to consider three main lines of criticism, some minor and some more significant. The criticisms are not wholly original to me (some elements of them are). Indeed, they are all at least partly addressed by Purves et al in their original paper. I’m just not convinced that they are adequately addressed.

I’ll start with what I take to be the main criticism: the right reason objection does not address the previously-identified problem with the anticodifiability objection. Recall that the main criticism of the anticodifiability objection was that the inability of AWSs to exercise moral judgment may not be a decisive mark against their use. If an AWS was better at conforming with moral requirements than a human soldier (i.e. if it could more accurately and efficiently kill legitimate targets, with less deleterious side effects), then the fact that it was not really exercising moral judgment would be inconclusive. It may provide a reason not to use it, but this reason would not be decisive (all things considered). I think this continues to be true for the right reason objection.

This is why I do not share the intuition they are trying to pump with the Racist Soldier example. I take this thought experiment to be central to their defence of the objection. Their appeals to authority are secondary: they have some utility but they are likely to be ultimately reducible to similar intuitive case studies. So we need to accept their interpretation of the racist soldier in order for the argument to really work. And I’m afraid it doesn’t really work for me. I think the intentions of the racist soldier are abhorrent but I don’t think they are decisive. If the soldier really does conform with all the necessary moral requirements — and if, indeed, he is better at doing so than a soldier that does act for the right reasons — then I fail to see his racist intentions as a decisive mark against him. If I had to compare two equally good soldiers — one racist and one not — I would prefer the latter to the former. But if they are not equal — if the former is better than the latter on a consequentialist metric — then the racist intentions would not be a decisive mark against the former.

Furthermore, I suspect the thought experiment is structured in such a way that other considerations are doing the real work on our intuitions. In particular, I suspect that the real concern with the racist soldier is that we think his racist intentions make him more likely to make moral mistakes in the combat environment. That might be a legitimate concern for AWSs that merely mimic moral judgment, but then the concern is really with the consequences of their deployment and not with whether they act for the right reasons. I also suspect that the thought experiment is rhetorically effective because it works off the well-known Knobe effect. This is a finding from experimental philosophy suggesting that people asymmetrically ascribe moral intentions to negative and positive behaviours. In short, the Knobe effect says I’m more likely to ascribe a negative intention to a decision that leads to a negative outcome than to one that leads to a positive outcome.

To be fair, Purves et al are sensitive to some of these issues. Although they don’t mention the Knobe effect directly, they try to address the problem of negative intentions by offering an alternative thought experiment involving a sociopathic soldier. This soldier doesn’t act from negative moral intentions; rather, his intentions are morally neutral. He is completely insensitive to moral reason. They still think we would have a negative intuitive reaction to this soldier’s actions because he fails to act for the right moral reasons. They argue that this thought experiment is closer to the case of the AWS. After all, their point is not that an AWS will act for bad moral reasons but that it will be completely incapable of acting for moral reasons. That may well be correct but I think that thought experiment also probably works off the Knobe effect and it still doesn’t address the underlying issue: what happens if the AWS is better at conforming with moral requirements? I would suggest, once again, that the lack of appropriate moral intentions would not be decisive in such a case. Purves et al eventually concede this point (p. 867) by claiming that theirs is a pro tanto, not an all things considered, objection to the use of AWSs. But this is to concede the whole game: their argument is now contingent upon how good these machines are at targeting the right people, and so not inherently more robust than the arguments they initially criticise.

As I say, that’s the major line of criticism. The second line of criticism is simply that I don’t think they do enough to prove that AWSs cannot act for moral reasons. Their objection is based on the notion that AWSs cannot have propositional attitudes like beliefs, desires and intentions. This is a long-standing debate in the philosophy of mind and AI. Suffice to say I’m not convinced that AIs could never have beliefs and desires. I think this is a controversial metaphysical claim and I think relying on such claims is unhelpful in the context of supporting a social campaign against killer robots. Indeed, I don’t think we ever know for sure whether other human beings have propositional attitudes. We just assume they do based on analogies with our own inner mental lives and inferences from their external behaviour. I don’t see why we couldn’t end up doing the same with AWSs.

The third line of criticism is one that the authors explore at some length. It claims that their objection rests on a category mistake, i.e. the mistaken belief that human moral requirements apply to artificial objects. We wouldn’t demand that a guided missile or a landmine act for the right moral reason, would we? If so, then we shouldn’t demand the same from an AWS. Purves et al respond to this in a couple of ways. I’m relatively sympathetic to their responses. I don’t think the analogy with a guided missile or a landmine is useful. We don’t impose moral standards on such equipment because we don’t see it as being autonomous from its human users. We impose the moral standards on the humans themselves. The concern with AWSs is that they would be morally independent from their human creators and users. There would, consequently, be a moral gap between the humans and the machines. It seems more appropriate to apply moral standards in that gap. Still, I don’t think that’s enough to save the objection.

That brings me to the end of this post. I’m going to do one final post on Purves et al’s paper. It will deal with two remaining issues: (i) can we distinguish between AWSs and other autonomous technology? and (ii) what happens if AWSs meet the requirements for moral agency? The latter of these is something I find particularly interesting. Stay tuned.

John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at Philosophical Disquisitions. You can follow him on twitter @JohnDanaher.

