Killer robots in war

This thought experiment is not as far-fetched as it may seem at first glance. Many experts believe that we will be able, not too many decades down the track, to build a device with the capacities that I’ll be describing. My Generation Y philosophy/international studies students may still be young enough to be involved in real-world decisions when this sort of technology is available. Even I may still be alive to vote on it, if it’s an election issue in 30 or 40 years’ time. Though it may be at an early stage, the necessary research is going on, even now, in such places as the US military’s Defense Advanced Research Projects Agency (DARPA).

Imagine that the T-1001 is a robotic fighting device with no need for a human being in the decision-making loop. Note that it does not possess any sentience, in the sense of consciousness or ability to suffer. It cannot reflect on or change the fundamental values that have been programmed into it. However, it is programmed to make independent decisions in the field; in that sense, it can operate autonomously, though it would not qualify as an entity with full moral autonomy in a sense that Kant, for example, might recognise. It has some limited ability to learn from experience and upgrade its programming.

The T-1001 is programmed to act within the traditional jus in bello rules of just war theory and/or similar rules existing in international law and military procedures manuals. Those rules include discriminating between combatants and non-combatants: that is, civilians and other non-combatants, such as prisoners of war, are immune from attack; however, there is some allowance for “collateral damage”, relying on modern versions of the (admittedly dubious) doctrine of double effect. The T-1001 is not equipped with weapons that are considered evil in themselves (because they are indiscriminate or cruel). Its programming requires it to avoid all harms that are disproportionate to the reasonably expected military gains.

To accomplish all this, the T-1001’s designers have given it sophisticated pattern-recognition software and an expert program that makes decisions about whether or not to attack. It can distinguish effectively between combatants and non-combatants in an extensive range of seemingly ambiguous situations. It can weigh up military objectives against probable consequences, and is programmed to make decisions within the constraints of jus in bello (or similar requirements). As mentioned above, it does not use weapons that are evil in themselves, and does not attack non-combatants except strictly in accordance with an elaborate version of the doctrine of double effect that is meant to take account of concerns about collateral loss of life. It uses a general rule of proportionality. Indeed, the T-1001’s calculations, when it judges proportionality issues, consistently lead to more conservative decisions than those made by soldiers. That is, its decisions are more conservative in the sense that it kills fewer civilians, causes less overall death and destruction, and produces less suffering than would be caused by human soldiers making the same sorts of decisions in comparable circumstances.
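
To make the idea a little more concrete, here is a rough Python sketch of the kind of constraint-checking such a device might perform before attacking. It is purely illustrative: the class, the fields, and the threshold are invented for this post, and nothing here describes a real weapons system.

    # Hypothetical sketch only: how jus in bello constraints might be checked
    # before an attack decision. All names, fields, and thresholds are
    # invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Target:
        is_combatant: bool             # output of the (assumed) pattern-recognition stage
        military_value: float          # estimated military gain, scaled 0.0 to 1.0
        expected_civilian_harm: float  # estimated harm to non-combatants, scaled 0.0 to 1.0

    # A deliberately strict margin, so the device is more conservative than a human soldier.
    PROPORTIONALITY_MARGIN = 0.5

    def may_attack(target: Target) -> bool:
        """Return True only if an attack satisfies the programmed constraints."""
        # Discrimination: never deliberately target non-combatants.
        if not target.is_combatant:
            return False
        # Proportionality (a crude stand-in for the doctrine of double effect):
        # incidental harm must be clearly outweighed by the expected military gain.
        return target.expected_civilian_harm <= target.military_value * PROPORTIONALITY_MARGIN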

At the same time, it is an extremely effective combatant: faster, more accurate, and more robust than any human being. With declining birthrates in Western countries, and a shortage of young enlistees, it is a very welcome addition to the military capacity of countries such as the US, the UK, and Australia.

In short, the T-1001 is more effective than human soldiers when it comes to traditional combat responsibilities. It does more damage to legitimate military targets, but causes less innocent suffering/loss of life. Because of its superior pattern-recognition abilities, its immunity to psychological stress, and its perfect “understanding” of the terms of engagement required of it, the T-1001 is better than human in its conformity to the rules of war.

One day, however, despite all the precautions I’ve described, something goes wrong and a T-1001 massacres 100 innocent civilians in an isolated village within a Middle Eastern war zone. Who (or what) is responsible for the deaths? Do you need more information to decide?

Given the circumstances, was it morally acceptable to deploy the T-1001? Is it acceptable for organisations such as DARPA to develop such a device?

I discussed a version of this scenario with my students this week. It seemed that, with some misgivings, the majority favoured deploying the T-1001, but perhaps only if a human was in the decision-making loop, at least for the purpose of shutting it down if, despite everything, it started to make an obvious error that would be a war crime if done by a human soldier. Presumably this would mean an automatic shut-down if it lost contact with the human being in the loop, such as by destruction of its cameras or its signal to base.
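
Again purely for illustration, here is a minimal Python sketch of the fail-safe my students seemed to have in mind, assuming the device receives a regular “heartbeat” signal from the human in the loop and disarms itself when that contact is lost. The names and the timeout value are my own inventions.

    # Hypothetical sketch of an automatic shut-down when contact with the
    # human operator is lost. Names and timeout are illustrative only.
    import time

    HEARTBEAT_TIMEOUT = 2.0  # seconds without operator contact before disarming

    class HumanInTheLoopFailSafe:
        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.armed = True

        def receive_heartbeat(self):
            """Called whenever a signal from the human operator arrives."""
            self.last_heartbeat = time.monotonic()

        def check(self):
            """Disarm automatically if the operator link has gone silent."""
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
                self.armed = False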

Although the scenario postulates that the T-1001 is actually less likely to commit war crimes (or the equivalent) than a human soldier, we can’t guarantee that it will never confront a situation that confuses it. It wouldn’t be nice if something like this started to run amok. Still, soldiers or groups of soldiers can also go crazy; in fact they are more likely to. Remember My Lai. But does a moral problem arise over the fact that, unless there’s somehow a human being in the loop, any unjustified civilian deaths that it causes are unlike other deaths in war? It seems hard to call them “accidental”, but nor can they easily be sheeted home as any individual’s responsibility. Is it implicit in our whole concept of jus in bello that that kind of situation must not be allowed to eventuate?

What would you do if offered the chance to deploy this military gadget on the battlefield? Assume that you are fighting a just war.

Russell Blackford Ph.D. is a fellow of the IEET, an attorney, science fiction author and critic, philosopher, and public intellectual. Dr. Blackford serves as editor-in-chief of the IEET's Journal of Evolution and Technology. He lives in Newcastle, Australia, where he is a Conjoint Lecturer in the School of Humanities and Social Science at the University of Newcastle.



COMMENTS

Why are people always bothered that this will be in war? I am far more worried a T-1001 will do precisely the same in a local fast food store.

Not run amok, but displace jobs, and implicitly change the value system on which our society is constructed.

