I Want Autonomous Killer Drones: A Personal View
Marcelo Rinesi
2012-12-05

N.B. This is a personal opinion piece, and doesn't necessarily reflect the position of the IEET. For a contrasting opinion, read IEET Fellow George Dvorsky's article at io9 about a proposed Executive Order Establishing Limits on Autonomous Weapons Capable of Initiating Lethal Force, drafted by IEET Fellow Wendell Wallach of the Yale Interdisciplinary Center for Bioethics (Wallach will be a panelist in the Dec. 10 Terminating the Terminator discussion panel).

One of the arguments most frequently used against Lethal Autonomous Systems (I'll use the vernacular "autonomous killer drones") is their lack of built-in ethical constraints. It is often suggested that we must develop and implement robust ethical self-control systems before deploying any autonomous killer drone. The counterargument is obvious: have you seen what already happens on human-driven battlefields? Empirically, soldiers' ethical constraints are anything but foolproof (naturally so, given their training and the context of war); there's no reason to think even buggy software would be worse, and software, at least, can be debugged and improved.

A more indirect argument along the same lines is that, by removing humans from the "trigger" (which these days is as digitally mediated as any program would be), there is no longer a direct locus of responsibility, which might make atrocities easier. But despite (to put a naive spin on it) official policy in developed armies, civilian collateral deaths and maimings occur all the time, and seldom if ever do the trigger-pushers, or their commanding officers, face charges. This is not to say that atrocities are ubiquitous; they aren't, but it's not the presence of a human pulling the trigger that limits them, or that fails when they do happen.

Perhaps the emotional underpinning of our resistance to autonomous killer drones is simply that we are used to humans killing humans; creating another entity with a certain level of perceived agency (as if standard self-guided missiles didn't already run complex algorithms) and the capability to kill us feels like adding a completely new danger, even if in absolute terms your likelihood of death hasn't changed. This sort of ecological self-protection is understandable, but it makes little sense once made explicit: if you're being killed, the fact that it was a fellow human who did it is scant consolation.

Ultimately, the problem of having a killer drone flying over your head is nothing but the problem of having a killer anything flying over your head. The fact of killing by specifically trained and organized groups of people, with the explicit backing of their societies, is where the locus of ethical concern has always lain, and where it should remain.