What If Your Robot Is the Devil?
Patrick Lin
2011-06-15



In robotics, one source of ethical hand-wringing is the increasing autonomy we are giving to machines. Literature and film warn us that it’s a bad idea to let robots make their own choices, since we may be harmed by some of those choices. So, the argument goes, we need to restrict or regulate efforts to endow robots or artificial intelligence (AI) with autonomy.

This position seems reasonable at first glance, but I suggest that it may be inconsistent with how we act in the real world.

Never mind the other harmful technologies we’ve developed, from anthrax to the atomic bomb; there’s something about machine autonomy that gives us extra pause. For instance, in one possible future, the helpful robot servants we employ in our homes may decide that humans are a pox on the world and want to murder us. Or the military robots we create to defend our society could turn against us. This is a kind of treachery we don’t see with other inventions.

Yet, we create autonomous things all the time without this moral anxiety, don’t we? Consider the organic machines we create: children. The worst evils in history have been caused by humans, all of whom start out as children to the parents who created them.

Except for the parents in The Omen and Rosemary’s Baby, we usually don’t struggle with the risk that our kids might turn bad when we decide to bring them into the world. This risk, though very real, is simply ignored.

So how do we explain this ethical schizophrenia? We worry about creating autonomous robots, but not autonomous humans. Robots have very little history of harming innocents or the ecosystem, but we know with certainty that humans -- though impossibly cute as babies -- have a limitless capacity for destruction. As Ralph Waldo Emerson put it, “A child is a curly, dimpled lunatic.” And some adults are too.

Is there a difference that makes a difference?



My suggestion is this: If creating children is morally unproblematic, then so is creating autonomous robots, unless we can identify morally relevant differences between the two acts.

Of course, we instinctively want to defend our right to have children and show that kids are different from autonomous robots. But what exactly is the moral issue with creating robots that is avoided when we create human beings? Or, in other words, when we’re talking about autonomous beings, why is the responsibility of the parent seemingly less than the responsibility of an inventor?

We could perhaps reply that humans offer greater benefits than robots do, and that these benefits outweigh the risks, so having kids doesn’t elicit the same moral panic. But this position is difficult to maintain, as robots are and will be used for equally valuable roles, such as difficult surgery, hunting terrorists, devising scientific theories, caring for our elderly and our children, and even as surrogate partners or friends. Further, the odds that any individual child will generate benefits to society seem to be less certain than the odds of a robot generating benefits, since the robot would be designed to do exactly that.

Again, many things can (and do) go wrong in raising children, so the risks in creating autonomous humans are significant and well known. Some individuals -- for example, Adolf Hitler, Pol Pot, Saddam Hussein -- are directly responsible for millions of deaths and untold suffering. Meanwhile, a single robot might be responsible for tens of deaths, or hundreds or thousands at most. So a risk-versus-reward analysis likely will not do the job in explaining why we ought to be precautionary about robots but not children.

OK, then perhaps we could point out that a robot is artificially created and children are naturally created. But how exactly is this difference ethically relevant? Moreover, not all children are “naturally” created, when we consider fertility treatments, and there is presumably no concern related to the autonomy of those “artificial” beings we bring into existence. Likewise, we could point to an inorganic/organic difference to explain our moral schism, but while that difference is real, it doesn’t seem to make a moral difference. If robots someday could be biologically grown, then this position commits us to treating bio-robots and humans the same, at least on the issue under consideration.

Another defense could be to assert that we have a natural right to have children, perhaps given a divine command to “be fruitful and multiply.” But taking that command seriously now would cause an unsustainable population boom if everyone were to have a dozen or so offspring; and this is unethical to the extent that it would lead to suffering and a sharp drop in quality of life, given severe pressure on family budgets, natural resources, jobs, and so on.

In any case, such a divine command might be a prescription for human procreation (as if we were waiting for permission), but it is not a proscription on creating autonomous machines. The reasons in support of a natural human right to procreate also seem to support a right to create technology and tools; for instance, like having kids, making tools is fulfilling, it helps us better survive, and so on.

Still another defense could be that we cannot help but reproduce, given a biological compulsion to have sex and, in some, a deep-seated need to become a parent; in contrast, we have a choice to make robots or not. But this reply makes a virtue out of necessity, or the well-known mistake of deriving “ought” from “is”: Just because we are biologically driven to do something -- whether it’s to reproduce, fight wars, cheat on spouses, or overeat -- is not a good reason to declare that act as ethical. Anyway, we also seem to be hardwired to develop tools and technologies, in which case the same defense would imply that it is ethical to develop autonomous robots without restriction.

If it appears that we are continually running into dead-ends here, we could “bite the bullet” in at least a couple of ways: We could deny any obligation to tread carefully in developing autonomous robots or AI. But if this implies we are never obligated to avoid manufacturing dangerous or risky products, then we’d want to resist this highly counterintuitive position.

Or we could concede that our unrestricted practice of human reproduction is hypocritical and must be changed -- that is, we should start to seriously consider the risks posed by creating any autonomous beings, whether children or robots. And this may imply an obligation for greater education and parental supervision to ensure children don’t cause harm to individuals or humanity, including possibly the state’s regulation of procreation. It could also imply the need for a social insurance market against the contingency of wretched offspring. But most of us would want to resist those implications as well.

Ultimately, it could be that there is a defensible moral difference between creating children and autonomous robots. But it is not obvious what that difference is, despite our taking it for granted. Our search for that answer can illuminate our ethical responsibility in developing autonomous robots, especially as some fears about robots seem to be a projection of fears about ourselves -- we know what kind of devils we can be.