About a month ago I wrote an article entitled “The Ethics of Killer Robots” for the Responsible Nanotechnology blog.
In that article, I said:
Military robots already have been deployed by the United States in the occupation of Iraq, and in growing numbers, as this recent article in The New Atlantis makes clear:
When U.S. forces went into Iraq, the original invasion had no robotic systems on the ground. By the end of 2004, there were 150 robots on the ground in Iraq; a year later there were 2,400; by the end of 2008, there were about 12,000 robots of nearly two dozen varieties operating on the ground in Iraq. As one retired Army officer put it, the “Army of the Grand Robotic” is taking shape.
Not only are the quantities of robots increasing, but the varieties of their usage and capabilities are also expanding:
It isn’t just on the ground: military robots have been taking to the skies—and the seas and space, too. And the field is rapidly advancing. The robotic systems now rolling out in prototype stage are far more capable, intelligent, and autonomous than ones already in service in Iraq and Afghanistan. But even they are just the start.
Although the concept of full-scale robotic war still strikes some people as unrealistically futuristic or even science fictional, it’s clear that in fact the future is now. A report this week from the McClatchy Newspapers says:
The unmanned bombers that frequently cause unintended civilian casualties in Pakistan are a step toward an even more lethal generation of robotic hunter-killers that operate with limited, if any, human control.
The Defense Department is financing studies of autonomous, or self-governing, armed robots that could find and destroy targets on their own. On-board computer programs, not flesh-and-blood people, would decide whether to fire their weapons.
“The trend is clear: Warfare will continue and autonomous robots will ultimately be deployed in its conduct,” Ronald Arkin, a robotics expert at the Georgia Institute of Technology in Atlanta, wrote in a study commissioned by the Army.
“The pressure of an increasing battlefield tempo is forcing autonomy further and further toward the point of robots making that final, lethal decision,” he predicted. “The time available to make the decision to shoot or not to shoot is becoming too short for remote humans to make intelligent informed decisions.”
Belatedly, perhaps, ethical issues surrounding this development are attracting some attention. A recent opinion piece in New Scientist by A. C. Grayling, a philosopher at the University of London, decries the full-speed-ahead mentality that seems to dominate the robot-military-industrial complex:
In the next decades, completely autonomous robots might be involved in many military, policing, transport and even caring roles. What if they malfunction? What if a programming glitch makes them kill, electrocute, demolish, drown and explode, or fail at the crucial moment? Whose insurance will pay for damage to furniture, other traffic or the baby, when things go wrong? The software company, the manufacturer, the owner?
The civil liberties implications of robot devices capable of surveillance involving listening and photographing, conducting searches, entering premises through chimneys or pipes, and overpowering suspects are obvious. Such devices are already on the way. Even more frighteningly obvious is the threat posed by military or police-type robots in the hands of criminals and terrorists.
There needs to be a considered debate about the rules and requirements governing all forms of robot devices, not a panic reaction when matters have gone too far. That is how bad law is made—and on this issue time is running out.
We agree with Grayling’s call for an urgent, considered debate, now, before it’s too late.
And when you factor in the possibility—or probability—of increased power and functionality of killer robots based on advanced nanotechnology, then the concerns he and others have expressed gain added severity.
It bears repeating: The future is now.