Dialoguing with the US Military on the Ethics of Battlebots
Ben Goertzel
2009-12-15 00:00:00

Part of me wanted to bring a guitar and serenade the crowd (consisting of perhaps 50% uniformed officers) with “Give Peace a Chance” by John Lennon and “Masters of War” by Bob Dylan … but due to the wisdom of my 43 years of age I resisted the urge ;-p

Anyway, the world seems very different than it did in the early 1970s when I accompanied my parents on numerous anti-Vietnam-war marches. I remain generally anti-violence and anti-war, but my main political focus now is on encouraging a smooth path toward a positive Singularity. To the extent that military force may be helpful toward achieving this end, it has to be considered a potentially positive thing….

My talk didn’t cover any new ground (to me); after some basic transhumanist rhetoric I discussed my notion of different varieties of ethics as corresponding to different types of memory (declarative ethics, sensorimotor ethics, procedural ethics, episodic ethics, etc.), and the need for ethical synergy among different ethics types, in parallel with cognitive synergy among different memory/cognition types. For the low-down on this see a previous blog post on the topic.
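For readers who want the flavor of the idea without clicking through, here is a minimal toy sketch in Python. It is entirely illustrative; none of the class or function names come from my talk. It just caricatures the notion that an overall ethical judgment could be assembled from separate memory-type-specific ethics modules, each weighing in according to how confident it is in the current situation:

```python
# Illustrative sketch only: hypothetical names, not code from the talk.
# Each "ethics module" corresponds to one memory type (declarative, episodic,
# procedural, sensorimotor, ...) and returns a judgment plus a confidence;
# "ethical synergy" is caricatured here as a confidence-weighted blend.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class EthicsModule:
    name: str                            # e.g. "declarative", "episodic"
    judge: Callable[[dict], float]       # situation -> score in [-1, 1]
    confidence: Callable[[dict], float]  # situation -> weight in [0, 1]

def synergistic_judgment(situation: dict, modules: Dict[str, EthicsModule]) -> float:
    """Blend each module's judgment, weighted by its situational confidence."""
    scored = [(m.judge(situation), m.confidence(situation)) for m in modules.values()]
    total = sum(c for _, c in scored)
    if total == 0:
        return 0.0  # no module feels competent in this situation
    return sum(j * c for j, c in scored) / total
```

The real point, of course, is that the modules should inform and correct one another, not merely be averaged; the blend above is just the simplest way to picture multiple ethics types contributing to one decision.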

But some of the other talks and lunchroom discussions were interesting to me, as the community of military officers is rather different from the circles I usually mix in…

One of the talks before mine was a prerecorded talk (robo-talk?) on whether it’s OK to make robots that decide when/if to kill people, with the basic theme of “It’s complicated, but yeah, sometimes it’s OK.”

(A conclusion I don’t particularly disagree with: to my mind, if it’s OK for people to kill people in extreme circumstances, it’s also OK for people to build robots to kill people in extreme circumstances. The matter is complicated, because human life and society are complicated.)

(As the hero of the great film Kung Pow said, “Killing is bad. Killing is wrong. Killing is badong!” … but, even Einstein had to recant his radical pacifism in the face of the extraordinary harshness of human reality. Harshness that I hope soon will massively decrease as technology drastically reduces material scarcity and gives us control over our own motivational and emotional systems.)

Another talk argued that “AIs making lethal decisions” should be outlawed by international military convention, much as chemical and biological weapons and eye-blinding lasers are now outlawed…. One of the arguments for this sort of ban was that, without it, one would see an AI-based military arms race.

As I pointed out in my talk, it seems that such a ban would be essentially unenforceable.

For one thing, missiles and tanks and so forth are going to be controlled by automatic systems of one sort or another, and where the “line in the sand” is drawn between lethal decisions and other decisions is not going to be terribly clear. If one bans a robot from making a lethal decision, but allows it to make a decision to go into a situation where making a lethal decision is the only rational choice, then what is one really accomplishing?

For another thing, even if one could figure out where to draw the “line in the sand,” how would it possibly be enforced? Adversary nations are not going to open up their robot control hardware and software to each other, to allow checking of what kinds of decisions robots are making on their own without a “human in the loop.” It’s not an easy thing to check, unlike use of nukes or chemical or biological weapons.

I contended that just as machines will eventually be smarter than humans, if they’re built correctly they’ll eventually be more ethical than humans — even according to human ethical standards. But this will require machines that approach ethics from the same multiple perspectives that humans do: not just based on rules and rational evaluation, but based on empathy, on the wisdom of anecdotal history, and so forth.

There was some understandable concern in the crowd that, if the US held back from developing intelligent battlebots, other players might pull ahead in that domain, with potentially dangerous consequences…. With this in mind, there was interest in my report on the enthusiasm, creativity and ample funding of the Chinese AI community these days. I didn’t sense much military fear of China itself (China and the US are rather closely economically tied, making military conflict between them unlikely), but there seemed to be some fear of China distributing its advanced AI technology to other parties that might be hostile.

I had an interesting chat with a fighter pilot, who said that there are hundreds of “rules of engagement” to memorize before a flight, and they change frequently based on political changes. Since no one can really remember all those rules in real-time, there’s a lot of intuition involved in making the right choices in practice.

This reminded me of a prior experience making a simulation for a military agency … the simulated soldiers were supposed to follow numerous rules of military doctrine. But we found that when they did, they didn’t act much like real soldiers — because the real soldiers would deviate from doctrine in contextually appropriate ways.

The pilot drew the conclusion that AIs couldn’t make the right judgments because doing so depends on combining and interpreting (he didn’t say bending, but I bet it happens too) the rules based on context. But I’m not so sure. For one thing, an AI could remember hundreds of rules and rapidly apply them in a particular situation — that is, it could do a better job of declarative-memory-based battle ethics than any human. In this context, humans compensate for their poor declarative-memory-based ethics [and in some cases transcend declarative-memory-based ethics altogether] with superior episodic-memory-based ethics (contextually appropriate judgments based on their life experiences and associated intuitions). But, potentially, an AI could combine this kind of experiential judgment with superior declarative ethical capability, thus achieving a better overall ethical functionality….
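To make that a bit more concrete (and this is just a toy sketch in Python; the rule representation, case library, and scoring below are hypothetical, not anything presented at the workshop), one can picture an agent that first checks a situation against an exhaustive list of explicit rules of engagement, and only then weighs it against remembered past cases:

```python
# Hypothetical sketch: combining declarative rule-checking (something an AI can
# do exhaustively and fast) with episodic, case-based judgment (similarity to
# past situations and how well they turned out). Purely illustrative.

from typing import Callable, List, Tuple

Rule = Callable[[dict], bool]  # returns True if the rule forbids acting here
Case = Tuple[dict, float]      # (past situation, how well acting turned out)

def similarity(a: dict, b: dict) -> float:
    """Crude feature-overlap similarity between two situations."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / (len(keys) or 1)

def permitted(situation: dict, rules: List[Rule], cases: List[Case],
              experience_threshold: float = 0.0) -> bool:
    # Declarative check: apply every rule of engagement, with no memory lapses.
    if any(rule(situation) for rule in rules):
        return False
    # Episodic check: weight past outcomes by similarity to the current situation.
    if cases:
        score = sum(similarity(situation, past) * outcome for past, outcome in cases)
        return score >= experience_threshold
    return True
```

Whether any such combination would actually yield the contextual judgment the pilot was describing is exactly the open question; the sketch only shows that the two kinds of ethical memory are not mutually exclusive in a machine.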

One thing that was clear is that the US military is taking the diverse issues associated with battle AI very seriously … and soliciting a variety of opinions from people all across the political spectrum … even including out-there transhumanists like me. This sort of openness to different perspectives is certainly a good sign.

Still, I don’t have a great gut feeling about superintelligent battlebots. There are scenarios where they help bring about a peaceful Singularity and promote overall human good … but there are a lot of other scenarios as well.

My strong hope is that we can create peaceful, benevolent, superhumanly intelligent AGI before smart battlebots become widespread.

My colleagues and I — among others — are working on it.