Weapons systems are getting smarter, and people are getting nervous. In May, Human Rights Watch called for the outlawing of autonomous military machines with lethal capabilities. The group, which has organized an international "Campaign to Stop Killer Robots," issued the statement as 87 member states of the United Nations Convention on Certain Conventional Weapons met in Geneva to discuss the legal and ethical implications of autonomous military machines.
This follows the Pentagon's decision in 2012 to set a five- to 10-year moratorium on developing autonomous weapons systems. The U.N. Special Rapporteur on extrajudicial, summary or arbitrary executions has likewise called on other states to put their research on hold. For now, only a handful of countries—Cuba, Pakistan, Egypt and Ecuador—and the Vatican support banning autonomous military machines, but that may change when the U.N. revisits the issue in November.
Certainly the idea of killer robots is unsettling, and proceeding with caution is a good idea—so long as we don't completely stop exploring this technology. Autonomy is already a feature of the modern military. Experimental drones can now fly themselves, pick out their own landing zones and travel in mutually supporting swarms. A logical next step is to give these unmanned systems the power to fire on their own, delivering weapons on target faster and with greater precision than a human ever could.
The argument for the ban goes like this: War, though mind-shatteringly nasty, is still not a moral free-for-all. Under international humanitarian law, we expect combatants to do their best to spare the lives of civilians. In the pre-al Qaeda days of uniforms, insignia and organized front lines, it was easy to distinguish legitimate from illegitimate targets. But in our age of amorphous militants and low-intensity conflict, making that distinction is often challenging for a human soldier, the critics say, and it would be nearly impossible for a robot.
Adhering to international humanitarian law also means making moral judgments under chaotic conditions. A machine will never be able to assign a value to, say, bombing a bridge and weigh its strategic importance against the cost borne by the local population. Equally problematic, machines lack basic human empathy. So an autonomous robot would respond to a 12-year-old holding a weapon very differently than a soldier would.
These are all powerful arguments, but there is something odd about closing the door on a technology simply based on what it may and may not be able to do. Shouldn't we be testing these suppositions first? Right now, there is far too much "Terminator" sci-fi coloring the debate. At this stage, no one is discussing an android stalking an urban landscape and reading threats based on human facial expressions or something equally subtle.
Autonomous weapons systems of the near future will be assigned the easy targets. They will pick off enemy fighter jets, warships and tanks—platforms that usually operate at a distance from civilians—or they will return fire when being shot at. None of this is a technical stretch. Combat pilots already rely on machines when they have to hit a target beyond visual range. Likewise, some ground-combat vehicles have shot-detection systems that slew guns in the direction of enemy fire (although we'd probably want a robot to rely on something more than acoustic triangulation before unloading).
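The acoustic triangulation mentioned above can be illustrated with a toy bearing estimate from the time-difference-of-arrival of a muzzle blast at two microphones. The spacing, timing, and speed-of-sound figures here are illustrative assumptions, not parameters of any fielded shot-detection system.

```python
import math

# Hypothetical sketch of acoustic shot detection: estimate the bearing of
# gunfire from the time-difference-of-arrival (TDOA) of a muzzle blast at
# a pair of microphones. All numbers below are illustrative assumptions.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def bearing_from_tdoa(tdoa_s: float, mic_spacing_m: float) -> float:
    """Angle of arrival in degrees, measured from the array's broadside,
    under the far-field plane-wave assumption."""
    ratio = SPEED_OF_SOUND * tdoa_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp measurement noise into asin's domain
    return math.degrees(math.asin(ratio))

# A blast arriving 0.5 ms earlier at one microphone of a 1 m pair
# works out to roughly 10 degrees off broadside:
print(round(bearing_from_tdoa(0.0005, 1.0), 1))
```

A real system would fuse many microphone pairs (and often the supersonic crack of the bullet itself) to resolve range as well as bearing, which is part of why acoustic cues alone are a thin basis for returning fire.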
As for the moral judgment objection, machines may not have to be philosophers to do the right thing. Ron Arkin, a roboticist at the Georgia Institute of Technology, posits a scenario in which a human commander determines the military necessity of an operation; the machine then goes out and identifies targets; and right before lethal engagement, a separate software package called the "ethical governor" measures the proposed action against the rules of engagement and international humanitarian law. If the action is illegal, the robot won't fire.
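The division of labor Mr. Arkin describes, with a final software check interposed between target identification and weapons release, can be sketched roughly as below. The class names, rule predicates, and data fields are hypothetical assumptions for illustration, not Mr. Arkin's actual design.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative sketch of an "ethical governor": a last-stage check that
# vetoes a proposed engagement unless it satisfies every encoded rule.
# All names and rules here are hypothetical, not Arkin's implementation.

@dataclass
class Engagement:
    target_type: str            # e.g. "tank", "ambulance"
    near_protected_site: bool   # hospital, school, place of worship
    expected_collateral: int    # estimated civilian harm; 0 = none

# Each constraint encodes one rule of engagement or one prohibition of
# international humanitarian law; True means the rule is satisfied.
Constraint = Callable[[Engagement], bool]

CONSTRAINTS: List[Constraint] = [
    lambda e: e.target_type not in {"ambulance", "civilian"},  # distinction
    lambda e: not e.near_protected_site,                       # protected sites
    lambda e: e.expected_collateral == 0,                      # strict proportionality
]

def ethical_governor(proposed: Engagement) -> bool:
    """Authorize fire only if every constraint passes; otherwise veto."""
    return all(rule(proposed) for rule in CONSTRAINTS)

tank = Engagement("tank", near_protected_site=False, expected_collateral=0)
ambulance = Engagement("ambulance", near_protected_site=False, expected_collateral=0)
print(ethical_governor(tank))       # engagement permitted
print(ethical_governor(ambulance))  # robot withholds fire
```

The design choice worth noticing is that the governor is a veto, not a planner: it never proposes actions, it only blocks illegal ones, which is what lets it sit downstream of both the human commander and the targeting software.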
Mr. Arkin's ethical-governor concept has been met with much skepticism. But let's assume for the moment that warbots, unhampered by feelings of fear, anger or revenge, can outperform human soldiers in keeping the rate of civilian casualties low. (We'll know for sure only if such a system is developed and tested.) If the goal of international humanitarian law is to reduce noncombatant suffering in wartime, then using sharpshooting robots would be more than appropriate; it would be a moral imperative.
Anticipating this utilitarian argument, disarmament activists contend that, real-life consequences aside, it is inherently wrong to give a machine the power of life or death over a human being. To kill people with such a self-propelled contraption is to treat them like "vermin," as one activist put it. But why is raining bombs down on someone from 20,000 feet any better? And does intimacy with one's killer really make death somehow more humane?
Another, related objection goes to the issue of responsibility. Predator drones, activists note, have a human crew (albeit one ensconced in an air-conditioned trailer stateside), so there is someone to blame if something goes wrong.
But in the case of a fully autonomous system, who is liable for an unlawful killing? Is it the field commander? The software engineer? The defense contractor that performed the integration work? These are serious questions but are hardly showstoppers or even unique to killer robots. One could ask similar questions about injuries or deaths caused by self-driving cars.
But even if more states were to back an international treaty, enforcing it would prove nearly impossible. After all, it is hard to ban a weapon you can't quite define. Truth be told, modern militaries already employ a variety of self-targeting weapons. For example, the Captor sea mine hunts submarines on its own, while the ship-mounted Phalanx gun automatically shoots down sea-skimming missiles. Are these "killer robots" too? If not, why not?
Moreover, facile appeals to precedent notwithstanding, trying to enforce a ban on battlefield robots would be nothing like banning lasers that permanently blind or chemical weapons. The military machines wouldn't leave telltale, a-robot-did-this marks on their victims. A bullet wound is a bullet wound, whether made by man or machine, so a medical forensics team would be at a loss to determine whether a country had violated a killer-robots ban. Likewise, what makes a machine autonomous is its software, and that is buried deep within the system. Trying to distinguish a robot from a drone is like guessing what apps are on a stranger's smartphone by looking at its protective case.
Ultimately, a ban on lethal autonomous systems, in addition to being premature, may be feckless. Better to test the limits of this technology first to see what it can and cannot deliver. Who knows? Battlefield robots might yet be a great advance for international humanitarian law.