
Banning autonomous weapons: Impractical and ineffective

By Paulo E. Santos, January 19, 2016

Computers have long outperformed humans at certain functions that are perceived to require "intelligence." A famous early example is the Bombe machine developed during World War II at Bletchley Park, which allowed the United Kingdom to decipher messages encoded by the German military's Enigma machines. In 1997, the IBM computer Deep Blue beat world chess champion Garry Kasparov in a six-game match. In 2011, IBM's Watson—which "uses natural language processing and machine learning to reveal insights from large amounts of unstructured data"—appeared on the television quiz show Jeopardy! and outplayed a pair of former champions. So I reject my colleague Heather Roff's Round Two suggestion that I had contradicted myself by writing, on the one hand, that it's likely impossible for artificial intelligence ever to achieve human-level intelligence; and on the other hand, that autonomous weapons might in the future perform some military functions better than human combatants can. From my perspective, no contradiction exists.

In a similar vein, Roff rejected an argument by Monika Chansoria, this roundtable's third participant, that autonomous weapons might become useful in the fight against terrorism. According to Roff, machines will never be capable of distinguishing terrorists from civilians because such a capability would require "artificial intelligence so sophisticated that it exceeds human intelligence where the ability to make certain distinctions is concerned." But some findings suggest that state-of-the-art face recognition algorithms could outperform humans in matching face pairs. Indeed, recognizing faces with great precision under varying observation conditions is precisely the capability that would allow autonomous weapons to combat terrorism effectively. To be sure, the algorithms that underlie machine perception still face a number of limitations, such as an inability to interpret fast-changing situations. But I see no reason why such obstacles can't be overcome in the future (even if, in the end, machine perception could be better deployed in surveillance systems than in weaponized machines).
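To make the face-pair claim concrete: modern face verification systems map each face image to a numerical embedding and declare two images a match when the embeddings are sufficiently similar. The minimal sketch below illustrates that idea only; the pixel-based embedding and the 0.6 threshold are stand-in assumptions, not the method of any deployed system, which would instead use a deep network trained on large face datasets.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in embedding for illustration: flatten the pixels and normalize.
    A real system would use a trained deep network that places images of the
    same person close together in embedding space."""
    vector = image.astype(float).ravel()
    return vector / (np.linalg.norm(vector) + 1e-12)

def faces_match(image_a: np.ndarray, image_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Declare a match when the cosine similarity of the two embeddings
    exceeds a tuned threshold (0.6 here is an arbitrary placeholder).
    Assumes same-sized, aligned face crops."""
    similarity = float(np.dot(embed_face(image_a), embed_face(image_b)))
    return similarity >= threshold
```

The hard part in practice is not this comparison step but producing embeddings that remain stable under the varying lighting, pose, and image quality mentioned above, which is exactly where current systems still struggle.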

The real problem with deploying autonomous lethal systems to combat terrorism is that killing suspected terrorists—while denying them the right to a fair trial—would amount to state assassination. And in any event the very concept of "terrorism" is ideologically charged. The independence movements in the Americas during the 18th and 19th centuries, for example, could have been interpreted as terrorist movements in the European capitals of the day.

Nuanced undertaking. Roff argues for a ban on lethal autonomous weapons. As a pacifist, I agree that, ideally, an outright ban is the best approach. But so much automation has already been integrated into weapons design that banning lethal autonomous weapons seems akin to stopping the development of warfare itself—a practical impossibility. And a ban, even if instituted, would likely be ineffective (and might even qualify as naive). Suppose a ban were implemented under conditions similar to those described in last year's open letter on autonomous weapons signed by artificial intelligence and robotics researchers. The development of fully autonomous lethal weapons would be outlawed—but remote-controlled killing machines, cruise missiles, and other weapons with various levels of automation would not. In that situation, how could the international community be certain that a remotely controlled weapon deployed in conflict was not entirely controlled by an artificial agent? A weapon's interface need not change according to whether the agent controlling it is human or artificial. And humans could oversee a weapon's actions in either case. The difference is that in one case a human would make targeting decisions, and in the other an artificial intelligence would.

This is one reason I prefer strong regulation of autonomous weapons over an outright ban on the technology. Regulation would provide the tools necessary for analyzing and understanding increased automation in warfare. It would impose constraints on the development and use of autonomous weapons. And it would strike a blow against dehumanized killing and state-sponsored assassination.

If regulation is the correct course, the question becomes how to alter international humanitarian law and human rights law, which now govern only human agents, so that they can cope with automation in warfare. To be sure, this would be a nuanced undertaking, not a trivial one. But literature upon which discussions could be based already exists. It's time to get started on this project—instead of chasing a ban that will probably never be instituted and would likely be ineffective if it were.

 

