
Autonomous weapons: Tightrope balance

By Monika Chansoria, November 23, 2015

When researchers in artificial intelligence released an open letter in July calling for a ban on "offensive autonomous weapons beyond meaningful human control," they specified that a ban might prohibit weapons such as "armed quadcopters" capable of identifying and killing people "meeting certain pre-defined criteria." But the ban would not include "cruise missiles or remotely piloted drones for which humans make all targeting decisions." So it's worth noting that the proposed ban would not prohibit a number of autonomous weapons that have already been deployed—because these weapons are classified as defensive.

They include the US Navy's Phalanx—a "rapid-fire, computer-controlled, radar-guided gun system" that has been in use since 1980, and that the US Army has more recently adopted in a land-based form. Germany's fully automated NBS Mantis defense system can likewise detect, track, and fire on incoming projectiles. And Israel's Iron Dome missile defense system operates autonomously except when, perceiving a threat, it appeals to a human being for a quick decision on whether or not to fire.

Such systems are generally accepted as legitimate tools of war. Fully autonomous offensive weapons, however, are a different matter. They invite difficult questions about whether such weapons can uphold the moral imperative to protect civilian lives during conflict. Yet easily overlooked in this debate is another moral imperative: that of protecting civilians endangered by non-state actors who deliberately perpetrate mass violence and terror against innocents.

In my view, any proposal for banning lethal autonomous weapons must take into account the unconventional, asymmetric, and irregular warfare that non-state transnational actors conduct—and such conflict's effects on civilians. Non-state actors often thrive precisely because they are indistinguishable from local civilian populations. They also thrive by making use of inhospitable terrain such as mountains and deserts, by slipping through porous borders, and by drawing on the help of complicit states or state actors. Militaries can sometimes overcome the advantages that non-state actors enjoy, notably through the optimal use of technologies including unmanned aerial vehicles (supported by good intelligence). But militaries find it very difficult to achieve victory over non-state actors in the conventional sense. Correspondingly, they struggle to protect civilians.

Could fully autonomous weapons with highly sophisticated capabilities change this equation? Might they, far from endangering civilians, save the lives of men, women, and children innocently caught up in violent conflict zones? If autonomous weapons could incapacitate enemy targets while minimizing undesired damage, they would merit serious consideration as weapons to be used in the fight against non-state actors and terrorists.

No existing weapon can properly be described as an offensive autonomous weapon capable of killing legitimate targets while sparing civilians. Today's artificial intelligence, which cannot reproduce human intelligence and judgment, would pose fundamental challenges to civilian safety if deployed on the battlefield. But it's crucial to remember that autonomous weapons technology is an evolving field. Future research and development may make it possible to encode machines with capacities for qualitative judgment that are not possible today. Future technological advancements might allow autonomous weapons to outperform human beings in battlefield situations.

In the end, I favor regulation of autonomous weapons rather than an outright ban on the entire technology. But a blanket ban does not seem likely in any event. The UK Foreign Office, for example, has stated that "[w]e do not see the need for a prohibition" on lethal autonomous weapons because "international humanitarian law already provides sufficient regulation for this area." What's needed, in my opinion, is a regulatory framework that limits the lethality of future autonomous weapons systems. Also needed is research into means (improved programming, for example) that would sharply limit the civilian casualties associated with autonomous weapons.

Ultimately, as with many other aspects of contemporary conflict or war, the most fundamental concern is proportionality. Indeed, I'd argue that lethal autonomous weapons might be considered ethical so long as the collateral damage they inflict isn't out of proportion to their contributions to peace, to security and stability, and to prevention of civilian bloodshed on a mass scale.

 

