
Distinguishing autonomous from automatic weapons

By Heather Roff, February 9, 2016

My roundtable colleagues Paulo E. Santos and Monika Chansoria both argue for regulating rather than banning autonomous weapons. But they never define precisely what they would regulate. This is a troublesome oversight: anyone arguing for the regulation of weapons, or of their actions, ought to have a very clear idea of what such regulation entails.

Autonomous weapons, according to the US Defense Department, are weapons that select a target and fire without intervention from a human operator. But what exactly does "select" mean? How about "intervention"? These questions are more subtle than they seem.

"Select" could mean scanning a particular space for a sensor input—say, a radar signature or a facial image. But in that case the weapon is not selecting a target. Rather, it is hunting for a preselected target. A human has actually selected the target, either by programming the target parameters or by identifying a target object or target area. A weapon of this sort isn't truly autonomous; it's automatic.

Then again, "select" could refer to the mere act of sensing a target. But modern militaries would find such a reading problematic. Many existing weapons systems—cruise missiles, counter-rocket and mortar defense systems, torpedoes, and sea mines—sense targets and fire on them. It is highly unlikely that any state would characterize these systems as autonomous.

So what does distinguish autonomous weapons from automatic weapons—and therefore subject them to regulation or prohibition? I would answer this question by distinguishing sophisticated automatic weapons from limited learning autonomous weapons systems.

Sophisticated automatic weapons are incapable of learning or of changing their goals. But because of their mobility and, in some cases, their autonomous navigation capacities, they are capable of wreaking havoc on civilian populations. Further, because they cannot uphold the principles of necessity, precaution, and proportionality, they would most likely be used as anti-materiel weapons; it is unlikely they would be used as anti-personnel weapons.

Limited learning weapons, meanwhile, are capable both of learning and of changing their sub-goals while deployed. They truly select a target among a range of objects or persons. In short, they pursue military objectives—just as soldiers decide whether to fire on a person, vehicle, or building, or how best to "take a hill." These are the truly autonomous weapons systems. (No state, by the way, has come out in favor of using autonomous weapons against people. Even states that oppose a ban on or regulation of autonomous weapons have maintained that autonomous weapons systems can only be used in "operationally appropriate situations" in "uncluttered environments." So Chansoria’s suggestion that autonomous weapons could be used in counterterrorism operations has no support in diplomatic or military circles.)

My colleagues suggest that I have denied the potential of artificial intelligence to surpass certain human capabilities, or have denied that artificial intelligence is more suited to certain tasks than humans are. I don't deny any such thing—which is precisely why I worry about the risks that limited learning weapons would pose if developed and fielded. These risks—which include changing the face not merely of war but also of peacetime civilian safety and freedom—are so large that the weapons posing them must be banned outright. And before anyone applauds, as Chansoria seems to do, future weapons capable of "qualitative judgment," it's best to remember that "qualitative judgment" could only emerge after autonomous technologies had passed through an arduous, dangerous middle ground of "limited" intelligence and little judgment.

Hard questions. So what, on a practical level, should be done about the weapons systems considered in this roundtable?

Where sophisticated automatic weapons are concerned, governments must think carefully about whether these weapons should be deployed in complex environments. States should institute regulations on how they can be used. But truly autonomous systems—limited learning or even more sophisticated weapons—ought to be banned. Their use would carry enormous risk for civilians; might escalate conflicts; would likely provoke an arms race in artificial intelligence; and would require sensor networks throughout all battlespaces (and cities). Indeed, the pervasive surveillance that such networks would entail is worrisome enough on its own to justify a ban on autonomous weapons.

It is unpersuasive to claim, as my colleagues have done, that a ban is unlikely to be enacted or would be impractical if instituted. Other technologies, such as blinding lasers, have been banned before use—why not autonomous weapons? And just as chemical weapons were banned with the support of the world's scientists and its chemical industry, the challenges of autonomous weapons can be addressed through cooperation among scientists, roboticists, and the technology industry. What's more, some militaries already have the capability to incorporate limited learning algorithms in weapons, but they have not deployed these capabilities due to uncertainty and risk. Since militaries are already showing restraint, why not press them to reject autonomous weapons completely?

Autonomous weapons entail hard questions and serious challenges. It's time to address them. Advancing Panglossian notions about the nature of future conflict accomplishes nothing.

 

