By Mark Gubrud | April 12, 2015
As China, Russia, the United States, and 115 other nations convene in Geneva for their second meeting on lethal autonomous weapons systems, that phrase still has no official definition. It was coined by the chair of last year’s meeting, who assured me that the word “lethal” did not mean that robots armed with Tasers and pepper spray would be okay, or that weapons meant only to attack non-human targets—such as vehicles, buildings, arms and the other matériel of war—would be off-topic. Rather, there was a sense that the talks should focus on weapons that apply physical “kinetic” force directly and avoid being drawn into ongoing debates about the legality of cyber weapons that disrupt computers and networks.
But consensus about definitions is not the main problem; the heart of the matter is the need to prevent the loss of human control over fateful decisions in human conflict. The US military has been grappling with this issue for decades, because it has long been possible to build weapons that can hunt and kill on their own. An example is the Low-Cost Autonomous Attack System, a small cruise missile intended to loiter above a battlefield searching for something that looked like a target, such as a tank, missile launcher, or personnel. The program was canceled in 2005, amid concerns about its reliability, controllability, and legality. But the winds have since shifted in favor of such weapons, with increasing amounts of money spent on research and development of robotic weapons able to seek out and hit targets on their own, without human intervention.
The problem in trying to nip such research and development in the bud is that whenever one proposes preventive arms control to block the development of new weaponry, “indefinable” is usually the first objection, closely followed by “unverifiable,” and finally, “it’s too late; everybody’s already got the weapons.” In fact, some autonomous weapons advocates have already jumped to the latter argument, pointing to existing and emerging weapons systems that make increasingly complex lethal decisions outside of human control. Does this mean that killer robots are already appearing on the scene? If so, shouldn’t this spur efforts to stop them?
The roadmap we’re using, and the road we’re on. Overriding longstanding resistance within the military, the Obama administration issued its Directive on Autonomy in Weapon Systems in 2012. This document instructs the Defense Department to develop, acquire, and use “autonomous and semi-autonomous weapons systems” and lays out a set of precautions for doing so: Systems should be thoroughly tested; tactics, techniques and procedures for their use should be spelled out; operators trained accordingly; and so forth.
These criteria are unremarkable in substance and arguably should apply to any weapons system. This directive remains the apparent policy of the United States and was presented to last year’s meeting on lethal autonomous weapons systems as an example for other nations to emulate.
The directive does contain one unusual requirement: Three senior military officials must certify that its precautionary criteria have been met—once before funding development, and again before fielding any new “lethal” or “kinetic” autonomous weapon. This requirement was widely misinterpreted as constituting a moratorium on autonomous weaponry; a headline from Wired reported it as a promise that “[a] human will always decide when a robot kills you.” But the Pentagon denies there is any moratorium, and the directive clearly indicates that officials can approve lethal autonomous weapons systems if they believe the criteria have been satisfied. It even permits a waiver of certification “in cases of urgent military operational need.”
More important, the certification requirement does not apply to any semi-autonomous systems, as the directive defines them, suggesting that these are of less concern. In effect, weapons that can be classified as semi-autonomous—including those intended to kill people—are given a green light for immediate development, acquisition, and use.
Understanding what this means requires a careful review of the definitions the directive gives, and also of those it does not.
The directive defines an autonomous weapons system as one that “once activated, can select and engage targets without further intervention by a human operator.” This is actually quite helpful. Sweeping away arguments about free will and Kantian moral autonomy, versus machines being programmed by humans, it clarifies that “autonomous” just means that the system can act in the absence of further human intervention—even if a human is monitoring and could override the system. It also specifies the type of action that defines an autonomous weapons system as “select and engage targets,” and not, for example, rove around and conduct surveillance, or gossip with other robots.
A semi-autonomous weapons system, on the other hand, is defined by the directive as one that “is intended to only engage individual targets or specific target groups that have been selected by a human operator.” Other than target selection, semi-autonomous weapons are allowed to have every technical capability that a fully autonomous weapon might have, including the ability to seek, detect, identify and prioritize potential targets, and to engage selected targets with gunfire or a homing missile. Selection can even be done before the weapon begins to seek; in other words, it can be sent on a hunting mission.
Given this, it would seem important to be clear about what, exactly, is left for the human operator to do. But the directive just defines “target selection” as “the determination that an individual target or a specific group of targets is to be engaged.”
That leaves a great deal unexplained. What does selection consist of? A commander making a decision? An operator delivering commands to a weapons system? The system telling a human it’s detected some targets, and getting a “Go”?
How may an individual target or specific group be specified as selected? By name? Type? Location? Physical description? Allegiance? Behavior? Urgency? If a weapon on a mission locates a group of targets, but can only attack one or some of them, how will it prioritize?
If these questions are left open, will their answers grow more permissive as the technology advances?
A war of words about the fog of war. In reality, the technological frontiers of the global robot arms race today fall almost entirely within the realm of systems that can be classified as semi-autonomous under the US policy. These include both stationary and mobile robots that are human-operated but may automate every step from acquiring, identifying, and prioritizing targets to aiming and firing a weapon. They also include missiles that, after being launched, can search for targets autonomously and decide on their own that they have found them.
A notorious example of the latter is the Long-Range Anti-Ship Missile, now entering production and slated for deployment in 2018. As depicted in a highly entertaining video released by Lockheed Martin, this weapon can reroute around unexpected threats, search for an enemy fleet, identify the one ship it will attack among others in the vicinity, and plan its final approach to defeat antimissile systems—all out of contact with any human decision maker (but possibly in contact with other missiles, which can work together as a team).
At the Defense Advanced Research Projects Agency’s Robotics Challenge trials on the Homestead racetrack outside Miami in December 2013, I asked officials whether the long-range anti-ship missile was an autonomous weapons system—which would imply it should be subject to senior review and certification. They did not answer, but New York Times reporter John Markoff was later able to obtain confirmation that the Pentagon classifies the missile as merely “semi-autonomous.” What it would have to do to be considered fully autonomous remains unclear.
Following a series of exchanges with me on this subject, two prominent advocates for autonomous weapons, Paul Scharre and Michael Horowitz, explained that, in their view, semi-autonomous weapons would be better distinguished by saying that they are “intended to only engage individual targets or specific groups of target[s] that a human has decided are to be engaged.” Scharre was a principal author of the 2012 directive, but this updated definition is not official, little different from the old one, and clarifies very little.
After all, if the only criterion is that a human nominates the target, then even The Terminator (from the Hollywood movie of the same name) might qualify as semi-autonomous, provided it wasn’t taking its orders from an evil computer.
Scharre and Horowitz would like to “focus on the decision the human is making” and “not apply the word ‘decision’ to something the weapon itself is doing, which could raise murky issues of machine intelligence and free will.” Yet from an operational and engineering standpoint—and as a matter of common sense—machines do make decisions. Machine decisions may follow algorithms entirely programmed by humans, or may incorporate machine learning and data that have been acquired in environments and events that are not fully predictable. As with human decisions, machine decisions are sometimes clear-cut, but sometimes they must be made in the presence of uncertainty—as well as the possible presence of bugs, hacks, and spoofs.
An anti-ship missile that can supposedly distinguish enemy cruisers from hapless cruise liners must make a decision as it approaches an otherwise unknown ship. An antenna collects radar waveforms; a lens projects infrared light onto a sensor; the signals vary with aspect and may be degraded by conditions including weather and enemy countermeasures. Onboard computers apply signal processing and pattern recognition algorithms and compare the results against onboard databases to generate a score for a “criteria match.” The threshold for a lethal decision can be set high, but it cannot be 100 percent certainty, since that would ensure the missile never hits anything.
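To make this concrete, here is a minimal sketch in Python of the kind of thresholded “criteria match” decision described above. Everything in it is an assumption for illustration: the signature scores, the noise model, the threshold value, and the function names do not describe any real weapon's signal-processing chain.

```python
import random

# Illustrative only: a toy "criteria match" decision under sensor noise.
# Classes, scores, and threshold are invented for this sketch.

MATCH_THRESHOLD = 0.85  # set high, but necessarily below certainty (1.0)

def sensor_return(true_class: str) -> dict:
    """Simulate a noisy radar/infrared return for a contact of a given class."""
    base = {"enemy_cruiser": 0.9, "cruise_liner": 0.2}[true_class]
    noise = random.gauss(0, 0.1)  # aspect, weather, countermeasures, etc.
    return {"signature_score": max(0.0, min(1.0, base + noise))}

def match_score(contact: dict) -> float:
    """Stand-in for pattern recognition against an onboard signature database."""
    return contact["signature_score"]

def engage_decision(contact: dict) -> bool:
    """The lethal decision reduces to a threshold test on an uncertain score."""
    return match_score(contact) >= MATCH_THRESHOLD

if __name__ == "__main__":
    for true_class in ("enemy_cruiser", "cruise_liner"):
        contact = sensor_return(true_class)
        print(true_class, engage_decision(contact))
```

Raising the threshold toward 1.0 trades missed engagements for fewer misidentifications, but no setting eliminates the chance of the wrong ship scoring above it.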
For killer robots, as for humans, there is no escape from Plato’s cave. In this famous allegory, humans are like prisoners in a cave, able to see only the shadows of things, which they think are the things themselves. So when a shooter (human or robot) physically aims a weapon at an image, it might be a mirage, or there might be something there, and that might be the enemy object that a human or computer has decided should be engaged. Which is the real target: the image aimed at, the intended military objective, or the person who actually gets shot? If a weapons system is operating autonomously, then the “individual” or “specific” target that a faraway human (perhaps) has selected becomes a Platonic ideal. The robot can only take aim at shadows.
Taking the mystery out of autonomy. Two conclusions can be drawn. One is that using weapons systems that autonomously seek, identify, and engage targets inevitably involves delegating fatal decisions to machines. At the very least, this is a partial abdication of the human responsibility to maintain control of violent force in human conflict.
The second conclusion is that, as my colleague Heather Roff has written, autonomous versus semi-autonomous weapons is “a distinction without a difference.” As I wrote in 2013, the line is fuzzy and broken. It will not hold against the advance of technology, as increasingly sophisticated systems make increasingly complicated decisions under the rubric of merely carrying out human intentions.
For the purposes of arms control, it may be better to reduce autonomy to a simple operational fact—an approach I call “autonomy without mystery.” This means that when a system operates without further human intervention, it should be considered an autonomous system. If it happens to be working as you intended, that doesn’t make it semi-autonomous. It just means it hasn’t malfunctioned (yet).
Some weapons systems that automate most, but not all, targeting and fire control functions may be called nearly autonomous. They are of concern as well: If they only need a human to say “Go,” they could be readily modified to not need that signal. Human control is not meaningful if operators just approve machine decisions and avoid accountability. As soon as a weapons system no longer needs further human intervention to complete an engagement, it becomes operationally autonomous.
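As a rough illustration of this operational test (my own toy encoding, not language from the directive), the classification turns on a single question: does the system need further human intervention to complete an engagement, and if so, is that intervention anything more than a “Go” signal?

```python
from enum import Enum

class Autonomy(Enum):
    OPERATIONALLY_AUTONOMOUS = "operationally autonomous"
    NEARLY_AUTONOMOUS = "nearly autonomous"
    HUMAN_OPERATED = "human-operated"

def classify(needs_human_to_complete_engagement: bool,
             human_input_is_only_a_go_signal: bool) -> Autonomy:
    """Toy encoding of 'autonomy without mystery': the label depends only on
    what the system needs from a human once it has been activated."""
    if not needs_human_to_complete_engagement:
        return Autonomy.OPERATIONALLY_AUTONOMOUS
    if human_input_is_only_a_go_signal:
        return Autonomy.NEARLY_AUTONOMOUS
    return Autonomy.HUMAN_OPERATED
```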
Autonomy without mystery implies that many existing, militarily important weapons must be acknowledged as actually, operationally autonomous. For example, many missile systems use homing sensors, or “seekers,” whose targets are beyond sensor range when the missiles are launched; such missiles must therefore acquire and home in on their targets autonomously, later in flight. The 2012 directive designates such weapons as “semi-autonomous,” which exempts them from the certification process while implicitly acknowledging their de facto autonomy.
Scharre and Horowitz point to the Nazi-era Wren torpedo, which could home on a ship’s propeller noise, as evidence that such systems have existed for so long that considering them as autonomous would imply that “this entire discussion is a lot of fuss for nothing.” But the fuss is not about the past, it is about the future. The Long-Range Anti-Ship Missile, with its onboard computers identifying target ships and planning how to attack, gives us a glimpse of one possible future. When some call it just a “next-generation precision-guided weapon,” we should worry about this next generation, and even more about the generations to come.
Follow your guiding stars, and know where you don’t want to go. One cannot plausibly propose to ban all existing operationally autonomous weapons, but there is no need to do so if the main goal is to avert a coming arms race. The simplest approach would be grandfathering; it is always possible to say “No guns allowed, except for antiques.” But a better approach would be to enumerate, describe, and delimit classes of weaponry that do meet the operational definition of autonomous weapons systems, but are not to be banned. They might be subjected to restrictions and regulations, or simply excluded from coverage under a new treaty.
For example, landmines and missiles that self-guide to geographic coordinates have been addressed by other treaties and could be excluded from consideration. Automated defenses against incoming munitions could be allowed but subjected to human supervision, range limitations, no-autonomous-return-fire, and other restrictions designed to block them from becoming autonomous offensive weapons.
Nearly autonomous weapons systems could be subjected to standards for ensuring meaningful human control. An accountable human operator could be required to take deliberate action whenever a decision must be made to proceed with engagement, including any choice between possible targets, and any decision to interpret sensor data as representing either a previously “selected” target or a valid new target. An encrypted record of the decision, and the data on which it was made, could be used to verify human control of the engagement.
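One way such a record might be structured is sketched below, purely as an illustration; the fields, the key handling, and the use of an HMAC over a hash of the sensor data are my assumptions, not a mechanism drawn from the directive or any existing system.

```python
import hashlib
import hmac
import json
import time

# Illustrative sketch of a tamper-evident engagement record. Field names and
# the HMAC construction are assumptions for this example only.

def record_engagement_decision(operator_id: str,
                               target_description: str,
                               sensor_data: bytes,
                               signing_key: bytes) -> dict:
    """Bind the operator's deliberate decision to the data it was based on."""
    record = {
        "operator": operator_id,
        "target": target_description,
        "sensor_data_sha256": hashlib.sha256(sensor_data).hexdigest(),
        "timestamp_utc": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(record: dict, signing_key: bytes) -> bool:
    """An auditor holding the key can later confirm the record is intact."""
    payload = json.dumps({k: v for k, v in record.items() if k != "mac"},
                         sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["mac"], expected)
```

An auditor holding the key could later confirm that the logged decision, and the sensor data it cites, have not been altered after the fact.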
Autonomous hunter-killer weapons like the Long-Range Anti-Ship Missile and the canceled Low-Cost Autonomous Attack System should be banned outright; if any are permitted, they should be subject to strict quantitative and qualitative limits to cap their development and minimize their impact on arms race and crisis stability. No autonomous systems should ever be permitted to seek generic target classes, whether defined by physical characteristics, behavior or belligerent status, nor to prioritize targets based on the situation, lest they evolve into robotic soldiers, lieutenants, and generals—or in a civil context, police, judge, jury, and executioner all in one machine.
Negotiating such a treaty, under the recognition that autonomy is an operational fact, will involve bargaining and compromise. But at least the international community can avoid decades of dithering over the meaning of autonomy and searching for some logic to collapse the irreducible complexity of the issue—while technology sweeps past false distinctions and propels the world into an open-ended arms race.
Thinking about autonomous weapons systems should be guided by the fundamental principles that must always guide humanity in conflict: human control, responsibility, dignity, sovereignty, and above all, common humanity, as the world faces threats to human survival that it can only overcome by global agreement.
In the end, where one draws the line is less important than that it is drawn somewhere. If the international community can agree on this, then the remaining details become a matter of common interest and old-fashioned horse trading.