In August, a group of experts on robotics and artificial intelligence released an open letter to the UN Convention on Certain Conventional Weapons. The well-publicized letter called on the convention “to find a way to protect us all from” the dangers of autonomous weapons systems—and drew attention to a lack of international regulation on autonomous weapons (often understood as weapons that “once activated, can select and engage targets without further human intervention”).
In 2013 the convention added autonomous weapons to the list of weapons it might consider restricting or outlawing. But parties to the convention remain far from agreement on how to define “lethal autonomous weapons systems” or “appropriate human control of autonomous weapons”—a necessary precursor to further discussions on the topic or to a pre-emptive ban of the sort advocated by the Campaign to Stop Killer Robots. In December of last year, the convention established a Group of Governmental Experts, with a mandate to discuss lethal autonomous weapons systems—but the group’s first meeting has been postponed twice for budgetary reasons. It is now scheduled for next month.
Deliberative processes that might examine autonomous weapons from the perspective of the laws of war—processes, that is, that could result in new regulations—are notoriously sluggish. Meanwhile, autonomous weapons technology is developing apace. Nations such as the United States, China, Russia, South Korea, and the United Kingdom continue to develop autonomous weapons and related dual-use technologies, meaning that deployment of these weapons could become a fait accompli before any pre-emptive ban can be negotiated.
The current debate over autonomous weapons exhibits two important shortcomings. First, though it is important to examine autonomous weapons from the legal and regulatory perspective, doing so can fail to capture the reality that autonomous weapons, and the practices associated with their development and deployment, can alter norms themselves. For example, practices surrounding autonomous weapons can produce new understandings, outside and beyond international law, of when and how using force is appropriate. As Herbert Lin has written in the Bulletin, the unrestricted submarine warfare of World War II undermined agreed-upon norms about the conduct of war; other such examples are not hard to find.
Second, when observers discuss autonomous weapons’ game-changing potential in international relations and security policy, they often overemphasize the technologically sophisticated autonomous weapons of the future. (This tendency is shaped by popular culture’s “Terminator” vision of humanoid monsters and is reinforced by the lack of a consensus definition of “autonomous weapons” or “autonomy.”) Overemphasizing technologically sophisticated weapons fosters the belief that the international community can simply wait and see whether “killer robots” indeed become reality. However, no matter how important advanced artificial intelligence will be for future weapons systems, it is “stupid” autonomous weapons that require attention now. (This issue has been discussed, for example, by Noel Sharkey, an emeritus professor of artificial intelligence and robotics at the University of Sheffield—and, in a broader context, by Toby Walsh, a professor of artificial intelligence at the University of New South Wales.)
To sort out these problems, it is helpful to contrast autonomy with mere automation. Drawing on definitions from basic robotics, automated machines can be said to run according to fixed and preprogrammed sequences of action. Autonomous systems, meanwhile, are defined by their ability to adapt: An autonomous device’s “actions are determined by its sensory inputs, rather than where it is in a preprogrammed sequence.” This level of autonomy is easy to achieve—one need only think of robotic vacuum cleaners. But where weapons are concerned, even this level of autonomy contests the idea of appropriate human control. And importantly, unlike the humanoid killer robots of possible future scenarios, this level of autonomy already exists.
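To make the contrast concrete, consider a minimal sketch in Python (entirely hypothetical; the Device class, its bump sensor, and its actions are invented here for illustration). The automated routine steps through a fixed script and never consults its sensors; the autonomous routine, like a robotic vacuum cleaner, lets each sensor reading determine the next action:

import random

class Device:
    """A toy machine with one bump sensor (hypothetical, for illustration)."""
    def __init__(self, steps=10):
        self.steps = steps
    def powered_on(self):
        self.steps -= 1
        return self.steps > 0
    def bump_sensor(self):
        return random.random() < 0.3  # occasionally "hits a wall"
    def execute(self, action):
        print(action)

def run_automated(device):
    # Automated: a fixed, preprogrammed sequence. The next action depends
    # only on the device's place in the script, never on what it senses.
    for action in ["forward", "turn_left", "forward", "stop"]:
        device.execute(action)

def run_autonomous(device):
    # Autonomous, in the minimal robotics sense quoted above: the next
    # action is determined by sensory input, not by a scripted position.
    while device.powered_on():
        if device.bump_sensor():
            device.execute("turn_right")
        else:
            device.execute("forward")

run_automated(Device())
run_autonomous(Device())

Nothing in the autonomous loop is sophisticated; it is the transfer of even this trivially adaptive logic to the selection and engagement of targets that strains the notion of appropriate human control.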
Higher threshold. In August the UK Ministry of Defence released a “Joint Doctrine Publication” on unmanned aircraft systems. The document provided definitions of and distinctions between “automated” and “autonomous” systems—and in so doing departed from the generally accepted understanding of these terms. It characterized autonomous weapons narrowly as sophisticated systems “capable of understanding higher-level intent and direction [and] capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.”
Systems that fall under this definition are not yet operational—but the definition raises the possibility that autonomous weapons emerging in the future may not even be regarded as such. That is, the UK Ministry of Defence raised the threshold at which weapons systems are considered autonomous. Correspondingly, it offered an expanded definition of “automated system,” declaring that “an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome.” This expanded definition moves “automation” into territory reserved for “autonomy” in the basic-robotics definition cited above.
Clearly, these new definitions present new challenges to the possible regulation and restriction of autonomous weapons. For example, what of the United Kingdom’s own declaration that it “does not possess armed autonomous aircraft systems and it has no intention to develop them”? The UK government insists that “the operation of UK weapons will always be under human control,” in the sense that “a person [will be] involved in setting appropriate parameters.” But when the distinction between automation and autonomy is blurred, Britain’s declaration represents only a weak moratorium on the development of weapons systems with autonomous qualities. And would a programmer developing an algorithm for the operation of autonomous weapons represent “human control” over the setting of “appropriate parameters”?
“Better soldiers.” Given all this, the debate about autonomous weapons should take much fuller account of the ways in which existing norms may be affected by currently available autonomous weapons. Discussing how norms can regulate autonomous weapons is important, but a reversal is also in order: it is time to investigate how weapons affect norms.
An important category of norms to consider in this context is “procedural norms.” These norms, which apply in confined organizational settings such as militaries, provide standards for appropriate ways of doing things. They are based on specific objectives and expectations that are often associated with efficiency and effectiveness. Where weapons are concerned, greater levels of technical autonomy produce improvements in reaction time, systemic reliability, endurance, or precision (in contrast to unmanned and remote-controlled aerial vehicles, which do not necessarily deliver such improvements). Because autonomous weapons confer advantages where procedural norms are concerned, their deployment is more likely. That is, autonomous weapons provide procedural incentives to remove human decision-making from the immediate use of force. As one US Marine Corps lawyer puts it, “from an operational perspective, [human decision-making] might … prove counterproductive in the event of future conflict with a near-peer competitor.” Autonomous weapons pose severe challenges in the realms of ethics and accountability, but they score highly when it comes to fulfilling procedural norms.
It is sometimes presumed that autonomous weapons will demonstrate ethical superiority over humans. Any such superiority is still hypothetical, but it is grounded in a plausible observation: autonomous weapons might lack potentially problematic emotions such as fear, anger, or vengefulness. Presumed ethical superiority leads to further procedural arguments for constructing autonomous weapons as “better soldiers” that will outperform humans morally and in terms of compliance with international humanitarian law. If this argument gains dominance, widespread development and deployment of autonomous weapons becomes more likely, and so does the chance that procedural norms will override the public and legal norms that underlie international law and notions of legitimacy.
The US military’s pervasive and accelerating deployment of drones, and drones’ centrality in US security policy, show that practices indeed shape norms. Drones have become “preferred” security instruments because of specific rationales based on procedural norms. Autonomous weapons’ versatility, the dual-use character of their main features, and the technological rivalry among major powers make them highly attractive instruments, which renders their regulation all the more difficult. Whenever procedural norms prevail over legal and ethical norms, the latter, unfortunately, are likely to yield or adapt.
To be sure, some types of autonomous weapons might be banned in the future. But practices now being established regarding autonomous weapons are already setting standards about the future use of force. This trend should be monitored much more closely—regardless of whether the Convention on Certain Conventional Weapons, governments, and nongovernmental organizations find common ground in their struggle to define what autonomous weapons are in the first place.