By Peter Asaro | April 27, 2018
If machines that autonomously target and kill humans are fielded by one country, it could be quickly followed by others, resulting in destabilizing global arms races. And that’s only a small part of the problem.
The Convention on Certain Conventional Weapons (CCW) at the UN has just concluded a second round of meetings on lethal autonomous weapons systems in Geneva, under the auspices of what is known as a Group of Governmental Experts. Both the urgency and significance of the discussions in that forum have been heightened by the rising concerns over artificial intelligence (AI) arms races and the increasing use of digital technologies to subvert democratic processes. Some observers have expressed concerns that the CCW discussions might be hopeless or futile, and that no consensus is emerging from them. Those concerns miss the significance of what has already happened and the opportunities going forward.
For some observers, the concerns over an AI arms race have overshadowed concerns about autonomous weapons. Some have even characterized the Campaign to Stop Killer Robots as aiming to “ban artificial intelligence” itself. I do not agree with these views, and argue in a forthcoming paper for I/S: A Journal of Law and Policy for the Information Society that various scholars and media use the term “AI arms race” to mean very different and even incompatible things, ranging from economic competition, to automated cyberwarfare, to embedding AI in weapons. As a result, it does not really make sense to talk about “an AI arms race” as a singular phenomenon to be addressed by a single policy. Moreover, the discussions taking place at the UN are focused on autonomy in weapons, which is only partially related to larger issues of an AI arms race—although establishing norms on the automated control of conventional weapons, such as meaningful human control, could certainly advance discussion in other areas, such as cyberwarfare and AI ethics.
The central issue in the CCW discussions over lethal autonomous weapons is the necessity for human control over what the International Committee of the Red Cross has called the “critical functions” of targeting and engagement in attacks. AI could be used in various ways by militaries, including in weapons systems, and even in the critical functions of targeting and engagement. The issue is not what kind of technology is used or its sophistication, but whether and how the authority to target and engage is delegated to automated processes, and what implications this has for human responsibility and accountability, as well as human rights and human dignity.
Should responsibility and accountability for the engagement of weapons erode and shift away from human control, a range of other critical issues arises. If one country fields machines that autonomously target and kill humans, others could quickly follow, resulting in destabilizing and costly arms races, both regional and global. Given the nature of the technology, we can also expect these systems to proliferate rapidly among countries and to spread to non-state actors. The potential hacking and spoofing of automated weapons, which would make the hijacking and redirection of weapons by adversaries a real possibility, raises another set of concerns. There are also the risks posed by attacks that may be increasingly difficult to attribute to their source, a phenomenon already seen in cyberattacks that could extend into kinetic attacks. Automated systems could, moreover, initiate or escalate conflicts without human political or military decision-making or authority. And large-scale deployments of autonomous systems, such as swarms, will behave in intrinsically unpredictable ways, especially when they engage other autonomous systems, raising questions about state responsibility and about the limited ability of testing regimes to ensure their safety.
We can already see military interest in speeding up battlefield decision-making and in shifting from human speed to machine speed. Will accelerating military tactics to speeds at which humans cannot meaningfully operate mean that humans eventually lose control over military strategy as well? By empowering small groups of people, even individuals, to unleash massive levels of destruction and kill in great numbers, autonomous weapons could constitute a new kind of weapon of mass destruction.
Each of the issues I have mentioned could have powerfully negative effects on the number and intensity of conflicts around the world. Taken together, they would transform warfare into something new, unfamiliar, and increasingly unpredictable. Will politics and diplomacy be able to keep up? Insofar as these possible changes in the direction of autonomous warfighting are threats to human accountability and state responsibility, they also threaten to undermine the rule of law itself. The question is fundamental: Can the global community of nations come together to agree not to pursue weapons that target and kill humans autonomously, or will they succumb to the relentless logic of striving for military advantage, and thereby sacrifice other values—diplomacy, rule of law, responsibility, human rights, and dignity?
While most individuals and governments have acknowledged the risks of autonomous weapons, there are genuine disagreements over what, if anything, should be done to mitigate those risks. A handful of states argue that no new law or regulation is needed or that it is too early to regulate and we should wait to see how these weapons are used. Some propose soft law measures, such as a “political declaration” affirming the importance of retaining some form of human control over weapons systems and the use of force. A couple of states have proposed agreeing on “best practices” and greater transparency for the design, testing, and use of autonomous weapons. But the majority of states are now proposing that new international law be negotiated on lethal autonomous weapons systems, including 26 states that seek a ban treaty. The Campaign to Stop Killer Robots has from the beginning supported a comprehensive ban on the production and use of such weapons—a position that can also be viewed as a positive obligation requiring states to ensure that the targeting and engagement of weapons should always be kept under meaningful human control.
There are states that argue that existing international humanitarian law is sufficient to guide and regulate the development of these systems. They express confidence both in the clarity of existing legal norms and in the mechanisms for enforcing them, such as the weapons reviews required under Article 36 of Additional Protocol I (1977) to the Geneva Conventions. Yet the many discussions and debates at the Convention on Certain Conventional Weapons have made one thing clear: There are differing views and understandings of what exactly constitutes a targeting decision, or a legal judgement, or an ethical judgement, and how any of those terms could or should be applied to an automated system carrying out a military operation. In fact, there are serious risks that legal norms and the application of established concepts of humanitarian law could become fuzzier or weaker as technology advances and confidence in autonomous technology increases. At the very least, there is a need for a shared understanding of these terms and their legal implications under the laws of war. If they achieve little else, the CCW discussions have at least begun this crucial process.
The proponents of lethal autonomous weapons argue that there could be real advantages to automating targeting and engagement, and that prohibitions or regulations could prevent beneficial applications of the technology. The United States has the most articulated and public policy on the development of such weapons, Department of Defense Directive 3000.09, and has suggested that something like that directive might serve as a guide to best practices for other countries. The United States has also submitted a working paper for the recent meeting. The document offers a series of arguments and assertions that emerging technology, and AI in particular, could help to limit civilian casualties in armed conflict in various ways. These range from pattern recognition systems for filtering surveillance data, to software for blast damage assessment, to guidance systems on missiles, to mines and munitions that self-destruct or deactivate after a period of time. Yet in nearly all the examples the US document sets out, automation merely provides human decision-makers with objects of interest and analyses that help them make better decisions about the risks of engaging a target. In other examples the automation comes into play only after a human decision-maker has designated a target and initiated an attack. In these cases, the automation merely helps the weapon reach its intended target, or deactivate if no target is found. These are reasonable uses of automation, and if properly employed, they do have the potential to reduce the impact of certain attacks on civilians and conform to international humanitarian law.
Whether a new generation of highly automated systems will actually reduce civilian casualties is an empirical question that depends not simply on the technical capabilities of the weapons systems (theoretical or verified), but also on how those systems are used in practice. If such systems encourage greater use of force in areas where civilians are at risk, or lower the threshold for entering into conflict, their overall impact on civilians may actually be greater, not the reduction in risk that their greater accuracy and precision targeting capabilities might suggest.
International humanitarian law encourages the reduction of risks to civilians in wartime—military forces in fact have a duty to take reasonable precautions to protect civilians. But how do the laws of war, written for humans, apply to an automated system? Can the performance of an algorithm constitute a proportionality assessment? What counts as “reasonable” for a machine? Can an automated decision qualify as a legal judgement? Even for more computationally tractable legal requirements, such as discriminating combatants from civilians, there remain difficult questions of what technical standards should apply and how they should be tested. Likewise, terms like “target,” “objective,” and “an attack” do not have strict legal definitions that can be easily translated into automated programs. They depend on context and the nature of a military operation. A lack of shared concepts and terms invites precisely the kind of unregulated development that we have already begun to see in cyberwarfare, where there is little agreement regarding norms, even those already established in international law, as outlined in the Tallinn Manual. If one truly believes in the humanitarian potential of automation in weapons systems, one should support the clarification and codification of shared norms and standards to guide the development of such technologies.
My own writings, and the documents and statements issued by the Campaign to Stop Killer Robots, have consistently encouraged the use of advanced information-processing technologies to further enhance the situational awareness and decision-making capacity and accuracy of human commanders and soldiers. Our concern has been with the elimination of those commanders and soldiers from critical decisions altogether. As automation becomes more sophisticated and relied upon, it also becomes more important that the people who operate the automated systems understand how those systems use data and algorithms to make automated assessments. Some governments are beginning to ask these questions, as Belgium does in its working paper from the November Group of Governmental Experts meeting. What are the essential aspects of human control? To what extent might automated information processing over-determine human judgement and usurp human autonomy? And at what point might a system be “functionally delegated” the authority to kill, even when it has no such legal authority?
The Convention on Certain Conventional Weapons has a way forward for its next meeting in August. Participants could agree to pursue the negotiation of a legally binding instrument that asserts clear norms. Or they might choose a lesser, non-binding output. In their joint working paper, France and Germany, which chaired informal meetings on autonomous weapons at the CCW, have offered a vision that seeks a positive and tangible outcome, a step toward a treaty, in the form of a non-binding political resolution. But that vision does not articulate the substance of the issue of human control of weapons: how the issue should be framed, or how such control might be assessed or enforced. And as a practical matter, there is concern that such a resolution could be the final outcome of this CCW process, rather than a stepping stone to a binding instrument that truly regulates autonomous weaponry.
But international consensus is within reach. Broad consensus on the need for human control over the targeting and engagement of weapons already exists. States must work to clarify the nature of that control, the line between acceptable and unacceptable forms of control, and the legal language and framework for codifying these distinctions. The failure of other consensus-based fora to reach outcomes on issues involving new and emerging technologies, including cyberwarfare, should serve as a warning, not as a benchmark for what could be achieved at the Convention on Certain Conventional Weapons. Just the same, if there is to be any hope of making progress on regulating cyberwarfare, or of reaching agreement on international regulations to manage the risks of AI and other advanced technologies, the CCW could yet prove to be a model for international consensus-building and norm innovation. “Meaningful human control” as a concept is already finding use in discussions of AI ethics and self-driving cars.
The kind of regulation sought by civil society groups in regard to autonomous weapons—killer robots, if you will—is largely without precedent. Rather than specifying a particular type of munition or weapon with a particular effect or mode of action, the needed regulation would govern the manner in which weapons are used, so as to ensure that advances in technology do not fundamentally undermine international humanitarian law itself. Positive and meaningful action on this issue is still within reach, and it is up to the diplomats at the Convention on Certain Conventional Weapons and their governments to prove that they can work together to address the full range of threats to humanity posed by autonomous weapons.