Banning and regulating autonomous weapons
By Heather Roff
Hunting in packs. Patrolling computer networks. Deployed on land, at sea, in the air, in space—everywhere. Autonomous weapons certainly sound sinister. But on balance, would they promote or detract from civilian safety?
Answering this question requires a clear understanding of the term "civilian safety." If it means protecting civilian lives during armed conflict, then yes, autonomous weapons might well contribute to this end someday. Today's technology, however, is not robust enough for autonomous weapons to distinguish combatants from noncombatants, particularly amid insurgencies or civil wars. The best that current technology can achieve is to recognize radar signatures, heat signatures, shapes—or, in the case of people, sensors on uniforms. But this only helps identify one’s own fighters, which in no way increases civilian security.
Over time, autonomous weapons technology may—with advancements in facial recognition, gesture recognition, biometrics, and so forth—become better able to identify permissible targets. Even such advances, however, would not guarantee that civilians were never targeted. Nor would they preclude the emergence of other threats to civilian safety. For instance, in order to counter potential threats, autonomous weapons may someday perform persistent surveillance of populations, somewhat akin to the "Gorgon Stare" airborne surveillance system currently used by the United States. If similar technology is employed on autonomous weapons, societies will face a host of problems not directly related to armed conflict but nonetheless related to civilian safety.
"Civilian safety" extends, then, beyond the conduct of hostilities—beyond the scope of international humanitarian law. That is, civilian safety is both a wartime and a peacetime concern. In peacetime, another type of law applies—international human rights law, which is a broader set of treaties, principles, laws, and national obligations regarding "civil, political, economic, social, and cultural rights that all human beings should enjoy."
If autonomous weapons are to comport with international human rights law, the weapons must at least comply with all international, regional, and bilateral human rights treaties, as well as with corresponding domestic legislation. Indeed, it might be necessary for autonomous weapons to promote human rights. So it's not enough for a nation to ask whether autonomous weapons will protect civilians in some other country where it is engaged in military operations; the weapons must also abide by the deploying nation's own laws. More subtly, autonomous weapons must also pass legal muster in circumstances that sit uncomfortably between the laws of war and the laws of peace.
All this constitutes a very high bar for autonomous weapons. To see this clearly, examine how autonomous weapons might violate, for example, the European Convention on Human Rights. If autonomous weapons were deployed within Europe and were used for ubiquitous surveillance, say in counterterrorism operations, they might fail to respect the right to private and family life, which is guaranteed under Article 8 of the convention. These weapons, because they might be cyber-related instead of robotic, could also have adverse effects on freedom of thought, conscience, and religion (guaranteed under Article 9). Cyber-related autonomous weapons could impinge on freedom of expression (Article 10) if they chilled online discourse or expression.
Of course the most serious threat posed by autonomous weapons is the threat to the right to life. One might suppose that "civilian safety" means the right to life, rather than, for example, the right to private and family life. But the right to life—which is guaranteed not only under the convention's Article 2 but also under other important international instruments—is not unlimited. The right to life depends to a large extent on legal permissions regarding the use of lethal force.
These legal permissions, however, differ depending on whether one is at war or at peace. In peacetime (or "law enforcement") situations, using lethal force requires an imminent threat to bystanders or officers. During war, the threshold for using lethal force is much lower. Applying these distinctions to autonomous weapons suggests that if an individual is identified as a potential or actual threat, autonomous weapons must attempt to arrest that person (unless the threat to bystanders is lethal and imminent; a machine itself faces no threat to its life). If the system is incapable of arrest—say, because it is an aerial system—the choices seem limited to either killing or not killing. But killing an individual in such circumstances would be an automatic violation of the right to life. What is more, doing so would transgress the right to a fair trial. Denying the right to trial undermines the rule of law, itself the most important force providing for and protecting civilian safety.
Danger to everyone. Beyond all this, civilian safety and consequently the right to life are threatened by a potential arms race in autonomous weapons and artificial intelligence. Such a race would expose civilians the world over to undue, potentially existential risk. If autonomous weapons are developed and deployed, they will eventually find a home in every domain—air, space, sea, land, and cyber. They will hunt in packs. They will be networked into systems of unmanned weapons. They will patrol computer networks. They will be everywhere. It is hubris, then, to suppose that only one country will pursue their development.
Many states will conclude that their defense requires development, at an ever-quickening pace, of ever-stronger artificial intelligence and weapons with ever greater autonomy. But autonomous systems with learning abilities could quickly get beyond their creators' control. They would be a danger to anyone within their immediate reach. And autonomous weapons connected to each other via networks, or autonomous agents endowed with artificial intelligence and connected to the Internet, would not be confined to a single geographic territory or to states involved in armed conflict. The unintended effects of creating and fielding autonomous systems might be so severe that the risks associated with their use would outweigh any possible benefits.
Is an outright ban the proper response to development of autonomous weapons, or is effective international regulation the proper approach? I have urged in the past that autonomous weapons be banned—the risks of not banning them are too high. But with or without a ban, effective international regulation is required. Many information and communications technologies are dual-use, meaning they can be put to both military and non-military purposes. Artificial intelligence can benefit societies, and this good shouldn't be thrown out with the bad. Therefore, states must come together, with the help of experts and nongovernmental organizations, to create a practical, workable approach to autonomous technologies in robotics and in cybersecurity—an approach that precludes weaponization but allows beneficial uses. Thus it is not a question of whether to ban or to regulate. It is really a question of how best to do both.