
Autonomous weapons, civilian safety, and regulation versus prohibition

In July, researchers in artificial intelligence and robotics released an open letter—endorsed by high-profile individuals such as Stephen Hawking—calling for "a ban on offensive autonomous weapons beyond meaningful human control." The letter echoes arguments made since 2013 by the Campaign to Stop Killer Robots, which views autonomous weapons as "a fundamental challenge to the protection of civilians and to … international human rights and humanitarian law." But support for a ban is not unanimous. Some argue that autonomous weapons would commit fewer battlefield atrocities than human beings—and that their development might even be considered morally imperative. Below, authors from Brazil, India, and the United States debate two questions: Would deployed autonomous weapons promote or detract from civilian safety? And is an outright ban—or rather effective international regulation—the proper response to the development of autonomous weapons?

Round 1

Banning and regulating autonomous weapons

Hunting in packs. Patrolling computer networks. Deployed on land, at sea, in the air, in space—everywhere. Autonomous weapons certainly sound sinister. But on balance, would deploying them promote or detract from civilian safety?

Answering this question requires a clear understanding of the term "civilian safety." If it means protecting civilian lives during armed conflict, then yes, autonomous weapons might well contribute to this end someday. Today's technology, however, is not robust enough for autonomous weapons to distinguish combatants from noncombatants, particularly amid insurgencies or civil wars. The best that current technology can achieve is to recognize radar signatures, heat signatures, shapes—or, in the case of people, sensors on uniforms. But this only helps identify one’s own fighters, which in no way increases civilian security.

Over time, autonomous weapons technology may—with advancements in facial recognition, gesture recognition, biometrics, and so forth—become better able to identify permissible targets. Such advancements, nevertheless, would not guarantee that civilians would not be targeted. Nor would they preclude the emergence of other threats to civilian safety. For instance, in order to counter potential threats, autonomous weapons may someday perform persistent surveillance of populations, somewhat akin to the "Gorgon Stare" airborne surveillance system currently utilized by the United States. If similar technology is employed on autonomous weapons, societies will face a host of problems not directly related to armed conflict but nonetheless related to civilian safety.

"Civilian safety" extends, then, beyond the conduct of hostilities—beyond the scope of international humanitarian law. That is, civilian safety is both a wartime and a peacetime concern. In peacetime, another type of law applies—international human rights law, which is a broader set of treaties, principles, laws, and national obligations regarding "civil, political, economic, social, and cultural rights that all human beings should enjoy."

If autonomous weapons are to comport with international human rights law, the weapons must at least comply with all international, regional, and bilateral human rights treaties, as well as with corresponding domestic legislation. Indeed, it might be necessary for autonomous weapons to promote human rights. So it's not enough for a nation to ask whether autonomous weapons will protect civilians in some other country where it is engaged in military operations; autonomous weapons must also abide by the laws of one’s own country. More subtly, autonomous weapons must also pass legal muster in circumstances that sit uncomfortably between the laws of war and the laws of peace.

All this constitutes a very high bar for autonomous weapons. To see this clearly, examine how autonomous weapons might violate, for example, the European Convention on Human Rights. If autonomous weapons were deployed within Europe and were used for ubiquitous surveillance, say in counterterrorism operations, they might fail to respect the right to private and family life, which is guaranteed under Article 8 of the convention. These weapons, because they might be cyber-related instead of robotic, could also have adverse effects on freedom of thought, conscience, and religion (guaranteed under Article 9). Cyber-related autonomous weapons could impinge on freedom of expression (Article 10) if they chilled online discourse or expression.

Of course the most serious threat posed by autonomous weapons is the threat to the right to life. One might suppose that "civilian safety" means the right to life, rather than, for example, the right to private and family life. But the right to life—which is guaranteed not only under the convention's Article 2 but also under other important international instruments—is not unlimited. The right to life depends to a large extent on legal permissions regarding the use of lethal force.

These legal permissions, however, differ depending on whether one is at war or at peace. In peacetime (or "law enforcement") situations, using lethal force requires an imminent threat to bystanders or officers. During war, the threshold for using lethal force is much lower. Applying these distinctions to autonomous weapons suggests that if an individual is identified as a potential or actual threat, autonomous weapons must try to arrest him (unless the threat he poses to bystanders is lethal and imminent; a machine itself cannot be threatened). If the system is incapable of arrest—say, because it is an aerial system—the choices seem limited to either killing or not killing. But killing an individual in such circumstances would be an automatic violation of the right to life. What is more, doing so would transgress the right to a fair trial. Denying the right to trial undermines the rule of law, itself the most important force providing for and protecting civilian safety.

Danger to everyone. Beyond all this, civilian safety and consequently the right to life are threatened by a potential arms race in autonomous weapons and artificial intelligence. Such a race would expose civilians the world over to undue, potentially existential risk. If autonomous weapons are developed and deployed, they will eventually find a home in every domain—air, space, sea, land, and cyber. They will hunt in packs. They will be networked in systems of unmanned weapons systems. They will patrol computer networks. They will be everywhere. It is hubris, then, to suppose that only one country will pursue their development.

Many states will conclude that their defense requires development, at an ever-quickening pace, of ever-stronger artificial intelligence and weapons with ever greater autonomy. But autonomous systems with learning abilities could quickly get beyond their creators' control. They would be a danger to anyone within their immediate reach. And autonomous weapons connected to each other via networks, or autonomous agents endowed with artificial intelligence and connected to the Internet, would not be confined to a single geographic territory or to states involved in armed conflict. The unintended effects of creating and fielding autonomous systems might be so severe that the risks associated with their use would outweigh any possible benefits.

Is an outright ban the proper response to development of autonomous weapons, or is effective international regulation the proper approach? I have urged in the past that autonomous weapons be banned—the risks of not banning them are too high. But with or without a ban, effective international legislation is required. Many information and communications technologies are dual-use—meaning they can be put to both military and non-military uses. Artificial intelligence can benefit societies, and this good shouldn't be thrown out with the bad. Therefore, states must come together, with the help of experts and nongovernmental organizations, to create a practical, workable approach to autonomous technologies in robotics and in cybersecurity—an approach that precludes weaponization but allows beneficial uses. Thus it is not a question of whether to ban or to regulate. It is really a question of how best to do both.

 

Autonomous weapons: Tightrope balance

When researchers in artificial intelligence released an open letter in July calling for a ban on "offensive autonomous weapons beyond meaningful human control," they specified that a ban might prohibit weapons such as "armed quadcopters" capable of identifying and killing people "meeting certain pre-defined criteria." But the ban would not include "cruise missiles or remotely piloted drones for which humans make all targeting decisions." So it's worth noting that the proposed ban would not prohibit a number of autonomous weapons that have already been deployed—because these weapons are classified as defensive.

They include the US Navy's Phalanx—a "rapid-fire, computer-controlled, radar-guided gun system" that has been in use since 1980, and that the US Army has adopted in a land-based form more recently. Germany's fully automated NBS Mantis defense system can likewise detect, track, engage, and fire on projectiles. And Israel's Iron Dome missile defense system operates autonomously except when, perceiving a threat, it appeals to a human being for a quick decision on whether or not to fire.

Such systems are generally accepted as legitimate tools of war. Fully autonomous offensive weapons, however, are a different matter. They invite difficult questions about whether such weapons can uphold the moral imperative to protect civilian lives during conflict. Easily overlooked in this debate, however, is another moral imperative: that of protecting civilians endangered by non-state actors who deliberately perpetrate mass violence and terror against innocents.

In my view, any proposal for banning lethal autonomous weapons must take into account the unconventional, asymmetric, and irregular warfare that non-state transnational actors conduct—and such conflict's effects on civilians. Non-state actors often thrive precisely because they are indistinguishable from local civilian populations. They also thrive by making use of inhospitable terrain such as mountains and deserts, by slipping through porous borders, and by drawing on the help of complicit states or state actors. Militaries can sometimes overcome the advantages that non-state actors enjoy, notably through the optimal use of technologies including unmanned aerial vehicles (supported by good intelligence). But militaries find it very difficult to achieve victory over non-state actors in the conventional sense. Correspondingly, they struggle to protect civilians.

Could fully autonomous weapons with highly sophisticated capabilities change this equation? Might they, far from endangering civilians, save the lives of men, women, and children innocently caught up in violent conflict zones? If autonomous weapons could incapacitate enemy targets while minimizing undesired damage, they would merit serious consideration as weapons to be used in the fight against non-state actors and terrorists.

No existing weapon can properly be described as an offensive autonomous weapon capable of killing legitimate targets while sparing civilians. Today's artificial intelligence, which cannot reproduce human intelligence and judgment, would pose fundamental challenges to civilian safety if deployed on the battlefield. But it's crucial to remember that autonomous weapons technology is an evolving field. Future research and development may make it possible to encode machines with capacities for qualitative judgment that are not possible today. Future technological advancements might allow autonomous weapons to outperform human beings in battlefield situations.

In the end, I favor regulation of autonomous weapons rather than an outright ban on the entire technology. But a blanket ban does not seem likely in any event. The UK Foreign Office, for example, has stated that "[w]e do not see the need for a prohibition" on lethal autonomous weapons because "international humanitarian law already provides sufficient regulation for this area." What's needed in my opinion is a regulatory framework that limits the lethality of future autonomous weapons systems. Also needed is research into means (improved programming, for example) that would sharply limit the civilian casualties associated with autonomous weapons.

Ultimately, as with many other aspects of contemporary conflict or war, the most fundamental concern is proportionality. Indeed, I'd argue that lethal autonomous weapons might be considered ethical so long as the collateral damage they inflict isn't out of proportion to their contributions to peace, to security and stability, and to prevention of civilian bloodshed on a mass scale.

 

Autonomous weapons and the curse of history

Autonomous weapons capable of selecting and attacking targets without human intervention are already a reality. Today they are largely restricted to targeting military objects in unpopulated areas—but fast-developing computer systems with enormous processing power and robust algorithms for artificial intelligence may soon have nations making concrete choices about deploying fully autonomous weapons in urban warfare.

But would automated warfare, as some observers claim, minimize collateral damage—or simply result in mass destruction? The answer isn't clear. What's clear is that targeting decisions made by human beings are often extremely bad. To be sure, it's important to discuss the ethics of autonomous weapons and debate whether they should be banned, regulated, or left to develop without restrictions. But dehumanized killing in all its forms is ultimately the issue.

Optimized casualties. First, what's meant by "autonomous weapons" anyway? It's a term with unclear boundaries. Cruise missiles and remote-controlled drones are in some sense autonomous, and both have been deployed widely on the battlefield. But when people speak of autonomous weapons, they generally mean weapons that have state-of-the-art capabilities in artificial intelligence, robotics, and automatic control and can, independent of human intervention, select targets and decide whether to strike them.

It's also important to understand what "artificial intelligence" means—or, more to the point, what it doesn't mean. The artificial intelligence portrayed in films and fantasy novels often involves machines that demonstrate human-level intelligence. There is currently no scientific evidence that such a thing is even possible. Instead, artificial intelligence concerns the development of computational algorithms suitable for reasoning tasks—that is, problem solving, decision making, prediction, diagnosis, and so forth. Artificial intelligence also involves generalizing or classifying data—what's known as machine learning. And intelligent systems might include computer vision software that aims ultimately to provide meaningful interpretations of images. Functions such as these don't add much excitement to Hollywood movies, but they are of great interest in the development of autonomous weapons.
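
To make the idea concrete, here is a minimal sketch of what "generalizing or classifying data" looks like in practice. It uses Python with scikit-learn and its bundled handwritten-digit images purely for illustration; the dataset and model choice are assumptions of this example, and nothing about it is specific to weapons.

```python
# A minimal illustration of "generalizing or classifying data": the model
# learns from labeled examples and then labels images it has never seen.
# The dataset (scikit-learn's bundled handwritten digits) and the model choice
# are assumptions of this example only; nothing here is specific to weapons.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of digits, with labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)  # a simple linear classifier
model.fit(X_train, y_train)                # learn from the labeled examples

predictions = model.predict(X_test)        # generalize to unseen images
print(f"accuracy on held-out images: {accuracy_score(y_test, predictions):.2f}")
```

The point of the sketch is that such a system never "understands" digits; it only learns a statistical mapping from pixel patterns to labels, which is precisely the kind of capability, and limitation, at issue in this debate.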

Some argue that applying artificial intelligence to warfare, especially via autonomous weapons, might optimize casualties on the battlefield. Intelligent robotics systems, so the argument goes, could identify targets precisely and efficiently. They could engage in combat in such a way that collateral damage would be minimized—certainly when compared to many missions executed by humans, such as the October 3 attack by a US Air Force gunship on a Doctors Without Borders hospital in Afghanistan. Autonomous weapons might reduce civilian casualties to a bare minimum even as they improve the odds for successful missions.

But similar arguments could have been marshalled for most innovations in the history of weaponry, all the way from gunpowder to the "surgical strikes" of the first Iraq War (which were portrayed so glamorously on television). And even if new weapons do manage to "optimize" killing, they also dehumanize it. For centuries, soldiers aimed at a person. Now, they sometimes aim at a target in a kind of video game. In the future they may not aim at all, leaving that job to a machine. But the differences between, say, a fully autonomous weapon system and a cruise missile or remote-controlled military drone are really more technical than ethical or moral. If modern society accepts warfare as a video game—as it did by accepting the "surgical strikes" of the 1990s—autonomous weapons have already been accepted into warfare.

Recently a number of scientists—this author among them—signed an open letter calling for a "ban on offensive autonomous weapons beyond meaningful human control." As I re-read the letter now, I notice afresh that it proposes a ban only on weapons that "select and engage targets without human intervention," while it excludes "cruise missiles or remotely piloted drones for which humans make all targeting decisions." In a sense this formulation implies that indiscriminate killing of large numbers of people—whether soldiers or civilians, adults or children—is allowable as long as humans make the targeting decisions. But examining humanity's history, it's hard to see why human control of weapons is so much better than autonomous control might be. To take one example from the 20th century—from among far too many choices—human control did not prevent the mass murder in August 1945 of an estimated 200,000 civilians in Hiroshima and Nagasaki (though perhaps those atrocities could have been prevented if the development and use of nuclear weapons had come under effective international regulation as soon as scientists became aware that building such weapons was possible).

I signed the open letter as a pacifist. I would sign any letter that proposed a ban on the development and production of weapons. But I do not believe that an outright international ban on autonomous weapons would prevent their development—after all, research into advanced lethal intelligent robotic systems is already decades old. And compared to nuclear and usable biological weapons, whose development requires very specialized and expensive laboratories and access to easy-to-track materials, autonomous weapons are easy to make. Any existing laboratory for intelligent robotics could, with modest funding and within weeks, build from scratch a mobile robot capable of autonomously tracking and firing on anything that moves. The robot would get stuck at the first stairway it encountered, but it would nonetheless be a basic autonomous weapon.

There is no feasible way to ensure that autonomous weapons will never be built. A ban on their development would simply be an invitation to create underground laboratories, which would make it impossible to control the weapons or hold accountable the entities that developed them. What's feasible—through effective international regulation—is to ensure that development of autonomous weapons is analyzed and tracked on a case-by-case basis. Strict rules would govern autonomous weapons' targets, and deployment of the weapons would have to accord with international humanitarian law—if accordance proved impossible, the weapons would never be deployed in the field. Finally, a system must be established for holding accountable any organization that, in creating and deploying autonomous weapons, fails to abide by the regulations that govern them.

 

Round 2

Autonomous weapons: Not just smarter smart bombs

It’s easy to assume that autonomous weapons will, as their technological capacity improves, someday surpass human beings in battlefield decision making. Humans, after all, get tired. They are easily misguided or ideologically bent, and they sometimes lack good judgment. Technologically sophisticated weapons, meanwhile, will not suffer from any of these failings.

But this assumption has no grounds in reality, and it is a poor basis on which to make decisions about the future of autonomous weapons.

My roundtable colleague Paulo Santos writes that no scientific evidence supports the idea that machines might ever "demonstrate human-level intelligence." But he doesn’t dismiss the thought that "Autonomous weapons might reduce civilian casualties to a bare minimum even as they improve the odds for successful missions." Here he walks a fine and somewhat odd line—and he does the same when it comes to banning autonomous weapons or regulating them. He would prefer that autonomous weapons never exist. But he is concerned that a ban, proving infeasible, would encourage the creation of underground laboratories. So he comes down on the side of regulation.

Monika Chansoria, meanwhile, is highly concerned about protecting civilians from terrorists. On that basis she argues against banning autonomous weapons. But autonomous weapon systems have nothing to do with terrorism. They don’t represent a way to target terrorists. They are not a way to "win" the "war" against terror. They are merely weapons that can detect, select, and fire on targets without human intervention.

Yet if one is to believe, as Chansoria appears to, that autonomous weapon systems will someday gain the ability to distinguish terrorists from civilians (thus detecting and selecting one human over another), one must believe these systems will be embedded with artificial intelligence so sophisticated that it exceeds human intelligence where the ability to make certain distinctions is concerned.

If one does not assume that the technology will rely on artificial intelligence that exceeds human intelligence, I am hard pressed to see how such systems would ever be able to identify individuals who don’t wear uniforms but do actively participate in hostilities. In the words of Stuart Russell, a leading expert on artificial intelligence at the University of California, Berkeley, "'combatant' is not a visual category." Rather, it is a class of persons engaged in an undefined set of activities. This means that unless humans wear sensors that autonomous weapons can detect, artificial intelligence cannot provide "meaningful interpretation of images" (to borrow Santos’s phrase) in a complex battlespace where humans engage in an undefined set of behaviors.

Gaining clarity on autonomous weapons means abandoning the notion that they are merely smarter smart bombs. Precision munitions that "limit collateral damage" are merely precise in their ability to strike a particular point in time and space. That point, whether painted by a human being with a laser or specified through coordinates and satellites, is still set by a human. The human chooses that target. The weapon’s precision concerns only the probability that the weapon will land on that exact spot. Autonomous weapons, on the other hand, would choose their own targets. They would choose the munition to launch toward a target. That munition might be a "smart bomb" or it might be a "dumb bomb," but the precision isn’t the issue. The very choice of target is the issue.

Thus it does not help to confuse matters by misclassifying autonomous weapons, discussing their deployment in operationally inappropriate environments, or assuming that their development will yield cleaner war with less collateral damage. Such approaches do nothing to address the very real challenges that autonomous weapons present. What’s really needed is for the international community, and for international organizations such as the United Nations, to take a timely and decisive stand on the matter. How would regulation work? What might a ban look like? These are questions that member states must answer. But it’s time to begin answering them and stop engaging in meaningless chatter.

Autonomous weapons and the arduous search for civilian safety

The other authors in this roundtable express reasonable concerns about autonomous weapons, but give too little consideration to the civilian carnage caused by terrorists—carnage that might someday be reduced by autonomous weapons systems under effective regulation.

Lethal autonomous weapons require effective international regulation—that's one point on which all authors in this roundtable agree. One participant, Heather Roff, argues for banning autonomous weapons outright in addition to regulating them. But a blanket ban is very unlikely to be enacted. This makes international regulation, administered through an effective regime, the only viable path forward. Ideally, a regulatory system would both limit the collateral damage that using autonomous weapons might entail and regulate the weapons' development and proliferation. But the ultimate point of a regulatory system would be to enhance autonomous weapons' chances of contributing to, rather than detracting from, civilian safety.

Might autonomous weapons pose dangers to civilians? Certainly. As Roff argued in Round One, though future technological advancements may enable autonomous weapons to identify permissible targets, such advancements "would not guarantee that civilians would not be targeted." But this observation, though correct, overlooks the dangers posed to civilians by non-state actors who thrive on remaining indistinguishable from local civilian populations. Take India as a case in point. According to the Institute for Conflict Management in New Delhi, nearly 21,000 Indian civilians and security personnel have been killed in terrorist violence since 1988. So, though one can't argue that autonomous weapons would necessarily promote rather than detract from civilian safety—they remain a developing technology—one can argue with conviction that civilian safety, in the absence of autonomous weapons, is deeply compromised by non-state actors. Indeed, it's difficult under such circumstances to reach the "clear understanding" of the term "civilian safety" that Roff seeks. Amid insurgencies, civil wars, or other types of asymmetric violence, any such search is bound to be arduous.

Paulo Santos, meanwhile, argues that "autonomous weapons have already been accepted into warfare" because modern society has accepted "warfare as a video game." But in a place such as India, society hasn't accepted war at all—rather, war in unconventional, asymmetric, and irregular forms has been thrust upon Indians by non-state and transnational actors. These groups receive from complicit states or state entities the resources needed to conduct terrorism. This sort of warfare, even as it endangers civilian populations, creates regional instability, which itself can spawn further terrorism or armed insurgencies.

Open, democratic nations where freedom is upheld as a value—nations whose leaders seek to behave as responsible international stakeholders—do not target innocent civilians. It is non-state networks that employ mass violence against civilians to advance their agendas. This imbalance puts democratic nations at an automatic disadvantage and civilians at grave risk. If autonomous weapons systems under effective regulation can prevent mass civilian bloodshed while minimizing collateral damage, they deserve serious consideration as a legitimate technology to be employed during conflict and war.

 

Autonomous and unaccountable

Though this roundtable's participants agreed in Round One that autonomous weapons should be subject to international regulation, no one spent much time discussing how a regulatory system might be created.

Perhaps that's because all three authors concentrated on points that—although difficult to disagree with—were nonetheless important to establish at the outset. Civilian safety should be a top priority both in wartime and peacetime. Autonomous weapons can't maximize chances of military success and minimize the risk of collateral damage today, but someday they might gain those abilities. Advanced autonomous weapons, if ever deployed, could compromise basic human rights.

With all that established, Monika Chansoria and I both argued for regulating rather than banning autonomous weapons—though she and I arrived at that position for very different reasons. Heather Roff, meanwhile, argued for regulation and a ban. But again, each author discussed only briefly how to establish regulation—admittedly, a difficult issue. Autonomous weapons, by definition, are meant to make decisions by themselves. How then to assign responsibility for crimes they commit? Who is to blame when a lethal autonomous machine malfunctions?

Consider how many times you've heard a phrase such as "The problem was caused by system error." Language of this sort generally cuts off further discussion. So it's easy to imagine scenarios in which innocent civilians are killed, perhaps scores of them, but no one is held accountable because a "system error" is at fault. And indeed, who would be to blame? The mission commander who deployed an autonomous weapon, expecting it to engage with an appropriate target? The weapons' developers, who had nothing to do with targeting at all?

Autonomous weapons would automatically produce accountability gaps. But assigning responsibility for the actions of autonomous military machinery really shouldn’t be so dissimilar from assigning responsibility in other military operations: Responsibility should follow the chain of command. Therefore, the organization or individuals who gave the order to use an autonomous weapon should be held responsible for the actions of the machine. "System failure" should never justify unnecessary casualties. If that idea were incorporated into international humanitarian law and international human rights law—which currently govern only human agents, not machines—then these arenas of international law (discussed at length by Roff in Round One) might provide a sufficient basis for regulating autonomous weapons.

Human beings have learned to live with military innovations ranging from aerial bombardment to nuclear weapons. They've even learned to live with terrorist rampages. People will likewise become accustomed to increased autonomy in killing machines. That should never preclude bringing to justice people responsible for war crimes, no matter the tools used to perpetrate the crimes.

Then again, it's not clear whether the international community would even find out about cases in which autonomous weapons killed innocent civilians. The secrecy surrounding the US military drone program doesn't inspire much confidence in that regard. In warfare, accountability gaps are common. They are created by the inherent secrecy of military operations, the complacency of the media, and public attitudes rooted in ignorance. Accountability gaps—which will continue to exist with or without autonomous weapons—bring the Bulletin's Doomsday Clock closer to midnight than autonomous weapons ever will.

 

Round 3

Distinguishing autonomous from automatic weapons

My roundtable colleagues Paulo E. Santos and Monika Chansoria both argue for regulating rather than banning autonomous weapons. But they never define precisely what they would regulate. This is a troublesome oversight—anyone arguing for regulation of weapons or their actions ought to have a very clear idea what regulation entails.

Autonomous weapons, according to the US Defense Department, are weapons that select a target and fire without intervention from a human operator. But what exactly does "select" mean? How about "intervention"? These questions are more subtle than they seem.

“Select” could mean scanning a particular space for a sensor input—say, a radar signature or a facial image. But in that case the weapon is not selecting a target. Rather, it is hunting for a preselected target. A human has actually selected the target, either through programming the target parameters or identifying a target object or target area. But a weapon of this sort isn’t truly autonomous; it’s automatic.

Then again, "select" could refer to the mere act of sensing a target. But modern militaries would find such a reading problematic. Many existing weapons systems—cruise missiles, counter-rocket and mortar defense systems, torpedoes, and sea mines—sense targets and fire on them. It is highly unlikely that any state would characterize these systems as autonomous.

So what distinguishes autonomous weapons from automatic weapons—and therefore subjects them to regulation or prohibition? I would answer this question by distinguishing sophisticated automatic weapons from limited learning autonomous weapons systems.

Sophisticated automatic weapons are incapable of learning, or of changing their goals. But due to their mobility and, in some cases, their autonomous navigation capacities, they are capable of wreaking havoc on civilian populations. Further, they cannot uphold the principles of necessity, precaution, and proportionality. Therefore, they would most likely be used as anti-materiel weapons; it is unlikely they would be used against personnel.

Limited learning weapons, meanwhile, are capable both of learning and of changing their sub-goals while deployed. They truly select a target among a range of objects or persons. In short, they pursue military objectives—just as soldiers decide whether to fire on a person, vehicle, or building, or how best to "take a hill." These are the truly autonomous weapons systems. (No state, by the way, has come out in favor of using autonomous weapons against people. Even states that oppose a ban on or regulation of autonomous weapons have maintained that autonomous weapons systems can only be used in "operationally appropriate situations" in "uncluttered environments." So Chansoria’s suggestion that autonomous weapons could be used in counterterrorism operations has no support in diplomatic or military circles.)
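
The distinction can be made concrete with a deliberately abstract sketch. Everything in it is hypothetical and illustrative; it implements no real system, and the point is only where the selection decision lives.

```python
# Deliberately abstract sketch of the distinction drawn above. All names are
# hypothetical, and nothing here describes a real system; the point is only
# where the selection decision lives.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class SensedObject:
    features: dict  # e.g. {"radar_signature": ..., "speed": ...}

def automatic_match(obj: SensedObject, preset_parameters: dict) -> bool:
    """Sophisticated automatic weapon: a human preselected the target
    parameters; the machine only checks whether a sensed object matches them."""
    return all(obj.features.get(k) == v for k, v in preset_parameters.items())

def limited_learning_select(
    candidates: Sequence[SensedObject],
    learned_score: Callable[[SensedObject], float],
    threshold: float,
) -> Optional[SensedObject]:
    """Limited learning autonomous weapon: the machine ranks candidates with a
    model whose behavior was not fully fixed in advance, so no human chose
    this particular object."""
    best = max(candidates, key=learned_score, default=None)
    if best is not None and learned_score(best) >= threshold:
        return best
    return None
```

In the first function, a human fixed the target parameters in advance; in the second, the machine's learned scoring function, not a person, determines which object is chosen, and that is the autonomy at issue.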

My colleagues suggest that I have denied the potential of artificial intelligence to surpass certain human capabilities, or have denied that artificial intelligence is more suited to certain tasks than humans are. I don't deny any such thing—which is precisely why I worry about the risks that limited learning weapons would pose if developed and fielded. These risks—which include changing the face not merely of war but also of peacetime civilian safety and freedom—are so large that the weapons posing them must be banned outright. And before anyone applauds, as Chansoria seems to do, future weapons capable of "qualitative judgment," it's best to remember that "qualitative judgment" could only emerge after autonomous technologies had passed through an arduous, dangerous middle ground of "limited" intelligence and little judgment.

Hard questions. So what, on a practical level, should be done about the weapons systems considered in this roundtable?

Where sophisticated automatic weapons are concerned, governments must think carefully about whether these weapons should be deployed in complex environments. States should institute regulations on how they can be used. But truly autonomous systems—limited learning or even more sophisticated weapons—ought to be banned. Their use would carry enormous risk for civilians; might escalate conflicts; would likely provoke an arms race in artificial intelligence; and would create a need for sensor networks throughout all battlespaces (and cities). Indeed, pervasive surveillance alone is worrisome enough to justify a ban on autonomous weapons.

It is unpersuasive to claim, as my colleagues have done, that a ban is unlikely to be enacted or would be impractical if instituted. Other technologies, such as blinding lasers, have been banned before use—why not autonomous weapons? And just as chemical weapons were banned with the support of the world's scientists and its chemical industry, the challenges of autonomous weapons can be addressed through cooperation among scientists, roboticists, and the technology industry. What's more, some militaries already have the capability to incorporate limited learning algorithms in weapons, but they have not deployed these capabilities due to uncertainty and risk. Since militaries are already showing restraint, why not press them to reject autonomous weapons completely?

Autonomous weapons entail hard questions and serious challenges. It's time to address them. Advancing Panglossian notions about the nature of future conflict accomplishes nothing.

 

Autonomous weapons: Useful if well regulated

Cuba, Ecuador, Egypt, Pakistan, and the Vatican—only these five states, out of the 87 that sent representatives to a 2014 UN conference on lethal autonomous weapons, submitted statements urging that autonomous weapons systems be banned. Meanwhile, several dozen nations may be developing military robotics. In this environment, it seems highly unlikely that lethal autonomous weapons will be banned—and also unlikely that a ban would prove practical if instituted.

My roundtable colleague Heather Roff seems to dismiss the very possibility that autonomous weapons could ever surpass human beings where battlefield decision making is concerned. On that point Paulo E. Santos has already rebutted Roff—citing research, for example, which suggests that face recognition algorithms may come to match face pairs better than humans can. And then there's the argument that autonomous weapons may outperform humans in some situations precisely because they are not human. Heritage Foundation scholar Steven Groves argues that autonomous weapons "may perform better than humans in dangerous environments where a human combatant may act out of fear or rage."

And contrary to what Roff has suggested, autonomous weapons could play a number of useful military roles—all while conforming to international humanitarian law. Groves argues that autonomous weapons operating in permissive environments might one day attack tank formations in remote areas such as deserts—or attack warships positioned far from commercial shipping routes. Such uses of autonomous weapons would conform to the principle of distinction—an element of international humanitarian law that requires parties to a conflict to distinguish between civilians and combatants and to direct attacks only against the latter. In combat zones with no civilians or civilian objects present, it would be impossible for autonomous weapons to violate the principle of distinction.

Likewise, autonomous weapons deployed in the air could perform important military functions while adhering to the principle of proportionality in attack, another element of international humanitarian law. Autonomous weapons, for example, might hunt enemy aircraft in zones where civilian aircraft aren't permitted to fly. They might be programmed to recognize enemy aircraft by their profiles, their heat signatures, their airspeed threshold, and so forth—all of which would distinguish them from civilian aircraft. In such situations, the advantages of attacking enemy aircraft could not be outweighed by the risk of excessive civilian casualties. That risk would approach zero. Much the same, Groves says, would hold true under water—autonomous weapons systems could patrol waters and attack enemy submarines without posing much risk of excessive collateral damage. Roff, evidently taking none of this into account, produces a rather generic argument against autonomous weapons.
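
A toy sketch of the sort of feature-threshold check this scenario implies appears below. The field names and threshold values are invented for illustration only; no real identification system is this simple.

```python
# Toy illustration of the feature-threshold check described above. Field names
# and threshold values are invented for illustration; no real identification
# system is this simple.
from dataclasses import dataclass

@dataclass
class AirTrack:
    profile_match: float    # 0..1 similarity to a stored military airframe profile
    heat_signature: float   # relative infrared intensity, 0..1
    airspeed_knots: float

def matches_enemy_aircraft_profile(track: AirTrack) -> bool:
    """Crude conjunction of the cues named in the text: silhouette,
    heat signature, and airspeed above typical civilian traffic."""
    return (
        track.profile_match > 0.9
        and track.heat_signature > 0.8
        and track.airspeed_knots > 600.0
    )

# An airliner-like track fails the profile and airspeed checks:
print(matches_enemy_aircraft_profile(AirTrack(0.2, 0.5, 460.0)))  # False
```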

Traditional military strategies and tactics alone cannot adequately contend with some of the challenges presented to liberal democratic states by non-state and transnational actors, including unconventional, sub-conventional, asymmetric, and irregular forms of conflict. States must limit the scope and intensity of the military force they apply because of norms requiring that collateral damage be minimized and proportionality be maintained. Non-state actors do not respect such norms. This creates a political and psychological asymmetry that must be addressed on future battlefields. To the extent that autonomous weapons under appropriate regulation can aid in that project, they ought not to be rejected.

I argued in Round One that lethal autonomous weapons could be considered ethical as long as the collateral damage they inflict isn't out of proportion to their "contributions to peace, to security and stability, and to prevention of civilian bloodshed on a mass scale." My stand finds resonance with Heritage scholar James Jay Carafano’s argument that autonomous weapons have "the potential to increase … effectiveness on the battlefield, while … decreasing [collateral] damage and loss of human life." I stand by my Round One statement—and against an improbable, impractical ban on autonomous weapons.

 

Banning autonomous weapons: Impractical and ineffective

Computers have long outperformed humans at certain functions that are perceived to require "intelligence." A famous early example is the Bombe machine developed during World War II at Bletchley Park, which allowed the United Kingdom to decipher messages encoded by the German military's Enigma machines. In 1997, the IBM computer Deep Blue beat world chess champion Garry Kasparov in a six-game match. In 2011, IBM's Watson—which "uses natural language processing and machine learning to reveal insights from large amounts of unstructured data"—appeared on the television quiz show Jeopardy and outplayed a pair of former champions. So I reject my colleague Heather Roff's Round Two suggestion that I had contradicted myself by writing, on one hand, that it's likely impossible for artificial intelligence ever to achieve human-level intelligence; and on the other hand, that autonomous weapons might in the future perform some military functions better than human combatants can. From my perspective, no contradiction exists.

In a similar vein, Roff rejected an argument by Monika Chansoria, this roundtable's third participant, that autonomous weapons might become useful in the fight against terrorism. According to Roff, machines will never be capable of distinguishing terrorists from civilians because such a capability would require "artificial intelligence so sophisticated that it exceeds human intelligence where the ability to make certain distinctions is concerned." But some findings suggest that state-of-the-art face recognition algorithms could outperform humans in matching face pairs. Indeed, recognizing faces with great precision under varying observation conditions is the key capability that would allow autonomous weapons to combat terrorism effectively. To be sure, the algorithms that underlie machine perception now face a number of limitations, such as an inability to interpret fast-changing situations. But I see no reason why such obstacles can't be overcome in the future (even if, in the end, machine perception could be better deployed in surveillance systems than in weaponized machines).
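
For readers unfamiliar with how face-pair matching is usually framed, here is a minimal sketch: a trained model (assumed to exist elsewhere and not shown) maps each face image to an embedding vector, and two images are declared the same person when their embeddings are sufficiently similar. The threshold value and the example vectors are purely illustrative.

```python
# Sketch of how face-pair matching is typically framed: a trained model maps
# each face image to an embedding vector, and two faces are declared the same
# person when their embeddings are close enough. The embedding model itself is
# assumed to exist elsewhere; the threshold value is purely illustrative.
import numpy as np

def same_person(embedding_a: np.ndarray, embedding_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Compare two face embeddings by cosine similarity; declare a match above
    the threshold. Raising the threshold trades missed matches for fewer
    false matches."""
    a = embedding_a / np.linalg.norm(embedding_a)
    b = embedding_b / np.linalg.norm(embedding_b)
    return float(a @ b) >= threshold

# Usage with made-up vectors (a real system would get these from a deep network):
print(same_person(np.array([0.9, 0.1, 0.2]), np.array([0.85, 0.15, 0.25])))  # True
print(same_person(np.array([0.9, 0.1, 0.2]), np.array([0.0, 1.0, 0.0])))     # False
```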

The real problem with deploying autonomous lethal systems to combat terrorism is that killing suspected terrorists—while denying them the right to a fair trial—would amount to state assassination. And in any event the very concept of "terrorism" is ideologically charged. The independence movements in the Americas during the 18th and 19th centuries, for example, could have been interpreted as terrorist movements in the European capitals of the day.

Nuanced undertaking. Roff argues for a ban on lethal autonomous weapons. As a pacifist, I agree that, ideally, an outright ban is the best approach. But so much automation has already been integrated into weapons design that banning lethal autonomous weapons seems akin to stopping the development of warfare itself—a practical impossibility. And a ban, even if instituted, would likely be ineffective (and might even qualify as naive). Suppose a ban were implemented under conditions similar to those described in last year's open letter on autonomous weapons by researchers into artificial intelligence and robotics. The development of fully autonomous lethal weapons would be outlawed—but remote-controlled killing machines, cruise missiles, and other weapons with various levels of automation would not. In that situation, how could the international community be certain that a remotely controlled weapon deployed in conflict was not entirely controlled by an artificial agent? A weapon's interface need not change according to whether the agent that controls it is human or artificial. And humans could oversee a weapon's actions in either case. But in one case a human would make targeting decisions and in the other case an artificial intelligence would do so.
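
The sketch below illustrates that architectural point. The names are hypothetical; what matters is that the weapon-facing interface is identical whether a human or an artificial agent supplies the decision, so an outside observer inspecting the interface alone could not tell the difference.

```python
# Hypothetical illustration of the point above: the control interface is the
# same whether a person or an artificial agent makes the targeting decision.
# All names are invented; this sketches an architectural claim, not a real system.
from abc import ABC, abstractmethod
from typing import Callable, List

class TargetingAgent(ABC):
    """Whatever supplies targeting decisions; the weapon sees only this interface."""
    @abstractmethod
    def choose_target(self, candidate_tracks: List[dict]) -> int:
        """Return the index of the track to engage, or -1 to hold fire."""

class HumanOperator(TargetingAgent):
    def choose_target(self, candidate_tracks: List[dict]) -> int:
        # In practice this would route to an operator console and wait for input.
        raise NotImplementedError("decision comes from a person at a console")

class ArtificialAgent(TargetingAgent):
    def __init__(self, learned_policy: Callable[[List[dict]], int]):
        self.learned_policy = learned_policy  # e.g. a trained model
    def choose_target(self, candidate_tracks: List[dict]) -> int:
        return self.learned_policy(candidate_tracks)

# From the outside, both agents expose exactly the same interface, which is
# the verification problem the text describes.
```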

This is one reason I prefer strong regulation of autonomous weapons over an outright ban on the technology. Regulation would provide the tools necessary for analyzing and understanding increased automation in warfare. It would imply constraints on the development and use of autonomous weapons. And it would strike a blow against dehumanized killing and state-sponsored assassination.

If regulation is the correct course, the question becomes how to alter international humanitarian and human rights law, which now govern only human agents, so that they can cope with automation in warfare. To be sure, this would be a nuanced undertaking, and not a trivial one. But literature upon which discussions could be based already exists. It's time to get started on this project—instead of chasing a ban that will probably never be instituted and would likely be ineffective if it were.

 

