The case for banning autonomous weapons rests on morality, not practicality

By Robert Hart | April 24, 2017

As the world condemns Syrian President Bashar al-Assad’s use of chemical weapons, two fundamental lessons about the nature of international weapons bans have emerged. The first is that certain classes of weapons are so morally repugnant that their use can never be justified under any circumstances. As US President Donald Trump recently reiterated: They cross a “red line.” The second is that the moral reasons underpinning a ban exist independently of the ban’s practical success.

In the wake of the chemical weapons ban’s failure in Syria, nobody is questioning whether chemical weapons ought to be banned. Instead, world leaders are doubling down, condemning the use of such weapons and, in Trump’s case, taking retaliatory action. These two lessons are useful in evaluating the current debate over whether lethal autonomous weapons systems should join chemical weapons on the list of internationally prohibited weapons.

A brief history of bans. So far, several classes of weapons are banned internationally. Chemical and biological weapons were banned in the aftermath of World War I, cementing in international law the long-standing cultural taboos against their use. Since then, the prohibitions have been strengthened a number of times, most notably through the Chemical Weapons Convention and the Biological Weapons Convention. Cluster munitions, anti-personnel land mines, and permanently blinding laser weapons are also subject to international bans, the last being the first weapons type to be preemptively outlawed (“before a stream of victims gave visible proof of its tragic effects,” the International Committee of the Red Cross noted) since 1868, when the use of exploding bullets was prohibited.

There are a number of ongoing campaigns that seek to expand this list. One campaign aims to finally ban nuclear weapons, and negotiations recently began at the United Nations with the support of more than 120 countries. Notably absent from discussions were most of the world’s nuclear powers, including the United States, the United Kingdom, China, and Russia.

Another proposal seeks to preemptively ban lethal autonomous weapons systems—commonly called “killer robots”—that could “select and engage targets without meaningful human control.” After several years of informal discussion, activists celebrated a victory late last year when the United Nations announced the official formation of a dedicated group of governmental experts.

Pros and cons of an autonomous weapons ban. Those advocating a ban on autonomous weapons point to a number of profound moral, social, and security concerns. Echoing earlier concerns about drones, some fear autonomous weapons systems will further lower the threshold for initiating war, making it a less politically costly, and thus more likely, option than ever before. Others worry that giving machines the power to decide who lives and dies is a fundamental affront to human dignity, one that can never be morally justified. Worse still, many believe autonomous weapons systems would likely violate the international rules of war, being unable to follow the principles of proportionality and discrimination. In an open letter spearheaded by the Future of Life Institute, experts also voice concerns that a failure to ban autonomous weapons would make a dangerous artificial intelligence arms race “virtually inevitable.” The letter has more than 20,000 signatories, including many prominent computer scientists, as well as luminaries like Noam Chomsky, Elon Musk, and Stephen Hawking.

But progress on a ban has been slow, and experts fear fully autonomous weapons systems will appear on the battlefield before effective regulation has been developed. To date, only two countries, the United Kingdom and the United States, have clearly outlined formal policies on autonomous weapons, and each skirts the problem by reasserting the platitude that weapons will remain under “meaningful human control.” Neither supports a ban, and the UK policy holds that existing international law will be “sufficient to regulate the use” of this new class of weaponry.

Not everyone thinks a ban is a good idea. Roboticist and roboethicist Ronald C. Arkin argues that autonomous systems could reduce the civilian casualties of war, especially when contrasted with the obvious failings of human fighters. Arkin does not oppose a ban in principle; rather, he believes a premature ban could forestall exactly this reduction in civilian deaths and injuries.

A more common line of opposition takes aim at the practicalities of a ban. In an article for The Conversation, Jai Galliott, an early supporter of a ban, outlined why he had changed his mind, observing that “We already have weapons of the kind for which a ban is sought.” Galliott added, “UN bans are also virtually useless . . . ‘bad guys’ don’t play by the rules.” Evan Ackerman, a senior writer for IEEE Spectrum’s robotics blog, concurs: “The barriers keeping people from developing this kind of system are just too low” for any ban to work.

Distinguishing between morality and practicality. These arguments largely miss the point. In its appeal for a ban on chemical and biological weapons after World War I, the International Committee of the Red Cross branded them “barbarous inventions” that could “only be called criminal.” Similarly, in its call to ban nuclear weapons, the International Campaign to Abolish Nuclear Weapons points to the urgent humanitarian case for abolition, arguing that this immense “humanitarian harm . . . must inform and motivate efforts to outlaw and eradicate nuclear weapons.”

The case for banning entire classes of weapons rests on moral grounds; practicality has nothing to do with it. To oppose a ban because the technology already exists (as has been true of every class of weapon banned to date) or because the barriers to its development are low (a growing problem with biological weapons) is to neglect entirely the moral foundations upon which bans are built, and to open the door to a line of reasoning that would preclude any weapons ban at all. Weapons are not banned because they can be, but because there is something so morally abhorrent about them that nothing can justify their use.

This brings us to the central point driving the campaign to ban lethal autonomous weapons systems: “Allowing life or death decisions to be made by machines crosses a fundamental moral line.” As recent events in Syria tragically demonstrate, weapons bans are not and never will be watertight, but it is not these practicalities that drive a ban. The prohibition on chemical weapons reflects the overwhelming belief that their use can never be acceptable, and the same reasoning applies to a ban on lethal autonomous weapons.

Enforcing a ban on autonomous weapons systems might be difficult, but those difficulties are entirely divorced from the moral impetus behind a ban. Discussion of the practical obstacles to enacting a ban, or of the current state of the technology, should likewise remain separate from the question of whether lethal autonomous weapons systems belong on the list of prohibited weapons.

