Autonomous weapons and the curse of history

By Paulo E. Santos, November 23, 2015

Autonomous weapons capable of selecting and attacking targets without human intervention are already a reality. Today they are largely restricted to targeting military objects in unpopulated areas—but fast-developing computer systems with enormous processing power and robust algorithms for artificial intelligence may soon force nations to make concrete choices about deploying fully autonomous weapons in urban warfare.

But would automated warfare, as some observers claim, minimize collateral damage—or simply result in mass destruction? The answer isn't clear. What's clear is that targeting decisions made by human beings are often extremely bad. To be sure, it's important to discuss the ethics of autonomous weapons and debate whether they should be banned, regulated, or left to develop without restrictions. But dehumanized killing in all its forms is ultimately the issue.

Optimized casualties. First, what's meant by "autonomous weapons" anyway? It's a term with unclear boundaries. Cruise missiles and remote-controlled drones are in some sense autonomous, and both have been deployed widely on the battlefield. But when people speak of autonomous weapons, they generally mean weapons that have state-of-the-art capabilities in artificial intelligence, robotics, and automatic control and can, independently of human intervention, select targets and decide whether to strike them.

It's also important to understand what "artificial intelligence" means—or, more to the point, what it doesn't mean. The artificial intelligence portrayed in films and fantasy novels often involves machines that demonstrate human-level intelligence. There is currently no scientific evidence that such a thing is even possible. Instead, artificial intelligence concerns the development of computational algorithms suitable for reasoning tasks—that is, problem solving, decision making, prediction, diagnosis, and so forth. Artificial intelligence also involves generalizing or classifying data—what's known as machine learning. And intelligent systems might include computer vision software that aims ultimately to provide meaningful interpretations of images. Functions such as these don't add much excitement to Hollywood movies, but they are of great interest in the development of autonomous weapons.
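To make that less abstract, here is a minimal sketch of what "machine learning" means in practice: a program that generalizes from labeled examples. It is written in Python with the open-source scikit-learn library and its bundled dataset of handwritten digits; the dataset, library, and model are illustrative choices of mine, not anything drawn from a weapon system.

```python
# Illustrative only: a classifier that "learns" to recognize handwritten digits
# from labeled examples, using the open-source scikit-learn library.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 1,797 small grayscale images of the digits 0-9, with labels

# Hold out a quarter of the images to test how well the model generalizes
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)  # classify each image by its nearest labeled examples
model.fit(X_train, y_train)                  # "learning" = indexing the labeled training images

print(f"accuracy on images the model has never seen: {model.score(X_test, y_test):.2f}")
```

The "learning" here is statistical generalization from examples, not understanding; it is useful for recognizing patterns in images, and a world away from human-level intelligence.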

Some argue that applying artificial intelligence to warfare, especially via autonomous weapons, might optimize casualties on the battlefield. Intelligent robotics systems, so the argument goes, could identify targets precisely and efficiently. They could engage in combat in such a way that collateral damage would be minimized—certainly when compared to many missions executed by humans, such as the October 3 attack by a US Air Force gunship on a Doctors Without Borders hospital in Kunduz, Afghanistan. Autonomous weapons might reduce civilian casualties to a bare minimum even as they improve the odds for successful missions.

But similar arguments could have been marshaled for most innovations in the history of weaponry, all the way from gunpowder to the "surgical strikes" of the first Iraq War (which were portrayed so glamorously on television). And even if new weapons do manage to "optimize" killing, they also dehumanize it. For centuries, soldiers aimed at a person. Now, they sometimes aim at a target in a kind of video game. In the future they may not aim at all, leaving that job to a machine. But the differences between, say, a fully autonomous weapon system and a cruise missile or remote-controlled military drone are really more technical than ethical or moral. If modern society accepts warfare as a video game—as it did by accepting the "surgical strikes" of the 1990s—autonomous weapons have already been accepted into warfare.

Recently a number of scientists—this author among them—signed an open letter calling for a "ban on offensive autonomous weapons beyond meaningful human control." As I re-read the letter now, I notice afresh that it proposes a ban only on weapons that "select and engage targets without human intervention," while it excludes "cruise missiles or remotely piloted drones for which humans make all targeting decisions." In a sense this formulation implies that indiscriminate killing of large numbers of people—whether soldiers or civilians, adults or children—is allowable as long as humans make the targeting decisions. But when one examines humanity's history, it is hard to see why human control of weapons is so much better than autonomous control might be. To take one example from the 20th century—from among far too many choices—human control did not prevent the mass murder in August 1945 of an estimated 200,000 civilians in Hiroshima and Nagasaki (though perhaps those atrocities could have been prevented if the development and use of nuclear weapons had come under effective international regulation as soon as scientists became aware that building such weapons was possible).

I signed the open letter as a pacifist. I would sign any letter that proposed a ban on the development and production of weapons. But I do not believe that an outright international ban on autonomous weapons would prevent their development—after all, research into advanced lethal intelligent robotic systems is already decades old. And compared to nuclear and usable biological weapons, whose development requires very specialized and expensive laboratories and access to easy-to-track materials, autonomous weapons are easy to make. Any existing laboratory for intelligent robotics could, with modest funding and within weeks, build from scratch a mobile robot capable of autonomously tracking and firing on anything that moves. The robot would get stuck at the first stairway it encountered, but it would nonetheless be a basic autonomous weapon.

There is no feasible way to ensure that autonomous weapons will never be built. A ban on their development would simply be an invitation to create underground laboratories, which would make it impossible to control the weapons or hold accountable the entities that developed them. What is feasible—through effective international regulation—is to ensure that the development of autonomous weapons is analyzed and tracked on a case-by-case basis. Strict rules would govern autonomous weapons' targets, and deployment of the weapons would have to comply with international humanitarian law; if compliance proved impossible, the weapons would never be deployed in the field. Finally, a system would have to be established for holding accountable any organization that, in creating and deploying autonomous weapons, fails to abide by the regulations that govern them.

 

