Autonomous and unaccountable

By Paulo E. Santos, December 7, 2015

Though this roundtable's participants agreed in Round One that autonomous weapons should be subject to international regulation, no one spent much time discussing how a regulatory system might be created.

Perhaps that's because all three authors concentrated on points that—although difficult to disagree with—were nonetheless important to establish at the outset. Civilian safety should be a top priority both in wartime and peacetime. Today's autonomous weapons cannot both maximize the chances of military success and minimize the risk of collateral damage, but someday they might gain those abilities. Advanced autonomous weapons, if ever deployed, could compromise basic human rights.

With all that established, Monika Chansoria and I both argued for regulating rather than banning autonomous weapons—though she and I arrived at that position for very different reasons. Heather Roff, meanwhile, argued for regulation and a ban. But again, each author discussed only briefly how to establish regulation—admittedly, a difficult issue. Autonomous weapons, by definition, are meant to make decisions by themselves. How then to assign responsibility for crimes they commit? Who is to blame when a lethal autonomous machine malfunctions?

Consider how many times you've heard a phrase such as "The problem was caused by system error." Language of this sort generally cuts off further discussion. So it's easy to imagine scenarios in which innocent civilians are killed, perhaps scores of them, but no one is held accountable because a "system error" is at fault. And indeed, who would be to blame? The mission commander who deployed an autonomous weapon, expecting it to engage an appropriate target? The weapon's developers, who had nothing to do with targeting at all?

Autonomous weapons would inevitably produce accountability gaps. But assigning responsibility for the actions of autonomous military machinery shouldn't be so different from assigning responsibility in other military operations: Responsibility should follow the chain of command. Therefore, the organization or individuals who gave the order to use an autonomous weapon should be held responsible for the machine's actions. "System error" should never justify unnecessary casualties. If that idea were incorporated into international humanitarian law and international human rights law—which currently govern only human agents, not machines—then these arenas of international law (discussed at length by Roff in Round One) might provide a sufficient basis for regulating autonomous weapons.

Human beings have learned to live with military innovations ranging from aerial bombardment to nuclear weapons. They've even learned to live with terrorist rampages. People will likewise become accustomed to increased autonomy in killing machines. But that acceptance should never preclude bringing to justice the people responsible for war crimes, no matter the tools used to perpetrate them.

Then again, it's not clear whether the international community would even find out about cases in which autonomous weapons killed innocent civilians. The secrecy surrounding the US military drone program doesn't inspire much confidence in that regard. In warfare, accountability gaps are common. They are created by the inherent secrecy of military operations, the complacency of the media, and public attitudes rooted in ignorance. Accountability gaps—which will continue to exist with or without autonomous weapons—bring the Bulletin's Doomsday Clock closer to midnight than autonomous weapons ever will.
