By Monika Chansoria, February 1, 2016
Cuba, Ecuador, Egypt, Pakistan, and the Vatican—only these five states, out of the 87 that sent representatives to a 2014 UN conference on lethal autonomous weapons, submitted statements urging that autonomous weapons systems be banned. Meanwhile, several dozen nations may be developing military robotics. In this environment, it seems highly unlikely that lethal autonomous weapons will be banned—and also unlikely that a ban would prove practical if instituted.
My roundtable colleague Heather Roff seems to dismiss the very possibility that autonomous weapons could ever surpass human beings in battlefield decision making. On that point Paulo E. Santos has already rebutted Roff, citing, for example, research suggesting that face recognition algorithms may come to match face pairs more accurately than humans can. And then there is the argument that autonomous weapons may outperform humans in some situations precisely because they are not human. Heritage Foundation scholar Steven Groves argues that autonomous weapons "may perform better than humans in dangerous environments where a human combatant may act out of fear or rage."
And contrary to what Roff has suggested, autonomous weapons could play a number of useful military roles—all while conforming to international humanitarian law. Groves argues that autonomous weapons operating in permissive environments might one day attack tank formations in remote areas such as deserts—or attack warships positioned far from commercial shipping routes. Such uses of autonomous weapons would conform to the principle of distinction—an element of international humanitarian law that requires parties to a conflict to distinguish between civilians and combatants and to direct attacks only against the latter. In combat zones with no civilians or civilian objects present, it would be impossible for autonomous weapons to violate the principle of distinction.
Likewise, autonomous weapons deployed in the air could perform important military functions while adhering to the principle of proportionality in attack, another element of international humanitarian law. Autonomous weapons, for example, might hunt enemy aircraft in zones where civilian aircraft are not permitted to fly. They might be programmed to recognize enemy aircraft by their profiles, their heat signatures, their airspeeds, and so forth, all of which would distinguish them from civilian aircraft (a schematic sketch of such rule-based recognition appears below). In such situations, the advantage of attacking enemy aircraft could not be outweighed by the risk of excessive civilian casualties; that risk would approach zero. Much the same, Groves says, would hold true under water: autonomous weapons systems could patrol the seas and attack enemy submarines without posing much risk of excessive collateral damage. Roff, evidently taking none of this into account, produces a rather generic argument against autonomous weapons.
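To make that recognition logic concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from any fielded system or from this roundtable; the class, field names, and thresholds (profile_match, heat_signature, airspeed_kts, and the cutoff values) are all invented to show how several observable cues might be combined conservatively before a contact is classed as hostile.

```python
# Hypothetical illustration only: a toy rule-based check of the kind the
# paragraph above imagines. All names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class AirContact:
    profile_match: float     # similarity to known enemy airframe silhouettes, 0..1
    heat_signature: float    # infrared intensity, arbitrary units
    airspeed_kts: float      # measured airspeed in knots
    in_exclusion_zone: bool  # inside airspace closed to civilian traffic

def classify_contact(c: AirContact) -> str:
    """Classify a contact; default to 'unknown' unless every cue agrees."""
    if not c.in_exclusion_zone:
        return "do-not-engage"  # civilian traffic possible; defer to humans
    hostile_cues = [
        c.profile_match > 0.9,     # silhouette closely matches an enemy type
        c.heat_signature > 800.0,  # exhaust plume consistent with a combat jet
        c.airspeed_kts > 400.0,    # well above typical civilian cruise speeds
    ]
    return "hostile" if all(hostile_cues) else "unknown"

if __name__ == "__main__":
    contact = AirContact(profile_match=0.95, heat_signature=950.0,
                         airspeed_kts=480.0, in_exclusion_zone=True)
    print(classify_contact(contact))  # -> "hostile"
```

Note the deliberate bias in the sketch: it returns "hostile" only when every cue agrees and the contact is inside airspace closed to civilian traffic; any ambiguity defaults to "unknown" or to human review, mirroring the proportionality logic described above.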
Traditional military strategies and tactics alone cannot adequately contend with the challenges that non-state and transnational actors pose to liberal democratic states, including unconventional, sub-conventional, asymmetric, and irregular forms of conflict. States must limit the scope and intensity of the military force they apply because of norms requiring that collateral damage be minimized and proportionality maintained; non-state actors respect no such norms. This creates a political and psychological asymmetry that must be addressed on future battlefields. To the extent that appropriately regulated autonomous weapons can help redress that asymmetry, they ought not to be rejected.
I argued in Round One that lethal autonomous weapons could be considered ethical as long as the collateral damage they inflict isn't out of proportion to their "contributions to peace, to security and stability, and to prevention of civilian bloodshed on a mass scale." My position accords with that of Heritage Foundation scholar James Jay Carafano, who argues that autonomous weapons have "the potential to increase … effectiveness on the battlefield, while … decreasing [collateral] damage and loss of human life." I stand by my Round One statement, and against an improbable, impractical ban on autonomous weapons.