By Heather Roff, December 23, 2015
It’s easy to assume that autonomous weapons, as the technology improves, will someday surpass human beings in battlefield decision-making. Humans, after all, get tired. They are easily misled, or ideologically bent, or lacking in good judgment. Technologically sophisticated weapons, meanwhile, will suffer from none of these failings.
But this assumption is groundless, and it is a poor basis on which to make decisions about the future of autonomous weapons.
My roundtable colleague Paulo Santos writes that no scientific evidence supports the idea that machines might ever "demonstrate human-level intelligence." But he doesn’t dismiss the thought that "Autonomous weapons might reduce civilian casualties to a bare minimum even as they improve the odds for successful missions." Here he walks a fine and somewhat odd line, and he does the same on the question of banning autonomous weapons versus regulating them. He would prefer that autonomous weapons never exist. But he worries that a ban would prove infeasible and would merely encourage the creation of underground laboratories. So he comes down on the side of regulation.
Monika Chansoria, meanwhile, is highly concerned about protecting civilians from terrorists. On that basis she argues against banning autonomous weapons. But autonomous weapon systems have nothing to do with terrorism. They don’t represent a way to target terrorists. They are not a way to "win" the "war" against terror. They are merely weapons that can detect, select, and fire on targets without human intervention.
Yet if one is to believe, as Chansoria appears to, that autonomous weapon systems will someday gain the ability to distinguish terrorists from civilians (thus detecting and selecting one human over another), one must believe these systems will be embedded with artificial intelligence so sophisticated that it exceeds human intelligence where the ability to make certain distinctions is concerned.
If one does not assume that the technology will rely on artificial intelligence that exceeds human intelligence, I am hard pressed to see how such systems would ever be able to identify individuals who don’t wear uniforms but do actively participate in hostilities. In the words of Stuart Russell, a leading expert on artificial intelligence at the University of California, Berkeley, "‘combatant’ is not a visual category." Rather, it is a class of persons engaged in an undefined set of activities. This means that unless humans wear sensors that autonomous weapons can detect, artificial intelligence cannot provide "meaningful interpretation of images" (to borrow Santos’s phrase) in a complex battlespace where humans engage in an undefined set of behaviors.
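To make Russell’s point concrete, consider a toy sketch. Every name and label below is hypothetical and stands in for no real system; it only illustrates why any mapping from visual detections to combatant status is, at best, a proxy for a legal category the pixels do not contain:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    visual_label: str   # the kind of category an image model can output
    confidence: float

def naive_combatant_rule(scene: list[Detection]) -> bool:
    # Any rule from visual labels to combatant status is a proxy:
    # it flags a farmer carrying a rifle home, and it misses an
    # unarmed spotter who is directly participating in hostilities.
    return any(d.visual_label == "rifle" and d.confidence > 0.5
               for d in scene)

print(naive_combatant_rule([Detection("person", 0.97),
                            Detection("rifle", 0.62)]))   # True, yet perhaps a farmer
print(naive_combatant_rule([Detection("person", 0.99)]))  # False, yet perhaps a spotter
```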
Gaining clarity on autonomous weapons means abandoning the notion that they are merely smarter smart bombs. Precision munitions that "limit collateral damage" are precise only in their ability to strike a particular point in space and time. That point, whether painted by a human being with a laser or supplied as satellite-guided coordinates, is still set by a human. The human chooses the target. The weapon’s precision concerns only the probability that the weapon will land on that exact spot. Autonomous weapons, on the other hand, would choose their own targets. They would choose the munition to launch toward a target. That munition might be a "smart bomb" or it might be a "dumb bomb," but precision isn’t the issue. The very choice of target is the issue.
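The distinction can be sketched in code. In the toy example below, where every interface is hypothetical, a precision munition merely scatters around a point a human has already chosen, while an autonomous weapon’s selection rule itself decides what to strike:

```python
import random

def precision_strike(human_chosen_target: tuple[float, float],
                     error_radius_m: float) -> tuple[float, float]:
    """Impact scatters around a point a human has already chosen."""
    x, y = human_chosen_target
    return (x + random.uniform(-error_radius_m, error_radius_m),
            y + random.uniform(-error_radius_m, error_radius_m))

def autonomous_engagement(detections: list[dict]) -> dict | None:
    """The system, not a human, picks a target from what it detects."""
    # Whatever scoring rule sits here embodies the targeting decision
    # that, with a precision munition, a human operator would have made.
    candidates = [d for d in detections if d["kind"] == "vehicle"]
    return max(candidates, key=lambda d: d["score"], default=None)

# Precision: the human chose (100.0, 250.0); the weapon merely lands near it.
print(precision_strike((100.0, 250.0), error_radius_m=3.0))

# Autonomy: no human chose this target; the selection rule did.
print(autonomous_engagement([{"kind": "vehicle", "score": 0.91},
                             {"kind": "building", "score": 0.88}]))
```

The sketch’s only point is that the contested step is selecting a target, not guiding a munition to one.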
Thus it does not help to confuse matters by misclassifying autonomous weapons, discussing their deployment in operationally inappropriate environments, or assuming that their development will yield cleaner war with less collateral damage. Such approaches do nothing to address the very real challenges that autonomous weapons present. What’s really needed is for the international community, and for international organizations such as the United Nations, to take a timely and decisive stand on the matter. How would regulation work? What might a ban look like? These are questions that member states must answer. But it’s time to begin answering them and stop engaging in meaningless chatter.