By Michael C. Horowitz | June 24, 2016
When intelligent machines appear with weapons in movies or television shows, it generally means that humanity is on the run. From The Terminator to Battlestar Galactica, priority number one for sentient robots in science fiction is often to eliminate humanity. With new, real-world advances in artificial intelligence emerging nearly every week, prompting the White House to announce a new initiative to understand its benefits and risks, is it time for humanity to run for the hills? Spoiler alert—no.
Global science and technology leaders like Stephen Hawking and Elon Musk have expressed concern about the existential risk to human society that could come from highly advanced artificial intelligence. Echoing Oxford philosopher Nick Bostrom, Hawking and Musk fear the potentially unintended and unpredictable ways in which such systems might function. Futurist Ray Kurzweil and those associated with the Singularity movement, in contrast, view the growth of intelligent machines as a boon for humanity. Optimists argue that by harnessing their power, a “global techno-Nirvana” could emerge.
These bold and clashing visions understandably get a lot of attention—they make for great press. And certainly, the increasing integration of autonomous machines into human society, and in particular into militaries, raises important ethical, moral, and practical questions that should force us to think hard. But we are a long way from the kind of world-destroying machines that show up in the movies, and focusing the conversation on that scenario is distracting and unproductive. We need to stop talking about Skynet, the superintelligent computer system at the heart of The Terminator films, and start talking about the real ways integrating machine autonomy into weapon systems could influence the character of warfare.
To that end, a seemingly much more prosaic conversation is taking place. The Campaign to Stop Killer Robots has emerged, led by Human Rights Watch and other nongovernmental organizations committed to banning lethal autonomous weapon systems before they ever hit the battlefield. In 2015, thousands of artificial intelligence and robotics experts (plus luminaries like Hawking and Musk) signed an open letter arguing for the prohibition of “offensive autonomous weapons beyond meaningful human control.” In May, the White House came out with its initiative, recognizing that “artificial intelligence carries some risk and presents complex policy challenges.” And the countries party to the United Nations Convention on Certain Conventional Weapons have discussed autonomous weapons at annual meetings in Geneva for the last three years. The world is paying attention.
What is a killer robot?
A key sticking point in the global conversation remains fundamental uncertainty about what exactly an autonomous weapon system is, along with what it means for a weapon to be beyond human control. The US Department of Defense, like many nongovernmental organizations, defines an autonomous weapon system as one with the ability to select and engage targets without human intervention. The devil, however, is in the details.
Interpreted broadly, weapon systems that select and engage targets on their own could include computer-guided precision weapons like the Tomahawk missile used by the United States and other militaries, as well as other homing munitions that have existed for many decades. On the other end of the spectrum, autonomous weapons could be defined so narrowly as to only include intelligent machines capable of cognitive judgments on par with humans. There is a huge gulf between precision-guided weapons that reduce civilian casualties and enhance battlefield effectiveness by enabling more accurate targeting, on one hand, and sentient robotic soldiers on the other. A failure to agree on a working definition makes real discussion about the differences between potential autonomous weapon systems and existing weapons difficult.
At this year’s Convention on Certain Conventional Weapons meeting, attended by more than 100 countries, there was one sign of progress: what appears to be general agreement that humans should remain at the center of decision-making concerning the use of force, whether one describes that center with the phrase “meaningful human control” (the phrase preferred by NGOs) or “appropriate levels of human judgment” (preferred by the United States and some other governments). But even these phrases raise further questions about how specifically to define those terms, especially given the heavy automation in many modern weapon systems, such as the AIM-120 AMRAAM air-to-air missile, the next-generation LRASM anti-ship missile, or the Phalanx ship-defense system.
The vast degree of uncertainty about what autonomous weapons are makes the discussion over whether to ban them fundamentally different from the arms control dialogues of the last few decades. Land mines, cluster munitions, and blinding lasers have all been regulated in recent years, the last of these proactively, since blinding lasers were not yet in battlefield use. But those campaigns focused on discrete and well-understood weapons not viewed as central to most military operations. In the case of land mines and cluster munitions, their responsibility for horrific civilian suffering around the world also loomed large.
By contrast, how can states have a clear position on regulating or banning autonomous weapons when they do not know whether the new rules will apply to today’s increasingly ubiquitous precision arms, or only cover artificial-intelligence weapons they currently have no intention of building? Autonomous weapon systems are not a specific, discrete technology in any case. More broadly, technology guru Kevin Kelly compares artificial intelligence to electricity in its likely omnipresence in the future. If Kelly is correct, and artificial intelligence becomes embedded in devices throughout modern militaries, it could yield a classic arms control dilemma: the more important a set of systems is to the military operations of leading nations, the more difficult its regulation becomes. Definitions, here, are critically important.
Context counts.
Differentiating between types of potential autonomous weapons could offer one way forward. Some future autonomous munitions, for example, if deployed by a responsible and accountable actor following the law of war, may have much more in common with today’s cruise missiles than with humanoid soldier-robots.
Even in a situation where an autonomous weapon platform—such as an artificially intelligent version of the remotely piloted MQ-9 Reaper deployed by the US and other militaries today—selects, tracks, and engages targets on its own, context may make a difference in how we should think about it. An autonomous weapon platform operating in a naval battle in a specific geographic area, in a time-delimited fashion, where only lawful combatants are present, may raise different, less pressing issues than an autonomous Reaper conducting targeted strikes against individuals in densely populated urban environments.
Drawing analytical bright lines is an intellectual activity far removed from headline-grabbing concerns about world-destroying killer robots. But drawing them is necessary, because unless we can define what counts as an autonomous weapon system, the category will remain too broad to debate meaningfully, much less regulate.
Arms control has tended to work best when focused on counting tangible systems like missiles or bombers. It is simply easier to conceptualize, understand, and plan for controls on tangible systems than on hazily defined future technologies. Even when we can all agree on exactly what we’re talking about, though, autonomous weapons present interesting challenges. The difference between the remotely piloted MQ-9 Reaper and an autonomous Reaper is software, not hardware, raising more complex verification challenges than those presented by arms control agreements of the past. It would be hard for a country even to know whether an adversary had used an autonomous system, as opposed to a remotely piloted one. And giving broad access to the software code that operates weapon systems could raise other security challenges. In areas as uncertain as artificial intelligence, the risk of unintended consequences is high—from both inaction and action.
We must also remember that some elements of the technology are coming no matter what. In today’s world, machine learning, robotics, and artificial intelligence are being developed for use in every major industry and every part of the world, and that means they will inevitably affect the military. The commercial applications are vast, and discovery is unlikely to slow down soon. There needs to be a continuing dialogue between scientists, who understand trends in artificial intelligence and the realm of the possible, and defense experts and military leaders who bring familiarity with how advanced weapons operate today and deep knowledge about how states think about weapons development.
US Army Air Forces Gen. Henry Arnold wrote of industrial-age warfare in 1943 that “[l]aw cannot limit what physics makes possible.” The international community has come a long way in its ability to leverage international law and regulate the use of military force since 1943. What is necessary in the current conversation about artificial intelligence and military systems, however, is to push past worst-case fears and best-case hopes. Instead, we should attempt to better understand the complexities of how increasing autonomy may shape warfare. Only then will we be able to determine what, if anything, individual countries and the international community should do to make sure the intersection of machine autonomy and military systems happens in the way most likely to improve, rather than threaten, the future of humanity.