
Who’ll want artificially intelligent weapons? ISIS, democracies, or autocracies?

By Michael C. Horowitz | July 29, 2016

One of the biggest fears about the nexus of artificial intelligence and the military is that machine learning—a type of artificial intelligence that allows computers to learn from new data without being explicitly programmed—could spread rapidly through military systems and even set off an arms race. Alarm over armed robots extended even to the Dallas Police Department's use, in July 2016, of a remotely piloted (rather than autonomous) bomb disposal robot retrofitted with an explosive. That event triggered a wave of articles about weaponized robots, especially when they are used outside the military.

Discussions of the military applications of robotics have tended to focus on the United States, largely because of America's extensive use of uninhabited (also called unmanned) aerial vehicles, or "drones," to conduct surveillance and launch lethal strikes against suspected militants around the world. Yet limiting the discussion of military robotics to systems developed by wealthy, democratic countries such as the United States may miss important underlying trends.

Commercial markets are already spearheading the integration of artificial intelligence, in the form of "deep learning" and the broader category of machine learning, into the US and global economies. (In deep learning, sets of algorithms attempt to model high-level abstractions in data, using models composed of multiple linear and non-linear transformations.)
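To make that parenthetical definition concrete, here is a minimal sketch in Python of the core idea: composing linear transformations (matrix multiplications) with non-linear ones, so that successive layers can represent more abstract features of the input. The layer sizes, the ReLU non-linearity, and the random weights below are illustrative assumptions, not a description of any particular system.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear transformation: replace negative values with zero.
    return np.maximum(0, x)

# Two layers of randomly initialized weights; training would tune these.
W1 = rng.normal(size=(4, 8))  # linear map: 4 input features -> 8 hidden units
W2 = rng.normal(size=(8, 2))  # linear map: 8 hidden units -> 2 outputs

def forward(x):
    # Compose linear and non-linear transformations, layer by layer.
    hidden = relu(x @ W1)     # layer 1: linear, then non-linear
    return hidden @ W2        # layer 2: linear readout

x = rng.normal(size=(1, 4))   # one example with 4 input features
print(forward(x))             # the (untrained) network's output

Real deep-learning systems differ from this sketch mainly in scale, with many more layers and parameters, and in the training procedures that adjust the weights from data rather than leaving them random.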

As a July 15 special feature on artificial intelligence in The Economist argued, "you are already using it every day without realizing it: It helps to power Google's search engine, Facebook's automatic photo tagging, Apple's voice assistant, Amazon's shopping recommendations, and Tesla's self-driving cars." More recently, Macy's has been experimenting with artificial intelligence, or AI, to help customers shop for clothes.

If this trend continues, some observers say, the implications could be as significant and far-reaching as those of the industrial revolution. Putting aside those social, human, and industrial issues, we could soon face a future where AI is increasingly integrated into all areas of the economy, built on widely available underlying software. Export controls are unlikely to stop the spread of these new tools, because history shows that government regulations generally trail, rather than lead, the invention of new technologies. What's more, significant innovation in AI is occurring outside the United States.

At least some of the machine learning capabilities intended for purely commercial purposes will inevitably have military spillovers—reversing the dynamic of the Cold War, when military inventions helped fuel the US civilian economy. (The GPS system on your smartphone, for example, has its origins in a US Navy project to pinpoint the location of its nuclear submarines.) This point is important, because artificial intelligence is not in and of itself a weapon. It is much more like satellite access or the combustion engine—an enabler that could underlie many different functions, including military ones. This stands in contrast to, for example, stealth technology, whose only purpose is military: shielding vehicles or missiles from radar.

As the capabilities of machine learning and other forms of AI become increasingly available around the world, this democratization of technology could speed the diffusion of the militarily relevant aspects of artificial intelligence—for both state and non-state actors. As my research shows, there is a big gap between the diffusion of a technology and the diffusion of the organizational systems needed to use that technology effectively. Even so, there are important implications beyond the United States, the United Kingdom, and other democracies—implications that include militant groups such as ISIS, and non-democracies such as Russia and China.

Many scientists are concerned that if the major military powers were to mass-produce autonomous weapons, those weapons could end up on black markets and in the hands of militant groups and warlords. The problem is that the basics required to create simple versions of these weapons will likely be available to militant groups and less-developed states regardless of what the major military powers do.

Violent non-state actors such as ISIS—along with less wealthy states—could acquire the capacity to build simple autonomous weapons, particularly as commercial applications spread the relevant software and programming around the world. These capabilities, however, are likely to be fairly basic, and are arguably available already. For example, a militant group or nation-state today could take a tracked vehicle, mount a machine gun on it, connect the gun to a heat sensor, and program the gun to fire whenever the sensor registered the heat signature of a human. The result would be a relatively indiscriminate weapon that would generally violate international laws governing acceptable wartime practices, and it would likely be less efficient at accomplishing its destructive objective than other uses of force.

Moreover, if machine learning can be made to aid tasks such as connecting weapons to tracking devices and identifying targets, the resulting automation would ease the challenges of "system integration," or the bringing together of component subsystems into a single, functioning system. Consequently, it would be possible to modify commercial artificial intelligence to aid in military operational planning, making it easier for non-state actors as well as nation-states to conduct ever-more complex operations.

The conventional wisdom is that the military applications of robotics, both remotely piloted and autonomous (self-piloted), are mainly an issue for countries such as the United States. The thinking is that wealthy democracies are especially disposed toward investing in robotics to gain military advantage—and to reduce the risk to their soldiers' lives—because robotics takes maximum advantage of their edge in capital, rather than labor. There is certainly some truth in this. Democracies such as the United States and Israel have led the world in their investments in military robotics.

Yet it is a mistake to overlook how certain types of autocracies may be inclined to invest in military robotics, particularly as the potential to harness robotics with AI grows. Autocracies are built on the exclusion of much of their population from governance, and they inherently distrust many of their own people. As a result, autocratic regimes often focus on maximizing control over their own militaries.

Autocratic leaders often fear their own people—and their militaries—for good reason. Coup attempts are much more likely in autocratic regimes, the mid-July 2016 coup attempt in Turkey notwithstanding. These pressures may be somewhat different in autocracies that are more inclusive, or that perceive a low level of internal threat, but it is only natural for dictators to distrust their militaries, especially because many autocrats have themselves come to power in military-backed coups.

Given these pressures, many autocrats may be especially interested in military robotic systems, since they could be operated from more centralized command stations. An academic study I've done with researcher Matthew Fuhrmann of Texas A&M University—forthcoming in International Organization—shows that autocracies are just as likely to pursue armed drones as democracies. Using force from a centralized location, rather than a series of dispersed ones, means that an autocrat can more easily monitor troops with the ability to use lethal force. This could have several consequences. For example, a key constraint on the power of autocrats—the need for large numbers of soldiers willing to repress their own population in tense face-to-face situations while the autocrat is holed up in a palace far away—would be reduced. Would the Egyptian military have refused Mubarak's command to fire on protesters in Tahrir Square during the 2011 Arab Spring if the people making the choice were a handful of regime loyalists isolated in a command center that could use force by pushing a button? Centralization would make military control by the most committed regime loyalists easier.

In addition, harnessing AI to centralized military systems could reduce the number of troops that an autocrat needs to use military force—even beyond the possibilities for centralization offered by remotely piloted drones. Given the focus of autocrats on reducing their vulnerability to internal political threats, and their view of people as a weakness, automated solutions could appear especially useful. Robert Work, the US Deputy Secretary of Defense, made a similar argument in December 2015 when he stated: “[A]uthoritarian regimes who believe people are weaknesses in the machine, that they are the weak link in the cog, that they cannot be trusted, will naturally gravitate towards totally automated solutions. Why do I know that? Because that's exactly the way the Soviets conceived of their reconnaissance strike complex. It was going to be completely automated.”

Therefore, it is not surprising that a democracy such as the United States, with the best soldiers in the world, is focusing on a different approach in its applications of machine learning: human-machine teaming. Through what is known as its Third Offset Strategy, the US military seeks to harness emerging technologies to enhance the ability of US forces to win on the battlefield, and to deter conflicts from happening in the first place. Human-machine teaming makes sense, philosophically, when a country believes in and trusts its soldiers.

Theory aside, more autocratic countries such as Russia and China also have a very pragmatic reason to invest heavily in the weaponization of machine learning: their inherent interest in pursuing any emerging capability that could help them challenge US conventional military supremacy. This naturally makes them more likely to explore avenues such as those offered by AI. Russia, for example, announced in 2014 that it was developing robotic sentries, possibly with automatic modes able to use force without a human in the loop, to protect its intercontinental ballistic missile bases. China is similarly "investing heavily in robotics and autonomy," said Work in that same speech.

What does all of this mean? Most important, when thinking about the ways that emerging technologies such as AI may shape warfare, it is a mistake to focus simply on the United States. At lower levels, militant groups and less capable states may already have what they need to produce some simple autonomous weapon systems, and that capability is likely to spread even further for purely commercial reasons. Moreover, some autocratic countries may have inherent incentives, whether they are more concerned about internal or external threats, to harness machine learning to their new weapons systems. Finally, all of those concerned—whether ISIS, the United States, China, or others—have one thing in common: They are only likely to develop and deploy these weapons if they believe that these capabilities will give them a relative advantage. So a key task in predicting how and whether these capabilities are likely to spread lies in understanding where they may be useful, and where they may not, in modern war.

