18 September 2017

Will artificially intelligent weapons kill the laws of war?

Herbert Lin

Herbert Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and research fellow at the Hoover Institution, both at Stanford University.

On September 1, Vladimir Putin spoke with Russian students about science in an open lesson, saying that “the future belongs to artificial intelligence” and whoever masters it first will rule the world. “Artificial intelligence is the future, not only for Russia, but for all humankind,” he added. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Putin also said he would not like to see any nation “monopolize” the field, asserting that “[i]f we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today.”

So Putin says he will share Russian AI with the rest of the world. Whether or not one believes that claim, it’s hard to imagine that any nation will have a “monopoly” on the technology—so for the moment, let’s assume roughly equal levels of AI sophistication for Russia and the West. What would it mean for the future of armed conflict to integrate equal levels of artificial intelligence (AI) into future military systems—not only those of the West and those of Russia, but for any nations that might face off in armed conflict?

The level of technological sophistication is only one aspect of technology’s impact on the physical battlefield. There are two other important aspects of that impact: The first involves the numbers of fielded systems that engage in combat; after all, any given system can be in only one place at a time, and more systems mean greater reach and coverage.

The second is how they are used—often captured under the rubric of the doctrine that guides mission planning. Military commanders want to accomplish certain objectives, and they deploy and use the assets available to them accordingly. They need to specify what targets are of interest, when these targets should be attacked, what the rules of engagement should be, and so on.

For the sake of argument, let’s assume that the numbers of systems in a conflict are roughly equal. Then, by assumption, the only significant difference between the two sides will be doctrinal. What would be the key differences between military doctrines of various nations regarding the use of these AI-driven systems?

It is fair to say that military theorists in all major nations are now considering the impact that AI-enabled weapon systems might have in combat. Doctrinal discussions are ongoing within militaries around the world, and no one knows the full shape and contours of future doctrines for any nation. But one might still be able to make inferences based on our knowledge of past practice.

In particular, there is a great deal of Western writing concerning the extent to which the use of AI-enabled weapons will conform to international humanitarian law, i.e., jus in bello, or the laws of war. A typical issue centers on how these weapons will be able to distinguish between civilian and military entities in conflict, if they can at all. In many Western nations, especially the United States, conformance to the laws of war has a high priority in planning for military operations, even if US forces in practice have from time to time not fully observed the laws of war. The US Defense Department employs thousands of lawyers (some estimates run as high as 10,000, though that number seems excessively high to me) doing all manner of legal work related to Defense functions. Another source reports that at least a few hundred lawyers oversee legal issues related to operational mission planning. Neither estimate cites source data that can be verified independently, but few observers doubt that lawyers do play an important role in operational mission planning.

It’s easy to imagine that other nations might not have a comparable level of concern about military operations complying with the laws of war. So imagine a Western nation and one of these other nations, armed with similar quantities of AI-enabled weapons of roughly comparable technical sophistication, engaged in armed conflict over or in territory where civilians are present. Is it more likely that military advantage will accrue to the side that exhibits less caution and uses its weapons more aggressively or to the other side?

For me, the answer is clear from a military standpoint. Indeed, that is the point of the two sources cited above complaining about the number of lawyers that participate in planning for military operations—they worry that legal judgments override operational necessities and impede or degrade US operational effectiveness. To the extent that diligent compliance with the laws of war translates into less effective combat operations (and I have never seen an argument or evidence to the contrary), battles between forces that are equally matched qualitatively and quantitatively are likely to be won by those who are less diligent with respect to compliance.

A review of the history of unrestricted submarine warfare is instructive in this regard. Unrestricted submarine warfare refers to the wartime practice of submarines sinking civilian ships (such as merchant ships) without warning; such warfare was first practiced in World War I. Article 22 of the London Naval Treaty of 1930 forbade this practice and required submarine commanders to provide for the safety of those on board before attacking a merchant ship. However, given that civilian merchant ships carried substantial amounts of war materiel, it was useful to the war effort to sink them. Submarines giving warning to civilian ships before attacking them would greatly degrade the effectiveness of such attacks, and if the civilian ships were armed, such warning might even endanger the submarine.

During the Nuremberg trials after World War II, German Adm. Karl Doenitz was found to have violated the protocols of Article 22 when he ordered the sinking of neutral merchant vessels without warning in operational battle zones. Nevertheless, the tribunal elected to ignore these breaches of international law because of authoritative testimony and evidence that the United States and Britain had also engaged in unrestricted submarine warfare.

As the history of unrestricted submarine warfare demonstrates, humanitarian motivations were ignored when observing those restrictions compromised combat effectiveness. It’s not unimaginable that a similar fate might await the laws of war when AI-enabled weapons become ubiquitous.