Regulating military AI will be difficult. Here’s a way forward

By Vincent Boulanin | March 3, 2021

A U-2 spy plane. The US Air Force outfitted a U-2 spy plane with an AI system designed to perform in-flight tasks that would normally be a pilot's responsibility. Credit: US Department of Defense.

For officials running the world's most powerful militaries, investing in artificial intelligence (AI) systems for surveillance, battle management, or autonomous weapons is a no-brainer. Most analysts believe AI will be a critical technology for national security. But as the United States and other countries build better AI, they may end up pushing nervous adversaries and even allies to take steps that could increase the risk of armed conflict. The AI laggards could build up their militaries, make changes to their nuclear posture, or even put their nuclear arsenals on higher alert.

If the international community doesn't properly manage the development, proliferation, and use of military AI, international peace and stability could be at stake. It's not too early for national governments to take action; the big question is which tools are right for the job. For now, instead of the slow-moving process of drafting an international treaty, governmental negotiators should focus on efforts to build trust. They should look to some of the same mechanisms that helped the Soviet Union and the United States keep nuclear war at bay. These confidence-building measures could be key to reducing the risks posed by military AI.

Why a treaty isn’t the answer. Officials in the arms control community like to push for internationally agreed regulations such as legally binding treaties to address military risks; these have the greatest normative power. However, there are three reasons that pursuing an AI treaty similar to those that ban biological weapons, chemical weapons, and anti-personnel landmines will be challenging.

First, capturing what military AI and its risks are in the language of a treaty is no easy task. AI is a fuzzy technological area, and experts themselves still disagree about what it is and isn’t. Perceptions also change over time; what people considered AI in the 1980s is standard software technology today. The arms control community would face significant conceptual and political hurdles if it pursued a blanket regulation on military AI.

To make their task easier, treaty negotiators might try to focus on regulations targeted at the more problematic military AI applications, such as autonomous weapons systems or elements of nuclear command and control. But not even this will be easy. Negotiators at the United Nations have been debating lethal autonomous weapons since 2013 and still haven't reached a consensus on what these weapons are and how they will be used. In fact, governments have yet to articulate use cases for most applications of military AI.

Second, it might take years or even decades for governmental negotiators to reach an agreement on an AI arms control treaty. Given how fast AI technology evolves, officials may find that the eventual outcome of any international negotiation is out of tune with technological reality and obsolete from the get-go, especially if a treaty is based on technical characteristics.

The private sector, which develops most AI technology, has already been reluctant to participate in regulatory efforts on military AI, likely due to public relations concerns. Many companies have shied away from participating in the UN lethal autonomous weapons debates.

Third, the political outlook for a new arms control agreement is gloomy. As tensions rise between Russia, China, and the United States, it’s difficult to imagine these countries having many constructive discussions on military AI going forward.

A way forward? Luckily, arms control negotiators have more tools to work with than formal debates and internationally agreed regulations. They can also look to other processes and risk-reduction measures to identify and mitigate the spectrum of risks that may stem from military AI.

To a large extent, countries could effectively mitigate risks through the creative use of the suite of confidence-building measures that the arms control community came up with during the Cold War. The United States and the Soviet Union had, for instance, regular dialogues, a hotline to help them communicate during nuclear tensions, and scientific cooperation programs aimed at increasing mutual understanding and trust.

Confidence-building measures—in the form of sharing information or engaging in dialogue—are extremely valuable tools for addressing the conceptual problems posed by AI and for developing collaborative risk-reduction initiatives. They can help diplomats and other officials working on arms control and strategic issues create a common vocabulary, a shared understanding of the state and trajectory of the technology, and a mutual understanding of the risks posed by military AI applications.

Trust-building activities, such as conferences where experts interact with one another or scientific collaboration programs, can help the arms control community not only follow technological developments but also involve private-sector and academic experts in identifying risk-reduction measures. These activities are also well suited for discussing narrow technical issues, such as the testing methods that could ensure AI safety, that may otherwise be difficult to address in treaty-based arms control forums.

World governments can use confidence-building activities as keys to open politically deadlocked multilateral arms control processes, like the nearly decade-long UN debates on lethal autonomous weapons. For the major military powers of the world, a lack of mutual trust is the biggest hurdle they face in pursuing their arms control objectives on AI. Information sharing, expert conferences, and other dialogues can help them develop a better understanding of one another’s capabilities and intentions.

AI is an intangible technology; its capabilities can easily be over- or underestimated. Through confidence-building measures, governments can have greater certainty about the types of challenges that might emerge as not only their adversaries but also their allies adopt AI.

Ultimately, confidence-building processes like so-called “track 1, 1.5, and 2” dialogues—for example, an international academic conference—might help countries build a common understanding about what constitutes responsible use of military AI. Some countries are working on efforts intended to encourage norms of good behavior. The United States has invited allied countries to discuss the ethical use of AI, for example. Likewise, NATO has initiated a process to encourage its members to agree on a series of ethical principles. Several European countries have called for European Union members to start a strategic process on the responsible use of new technologies, AI in particular.

There are also a handful of dialogue processes aimed at facilitating discussions between rivals, notably the United States, Russia, and China. These are invaluable, as they provide an opportunity for experts from these countries to discuss, in a non-politicized setting, the possibility of internationally agreed limits on the military use of AI.

The arms control community must respond to the risks of military AI, but the pursuit of a dedicated treaty might not be the right approach—at least for now. Confidence-building dialogues and activities might provide more agile and effective ways for national governments—but also the relevant companies and academic initiatives that create many of these technologies in the first place—to align their views and work collaboratively to identify specific risk-reduction measures. Eventually, these dialogues could form the basis for one or even multiple regulatory efforts, which could be based on agreed norms of responsible behavior regarding military AI.

Editor’s note: This article was drafted with the support, in part, of Carnegie Corporation of New York for a project for the Center for a New American Security. All content is the responsibility of the author.

 



2 Comments
Ron Wilson | 3 years ago

What is needed is a treaty requiring all lethal AI systems to have a capability somewhat like an airliner's crash recorder: one that records any and all lethal events by that system, cannot be tampered with or erased, and lets an international body review any lethal action to see whether it met accepted rules of engagement (ROE).

Greg | 3 years ago

For AI arms control there might be some insight from a sci-fi film: “Colossus: The Forbin Project.” In that film, the American AI supercomputer communicates with the Soviet supercomputer. The link was meant to let each understand the capabilities of the other, but it became a collaboration between the two. The prime directive of the American Colossus and the Soviet supercomputer was to prevent nuclear war, and they both decided the best way to do so was to control humanity. https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project#:~:text=After%20being%20handed%20full%20control,its%20creators'%20orders%20to%20stop. What might be learned from this? Depending on how capable military AI computers become, there could be some… Read more »