
Costly signaling: How highlighting intent can help governments avoid dangerous AI miscalculations

By Owen J. Daniels, Andrew Imbrie | December 15, 2023

President Joe Biden hosts a meeting on artificial intelligence, Tuesday, June 20, 2023, at The Fairmont hotel in San Francisco. Credit: The White House by Adam Schultz

In October, the Biden administration released a sweeping executive order on the safe, secure, and trustworthy development and use of artificial intelligence. The extensive text lays out the president’s vision for responsible AI use in the United States and acknowledges that “harnessing AI for good … requires mitigating its substantial risks.” It tasks federal agencies with helping to protect Americans’ safety, privacy, and civil rights from misuse of AI, in some cases with near-term deadlines likely to jolt the bureaucracy into action.

Despite being nearly 100 pages long, the executive order is not an exhaustively detailed roadmap for trustworthy AI. It sets technological priorities and regulatory steps within the numerous agencies of the executive branch and outlines how they can begin to implement AI safeguards, but it does not carry the same power as a congressionally passed law. It is also not overly specific in enumerating the various tools agencies will need to monitor and regulate AI. Nonetheless, the executive order sends a fundamentally important signal to the American public, as well as allies and competitors abroad, about the administration’s responsible AI vision and the necessary steps for realizing it.

Government signaling around AI is challenging. Policymakers can struggle to communicate their intentions for such a transformative technology, which is already reshaping societies, economies, diplomacy, and warfare. With the blinding pace of recent commercial AI developments, many of them globally dispersed, countries may be unsure how others plan to use the latest applications to gain competitive advantage. The risks of misperception and inadvertent escalation abound, so it is imperative that policymakers leverage the full complement of policy tools to send clear and credible signals of intent.

After all, there is a reason Washington and Moscow established a direct hotline following the near-disaster of the Cuban Missile Crisis.

To make their intentions clearer, policymakers can use costly signaling—a policy tool examined closely in the international relations literature—to communicate about AI and decode others’ intentions. Signals are costly when the sender pays a price, whether political, reputational, or even monetary, if they fail to follow through on the messages they communicate. During the Cold War, for example, governments revealed certain capabilities to rivals to communicate deterrence messages; while such actions constrained the potential for surprise use, they allowed adversaries to understand aspects of game-changing new systems. Applying the framework of costly signals to AI in today’s geopolitical context can help policymakers chart a path toward the responsible use of this technology.

Backing up words with actions. Discerning the intentions of allies and adversaries in AI is critically important for understanding risks associated with different applications of the technology. For example, policymakers may have concerns that their counterparts in another state are rushing to field capabilities that are inadequately tested to get a leg up on the competition. When embedded in foreign policy, defense, or technology strategies, costly signals can help policymakers highlight their intentions, mitigate the risks of inadvertent escalation or misperception, and reveal capabilities to adversaries that deter risk-taking.

As the United States and China navigate strategic and technological competition, the ability to discern intentions with costly signals will be key. Leveraging this policy tool in AI will likely be challenging, however, given the dual-use applications of the technology, rapid progress in large language models, and competition among private companies to bring their AI models to market first. Policymakers in both countries will need to monitor the latest technological developments and remain alert for signals the other side may be sending. The choice is not simply whether to “conceal or reveal” AI capabilities, but also how to reveal them and through which channels. The act of sending a message does not guarantee the receiver will understand it correctly, and signals may be lost amid the noise of large bureaucracies. Execution is important, but it is sometimes insufficient.

Four types of costly signals are particularly relevant to AI. The first type is tying hands, which entails strategically making public commitments before domestic and foreign audiences. If countries sign a treaty committing to develop and use responsible AI standards, for example, they may face pressure from co-signatories and the public should they deploy frontier AI models that do not meet these standards.

The second is sunk costs, where the price of a commitment is paid from the start and its magnitude indicates the sender is unlikely to renege. In AI, we might think of commitments to license and register algorithms, or investments in test and evaluation infrastructure, as sunk costs.

The third is installment costs, where the sender commits to sustaining costs into the future. These could include compute accounting tools that track clusters of AI chips in data centers or verification of government pledges to conduct AI risk assessments of models and make the results of those assessments available to the public.

The fourth is reducible costs, where the sender pays the cost of the signal upfront but can offset it over time. Examples include small-data approaches to AI, or model cards and data sheets that provide transparency on the training data, model weights, and other features of AI models. These steps may be expensive to implement at first; over time, however, the costs can be recouped as the models gain popular traction and the companies deploying them earn a reputation for trustworthy development.

The potential and challenges of costly signaling. Decoding signals around military AI illustrates both the importance and the difficulty of signaling. The task is hard for several reasons: AI technologies can fail in ways that are surprising and hard to fix; testing and evaluation methods for appraising AI-enabled military systems are nascent; and the role of private industry in developing dual-use applications can fuel misperceptions and miscalculations among the states deploying them.

How can countries overcome these challenges with costly signaling? For one, governments and the firms that develop military AI capabilities could use tying hands mechanisms—making public commitments to communicate intent about where they will or will not use AI. The US State Department’s “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy,” released in February and boasting more than 40 signatories, enshrines a commitment to “ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life cycles.” Though the United States could plausibly walk this commitment back, it would incur political and reputational costs among allies and competitors. Similarly, China could use a tying hands signal, such as allowing the People’s Liberation Army to discuss AI risk reduction measures with the United States—an option it reportedly refused during bilateral defense talks in 2021—to indicate that it is serious about international AI safety standards and reducing escalatory risks.

In addition, sunk cost measures for reducing risks could include investing in test and evaluation infrastructure for military AI and increasing transparency around safety best practices. Sharing information could help all parties gain a better understanding of where AI-enabled systems are being employed in military and political decision-making. For example, if China integrated AI into early warning systems—systems whose failures were the source of terrifying near misses during the Cold War—would Chinese leaders in a crisis today regard such failures as unintended mishaps or preludes to an intentional attack? Given the uncertainties around the relevant law, doctrine, and policy for managing incidents related to AI-enabled systems, a crisis involving such platforms could easily escalate to conflict.

The United States, China, and other nations could add installment costs atop sunk costs by publicly committing to information sharing, transparency measures, and inspections of designated AI models, testing sites, and hardware in an “International Autonomous Incidents Agreement.” Public commitments, alongside the start-up and maintenance costs of investments in testing infrastructure, would lend credibility to pledges to abide by international norms in military AI.

Governments could use reducible costs for confidence-building measures around military AI by funding or partnering with industry and academia to support research on interpretable AI, and by creating incentives to improve transparency in model development through model cards or red-teaming exercises. Investing in global research entities that can monitor and measure AI capabilities or improve interpretability could help all states better understand the dynamics that drive decisions involving human-machine teams in high-pressure environments.

Signals may have unintended effects among different audiences. All of that said, costly signals between states are not sent in a vacuum. A costly signal intended for one audience may be picked up and interpreted by another, further complicating communication. For example, there has been a flurry of activity by the United States and its democratic partners to commit to building AI with respect for democratic values, both individually and through bodies including the European Union, the Group of Seven, the Organisation for Economic Co-operation and Development, and the Summits for Democracy. States have committed to ensuring that AI models and systems do not undermine human rights, civil liberties, privacy, election integrity, or trust in information environments. Democratic states use such agreements to signal by tying hands, indicating to their publics, private sectors, and authoritarian competitors that they intend to build and use AI in line with their values.

Commitments to democratic AI are important and worthwhile. The distinction between “democratic” and “authoritarian” AI may also be a useful shorthand for differentiating US approaches to AI from China’s use of AI for surveillance and suppressing political dissent. Yet, as US National Security Advisor Jake Sullivan recently wrote, the United States must often engage countries with diverse political systems, including authoritarian governments. How authoritarian states perceive commitments to democratic AI remains unclear, but it is important to consider as the United States competes with China over global technology adoption and technical standard setting.

For example, two decades of war and counterterrorism operations in the Middle East cemented the Gulf monarchies as regional partners of the United States. Recently, though, US policymakers have raised security concerns over tightening Chinese-Gulf ties, including in artificial intelligence and 5G telecommunications infrastructure. Depending on how they interpret them, signals about democratic AI might lead authoritarian states like those in the Gulf to prefer Chinese-made AI capabilities, potentially strengthening China’s influence. Similar scenarios could arise in other theaters.

The United States should not muffle its costly signaling around democratic AI based on its relationships with such states. However, US diplomats and strategists should be aware of the downstream implications costly democratic AI signals might have and the ensuing diplomatic challenges.

Signals for a new era. Crises driven by misperception are not new in international relations, but the multipurpose applications of AI, private sector entanglement, and proliferation beyond governments mean that signals today are not necessarily “loud and clear” compared with previous eras of diplomatic statecraft. Signals may carry unintended costs as they reach different audiences, and they must be embedded in comprehensive strategies incorporating different policy levers to be truly effective. In today’s competitive and multifaceted information environment, there are even more actors with influence on the signaling landscape. Context is key to conveying signals clearly and credibly.

One path forward is for governments to leverage procurement practices and regulations to shape norms around AI development and use. For example, policymakers could work with industry experts and academic researchers to enshrine norms around AI transparency (such as the release of model cards, system cards, or similar documentation) through procurement policies, including appropriate protections for privacy and security. Policymakers should also consider incorporating costly signals into dialogues and tabletop exercises with allies and competitors to clarify assumptions, mitigate escalatory risks, and develop shared understandings around crisis communications.

As for the administration’s executive order, it’s hard to know for certain that President Biden’s team intended to send costly signals in the precise way laid out here. Not all signals are intentional, and commercial actors may tally the costs differently from governments or from industry players in other countries. Nonetheless, the executive order may be seen as tying hands by publicly signaling the administration’s commitment to responsible AI, as well as employing a mix of costly signals by calling for steps at federal agencies like appointing chief AI officers, watermarking official communications, evaluating and streamlining visa criteria to bring talented immigrants to the United States, and establishing safeguards in areas like biosecurity. The executive order may well signal to competitors, such as China, or to allies in Europe, who are developing their own AI standards and regulations, that the US government is seriously assessing how best to implement and capitalize on the responsible application of AI.

Whether allies and competitors receive these signals as intended is another story. One hopes that it will not take another Cuban Missile Crisis for the countries deploying AI to establish open lines of communication and avoid escalation driven by new technology. Signals can be noisy, occasionally confusing some audiences, but they are still necessary.

The opinions and characterizations in this piece are the authors’ and do not necessarily represent those of the US government.

