
How politics and business are driving the AI arms race with China

By Will Henshall | May 12, 2023

President Biden and VP Harris meeting with business leaders to discuss AI safety.
On May 4, 2023, President Biden and Vice President Harris met with the CEOs of four companies at the forefront of AI innovation, underscoring the companies' responsibility to make sure their products are safe. Credit: Twitter/@WhiteHouse

In March, thousands of tech leaders—Elon Musk among them—signed an open letter asking artificial intelligence (AI) labs to pause the training of next-generation AI systems for at least six months. There is a precedent for such temporary pauses in other fields of research: In 2019, for example, scientists successfully called for a moratorium on any human gene editing that would pass heritable DNA on to genetically modified children.

While a pause in the field of AI is unlikely to happen, the fact that one was proposed at all suggests the United States is finally starting to recognize the importance of regulating AI systems.

The reasons that a pause in AI won’t happen are multifold—and are about more than just the research itself. Critics of the proposed pause argue that regulating or restricting AI would help China pull ahead in AI development, causing the United States to lose its military and economic edge. To be sure, the United States must keep its citizens secure. But failing to regulate AI or to coordinate with China in cases where that is in the United States’ interest would endanger US citizens.

History shows that this worry is more than just theoretical. As a presidential candidate, John F. Kennedy promoted the "missile gap" narrative to make President Dwight D. Eisenhower seem weak on defense, claiming that the Soviet Union was overtaking the United States in nuclear missile deployment. Kennedy's rhetoric may have helped him politically, but it also hindered cooperation with the Soviet leadership. Historically, arms races are often driven more by domestic economics and politics than by rational responses to external threats.

China, which is actually regulating AI much more tightly than the United States or even the European Union and is likely to be hamstrung by US semiconductor export controls in the coming years, is far behind the United States in AI development. Much like the Cold War nuclear arms race, today’s US-China AI competition is heavily influenced by domestic forces such as private interest groups, bureaucratic infighting, electoral politics, and public opinion. By better understanding these domestic forces, policy makers in the United States can minimize the risks faced by the United States, China, and the world.

Private interests. In the US-China AI competition, companies developing AI systems and promoting their own interests might lobby against domestic or international AI regulation. There is historical precedent for this. In 2001, the United States rejected a Protocol to strengthen the Biological Weapons Convention, in part because of pressure from the US chemical and pharmaceutical industries, which wanted to limit inspections of their facilities.

US AI companies appear to be aware of the risks posed by their products. OpenAI's stated mission is "to ensure that artificial general intelligence benefits all of humanity." DeepMind's operating principles include a commitment to "act as responsible pioneers in the field of AI." DeepMind's founders have pledged not to work on lethal AI, and Google's AI Principles state that Google will not design or deploy AI for weapons intended to injure humans, or for surveillance that violates international norms.

However, there are already worrisome signs that commercial competition may undermine these commitments. Google, fearing that OpenAI’s ChatGPT could replace its search engine, told employees it would “recalibrate” the amount of risk it is prepared to accept when deploying new AI systems. While not strictly relevant to international agreements, this move suggests that tech companies are willing to compromise on AI safety in response to commercial incentives.

Another potentially concerning development is the creation of links between AI startups and big tech companies. OpenAI partnered with Microsoft in January, and Google acquired DeepMind in 2014. Acquisition and partnership may limit the ability of AI startups to act in ways that lower risk. DeepMind and Google, for example, have clashed over the governance of DeepMind projects since their merger.

Lobbying may also raise risks. The big tech companies are experienced lobbyists: Amazon spent $21.4 million on lobbying in 2022, making it the 6th largest spender; Meta (the parent company of Facebook, Instagram, and WhatsApp) came in 10th with $19.2 million; and Alphabet (parent company of Google) was 19th with $13.2 million. Last year, big tech companies increased their donations to US foreign policy think tanks in an effort to promote the argument that stricter rules will harm their ability to compete with China.


In the future, suppliers of military AI systems might increase the chances of an AI arms race by lobbying for the development of more advanced weapons systems, or by opposing arms control agreements that would limit their future sales. This is probably a long way off. Analysis from the Brookings Institution—a nonprofit public policy organization—found that 95 percent of federal contracts from the last five years with “artificial intelligence” in the description were for professional, scientific, and technical services (essentially external funding for research and development). The same analysis found that there were 307 different vendors and 474 total contracts.

Taken together, this analysis suggests an immature market, with many smaller vendors focused on developing AI systems rather than on larger contracts for supplying hardware or software, which are more typical for military procurement. In the future, though, larger contracts for military AI and a more concentrated supplier base would probably mean increased lobbying by military AI suppliers—and increased chances of a military AI arms race.

Bureaucratic politics. There were many instances of bureaucratic politics exacerbating the Cold War nuclear arms race. As Slate columnist and author of several books on military strategy Fred Kaplan has described, the Air Force and the Navy repeatedly came up with new nuclear strategies and doctrines that would give them more of the nuclear weapons budget. For example, the Navy’s think tank came up with “finite deterrence,” which suggested that the United States could deter the Soviet Union by deploying a relatively small number of nuclear missiles on submarines, obviating the need for large numbers of nuclear bombers and missiles (which were operated by the Air Force).

Bureaucratic incentives often cause organizations to attempt to accumulate more resources and influence than is optimal from the perspective of the state. Although most cutting-edge AI development is currently carried out in the private sector, that could change. History suggests that as a technology’s strategic importance and cost grow, the inclination and capacity for the state to exert control over its development and deployment will also grow.

There is another reason for concern about AI developed in or for the public sector—particularly the defense sector, despite the current private-sector dominance. As former US Navy Secretary Richard Danzig has written, military development and use of technology tends to be particularly risky for several reasons: secrecy, which makes oversight and regulation more difficult; the unpredictability of warfare environments; and the adversarial, unconstrained nature of military operations. The military already accounts for a significant proportion of US government spending on AI.

Regardless of how the military uses AI, it is likely there will be resistance to any AI arms control initiatives. An arms control agreement almost always interferes with the interests of one or more groups within the defense establishment. Military support is particularly important for ratification, which is why President Kennedy had to abandon his push for a comprehensive test ban in the face of resistance from the Joint Chiefs of Staff.

Electoral politics and public opinion. The relationship between foreign policy and electoral politics is not straightforward. An influential paper published in 2005 found that US foreign policy “is most heavily and consistently influenced by internationally oriented business leaders, followed by experts,” with some small influence for organized labor groups, and very weak or no influence from public opinion. (It should be noted that not all researchers agree with this finding, however: Many case studies and experiments have found that public opinion does influence decision makers in certain circumstances.)


Studies suggest public opinion matters more for high-salience issues—that is, issues that are seen as particularly noticeable or important. Public opinion does not come into play as much for issues that (rightly or wrongly) feel less relevant. For example, voters generally do not care much about trade policy: They do not know their political representatives' trade policy positions, so trade policy does not affect their voting behavior. The "Secret Congress" theory contends that it is much easier to pass legislation on topics that stay under the radar and are consequently not politically salient. By this logic, if AI policy issues were politically salient and the parties were divided on them, it would be much more difficult to pass regulations and treaties that would reduce risks from AI.

At the moment, AI is too esoteric to be politically salient, although this is starting to change. The electoral politics of AI policy are overshadowed by broader concerns about strategic competition with China. In the United States, elite opinion, business opinion, and public opinion have shifted toward the view that engagement with China has failed and a more confrontational approach is now required. Current US policy toward China—including accelerating US AI development and restricting Chinese AI progress—commands bipartisan support.

However, if one party becomes more hawkish on China-related policy issues, public opinion on AI might split accordingly, with supporters of the more hawkish party viewing cooperation on AI policy less favorably. This may have happened in the past with nuclear weapons. There is some evidence to suggest that Obama’s 2009 Prague speech, in which he announced “America’s commitment to seek the peace and security of a world without nuclear weapons,” led to disarmament being associated with Obama personally. This polarized the issue of arms control and disarmament along partisan lines, making future policy making more difficult.

If AI policy issues do become politically salient, the history and political science literature suggest that electoral politics might impede arms control in a number of ways. For example, if arms control policy gets caught up in partisan politics, it becomes much harder to develop and implement, particularly given that treaty ratification requires a two-thirds majority in the Senate.

In the past, political groups have held dovish positions on some nuclear issues while holding hawkish positions on others. For example, the Nunn-Lugar Cooperative Threat Reduction program, which worked with the states of the former Soviet Union to dismantle and secure the legacies of the Cold War, had strong bipartisan support, even as arms control agreements faced resistance from many Republicans. Certain nuclear issues are idiosyncratic. For example, Iran issues are politicized in a different way than other nuclear issues, because of the link to Israel’s security: Many otherwise liberal Democrats who are Jewish or represent heavily Jewish districts are hawkish on Iran. AI may turn out to be similar, with political cooperation on some aspects of AI policy and partisan gridlock on others.

Finally, it is worth noting that the large number of potential uses for AI means that AI will touch people’s lives frequently and in significant ways. However, it is unlikely that these applications will cohere into a consistent pro- or anti-AI perspective. Public opinion on AI foreign policy will probably resemble other technology-related foreign policy issues—with the two major parties split according to their levels of hawkishness.

The United States has a tricky balance to strike. On the one hand, promoting AI development could create economic and social benefits, and the government has a duty to keep US citizens safe by maintaining technological superiority. On the other hand, if AI is not sufficiently well-regulated, and the United States and China can’t cooperate where necessary, the whole world could be at risk.

Striking this balance is like walking a tightrope. Domestic forces threaten to knock the United States off balance.


