
Memo to Trump: Develop specific AI guidelines for nuclear command and control

By Johnathan Falcone | January 17, 2025

Editor’s note: This is part of a package of memos to the president.

MEMORANDUM FOR THE PRESIDENT

FROM: JOHNATHAN FALCONE, RIVERSIDE RESEARCH

PURPOSE: SAFEGUARDING NUCLEAR COMMAND AND CONTROL ARCHITECTURE FOR AI APPLICATIONS.

The current AI Risk Management Framework from the Department of Commerce and the Defense Department’s Data, Analytics, and Artificial Intelligence Adoption Strategy do not adequately address the unique challenges of deploying AI for low-frequency, high-impact scenarios, particularly the nuclear weapon decision-making process. These scenarios are characterized by data scarcity, owing to the rarity of events relating to nuclear weapons use, and by static model deployment, in which opportunities to update AI models used in nuclear decision making are few and largely simulation-based.

In the nuclear context, where strategic risks differ markedly from other applications of AI, the overarching governance question should be: “Does this AI application increase the risk of nuclear war?” To answer this, the Nuclear Posture Review should incorporate specific data governance to manage data scarcity, an AI risk-management framework to mitigate unique nuclear-related risks, and comprehensive recovery plans to ensure human control in the event of AI system failures or anomalies.

Background

Globally, nuclear stability is under threat due to conflicts in Europe and the Middle East, China’s expanding delivery capabilities and arsenal, and the potential for other authoritarian regimes to acquire nuclear weapons. At home, modernizing the United States’ aging nuclear command, control, and communications (NC3) systems is necessary, and integrating AI to achieve this is inevitable. But if done incorrectly, this modernization could undermine, rather than enhance, nuclear stability.

To hedge against this technological risk, Strategic Combatant Command leaders have firmly stated that they will “never allow artificial intelligences to make [nuclear use] decisions.” This echoes the 2022 Nuclear Posture Review, which commits to keeping a human “in the loop” for all critical actions related to nuclear weapon use decisions.

While AI could enhance decision-making in information-saturated and time-sensitive environments, the absence of specific governance for AI in nuclear systems risks deploying solutions that might inadvertently increase the possibility of nuclear use. Simply put, maintaining human oversight is not a sufficient policy to effectively manage these risks.


Current context

The Commerce Department’s AI Risk Management Framework and the Defense Department’s Data, Analytics, and Artificial Intelligence Adoption Strategy offer valuable guidance for developing AI applications in many areas. They appropriately emphasize data quality and performance metrics and encourage building AI models that are responsible, traceable, reliable, and governable.

But these protocols fall short when applied to models for low-frequency, high-impact events like nuclear use decision making. These AI management approaches fail to consider models with the following characteristics:

  • Data scarcity: Nuclear launch and decision events (including wargames) are rare, leaving a shortage of representative training data. Even in well-suited use cases like missile warning and tracking, data collection is limited and incomplete. Simulations can fill the gaps, but they depend on accurate intelligence about adversary systems; if that intelligence is flawed, AI models can inherit its biases, with no real-world events to validate them. Moreover, in missile identification and assessment scenarios, data collection focuses on recording and reporting “events” (launches) while overlooking “non-events” (instances in which human analysis correctly overrides system outputs). Omitting these correctly dismissed alarms from datasets can skew models toward false positives, potentially leading to a mistaken nuclear launch, as the sketch following this list illustrates.
  • Static model deployment: With actual nuclear-related events extraordinarily rare, there is little opportunity to retrain or fine-tune a model after deployment. This relatively static deployment risks obsolescence if new missile technologies emerge, as models over-reliant on historical data may be unable to adapt to new scenarios. Unlike other AI applications with continuous feedback for updates, AI in nuclear command, control, and communications lacks real-time validation. Depending on the direction of a misaligned model’s bias, such models increase the likelihood that the president either fails to launch when an attack warning is real or executes an irretrievable retaliatory strike when a warning is false.
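To make the data-scarcity point concrete, here is a minimal, hypothetical sketch in Python using entirely synthetic data and an off-the-shelf scikit-learn classifier; nothing in it resembles a real missile-warning system or dataset. When the benign but launch-like signatures that analysts correctly dismissed are censored from the training set, the resulting model flags similar signatures as launches far more often:

```python
# Hypothetical illustration only: synthetic data, invented feature names.
# Shows how omitting "non-events" (alarms humans correctly dismissed)
# from training data skews a detector toward false positives.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    # Two invented "sensor signature" features per observation.
    launches = rng.normal(loc=[3.0, 3.0], scale=0.7, size=(n, 2))  # real events
    quiet    = rng.normal(loc=[0.0, 0.0], scale=0.7, size=(n, 2))  # clearly benign
    alarms   = rng.normal(loc=[2.3, 2.3], scale=0.7, size=(n, 2))  # benign but launch-like
    X = np.vstack([launches, quiet, alarms])
    y = np.array([1] * n + [0] * n + [0] * n)  # 1 = launch, 0 = benign
    return X, y

X, y = make_data(500)

# Full dataset: includes the dismissed alarms ("non-events") as negatives.
full = LogisticRegression().fit(X, y)

# Censored dataset: rows were stacked as [launches, quiet, alarms], so
# dropping the last 500 rows removes every recorded non-event.
censored = LogisticRegression().fit(X[:1000], y[:1000])

# Evaluate both models on fresh launch-like benign signatures.
X_test, _ = make_data(500)
test_alarms = X_test[1000:]  # the benign, launch-like block
print("false-positive rate, full data:    ", full.predict(test_alarms).mean())
print("false-positive rate, censored data:", censored.predict(test_alarms).mean())
```

In this toy setup, the censored model misreads most launch-like benign signatures as launches; that is precisely the skew the first bullet describes.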

Proposal

The nuclear command, control, and communications system is more accurately described as a system of systems. It comprises various components: detection radars, space-based communications, and warning and attack assessment systems. AI applications can support and enhance modernization efforts, but existing frameworks don’t address the unique challenges of low-frequency, high-impact events and how applications may influence dynamics unique to the nuclear context.

To mitigate these risks, I recommend the forthcoming Nuclear Posture Review specifically address the following.

Data governance for nuclear command, control, and communications AI applications. Establish protocols for data collection to ensure quality and security. This governance must address synthetic data usage, leverage opportunities to learn from “non-events” to minimize model errors, and institute integrity checks of real-world data to ensure they reflect current realities without historical bias.
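As one illustration of what such data governance could look like in practice, here is a hypothetical record schema sketched in Python; every field name and value is invented for illustration, not drawn from any real nuclear command, control, and communications system. The point is structural: every alert is retained, “non-events” are labeled rather than discarded, synthetic data is flagged, and provenance fields support integrity checks:

```python
# Hypothetical schema sketch; field names and values are invented.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Disposition(Enum):
    CONFIRMED_EVENT = "confirmed_event"  # alert validated (e.g., an observed test launch)
    HUMAN_OVERRIDE = "human_override"    # a "non-event": an analyst correctly dismissed it
    UNRESOLVED = "unresolved"

@dataclass
class AlertRecord:
    alert_id: str
    timestamp: datetime
    sensor_source: str         # e.g., a radar site or satellite constellation (illustrative)
    model_version: str         # which deployed model produced the output
    model_output: float        # raw confidence score, retained for audit
    is_synthetic: bool         # simulation-derived records must be flagged as such
    disposition: Disposition = Disposition.UNRESOLVED
    analyst_notes: str = ""
    provenance_hash: str = ""  # supports later integrity checks against tampering or drift

# A dismissed alarm is recorded rather than discarded, so future
# retraining and audits can learn from the correction.
example = AlertRecord(
    alert_id="A-0001",
    timestamp=datetime.now(timezone.utc),
    sensor_source="example-radar-01",
    model_version="mw-assess-0.3",
    model_output=0.91,
    is_synthetic=False,
    disposition=Disposition.HUMAN_OVERRIDE,
    analyst_notes="Signature consistent with a known benign phenomenon.",
)
```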

AI risk management framework for nuclear command, control, and communications. Focus on identifying and managing risks like operator-machine bias, data biases originating from simulation or historical datasets, cybersecurity threats, and how implementing these applications might affect escalation perceptions.

Incident response and recovery plan. Formalize and strengthen a human-centric decision-making process by preparing for AI system failures through redundant manual systems, regular personnel training, and clear protocols for human intervention when AI anomalies occur. Ultimately, operators and the president alike must be empowered, with clear processes, to bypass AI inputs at any point in the decision-making process.
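The recovery principle in that last recommendation can also be sketched in a few lines of Python. This is a hypothetical design pattern, not any real command-and-control interface: the AI output is strictly advisory, the human decision path works with or without it, and anomaly monitors can force a bypass at any point:

```python
# Hypothetical human-in-the-loop gate; a design sketch only.
from typing import Callable, Optional

def decision_gate(
    ai_assessment: Optional[float],                    # advisory confidence score, may be absent
    human_decision: Callable[[Optional[float]], str],  # the authoritative, human procedure
    ai_healthy: bool,                                  # anomaly monitors can force a bypass
) -> str:
    # If the AI system is degraded or anomalous, withhold its output so the
    # human decides using manual, redundant systems alone.
    advisory = ai_assessment if ai_healthy else None
    # The human procedure is authoritative in every branch: the AI score may
    # inform the decision but can never make it.
    return human_decision(advisory)

# Usage: an anomaly forces a bypass, and the human procedure still runs.
verdict = decision_gate(
    ai_assessment=0.97,
    human_decision=lambda score: "escalate_for_human_review",  # placeholder procedure
    ai_healthy=False,
)
print(verdict)  # escalate_for_human_review
```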

Incorporating these protocols into the Nuclear Posture Review provides clear guidance for audiences both at home and abroad. Domestically, it encourages innovation and provides a framework to responsibly tackle cybersecurity and human vulnerabilities in nuclear command, control, and communications, enhancing nuclear safety and security. Internationally, it positions the United States as a leader in responsibly integrating AI into nuclear operations by acknowledging and addressing the unique challenges this integration poses.

The views expressed are those of the author and do not reflect the official policy or position of his current employer, the Department of Defense, or the US government.

