By Johnathan Falcone, January 17, 2025
Editor’s note: This is part of a package of memos to the president.
MEMORANDUM FOR THE PRESIDENT
FROM: JOHNATHAN FALCONE, RIVERSIDE RESEARCH
PURPOSE: SAFEGUARDING NUCLEAR COMMAND AND CONTROL ARCHITECTURE FOR AI APPLICATIONS
The current AI Risk Management Framework from the Department of Commerce and the Defense Department’s Data, Analytics, and Artificial Intelligence Adoption Strategy do not adequately address the unique challenges of deploying AI for low-frequency, high-impact scenarios, particularly the nuclear weapon decision-making process. These scenarios are characterized by data scarcity, because events relating to nuclear weapons use are exceedingly rare, and by static model deployment, because AI models used in nuclear decision making can be updated only infrequently, and often only through simulation.
In the nuclear context, where strategic risks differ markedly from other applications of AI, the overarching governance question should be: “Does this AI application increase the risk of nuclear war?” To answer this, the Nuclear Posture Review should incorporate specific data governance to manage data scarcity, an AI risk-management framework to mitigate unique nuclear-related risks, and comprehensive recovery plans to ensure human control in the event of AI system failures or anomalies.
Background
Globally, nuclear stability is under threat due to conflicts in Europe and the Middle East, China’s expanding delivery capabilities and arsenal, and the potential for other authoritarian regimes to acquire nuclear weapons. At home, modernizing the United States’ aging nuclear command, control, and communications (NC3) systems is necessary, and integrating AI to achieve this is inevitable. But if done incorrectly, this modernization could undermine, rather than enhance, nuclear stability.
To hedge against this technological risk, Strategic Combatant Command leaders have firmly stated that they will “never allow artificial intelligences to make [nuclear use] decisions.” This echoes the 2022 Nuclear Posture Review, which commits to keeping a human “in the loop” for all critical actions related to nuclear weapon use decisions.
While AI could enhance decision-making in information-saturated and time-sensitive environments, the absence of specific governance for AI in nuclear systems risks deploying solutions that might inadvertently increase the possibility of nuclear use. Simply put, maintaining human oversight is not a sufficient policy to effectively manage these risks.
Current context
The Commerce Department’s AI Risk Management Framework and the Defense Department’s Data, Analytics, and Artificial Intelligence Adoption Strategy offer valuable guidance for developing AI applications in many areas. They appropriately emphasize data quality and performance metrics and encourage building AI models that are responsible, traceable, reliable, and governable.
But these protocols fall short when applied to models for low-frequency, high-impact events like nuclear use decision making. These AI management approaches fail to consider models with two defining characteristics: data scarcity, given how rare real-world events relating to nuclear weapons use are, and static deployment, where opportunities to update models are few and largely simulation-based.
Proposal
The nuclear command, control, and communications system is more accurately described as a system of systems. It comprises various components: detection radars, space-based communications, and warning and attack assessment systems. AI applications can support and enhance modernization efforts, but existing frameworks do not address the unique challenges of low-frequency, high-impact events or how AI applications may influence dynamics unique to the nuclear context.
To mitigate these risks, I recommend the forthcoming Nuclear Posture Review specifically address the following:
Data governance for nuclear command, control, and communications AI applications. Establish protocols for data collection to ensure quality and security. This governance must address synthetic data usage, leverage opportunities to learn from “non-events” to minimize model errors, and institute integrity checks of real-world data to ensure they reflect current realities without historical bias.
AI risk management framework for nuclear command, control, and communications. Focus on identifying and managing risks like operator-machine bias, data biases originating from simulation or historical datasets, cybersecurity threats, and how implementing these applications might affect escalation perceptions.
Incident response and recovery plan. Formalize and strengthen a human-centric decision-making process by preparing for AI system failures through redundant manual systems, regular personnel training, and clear protocols for human intervention when AI anomalies occur. Ultimately, operators and the president alike must be empowered, with clear processes, to bypass AI inputs at any point in the decision-making process.
Incorporating these protocols into the Nuclear Posture Review provides clear guidance for audiences both at home and abroad. Domestically, it encourages innovation and provides a framework to responsibly tackle cybersecurity and human vulnerabilities in nuclear command, control, and communications, enhancing nuclear safety and security. Internationally, it positions the United States as a leader in responsibly integrating AI into nuclear operations by acknowledging and addressing the unique challenges this integration poses.
The views expressed are those of the author and do not reflect the official policy or position of his current employer, the Department of Defense, or the US government.