Six ways AI could cause the next big war, and why it probably won’t

United States’ M48 tanks face Soviet Union T-55 tanks at Checkpoint Charlie during the Berlin Crisis of October 1961. The crisis began after Soviet premier Nikita Khrushchev ordered the closing of the border between East and West Berlin and the construction of a wall surrounding West Berlin. Image courtesy US Army

Historically, most technological breakthroughs do not fundamentally affect the risk of conflict. There have been notable exceptions. The invention of printing helped fuel the social and religious upheavals in Europe that later contributed to the outbreak of the Thirty Years’ War in 1618. By contrast, nuclear weapons have significantly dampened the risk of great power war since World War II.

Because advanced artificial intelligence (AI) could create far-reaching social, economic, and military disruptions, it could be another exceptional technology with important implications for international security (Kissinger, Schmidt, and Mundie 2024). Analysts need to seriously consider the possibility that AI may cause changes in the international security landscape that could lead to the outbreak of wars that would not otherwise happen (Mitre and Predd 2025).

Drawing on decades of research about what conditions make wars more or less likely throughout history, we examine six hypotheses about how AI might increase the potential for major interstate war (Van Evera 1994). The hypotheses reflect different ways that AI’s effects on militaries, economies, and societies might undermine international stability, with a focus on the pathways that appear most plausible and concerning. We evaluate these hypotheses by identifying what key conditions are needed for them to be valid and then assessing the likelihood that those conditions will align in ways that would make conflict more likely.

The analysis explores the consequences of advanced artificial intelligence far more sophisticated than what exists today, assuming that AI could eventually become capable of reliably matching human performance across a wide range of cognitive tasks, which some technologists refer to as “artificial general intelligence” (Kahl 2025).

Overall, the risk that AI will directly trigger a major war appears low, especially if governments take steps to manage the technology’s use. But AI could create destabilizing shifts in the balance of power or negatively influence human strategic judgment in ways that fuel misperceptions. Fortunately, prudent government policies can help limit these risks.

Will AI disrupt the balance of power and create new opportunities for aggression?

Advances in artificial intelligence could lead to war if AI-enabled military capabilities produce changes in the balance of power that persuade decision makers that previously unachievable aims are now within reach (Cronin and Kurth Cronin 2024). Several key conditions influence the likelihood of this scenario: whether countries perceive military advantages large enough to change their political calculus about using force, whether leaders believe they have fleeting windows of opportunity for military action, whether their newfound advantages in the balance of power would enable them to achieve their goals through coercive diplomacy short of war, and whether they have objectives that require the use of force. The likelihood of all these conditions converging, however, appears relatively low in the near term.

First, AI would need to give the prospective aggressor an apparently large military advantage that changes its leaders’ decision-making calculus about the odds of victory. This includes the likelihood of not suffering a catastrophic retaliatory strike if they are considering attacking a nuclear-armed major power. The source of this advantage could be a breakthrough that leads to a “wonder weapon” (such as an incapacitating first-strike cyber capability or hyper-intelligent drone swarms) or, more likely, the institutional adoption of AI for many different military tasks that collectively offer transformational advantages. For AI to disproportionately benefit one side in the balance of power, that state would likely need a first-mover advantage in AI innovation and adoption; otherwise, its rival could use AI-enhanced defenses to offset its new AI-enabled capabilities (Plumb and Horowitz 2025). Alternatively, AI could shift the balance of power by transforming a state’s economic productivity. If one state can vastly increase its defense spending due to an AI-driven economic boom, it might achieve a decisive military advantage through sheer scale rather than novel capabilities (Aschenbrenner 2024).

The magnitude of the economic and military advantages that AI may provide remains an open question. The risks of significant first-mover advantages are greatest in scenarios where the pace of AI innovation is faster rather than slower, the performance gains are a transformational leap rather than an incremental evolution in military capabilities, and the technology’s complexity or cost means that other countries are slow to develop advanced AI themselves—or at least take a long time to successfully integrate the technology into their military forces.

Second, in addition to the size of the military edge, the risks of conflict would be greatest if the advantage appears temporary. This could pressure leaders to believe they have a closing “window of opportunity” to strike before their advantage fades and the adversary closes the capability gap (Van Evera 2001). Leaders would most likely perceive fleeting windows of opportunity if they expected AI to diffuse rapidly. However, a very rapid rate of diffusion might make it unlikely that a leading state would have enough time to develop a large first-mover advantage in the first place.

Third, to result in a war, the advantage must not be so great that the weaker country seeks a political solution and offers concessions to avoid suffering a costly military defeat by a more powerful adversary (Fearon 1995). Enhanced military power could provide coercive leverage enabling a state to achieve its objectives through explicit or tacit threats without ultimately needing to use force. If both sides share the assessment that AI has decisively shifted the balance of power, that clarity could therefore make war less likely, though it would not make the military advantage less significant.

Finally, the country with the advantage would need to have goals that merit going to war—states do not attack each other simply because they expect to win (Blainey 1988). Countries like China or Russia could undermine global stability if they increased their relative power, because both have made territorial claims (such as controlling Taiwan or Ukraine) that would likely require force to achieve. But stability could increase if states that support the territorial status quo—such as the United States and its European and Asian allies—benefit more from AI than their challengers, because the military advantages they would gain could reinforce deterrence by persuading revisionist powers seeking to seize new territory that their prospects for forcible conquest had become worse rather than better. While Russia and North Korea lag with respect to this technology, China has made significant progress with AI, and its position in the balance of power is already much stronger than that of other revisionist states. This makes the prospect that China will achieve first-mover advantages in integrating AI into its military capabilities before the United States does the most likely and concerning variant of this hypothesis.

Will fear of enemy AI development lead to preventive wars?

The corollary of the first hypothesis—AI will empower states with new military advantages that facilitate aggression—is that countries may resort to war to prevent their rivals from acquiring advanced AI capabilities (Hendrycks, Schmidt, and Wang 2025). If leaders believe that AI will result in explosive economic growth and decisive military advantages, they may fear that a rival acquiring advanced AI first could enable that state to coerce or attack them in the future. Faced with a “fight now or later” dilemma, preventive attacks to slow a rival’s AI development may seem like the lesser of two evils compared to defending against a rival’s AI-enabled attacks in the future (Edelstein 2017).

Several conditions would heighten these risks, but they are unlikely in combination (Burdette and Demelash 2025).

First, leaders would need to expect advanced AI will produce a large shift in the balance of power. This is possible because leaders may buy into transformative visions of AI’s potential. But leaders would still face significant uncertainty about what exactly an AI future looks like and what level of advanced AI would trigger a major crisis.

Second, leaders would need to expect that AI offers large first-mover advantages such that it will be difficult for them to catch up. But, historically, being a fast follower and adopting a new technology more effectively or efficiently is often more important than being the first to innovate (Ding 2024). For example, while Britain was the first to develop tanks, Germany achieved key doctrinal breakthroughs in how to use mechanized forces effectively.

Third, leaders would need to expect that preventive military attacks would meaningfully slow down the rival’s AI development (Mueller et al. 2006). In the US–China context, that would entail conducting strikes against large numbers of sensitive civilian and scientific targets, such as AI labs and data centers, in the homeland of a nuclear-armed great power. That would be difficult from a military perspective, and leaders would likely face large uncertainty about how long doing so would set back their rival’s technological development (Rehman, Mueller, and Mazarr 2025).

Finally, leaders would need to believe the costs and risks of the attacks would be worth the benefits. As with AI-empowered aggression under the first hypothesis, nuclear, conventional, and cyber retaliatory capabilities might deter strikes, especially when coupled with uncertainties about how much attacks could realistically achieve.

A variant of this hypothesis is that one country might achieve a breakthrough in advanced AI and then use force to preserve its monopoly. If early signs suggest that the technology is living up to its revolutionary promise, leaders may want to preserve those benefits for themselves and deny them to a rival. That stronger and more confident motive coupled with potential AI-enabled military advantages might make leaders more likely to use preventive attacks to try to preserve a monopoly than to derail a rival’s progress in the initial race for advanced AI. However, launching even limited preventive attacks against a capable rival to protect exclusive control of advanced AI might entail serious escalation risks or other political costs. If AI appears so transformative that states need it to ensure their future economic prosperity and national security, leaders of targeted states may view being denied access to the technology as a dire or even existential threat.

Will AI reduce the expected costs of going to war?

This hypothesis rests on the belief that AI can substantially mitigate or remove many of the traditional factors that make war a costly and difficult undertaking, including human casualties, financial costs, and political resistance. Countries with advanced AI-enabled militaries might be more willing to use force in pursuit of political aims if these barriers were not holding them back.

For this premise to be true, several conditions would need to hold.

First, AI-enabled drones and robots would need to replace human soldiers in dangerous roles to such an extent that the expected casualties from going to war would drop dramatically. As Ukraine’s experience suggests, drones and robots appear more likely to supplement than replace humans in combat roles for the foreseeable future (Watling and Reynolds 2025). While human casualties might decrease if militaries make drones more central to their force structures, that does not equate to human personnel becoming obsolete and casualties falling to such a low level that war will become something leaders undertake lightly.

Second, AI-enabled systems would need to be inherently cheaper than relying on humans. For AI to have physical capabilities, it needs to be paired with robotics. Drones and robots can offer valuable cost advantages over manned systems in some circumstances, but they are not cheap in absolute terms, especially when procured at the massive scale that many defense planners envision. If AI-enabled military capabilities led to shorter wars in place of costly protracted conflicts, it could make military actions less costly in both human and material terms. However, the ability to manufacture more, cheaper robotic systems using AI-enhanced automated production could increase countries’ capacity to continue fielding new forces to sustain long wars. And the cumulative human and economic costs could grow as wars protract.

A special but particularly salient case of this hypothesis is advanced AI undermining nuclear deterrence (Aschenbrenner 2024). While nuclear deterrence is not a panacea and does not always prevent limited aggression, it exerts a stabilizing force on the international system that AI is unlikely to eliminate. AI may enhance capabilities for offensive action against enemy nuclear arsenals, but it can also help defenders improve the survivability of their nuclear arsenals (Geist 2023). Similarly, while AI could make future air and missile defense systems more effective, it is unlikely to provide immunity against nuclear retaliation by major powers. This would require stopping all of an incoming nuclear strike, not just most of it—including warheads delivered by novel means developed or acquired specifically because of their potential to penetrate AI-enhanced defenses. Finally, while there might be concerns that AI could enable cyber-attacks to neutralize an enemy’s nuclear arsenal by paralyzing its command and control, this appears very unlikely given the emphasis nuclear powers place on security for these systems.

Will AI cause societal chaos that leads to war?

Another potential pathway to conflict may stem from domestic upheaval. There are concerns that integrating advanced AI into a nation’s economy could destabilize society by causing mass unemployment. In theory, leaders might attack foreign enemies to distract their populations and encourage them to “rally around the flag” and support the government. Although AI causing major economic disruption appears quite plausible (Hunter et al. 2023), this diversionary war pathway to conflict appears particularly unlikely. While leaders may seek to re-direct public ire toward internal or external enemies instead of their own governments, there is little historical evidence that they tend to respond to domestic unrest by provoking foreign wars (Fravel 2010).

Starting a full-scale war might make a leader’s domestic political problems worse rather than better, especially if it is a conjured crisis rather than a real threat. Instead, domestic upheaval tends to push leaders to look inward, toward either dramatic domestic reforms or political repression. For example, during the Great Depression President Roosevelt focused on far-reaching economic and social policies, and there was intense domestic opposition to entangling the United States even in a conflict with stakes as high as World War II.

A variant of this hypothesis is that AI might prime societies to be aggressive and imperialist. Rather than the government distracting the population, the population might call on the government to act more belligerently. For example, as Germany became more powerful in the years before World War I, there were societal calls to take its “place in the sun” and expand internationally (Renshon 2017). If AI results in explosive economic growth, there might be public demands to use those benefits for geopolitical advantage or territorial expansion. However, military aggression is not the only outlet for asserting greater status, and whether these economic advantages create new windows of opportunity for aggression depends on the conditions outlined in the first hypothesis—that AI will disrupt the balance of power and create new opportunities for aggression. Additionally, AI-enabled economic growth might be more stabilizing than destabilizing. If a society has fewer concerns about scarcity and has increased economic self-sufficiency, it could become less interested in international competition and conflict.

Alternatively, AI could make society more aggressive by reinforcing pathologies in public discourse rather than through its economic effects. This includes supercharging online echo chambers, inflaming fear and anxiety about the future, spreading disinformation, and encouraging scapegoating. These are all real concerns, though it is unclear to what extent more advanced AI would aggravate these problems relative to what human leaders have already been able to accomplish on their own (Narayanan and Kapoor 2025). AI’s potential impact on society ultimately depends on many assumptions about the technology, how it is adopted, and how governments manage the transition. How AI will reshape societal preferences remains particularly uncertain, and thus so does this variant of the hypothesis.

Will AI take actions on its own that start a war?

When AI leads to wars in movies like The Terminator, it is often after leaders have delegated control over their militaries to a machine that takes deliberate, malign action. A less dystopian but more plausible hypothesis is that AI could trigger a war through accidental or unauthorized action. An AI agent might have enough control over capabilities like autonomous military systems that loss-of-control events during crises could result in escalation (Danzig 2018). For example, autonomous drones might stray into a rival’s airspace, or an autonomous undersea vehicle might attack an adversary submarine. Such loss-of-control events could occur either because an adversary manipulates the AI system or because inherent technological complexity makes it hard to predict how AI will behave in new or changing environments.

But the conditions underpinning this hypothesis are unlikely. First, AI would need to increase the rate of accidents relative to legacy technologies without AI. This is possible because AI introduces complexity into systems, and more complexity increases the risks of accidents. On the other hand, sufficiently advanced AI might improve reliability and safety relative to human operation or today’s level of automation (Sagan 1993).

Second, the resulting accidents would need to be serious enough to trigger wars. The costs of most military accidents throughout history have been borne by friendly forces (Herdman 1993), and most accidental attacks against adversaries are local events with only limited escalatory potential.

Third, leaders would need to fail to find ways to defuse tensions following accidents that seriously harm other countries. There is little historical evidence that leaders stumble into wars accidentally, in part because they have generally been effective at de-escalating crises when war is not in their interest (Lin-Greenberg 2024). For example, during the Iran–Iraq War, an Iraqi aircraft attacked the USS Stark, killing 37 sailors, after allegedly mistaking the frigate for an Iranian oil tanker (Crist 2012). Despite the significant loss of American life, both sides managed to de-escalate the situation.

There are encouraging signs that governments are interested in formal and informal safeguards to manage the risks of AI accidents in areas with the greatest escalatory potential. For example, despite their other differences, in 2024 the United States and China agreed to maintain human control over the decision to use nuclear weapons to reduce the risks of accidents (Renshaw and Hunnicutt 2024).

Will AI affect leaders’ decision-making in ways that make conflict more likely?

Finally, instead of directly triggering war by its actions, AI might influence the strategic decision-making of human leaders in ways that indirectly lead to conflict. Leaders who lean heavily on AI for intelligence information and decision support tools could create a pathway for AI to aggravate misperceptions and fuel instability.

The most important condition that would need to hold for this pathway to lead to war is that AI provides strategic assessments or advice that make aggression or miscalculation more likely. There is no reason to believe that advanced AI agents will have inherently stronger preferences for conflict than humans. But these support tools could still inadvertently contribute to misperceptions through “hallucinations”—that is, when a large language model perceives patterns that do not exist and produces outputs that are nonsensical or fabricated.

Or these support tools could simply make incorrect inferences about a situation or a rival’s behavior that create misperceptions. Leaders might exhibit “automation bias,” placing excessive trust in AI outputs and assuming inherent objectivity or accuracy when they would have been skeptical of a human adviser providing the same information. Humans might not fully consider that the AI’s biases and fallacies might inadvertently encourage them to pursue more hardline policies or take greater risks (O’Hanlon 2025). This could lead to unwarranted confidence about a range of important assumptions regarding factors that are central to deterrence, such as a rival’s plans and intentions, the rival’s likelihood of retaliating or escalating in response to certain actions, and the probability of success in a war.

Moreover, AI systems might be tailored, deliberately or inadvertently, to reinforce a leader’s existing biases. For example, research on China’s DeepSeek has found that it exhibits hawkish and escalatory tendencies that are especially strong when discussing the United States (Reynolds, Jensen, and Atalan 2025). It is not difficult to imagine an AI-enabled echo chamber similar to that in Russia preceding its full-scale invasion of Ukraine in 2022 (Sonne et al. 2022) or the United States before the 2003 invasion of Iraq (Mazarr 2019) where an AI system confirms a leader’s preexisting belief that a window of opportunity exists and can be exploited effectively, even if such assessments are unsupported in reality.

Compounding these problems with bias, leaders may feel greater time pressures to make decisions if they worry that an adversary will use its AI decision support tools to decide and act faster. The content of the information and the timeline of decision-making are interrelated: The more leaders feel pressured to make decisions quickly, the less likely they are to critically interrogate the information that AI systems provide (Husain 2021).

On the other hand, humans have well-known psychological biases that distort their decision-making, often in ways that make escalation and conflict more likely (Kahneman and Renshon 2009). With proper precautions in design and training data, AI might help ameliorate rather than aggravate bias. Sufficiently advanced AI could produce better insight than human advisers by more accurately and rapidly compiling information to help leaders understand fast-paced and confusing crises (Paterson 2008), better communicating key uncertainties and competing interpretations when there is ambiguous information (Jervis 2006), and identifying ways to achieve objectives short of using force (Plokhy 2021). AI could also inform leaders of unpleasant information that human advisers are reluctant to share, though the leaders who most need this honesty may be the least likely to have their subordinates provide them with a candid AI adviser.

Whether AI decision support tools will have overall stabilizing or destabilizing effects depends on a range of factors, including how mature and reliable the technology is at different points over time, variation in what safeguards governments build into their AI advisers, and how much leaders come to trust AI to help them make decisions. But this hypothesis presents a credible and concerning pathway by which AI could increase the risk of war, especially because it is easy to imagine governments that may implement this technology poorly. Humans might make much better decisions when they use AI tools with the right mind-set and training, but they might make much worse decisions if the systems are designed and used negligently.

Caveats and implications

Taken together, the assessment of these six hypotheses suggests that dystopian visions overstate the risk that AI will ignite a new wave of international conflict. Decisions to start wars are fundamentally about politics, not technology (Lieber 2005). New technologies like advanced AI can exert political effects through military, economic, and social pathways; but their influence competes with a range of other incentives and factors that in the absence of AI most often encourage restraint yet sometimes lead states toward war. AI could certainly play a role in the “road to war” for future conflicts, but it is unlikely to be a decisive causal factor on its own. Nevertheless, its potential to add fuel to the fire remains a reason that governments need to adopt precautionary policies to manage when and how they use AI.

Because AI is still an emerging technology, there are several caveats that come with these conclusions. The first is that forecasting the distant future, while always inherently difficult, is particularly so with AI given uncertainties about its developmental trajectory and the breadth of its potential effects across the world’s militaries, economies, and societies. These pathways could interact in dynamic and unpredictable ways that collectively fuel instability even if the risk associated with each individual pathway is low.

Although this analysis focused on the causes of major interstate conflicts, AI might pose more pronounced risks for civil wars by triggering social and economic upheaval in countries with weak state capacity. These internal conflicts can be even more lethal than interstate conflicts, and they have the potential to escalate beyond a country’s own borders (Rustad 2024).

While this analysis addressed ways that AI might undermine stability, it is important to note that AI’s net effect may tend toward strengthening rather than eroding international stability. Researchers could apply this same framework to the more positive side of the ledger: What are the hypotheses about how AI could promote stability and peace, and what conditions would need to align for that more positive vision to materialize?

Governments can manage most AI risks with prudent policy. Specifically, researchers and policy makers should pay special attention to the two hypotheses that pose the greatest challenges: The traditional risk that AI could lead to destabilizing shifts in the balance of power and the novel risk that AI could distort human strategic judgment.

To guard against shifts in the military balance that give decision makers newfound or exaggerated confidence in their ability to win a war, governments should first seek to avoid technological surprise. However, this will entail more than building AI expertise into intelligence organizations and observing the technical capabilities of adversary AI models. Recognizing emerging risks to deterrence involves tracking and anticipating the wider effects of AI on rivals’ perceptions and the military capabilities advanced AI might unlock. Governments should also consider what countermeasures might offset an adversary’s AI-enabled capabilities, including concepts such as an AI “fog-of-war machine” that plans deceptive tactics and orchestrates robotic decoys to offset a competitor’s advances in AI sensor fusion (Geist 2023). Militaries will need to ensure they are fast followers in AI adoption if not the leaders, though this push for speed could create tradeoffs with safety and readiness if not carefully managed.

Decision makers also need policies to hedge against the risk that AI might aggravate misperceptions and fuel escalation. At the individual level, it will be critical for leaders to approach AI-generated information with the same caution and skepticism they should apply to assessments they receive from human advisers. At the institutional level, governments can design AI systems to manage these risks. For example, every strategic output that AI decision support tools produce could be required to include not only supporting factual evidence but also an assessment of uncertainty and potential adversary reactions; human decision-making Red Teams could be tasked with critically evaluating and critiquing AI outputs; and part of the AI’s tasking could be to develop courses of action that buy additional time and flexibility, avoiding a rushed decision-making process.

Enhancing communication measures, such as hotline arrangements between governments whose interests might collide in future crises—most notably the United States and China—would also be useful to provide recourse if the reliability of 21st-century information sources becomes more uncertain. These kinds of measures may entail sacrificing some of the decision-making speed and simplicity that today’s leaders might desire from tomorrow’s AI, but this tradeoff is worth accepting to manage risks that AI could inadvertently destabilize decision-making and make conflict more likely.

 

Funding

This research was independently initiated and conducted within the RAND Technology and Security Policy Center using income from an endowed contribution from the Ford Foundation. RAND donors and grantors have no influence over research findings or recommendations.

Disclosure Statement

No potential conflict of interest was reported by the authors.

Acknowledgments

The authors thank Salman Ahmed, Jasen Castillo, Matan Chorev, Casey Dugan, David Frelinger, Alison Hottes, Andrew Hoehn, Krista Langeland, and Joel Predd for their valuable comments on this article.

References

Aschenbrenner, Leopold. 2024. “Situational Awareness: The Decade Ahead.” Self-published essay. San Francisco, CA. https://situational-awareness.ai/

Blainey, Geoffrey. 1988. The Causes of War. New York: Free Press.

Burdette, Zachary, and Hiwot Demelash. 2025. The Risks of Preventive Attack in the Race for Advanced Artificial Intelligence. RAND Working Paper. https://osf.io/preprints/socarxiv/dx3aw_v1

Crist, David. 2012. The Twilight War: The Secret History of America’s Thirty-Year Conflict with Iran. New York: Penguin.

Cronin, Patrick M., and Audrey Kurth Cronin. 2024. “Will Artificial Intelligence Lead to War?” The National Interest. January 30. https://nationalinterest.org/feature/will-artificial-intelligence-lead-war-208958

Danzig, Richard. 2018. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority. Washington, D.C.: Center for a New American Security.

Ding, Jeffrey. 2024. Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition. Princeton, NJ: Princeton University Press.

Edelstein, David M. 2017. Over the Horizon: Time, Uncertainty, and the Rise of Great Powers. Ithaca, NY: Cornell University Press.

Fearon, James D. 1995. “Rationalist Explanations for War.” International Organization 49 (3): 379–414. https://www.jstor.org/stable/2706903?seq=1

Fravel, M. Taylor. 2010. “The Limits of Diversion: Rethinking Internal and External Conflict.” Security Studies 19 (2): 307–41. doi:10.1080/09636411003795731

Geist, Edward. 2023. Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare. Oxford, UK: Oxford University Press.

Hendrycks, Dan, Eric Schmidt, and Alexandr Wang. 2025. “Superintelligence Strategy: Expert Version.” SuperIntelligence – Robotics – Safety & Alignment 2(1). https://doi.org/10.48550/arXiv.2503.05628

Herdman, Roger C. 1993. Who Goes There: Friend or Foe? Washington, D.C.: Office of Technology Assessment, Congress of the United States.

Hunter, Lance Y., Craig Albert, Josh Rutland, and Chris Hennigan. 2022. “The Fourth Industrial Revolution, Artificial Intelligence, and Domestic Conflict.” Global Society 37 (3): 375–96. doi:10.1080/13600826.2022.2147812

Husain, Amir. 2021. “AI is Shaping the Future of War.” PRISM 9 (3): 51–61. https://ndupress.ndu.edu/Portals/68/Documents/prism/prism_9-3/prism_9-3_50-61_Husain.pdf?ver=7oFXHXGfGbbR9YDLrnX3Fw%3d%3d

Jervis, Robert. 2006. “Reports, Politics, and Intelligence Failures: The Case of Iraq.” Journal of Strategic Studies 29 (1): 3–52. doi:10.1080/01402390600566282

Kahl, Colin H. 2025. “America Is Winning the Race for Global AI Primacy—for Now,” Foreign Affairs. January 17.

Kahneman, Daniel, and Jonathan Renshon. 2009. “Why Hawks Win,” Foreign Policy. October 13.

Kissinger, Henry A., Eric Schmidt, and Craig Mundie. 2024. “War and Peace in the Age of Artificial Intelligence.” Foreign Affairs, November 18.

Krepinevich, Andrew. 2023. Origins of Victory. New Haven, CT: Yale University Press.

Lieber, Keir. 2005. War and the Engineers: The Primacy of Politics Over Technology. Ithaca, NY: Cornell University Press.

Lin-Greenberg, Erik. 2024. “Wars Are Not Accidents: Managing Risk in the Face of Escalation,” Foreign Affairs. November/December.

Mazarr, Michael J. 2019. Leap of Faith: Hubris, Negligence, and America’s Greatest Foreign Policy Tragedy. New York: PublicAffairs.

Mitre, Jim, and Joel B. Predd. 2025. Artificial General Intelligence’s Five Hard National Security Problems. Santa Monica, CA: RAND.

Mueller, Karl P., Jasen J. Castillo, Forrest E. Morgan, Negeen Pegahi, and Brian Rosen. 2006. Striking First: Preemptive and Preventive Attack in U.S. National Security Policy. Santa Monica, CA: RAND.

Narayanan, Arvind, and Sayash Kapoor. 2025. “AI as Normal Technology.” Knight First Amendment Institute at Columbia University.

O’Hanlon, Michael E. 2025. How Unchecked AI Could Trigger a Nuclear War. https://www.brookings.edu/articles/how-unchecked-ai-could-trigger-a-nuclear-war/

Paterson, Pat. 2008. “The Truth About Tonkin.” Naval History Magazine. February.

Plokhy, Serhii. 2021. Nuclear Folly: A History of the Cuban Missile Crisis. New York: W.W. Norton.

Plumb, Radha Iyengar, and Michael C. Horowitz. 2025. “What America Gets Wrong About the AI Race.” Foreign Affairs. April 18.

Rehman, Iskander, Karl P. Mueller, and Michael J. Mazarr. 2025. “Seeking Stability in the Competition for AI Advantage.” RAND.org. https://www.rand.org/pubs/commentary/2025/03/seeking-stability-in-the-competition-for-ai-advantage.html

Renshaw, Jarrett, and Trevor Hunnicutt. 2024. “Biden, Xi Agree That Humans, Not AI, Should Control Nuclear Arms.” Reuters. November 16. https://www.reuters.com/world/biden-xi-agreed-that-humans-not-ai-should-control-nuclear-weapons-white-house-2024-11-16/

Renshon, Jonathan. 2017. Fighting for Status: Hierarchy and Conflict in World Politics. Princeton, NJ: Princeton University Press.

Reynolds, Ian, Benjamin Jensen, and Yasir Atalan. 2025. “Hawkish AI? Uncovering DeepSeek’s Foreign Policy Biases,” Center for Strategic and International Studies. April 16.

Rustad, Siri Aas. 2024. Conflict Trends: A Global Overview, 1946–2023. Oslo, Norway: Peace Research Institute Oslo.

Sagan, Scott D. 1993. The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton, NJ: Princeton University Press.

Sonne, Paul, Ellen Nakashima, Shane Harris, and John Hudson. 2022. “Hubris and isolation led Vladimir Putin to misjudge Ukraine.” The Washington Post. April 12.

Van Evera, Stephen. 1994. “Hypotheses on Nationalism and War.” International Security 18 (4): 5–39. https://doi.org/10.2307/2539176

Van Evera, Stephen. 2001. Causes of War. Ithaca, NY: Cornell University Press.

Watling, Jack, and Nick Reynolds. 2025. “Tactical Developments During the Third Year of the Russo–Ukrainian War.” Royal United Services Institute. https://static.rusi.org/tactical-developments-third-year-russo-ukrainian-war-february-2205.pdf
