
Sixty years after the Cuban Missile Crisis, how to face a new era of global catastrophic risks

By Christian Ruhl | October 13, 2022

A US Navy Lockheed SP-2H Neptune flying over a Soviet cargo ship with crated Russian bombers on deck in late 1962. US Navy photo

This month marks the 60th anniversary of the Cuban Missile Crisis. For 13 tense days, from October 16 to October 28, 1962, the United States and the Soviet Union teetered on the brink of nuclear war. Sixty years later, tensions between the world’s major militaries are uncomfortably high once again.

In recent weeks, Russian President Vladimir Putin’s nuclear-charged threats to use “all available means” in the Russo-Ukrainian war have again raised the prospect of nuclear war. And on October 6, US President Joe Biden reportedly told a group of Democratic donors: “For the first time since the Cuban Missile Crisis, we have a direct threat of the use of nuclear weapons, if in fact things continue down the path they’d been going.”

Any uncontrolled escalation of these existing conflicts could end in global catastrophe, and the history of the Cuban Missile Crisis suggests that such escalation may be more likely to happen through miscalculation or accident than through deliberate decision. Lists of nuclear close calls show the variety of pathways that could have led to disaster during the Cuban crisis. Famously, Soviet naval officer Vasili Arkhipov vetoed the captain of a nuclear submarine who wanted to launch a nuclear-armed torpedo in response to what turned out to be non-lethal depth charges dropped by US forces; had Arkhipov not been aboard this particular vessel, the captain might have secured the unanimous consent he needed to order a launch.

Today, artificial intelligence and other new technologies, if thoughtlessly deployed, could increase the risk of accidents and miscalculation even further. Imagine, for example, a rerun of the Cuban Missile Crisis in which Arkhipov is replaced by an AI-enabled decision aid. At the confluence of rapid technological progress and heightened fears of a major war between great powers, it’s easy to throw up one’s hands and see global catastrophic risks, including but not limited to the intersection of artificial intelligence and nuclear weapons, as fundamentally intractable. However, history also shows that policy makers, scientists, and ordinary people can come together to reduce these risks. There are straightforward steps we can take today to move away from the brink: developing new confidence-building measures on AI, updating nuclear risk-reduction tools like the nuclear hotline, and restarting backchannel dialogues between non-governmental contacts in the United States and Russia, and the United States and China.

President John F. Kennedy confers with Defense Secretary Robert S. McNamara at the White House on October 29, 1962. Photo by Cecil Stoughton, US National Archives and Records Administration

New twists on old problems. The generation that lived through the Cuban crisis is shrinking. US Secretary of State Antony Blinken, for example, was just 6 months old when President John F. Kennedy learned that a U-2 spy plane had photographed Soviet ballistic missiles stationed in Cuba. For those who remember them, the 13 days that followed were a time of profound awareness of humanity’s fragility.

I was born in a reunited Germany after the Cold War and, like most people in my generation, had no direct experience with nuclear crisis until Russia’s invasion of Ukraine. NPR interviews with two women who lived through the crisis provide a glimpse into what the experience was like. One, who was a child in Miami at the time, “didn’t sleep for days” and “was very afraid.” The other, in Cuba, felt “the world was going to end.” Their lived experiences show that even close calls with global catastrophic risks can have very real effects on the mental health and well-being of people around the world.

Today, US and Russian nuclear arsenals are smaller than in 1962, but technological progress has introduced a new element of uncertainty. The widespread adoption of AI in military technologies, including the deployment of autonomous and near-autonomous weapons, may introduce new risks: an increased speed of war, automation bias, and other risks related to the brittleness and documented failure modes of machine learning systems. To understand this, consider a 21st-century Cuban Missile Crisis with artificial intelligence in the mix. AI and international security expert Michael C. Horowitz—a professor at the University of Pennsylvania and now director of the US Defense Department’s Office of Emerging Capabilities Policy—has examined what the blockade of Cuba might have looked like with widespread use of AI-enabled ships. (Full disclosure: I previously worked for Horowitz at the University of Pennsylvania’s Perry World House.)

Not all AI applications, as Horowitz explains, are destabilizing, and “giving up human control to algorithms in a crisis that could end with global nuclear war would require an extremely high level of perceived reliability and effectiveness.” Rather, problems arise when the perception of reliability and effectiveness does not match reality. The speed of AI-enabled decision-making, for example, could compress a two-week crisis into two hours. That might not leave time for a future Arkhipov to block a potentially catastrophic decision.

Baby steps away from the brink. Efforts to control AI-enabled weapons systems have stalled at the United Nations, where discussions of a “killer robot ban” have led to long definitional debates and no real progress. Conventional conflict between major militaries, moreover, would likely derail even the least ambitious efforts at governance of emerging technologies. The devastation of such a conflict, combined with the weakened international system, may be among the greatest risk factors. William MacAskill puts it simply in his recent book What We Owe the Future: “When people are at war or fear war, they do stupid things.” (MacAskill is an adviser to Founders Pledge, my employer.)

It’s easy to feel fatalistic about these issues, but I am optimistic that experts can do much in the near term. We’ve done it before. Right after the Cuban Missile Crisis, a flurry of activity led to tangible risk-reduction measures, most famously the nuclear hotline, which allowed direct leader-to-leader communication in times of crisis. Later in the Cold War, Ronald Reagan and the late Mikhail Gorbachev worked together to reduce the superpowers’ stockpiles; today, there are roughly one-sixth as many nuclear weapons in the world as there were in 1986. Thanks to their efforts, humanity is arguably safer in some respects today than 60 years ago.

Policy makers, philanthropists, and scientists can look to Cold War-era risk reduction for next steps on global catastrophic risks. On AI governance, the Cold Warriors’ favorite tool, confidence-building measures, can build trust and reduce the risk of misunderstandings. On autonomous weapons, policy makers can emulate the success of the 1972 Incidents at Sea Agreement and consider mechanisms like an International Autonomous Incidents Agreement to create a process for resolving disputes related to AI-enabled systems. On nuclear security, policy makers can update Cold War risk-reduction measures, including projects to increase the resilience of crisis-communications hotlines.

Existing hotlines and most communications systems appear most likely to fail at the very times when they are needed most (for example, when conditions are degraded in the early stages of a major war). Research to increase the resilience of such systems in extreme crisis conditions could therefore be especially valuable if it enables explicit leader-to-leader bargaining and limits escalation. Finally, unofficial dialogues modeled on Cold War exchanges like the Pugwash conferences can facilitate discussions between the United States and China on emerging technologies, helping to reduce tensions and avoid undesirable technology races.

At Founders Pledge, where I work, these emerging risks and the existence of manageable interventions are part of why we just launched a new Global Catastrophic Risks Fund focused on finding and financing effective solutions to these problems. Other efforts to reduce global catastrophic risks are also underway, including endeavors to raise the profile of these issues and to manage them (see, for example, the Global Catastrophic Risk Management Act in Congress).

Sixty years ago, humanity was lucky. Today, we can’t afford to keep relying on our good luck.

