
Today’s AI threat: More like nuclear winter than nuclear war

By Daniel Zimmer, Johanna Rodehau-Noack | February 11, 2024

Robots standing with a nuclear winter landscape in the distance. Credit: Thomas Gaulkin / Adobe Stock

Last May, hundreds of leading figures in AI research and development signed a one-sentence statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” While ongoing advances in AI clearly demand urgent policy responses, recent attempts to equate AI with the sudden and extreme immediate effects of launching nuclear weapons rest on a misleadingly simple analogy—one that dates back to the early days of the Cold War and ignores important later developments in how nuclear threats are understood. Instead of an all-or-nothing thermonuclear war analogy, a more productive way to approach AI is as a disruption to global systems that more closely resembles the uncertain and complex cascades of a nuclear winter.

Over the last year and a half, headline-grabbing news has fed into the hype around the awe-inspiring potential capabilities of AI. However, while public commentators brace for the rise of the machine overlords, artificial un-intelligence is already kicking off chains of widespread societal disruption. AI-powered disinformation sows distrust, social media algorithms increase polarization, and mass-produced synthetic media degrade democratic engagement while undermining our shared sense of reality.

Uncritically equating acute nuclear attack effects and AI threats risks reproducing the same kind of all-or-nothing thinking that drove some of the most dangerous dynamics of the nuclear arms race. Drawing these analogies also unduly distracts from the dramatic damage that even a comparatively “small” nuclear war or “dumb” AI can cause to today’s interconnected social, ecological, and political systems. Rather than fear a future AI apocalypse, policymakers should recognize that the world is already living through something like the early days of an AI nuclear winter and develop effective frameworks for regulation that factor in how AI is disrupting political, social, and ecological systems in unpredictable ways today. Overemphasizing speculative dangers of superintelligence (systems that exceed human intelligence) jeopardizes urgently needed efforts to regulate AI with a view to the systemic impacts of actual and emerging capacities.

Nuclear risk revisited. In 1961, John F. Kennedy warned that “every man, woman and child lives under a nuclear sword of Damocles” based on the contemporary concern that global fallout from thermonuclear war could poison every living being. The Cuban Missile Crisis of October 1962 came within a hair’s breadth of bringing the sword down, elevating nuclear fear to an unprecedented pitch. That very same month, computer pioneer Irving J. Good said “the survival of man depends on the early construction of an ultraintelligent machine.” Such a machine would surpass human intelligence, and Good proposed that human beings stood poised on the cusp of unleashing a self-reinforcing artificial intelligence explosion that could transform human existence just as totally as thermonuclear war. “Ultraintelligence,” he noted, would possess the transcendent power to either solve all human problems or destroy all human life, becoming “the last invention that man need ever make.”

Over the years, this simple and compelling vision of a sudden and transformative AI apocalypse has persisted almost unchanged. Computer scientist Vernor Vinge rechristened Good’s “intelligence explosion” the singularity in the 1990s, further warning that if it cannot be averted or contained, AI could cause “the physical extinction of the human race.” Good’s misgivings finally went mainstream a half-century later with the publication of philosopher Nick Bostrom’s book Superintelligence, which warned of an impending runaway AI that could see “humanity deposed from its position as apex cogitator over the course of an hour or two”—a transformation so sudden and total that its only “precedent outside myth and religion” would be global thermonuclear war.

At the same time, while visions of extinction by AI explosion remained remarkably fixed, understandings of nuclear danger underwent a sea change. After realizing that the radiological risks feared in the 1960s had been overstated, scientists began studying the global environmental effects of nuclear weapons in the 1970s. By the early 1980s, they started to realize that the global climatic impacts of nuclear war could be nearly as devastating as the radiological harm and required far fewer weapons to trigger. The firestorms of burning cities would fill the atmosphere with soot and particles that would block sunlight, causing surface temperatures to plummet and setting off a self-reinforcing cascade of collapses across interconnected ecological, agricultural, industrial, and social systems. Subsequent studies have confirmed that the resulting “nuclear winter” would likely kill the majority of those alive today, while even a limited exchange of several hundred warheads between India and Pakistan could still kill as many as two billion by starvation in the gloom of a milder “nuclear autumn.”

Over the decades, advances in planetwide data collection and computer modeling transformed understandings of nuclear danger, replacing mistaken certainties about universal death by fallout with a growing awareness of the uncertain consequences that would follow from cascades of environmental and social breakdown. Similarly, the last several years have seen rapidly advancing AI capabilities spread to transform whole networks of human relations—with already destabilizing political and ecological consequences. Deepfakes intended to influence voters erode trust, and digital assistants and chatbots affect humans’ capacity for cooperative behavior and empathy, while producing immense carbon footprints. Just as it would take only a tiny fraction of today’s nuclear arsenals to initiate a chain of global-scale catastrophic events, humans do not need to wait for a moment when “machines begin to set their own objectives” to experience the global, interconnected, and potentially catastrophic harms AI could cause.

Today’s AI products contribute to, and accelerate, global warming and resource scarcity, from mining minerals for computing hardware to the consumption of massive amounts of electricity and water. Notably, the environmental burden of AI gets short shrift from those worried about the technology’s existential threat, as the “Statement on AI Risk” lists AI alongside nuclear war and pandemics but does not include climate change as an existential issue. Beyond environmental harms, existing AI systems can be used for nefarious purposes, such as developing new toxins. OpenAI’s large language model interface ChatGPT has been successfully prompted to share bomb-making instructions and tricked into outlining the steps to engineer the next pandemic. Although these examples still require more human input than many realize, an AI system is reportedly generating targets in Gaza, and the race is on to deploy lethal autonomous weapons systems that could reset the balance of power in volatile regions across the globe. These examples show that it does not take an intelligence explosion to cause immense harm. The ability to leverage automation and machine efficiency to global catastrophic effect is already here.

Arms race to the bottom. More insidiously, the analogy between nuclear weapons and the infinite risk-reward calculus of an “artificial intelligence explosion” reproduces the dynamics of the arms race. There are just enough similarities between the rush for nuclear and AI superiority to encourage repeating the same mistakes, with the phrase “AI arms race” becoming a common refrain. One of the clearest similarities between these cases might be that, much as the nuclear arms race with the Soviet Union was driven by spurious bomber and missile “gaps,” some of today’s most heated arms-race rhetoric hinges on overhyping China’s prowess.

A closer inspection shows that nuclear and AI arms races differ fundamentally. While building nuclear arsenals requires accessing a finite supply of enriched fissile material, AI models consist of binary code that can be infinitely copied, rapidly deployed, and flexibly adopted. This radically transforms the scale of the proliferation hazard of AI, particularly because—in contrast to the strict governmental oversight of nuclear weapons—AI development is highly commercialized and privatized. The difference in proliferation between nuclear technology and AI matters for approaches to their governance. The former can generate both explosions and electric power, but its weaponization can be measured and monitored. Current benchmarks for AI development, by contrast, are too far removed from real-world applications’ effects to usefully assess potential harm. In contrast to nuclear technology, AI is not merely dual-use. Instead, the remarkable range of activities it can transform makes it a general-purpose, enabling technology like electricity.

Where the vast build-up of the nuclear arms race signaled each adversary’s resolve to potentially destroy the world but otherwise left it intact, the headlong race towards an artificial intelligence explosion promises to radically transform the world regardless of whether its ultimate destination is ever reached (or even proves reachable).

Neither all nor nothing. Disarmingly simple analogies between AI and immediate nuclear risks make not only for powerful rhetoric but also for good marketing. Whether or not developers genuinely believe that their products pose an existential threat, framing the near-term future of AI as such has granted executives of OpenAI, Anthropic, Microsoft, and Google access to high-level policy discussions at the White House, the US Senate, and the notoriously secretive Bilderberg conference. The result has been a flurry of promises by the tech firms to police themselves as they rush to release ever-more capable AI products. By encouraging the public to fixate on how these applications might end the world, AI CEOs divert attention from the urgent need to regulate the ways in which they are already actively unraveling the social, economic, and ecological support systems of billions in their drive to outrun their rivals and maximize market share.

While tech companies are stakeholders, they should not be the loudest—let alone only—voices in discussions on AI governance. Policymakers must not be distracted by the specter of superintelligence; they must take action that goes beyond gathering voluntary commitments from AI developers. Existing guidance and directives are a good start, but policymakers need to push forward to develop binding and enforceable legislation addressing both current and potential AI harms. For example, the Bletchley Declaration resulting from the recent summit on AI safety held by the United Kingdom government widens the horizon of concern. Going beyond immediate issues of data privacy, bias, and transparency, it also considers the potential effects of AI on political stability, democratic processes, and the environment. However, critics note that it remains a largely symbolic and highly elite-focused agreement without actual enforcement mechanisms.

Looking to the early nuclear era can provide valuable lessons for throttling the race for AI superiority, but these lessons are not directly translatable. The current and future globe-spanning effects of AI can only be addressed through international cooperation, most importantly between the United States and China as the two major antagonists. While the talks between presidents Joe Biden and Xi Jinping at the Asia-Pacific Economic Cooperation summit in San Francisco in mid-November did not yield specific agreements or commitments on AI regulation from either side, both parties recognized the need for international AI governance. They also showed willingness to establish formal bilateral cooperation on the issue.

However, because the proliferation hazards of AI fundamentally differ from those of nuclear weapons, limiting the arena to those with advanced AI programs, even only initially, is short-sighted. A framework of global AI governance is only as good as its weakest-governed element, so it must be stringent and inclusive from the start. Such an effort won’t be exhausted by one international body modeled after the International Atomic Energy Agency. The general-purpose nature of AI technology calls for multiple regulatory mechanisms bound together by common principles. In addition to bilateral dialogue, policymakers should closely follow and support multilateral efforts, such as the newly established High-level Advisory Body on Artificial Intelligence at the United Nations.

To be sure, refocusing on the already-unfolding complex harms of AI does not mean being complacent about the long-term and existential risks it might pose. That humans have narrowly avoided nuclear war since the Cuban Missile Crisis does not diminish the urgency of managing today’s evolving nuclear threat. Similarly, decades of unfulfilled expectations about the imminent creation of an “ultraintelligent machine” do not prove it is impossible. Should a viable path to achieving greater-than-human intelligence ever open, it will be far better to be prepared. The best way to make ready for any such eventuality begins by directly addressing the cascades of planet-wide harms that AI applications are already causing. Every step taken to mitigate ongoing damage and redirect AI development towards goals of greater justice, sustainability, and fairness will help to create societies that are better able to grapple with the unresolved legacies of nuclear weapons and the undiscovered horizons of AI.
