
Closer than ever:

It is now 89 seconds to midnight

2025 Doomsday Clock Statement

Science and Security Board
Bulletin of the Atomic Scientists

Editor, John Mecklin

January 28, 2025


In Depth: Disruptive Technologies

Disruptive technologies to watch in 2025

As in years past, artificial intelligence continues to be a significant and disruptive technology. In particular, the sophistication of large language models (LLMs) such as GPT-4 has led to an intense public debate about the possible existential risks posed by such generative systems. The potential for this particular technology to constitute an existential threat on its own is highly speculative, but as succeeding generations of such models are released, the potential dangers, existential or otherwise, will increase. AI is a disruptive technology, but any physical threat to humans posed by AI must be enabled by a link to a device that can change the state of the physical world, or to decisions that humans make about that world.

Of continuing concern are applications of AI in weapons of war, most especially its possible future application to nuclear weapons. In the past year, there have been multiple reports of AI being incorporated in weapons targeting systems in the war in Ukraine. Israel has reportedly used an AI-based system to create target lists in Gaza. The US military is explicitly asking contractors to incorporate AI in non-nuclear command and control systems. According to public statements, Russia is planning to incorporate AI in its nuclear command and control systems. The head of the US Strategic Command recently stated that, while a human will always make the final decision on the use of nuclear weapons, it is conceivable that AI will be embedded in decision-support systems used for nuclear weapons. With respect to conventional military operations, AI is being extensively deployed in intelligence, surveillance, reconnaissance, simulation, and training activities. Of particular concern are lethal autonomous weapons, which identify and destroy targets without human intervention. Several countries are scaling up their capacity to use AI on the battlefield, including the United States, which has plans to deploy thousands of autonomous (though non-nuclear) weapon systems in the next two years.

Fortunately, many countries are recognizing the importance of regulating AI and are beginning to take steps to minimize its potential for harm. These initial steps include the AI Act, a new regulatory framework by the European Union, and an executive order on artificial intelligence issued by US President Joe Biden that aim to seize the promise and manage the risks of AI. The first challenge in any regulation will be to agree on specific domains—for example, the military or biotechnology—in which the use of AI might be governed by widely accepted rules or norms of behavior. The second challenge will be to agree on the specific content and implementation of those rules and norms.

Increasing chaos, disorder, and dysfunction in the world’s information ecosystem threaten society’s capacity to address difficult challenges, and it is clear that AI has great potential to accelerate these processes of information corruption. AI-enabled distortion of the information environment may be an important factor in preventing the world from dealing effectively with urgent major threats like nuclear war, pandemics, and climate change. This continuing problem took on extra significance in the United States in 2024, when, according to numerous reports, Chinese and Russian disinformation campaigns attempted to subvert the US national elections. Advances in LLM technologies and dramatic improvements in the phony video depictions known as deepfakes could have consequential future effects on the information ecosystem. Appropriate governance of AI and social media platforms is essential to an information ecosystem that supports truth and democracy.

The expansion of drone warfare, especially by Ukraine, has been dramatic. This has led many countries to incorporate drones in their military doctrines and to ramp up production of and trade in weaponized drones of all types. New tactics and innovative uses of drones are changing military operations and will spark efforts to counter unmanned aircraft systems. As seen in the Ukraine conflict, both sides now employ drones in long-distance strikes well beyond the battlefield.

Belligerence among the United States, Russia, and China in space is growing, and with it the probability of conflict there. China and Russia are far more active now than in previous decades, and US activities, both governmental and private, make it difficult to avert a military space race. The use of space systems—including privately owned Starlink satellites—to support military operations continues to expand. As a result, satellites—owned by both governments and corporations—become ever more important as military targets.

Finally, the growing presence of hypersonic weapons in contested regional theaters substantially increases the risk of escalation. Russia’s recent use of an advanced, experimental medium-range ballistic missile, presumably in response to Ukraine’s use of a US-supplied Army Tactical Missile System (ATACMS missile), is an example of this dangerous technology trend. The presence of Chinese hypersonic weapons also could worsen security dynamics in contested areas like the South China Sea.

 


About the Bulletin of the Atomic Scientists

At our core, the Bulletin of the Atomic Scientists is a media organization, publishing a free-access website and a bimonthly magazine. But we are much more. The Bulletin’s website, iconic Doomsday Clock, and regular events equip the public, policy makers, and scientists with the information needed to reduce man-made threats to our existence. The Bulletin focuses on three main areas: nuclear risk, climate change, and disruptive technologies, including developments in biotechnology. What connects these topics is a driving belief that because humans created these threats, we can control them.

The Bulletin is an independent, nonprofit 501(c)(3) organization. We gather the most informed and influential voices tracking man-made threats and bring their innovative thinking to a global audience. We apply intellectual rigor to the conversation and do not shrink from alarming truths.

The Bulletin has many audiences: the general public, which will ultimately benefit or suffer from scientific breakthroughs; policy makers, whose duty is to harness those breakthroughs for good; and the scientists themselves, who produce those technological advances and thus bear a special responsibility. Our community is international, with more than half of our website visitors coming from outside the United States. It is also young. Half are under the age of 35.

Learn more at thebulletin.org/about-us.