The authoritative guide to ensuring science and technology make life on Earth better, not worse.

A moment of historic danger:

It is still 90 seconds to midnight

2024 Doomsday Clock Statement

Science and Security Board
Bulletin of the Atomic Scientists

Editor, John Mecklin

January 23, 2024


In Depth: Disruptive Technologies

AI and other disruptive technologies to watch

The most significant development in the disruptive technology space last year was the dramatic advance in generative artificial intelligence. The sophistication of text generators based on large language models, such as GPT-4, led some respected experts to express concern about possible existential risks arising from further rapid advancements in the field. This point is highly contested, with other experts arguing that the potential for AI-related existential risk is highly speculative and distracts from real and immediate non-existential risks that AI poses today.

It is clear that AI is a paradigmatic disruptive technology. Any physical threat posed by AI must be enabled by a link to devices that can change the state of the physical world. For example, connecting the metaphorical nuclear launch button to ChatGPT would certainly pose an existential threat to humanity—but the existential threat would be from nuclear weapons, not AI. In work published in December 2023, an autonomous laboratory run by robots was coupled to the output from natural language models to create novel materials. Bad human decisions to put AI in control of important physical systems could indeed pose existential threats to humanity.

Increasing chaos, disorder, and dysfunction in our information ecosystem threaten democracy and our capacity to address difficult challenges, and it is abundantly clear that AI has great potential to vastly accelerate these processes of information corruption and deformation. AI-enabled corruption of the information environment may be an important factor in preventing the world from dealing effectively with other urgent threats, such as nuclear war, pandemics, and climate change.

Military uses of AI are accelerating, with extensive use already occurring in intelligence, surveillance, reconnaissance, simulation, and training. Generative AI is likely to be included in information operations. Of particular concern are lethal autonomous weapons, which identify and destroy targets without human intervention. The United States is dramatically scaling up its use of AI on the battlefield, including plans to deploy thousands of autonomous (though nonnuclear) weapon systems in the next two years.

Fortunately, many countries are recognizing the importance of regulating AI and are beginning to take steps to minimize its potential for harm. These initial steps include a proposed regulatory framework by the European Union, an executive order by US President Joe Biden, the Bletchley Declaration endorsed by 28 countries, and the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy endorsed by 51 states. The first challenge will be to agree on specific domains, such as military and biotechnology applications, in which the use of AI is governed by widely accepted rules or norms of behavior. The second challenge will be to agree on the specific content and implementation of those rules and norms.

The use of AI and other information technologies, combined with various sensors for real-time analysis, has accelerated the ability of authoritarian regimes to monitor the activities of citizens, repress and persecute dissenters, censor what citizens are able to see and hear, and manipulate public opinion. China is a leader in digital authoritarianism; in April 2023 the US Justice Department charged 34 People’s Republic of China police officers with using thousands of fake social media accounts to harass dissidents living in the United States. Russia also is an active purveyor of disinformation, spreading false and misleading narratives on its war in Ukraine in a variety of ways, including through websites that impersonate international news organizations.

While the rapid proliferation of small satellites promises greater access to an uncensored internet and increased resilience to attack, belligerence in space among the United States, Russia, and China is growing. Russia in particular continues to demonstrate aggressive behavior toward US systems, and China's development of threatening space systems is worrying.

Some private-sector actors wield power and influence through their control of disruptive technologies such as social media, artificial intelligence, and space-based internet services. One of the most significant recent events in the domain of cyber-enabled disinformation was the acquisition of Twitter by Elon Musk. Renamed "X," the platform has all but abandoned previous measures to reduce online impersonation and has sharply curtailed its efforts to identify and reduce conspiracy theories and malicious misinformation. Appropriate governance of such technologies is an essential aspect of managing their emergence.

Finally, the increasing presence of hypersonic weapons in regional theaters raises the escalatory stakes of a conflict. In particular, the mere presence of Chinese hypersonic weapons could force US aircraft carriers to assume stations farther from areas of potential conflict, such as the South China Sea.

About the Bulletin of the Atomic Scientists

At our core, the Bulletin of the Atomic Scientists is a media organization, publishing a free-access website and a bimonthly magazine. But we are much more. The Bulletin’s website, iconic Doomsday Clock, and regular events equip the public, policy makers, and scientists with the information needed to reduce man-made threats to our existence. The Bulletin focuses on three main areas: nuclear risk, climate change, and disruptive technologies, including developments in biotechnology. What connects these topics is a driving belief that because humans created them, we can control them.

The Bulletin is an independent, nonprofit 501(c)(3) organization. We gather the most informed and influential voices tracking man-made threats and bring their innovative thinking to a global audience. We apply intellectual rigor to the conversation and do not shrink from alarming truths.

The Bulletin has many audiences: the general public, which will ultimately benefit or suffer from scientific breakthroughs; policy makers, whose duty is to harness those breakthroughs for good; and the scientists themselves, who produce those technological advances and thus bear a special responsibility. Our community is international, with half of our website visitors coming from outside the United States. It is also young. Half are under the age of 35.

Learn more at thebulletin.org/about-us.