A moment of historic danger:
It is still 90 seconds to midnight
2024 Doomsday Clock Statement
Science and Security Board
Bulletin of the Atomic Scientists
Editor, John Mecklin
January 23, 2024
The expanding scope of biological threats
The revolution in the life sciences and associated technologies continues to accelerate and expand in scope, enabling a growing number of individuals, acting alone or in groups, to pose threats arising from both accidental and deliberate misuse. During the past six months, the potential for artificial intelligence tools to empower individuals to misuse biology has become far more apparent.
As noted in our disruptive technology sidebar, generative AI capabilities are expanding exponentially. Concern and controversy continue to swirl around the possibility that generative AI could provide information that would allow states, subnational groups, and non-state actors to create more harmful and transmissible biological agents. Current evidence suggests that, with generative AI, the acquisition of known harmful agents is more likely at present than the creation of entirely new ones. But it is clearly also possible to use generative AI as a tool to enhance existing pathogens. It would be foolish to bet against AI-assisted design of novel biological agents and weapons happening in the future.
In October 2023, US President Joe Biden signed an executive order on “safe, secure, and trustworthy” artificial intelligence. It calls for protection “against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.” While such standards would be a useful step toward managing the use of AI in biotechnology, they are not legally binding and at best pose only a small deterrent to malefactors. The executive order also calls for “extensive red-team testing” of AI systems to probe their ability to enable the acquisition of biological agents, and there are concurrent calls for transparency in the design and development of AI algorithms.
Transparency, however, may not be a good idea with respect to the risks of misusing AI in the life sciences. For example, recent work suggests that the public release of detailed information on large language models enabled hackers to easily evade safeguards and obtain “nearly all key information needed” to produce the 1918 pandemic influenza virus. High-level state-sponsored convenings to discuss the management of AI risks, including the AI Safety Summit at Bletchley Park in the United Kingdom, offer hope for the development of guardrails and top-down risk oversight of AI development and use in the life sciences. But so far these efforts have resulted in largely aspirational and voluntary measures.
During the past year, the evolution of the war in Ukraine may have lessened the perception of an existential risk to the leadership or viability of Russia, and in turn diminished the likelihood of the use of biological agents. At the same time, Russian policy on the use of biological weapons is opaque, the Russia-Ukraine conflict remains fluid, and the possibility of escalation persists.
Terrorist organizations continue to pursue biological agents and weapons, and events around the world heighten concern about the possible use of biological agents by terrorist groups in the Middle East and elsewhere. The use of a biological agent would lead to strong international intervention and (if accurately attributed) widespread condemnation of and action against the country or group that initiated the attack.
Two other types of biological risks remain causes for concern: the accidental release of organisms from laboratories and naturally occurring infectious diseases, especially those with pandemic potential. Deforestation, urbanization, and climate change continue to destabilize microbe-host relationships and facilitate the emergence of infectious diseases. Meanwhile, high-biosafety-level laboratories have proliferated around the world, as has risky research motivated by interests in controlling these diseases. Despite the importance of understanding and countering naturally occurring biological threats, it is not clear that all of these high-biosafety-level laboratories or high-risk experiments are needed to achieve these goals. As the number of laboratories and the amount of risky research increase, and as the failure to standardize safe laboratory practices and institute adequate research oversight persists, the risk of an accidental release of dangerous pathogens grows.
About the Bulletin of the Atomic Scientists
At our core, the Bulletin of the Atomic Scientists is a media organization, publishing a free-access website and a bimonthly magazine. But we are much more. The Bulletin’s website, iconic Doomsday Clock, and regular events equip the public, policy makers, and scientists with the information needed to reduce man-made threats to our existence. The Bulletin focuses on three main areas: nuclear risk, climate change, and disruptive technologies, including developments in biotechnology. What connects these topics is a driving belief that because humans created them, we can control them.
The Bulletin is an independent, nonprofit 501(c)(3) organization. We gather the most informed and influential voices tracking man-made threats and bring their innovative thinking to a global audience. We apply intellectual rigor to the conversation and do not shrink from alarming truths.
The Bulletin has many audiences: the general public, which will ultimately benefit or suffer from scientific breakthroughs; policy makers, whose duty is to harness those breakthroughs for good; and the scientists themselves, who produce those technological advances and thus bear a special responsibility. Our community is international, with half of our website visitors coming from outside the United States. It is also young. Half are under the age of 35.
Learn more at thebulletin.org/about-us.