
Apathy and hyperbole cloud the real risks of AI bioweapons

OpenAI co-founder and CEO Sam Altman speaks at TechCrunch in 2019. Credit: TechCrunch, CC BY 2.0, via Wikimedia Commons.

“Can chatbots help you build a bioweapon?” a headline in Foreign Policy asked. “ChatGPT could make bioterrorism horrifyingly easy,” a Vox article warned. “A.I. may save us or construct viruses to kill us,” a New York Times opinion piece argued. A glance at many headlines around artificial intelligence (AI) and bioweapons leaves the impression of a technology that is putting sophisticated biological weapons within the reach of any malicious actor intent on causing harm with disease.

Like other scientific and technological developments before it, AI is dual use: It has the potential to deliver a range of positive outcomes as well as to be used to support nefarious activity by malign actors. And, as with developments ranging from genetic engineering to gene synthesis technologies, AI in its current configurations is unlikely to result in the worst-case scenarios suggested in these and other headlines—an increase in the use of biological weapons in the next few years.

Bioweapons use and bioterrorism have been, historically, extremely rare. This is not a reason to ignore AI or be sanguine about the risks it poses, but managing those risks is rarely aided by hype.

AI-enabled bioweapons? Much of the security discussion to date has focused on large language models (LLMs), which power AI chatbots such as ChatGPT, and on the potential these tools and models have for enabling biological weapons. As one recent piece put it, AI and bioweapons are the latest security obsession. OpenAI, which developed ChatGPT, stress-tested the chatbot for biosecurity concerns and publicly released a “system card” in spring 2023 addressing the risks as the company saw them. The company claimed that “a key risk driver is GPT-4’s ability to generate publicly accessible but difficult-to-find information, shortening the time users spend on research and compiling this information in a way that is understandable to a non-expert user.” The stress test indicated that “information generated by the model is most likely to be useful for individuals and non-state actors who do not have access to formal scientific training.”

A few weeks after the release of OpenAI’s system card, the prestigious journal Science published a news story on a thought experiment conducted at the Massachusetts Institute of Technology (MIT). Researchers had asked a group of undergraduate students to find out, with the help of large language models, how to create and order a dangerous virus capable of unleashing a pandemic.

Within an hour, the chatbots had suggested a list of four potential pandemic pathogens. In some cases, the chatbots pointed to genetic mutations reported in the literature to increase transmission. The large language models also described how the viruses could be created from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies judged unlikely to screen orders, identified detailed protocols and explained how to troubleshoot them, and recommended that anyone lacking the skills engage other organizations to overcome these challenges.

The experiment was held up as an example of how AI could make it easier for someone with evil intentions and no science background to gain the know-how to devise a biological weapon.

A slew of reports followed the Science news piece, helping to make policymakers at the highest levels aware of the potential security implications of the AI–bio intersection. Ahead of a UK-hosted summit on AI safety in November 2023, then-British Prime Minister Rishi Sunak warned that AI could make it easier to build biological weapons. The political and tech leaders in attendance echoed that concern in a declaration emphasizing the potential catastrophic harm from AI and biotechnology. Around the same time, US President Joe Biden’s administration issued a landmark executive order on AI that included a plan to probe how emerging AI systems might aid malicious actors in plots to develop bioweapons.

AI and bioweapons: critical assessments. While AI appears destined to have a major impact on a wide range of industries and activities, both caution and skepticism are warranted. The flood of shoddy AI-generated material in search results has been referred to as “slop” by some commentators. Others go further and claim that large language models’ errors, or “hallucinations,” should more accurately be called “bullshit.”

Between the hype and dismissal is a more complex reality.

For all the doomsaying, there are many uncertainties in how AI will affect bioweapons and the wider biosecurity arena. While AI can be used to predict and design new toxic compounds, proteins that have harmful effects, or enhancements that make pathogens even more harmful, the leap from scientific theory to bioweapons reality has rarely occurred, and no deliberate use of disease has had a major impact on a conflict.

Large language models like ChatGPT may make it easier for non-experts to access dual-use knowledge and thereby lower barriers to intentional misuse, but much, if not all, of this information is already available to anyone with above-average search skills on the internet. This limitation is not unique to the bioweapons area: In July 2023, an assessment of AI’s capabilities by UK intelligence experts concluded that the technology was the equivalent of an extremely junior analyst and served as a basic productivity assistant.

What is notable in assessments of large language models and their advantages and limitations is that they get things wrong, suffer from bias, can oversimplify complex relationships, and fail to take into account the social, political, organizational, and technological contexts that shape decisions around biological weapons development and use. As with any machine learning system, the datasets used to train these models must be sufficiently large and representative to reduce bias. There is also the garbage-in, garbage-out challenge: High-quality data is essential to efficient AI training, and such training is becoming very expensive.

In addition, within biology and the life sciences, data availability can be restricted for all kinds of reasons, including licensing policies, ethical and security considerations, and proprietary rights. The availability of high-quality biological datasets is not a given; neither is their completeness. The datasets used to train AI have been of variable quality and completeness, and scientists still need to evaluate computational results and validate them experimentally.

Knowledge and information alone are also insufficient: Evidence from state biological weapons programs and from terrorist plots and attacks using biological weapons shows that the weapons development process is anything but straightforward. Mass-casualty biological weapons are not easy or cheap to produce, and claims that they are cheap, easy, and simple to make paint a distorted, even apocalyptic, picture of the threat that is far from realistic.

Biological and toxin weapons exist, or could exist, on a spectrum from the relatively simple, such as ricin distributed by mail, to potentially more sophisticated weapons, often portrayed as genetically modified pathogens able to kill millions. Tabletop exercises conducted by experts and policymakers often feature worst-case scenarios; two examples are a 2001 exercise called Dark Winter that involved smallpox and a 2021 exercise featuring a genetically engineered mpox virus.

Many analysts and policymakers stress that pathogens and toxins can be easily isolated from nature or obtained commercially because they also have legitimate commercial or pharmaceutical uses. They point out that much of the equipment used in biology and the life sciences is essentially dual-use in nature and can be readily acquired, while scientific publications provide ample descriptions of experiments and techniques that many believe can be easily replicated. Such claims are not incorrect, but beyond some pathogens at the relatively simple end of the spectrum, the unique nature of bioweapons materials creates steep challenges that go well past acquiring pathogenic or toxin material; these include processing, handling, and producing sufficient amounts of a pathogen. Unlike nuclear weapons, which rely on materials with physically predictable properties, bioweapons are based on living organisms, and living organisms evolve. They are prone to developing new properties and are sensitive to environmental and handling uncertainties. The behavior of living organisms is therefore unpredictable throughout all stages of development and use as a weapon.

This unpredictability imposes an extended trial-and-error process to acquire the skills necessary to solve the problems that inevitably arise. Consequently, possessing the skills to handle and manipulate pathogens throughout the development process is a greater barrier to entry into the bioweapons field than is material procurement.

Structured risk and threat assessments. If the development of bioweapons were so simple, more states and terrorist groups would have achieved satisfactory results. The historical evidence shows otherwise. In addition, some of the key developments in science and technology have not found their way into offensive weapons over the last two decades: Biological weapons use has been and remains rare, and to date, use by violent non-state actors has been rudimentary, for example, a cult’s poisoning of salad bars with salmonella.

The challenge, as it has been for more than two decades, is to avoid both apathy and hyperbole about scientific and technological developments that affect biological disarmament and efforts to keep biological weapons out of the war plans and arsenals of violent actors. Debates about AI absorb high-level and community attention, and while the initiatives and funding they mobilize are welcome, they risk an overly narrow threat focus that loses sight of other risks and opportunities. It is crucial that the disarmament community maintain a broad view, locating the risks and opportunities posed by new and emerging technologies within the larger social and technological context that shapes weapon selection and use decisions by both states and violent non-state actors.

As it currently exists, AI might help someone looking for information, and the information generated is most likely to be of value to violent actors who aspire to acquire bioweapons; for now, the anticipated risk is hypothetical. More recent studies on the biothreat from AI are starting to recognize this. As AI matures, it will pose other challenges, and while there must always be a place for experts, concerned citizens, scientists, and others to identify issues and voice concerns, the complexity of scientific and technological developments and the interactions among them mean that a more structured assessment of AI is required. Such structured analysis and assessment could come from individual states, from a group of experts in civil society, or from a scientific advisory mechanism within the Biological Weapons Convention, the global treaty banning bioweapons.

Greater awareness of the risks and challenges AI poses in the biosafety and biosecurity realms should then serve as a basis for developing national, regional and multilateral responses to those risks by states and civil society actors.

ACKNOWLEDGMENT

This document was sponsored by and prepared under the auspices and guidance of Andreea Paulopol of the Department of State (DOS) Bureau of Arms Control, Deterrence, and Stability, Key Verification Assets Fund.




