Recent advances in bioengineering and the rise of open-access literature have made genetic engineering accessible beyond academia. While many synthetic biologists, who fabricate or re-design biological systems, welcome community interest in their discipline, some biosecurity scholars worry that even teenagers can now make bioweapons.
Their concerns are understandable. Bioengineers have already used artificial DNA to reconstruct the polio virus, the 1918 influenza virus, and other pathogens. Even high schools are starting to cover recombinant DNA in their curricula. Meanwhile, a large, unregulated, amateur “biohacking” community has emerged, and its members aim to engineer new organisms that previously could be built only by highly trained scientists. Superficially, at least, it seems plausible that a rogue biohacker could make bioweapons.
In fact, though, this scenario remains very unlikely. I should know. I recently spent 13 months as an undergraduate researcher at the Department of Energy's state-of-the-art Joint BioEnergy Institute. There I learned that designing and executing cutting-edge biology experiments is difficult, especially for novices—and simple, low-cost policy interventions can make it harder still.
The rise of synthetic biology. Synthetic biology aims to create new organisms and biological functions by putting together interchangeable DNA parts with LEGO-like modularity. Beyond serving as blueprints for development, genes also encode mechanisms to regulate the expression of other genes, resulting in circuit-like interactions between genes. Since around 2000, the new discipline of synthetic biology has rewired microbes to follow man-made genetic circuits. While early research focused on understanding gene interactions, more recent research has yielded practical “genetic devices” such as yeast and bacteria that synthesize drugs, produce biofuels, and detect environmental contaminants.
By now, synthetic biology has spread beyond the research laboratory. Since 2004, more and more students have tried their hand at the International Genetically Engineered Machine competition, with 280 university and high school teams participating in 2015. With few requirements beyond registration fees, safety forms, and documentation, competitors—often advised by more senior researchers—are free to devise their own projects: for example, engineering yeast to synthesize indigo, or programming genetic circuits to control whether bacteria sink or float. Experimental protocols and gene sequences from the competition, published openly online, have fueled a growing “biohacking” community of enthusiasts who are building their own laboratories to grow and manipulate microbes. Some want to advance biology; others want to make beer.
Biosecurity experts have long warned that biohackers, armed with the powerful tools of synthetic biology, will eventually engineer pathogens in the same way that computer enthusiasts of the 1970s and 1980s wrote viruses and other malware. So far, however, there has been no convincing example. Evidently, garage biology is not as easy as it sounds.
Steps, and barriers, to weaponization. A bioweapons project requires five basic steps. The first is to build a safe facility that can keep the scientist alive long enough to build a bioweapon. This is demanding, because the whole point of bioweapons is to make something that kills in microscopic doses. The Soviet bioweapons program Biopreparat managed the risk by repeatedly vaccinating workers and acquiring operating-room-quality filtration systems and protective gear that sometimes rivaled spacesuits, according to Ken Alibek’s 1999 book Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World—Told from the Inside by the Man Who Ran It. Because purchasing advanced-grade filters and requesting vaccines for rare diseases would likely attract suspicion, a biohacker would have trouble matching his equipment to these standards—and even with proper protection, at least one Soviet bioweapon effort ended with the researcher’s death.
Second, biohackers need to purchase laboratory instruments. This seems simple. In principle, a biohacker only needs a PayPal account and shipping address to obtain everything he needs to build a weapon. In practice, though, most biohackers only purchase the bare minimum, and they improvise to save money. For instance, many sleep with test tubes under their armpits to avoid buying expensive incubators. This is fine for growing bacteria that glow in the dark, but trying the same shortcut with weaponized strains would likely lead to infection, and a trip to the hospital would immediately end the project.
A practical bioweapons facility needs to move beyond such improvisations. I estimate that a basic setup using last-century equipment would cost at least $15,000. Reagent kits, often hundreds of dollars for only a few reactions, and other disposable equipment would raise the cost to $30,000. Using newer, more reliable machines would cost perhaps ten times as much.
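The estimate above reduces to simple arithmetic. As a sketch (the function and its breakdown are illustrative; the only figures are the rough ones cited in the text):

```python
def lab_cost(base_equipment=15_000, disposables=15_000, modern_multiplier=10):
    """Rough cost estimates, in US dollars, for outfitting a basic lab.

    base_equipment: last-century instruments (text's ~$15,000 floor)
    disposables: reagent kits and other consumables, which roughly
        double the total to ~$30,000
    modern_multiplier: the text's "perhaps ten times as much" for
        newer, more reliable machines (an assumption about what the
        multiplier applies to)
    """
    budget_setup = base_equipment + disposables
    modern_setup = budget_setup * modern_multiplier
    return budget_setup, modern_setup

budget, modern = lab_cost()
print(budget)  # 30000
print(modern)  # 300000
```

Even the budget figure is an order of magnitude beyond what most hobbyists spend on improvised equipment, which is part of the barrier the text describes.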
The third step would be to obtain the base bacteria or virus strains for modification or production. The most straightforward approach is to order and grow a known infectious strain, but this requires documentation. My supervisor at the Joint BioEnergy Institute—who works at one of the country’s premier laboratories—encountered so much paperwork trying to obtain an infectious yeast strain that he eventually gave up. Would-be terrorists, with much shakier credentials, would encounter even more difficulties, along with a high risk of exposure.
My supervisor instead ordered synthetic genes and created a novel organism to perform the functions needed. But companies that sell synthetic DNA check all sequences for “risky” genes, in accordance with the International Association of Synthetic Biology’s 2009 code of conduct for gene synthesis. My supervisor’s order, which encoded proteins that synthesize fat, went through fine. But a biohacker requesting multiple “suspicious” parts would almost certainly trigger alarms.
Fourth, a would-be bioweaponeer would have to design and execute a project. Most undergraduate “pipette monkeys” rely on more-senior researchers to design experiments. Indeed, even PhD-level researchers typically start by modifying experiments culled from existing literature. But of course bioweapon projects are classified. An aspiring bioweaponeer would have to find and modify a published project with “dual-use” potential.
For example, a biohacker might try to modify a project like a recent Berkeley publication describing how scientists engineered bacteria to enter and deliver drugs to human cells. Unlike researchers at established institutions, though, our rogue biohacker could not simply build atop the design by requesting specifications, and would instead need to re-invent the many details that invariably go unpublished. While modularity would help, trying to recreate unpublished sequences by trial and error would be like assembling an IKEA bed from a kit consisting entirely of bolts and small parts. The Berkeley drug-delivery bacterium has five modules, each with up to six parts. Weaponizing it would require still more modules and parts. In theory, this is simple. But in practice, I spent a month experimenting with assembly protocols to build one four-part module, and even then, it was unclear whether it was functional. A biohacker, working with limited resources and less-accurate equipment, would need years to re-engineer and weaponize a drug-delivery bacterium.
Fifth and finally, a weapon created by a biohacker would probably need to be tested before being deployed on human targets. During the Cold War, nation-states maintained remote bioweapon-testing facilities; the Soviets, for instance, tested their strains on monkeys on an isolated island. Individuals, by contrast, would have a harder time ordering primates or other test animals in large quantities without raising questions.
Observers often speculate that bioweaponeers could simply forgo testing and attack immediately. But bioweapons hardly ever work the first time. In the 1990s, the Japanese terrorist group Aum Shinrikyo developed strains of anthrax from vaccines, but because these strains were weak and their delivery methods ineffective, all of the group’s known anthrax attacks failed. While genetic engineering has indeed become more efficient since the 1990s, increased biosecurity regulations limit biohackers to working with far more innocuous organisms. Introducing pathogenic traits to these organisms probably poses a larger challenge than restoring virulence to vaccines, and in the meantime, each failed attack would dramatically increase the risk of apprehension.
Policy chokepoints. Even if biohackers clear these hurdles, biology remains so heavily chance-dependent that many experiments cannot be reproduced. This makes bioweapons even less attractive to potential amateurs. Nevertheless, chance cuts both ways: even clumsy bioweaponeers could simply “get lucky.” Government regulators should reduce this chance by imposing sensible chokepoints.
First, the government should build community by subsidizing the “biohacking labs” that have already emerged across the United States. This is worth doing on social and economic grounds alone, but it also has a security value: Suspicious behavior is much more likely to be noticed and reported when it takes place within a strong, tightly knit community. Federal funding would encourage legitimate enthusiasts to come into the open—and report security risks in real time. While lone wolves would not be eliminated, they would receive far less help from friends—and would stick out more than under the current system, where going it alone is essentially the only option.
Second, regulators should place more restrictions on purchasing lab equipment. Requiring buyers of secondhand laboratory equipment to submit credentials would impose only minor costs on legitimate laboratories and start-ups. Most biohackers would be unaffected: many perform experiments that tolerate the inaccuracies of improvised equipment, and others could reach out to regulated local biohacking labs.
Lastly, the government should tighten regulations on synthetic DNA. Many scientists have argued that the United States, like Europe, should require licenses for using synthetic DNA. Granted, genuine synthetic biology enthusiasts would suffer: few European biohackers engineer organisms, despite similar levels of interest among academics, suggesting that licensing requirements significantly hinder the legitimate biohacking movement.
Incremental changes in accessibility and cost are unlikely to lower the barriers that currently prevent amateurs from developing bioweapons. Only fundamental technological advancements, such as significantly increasing the efficiency of DNA recombination or bringing long-strand DNA synthesis within the reach of individuals, can do that. Novice biologists are not likely to construct advanced weapons any time soon.