Dual-use research after the avian influenza controversy

By Michael J. Imperiale | July 11, 2012

The past nine months have been trying for researchers who study the H5N1 avian influenza virus, the committees that have been discussing dual-use research in the life sciences, and the entities that fund and publish such research. The details have been reported in many venues and need only brief summary here: Two laboratories funded by the National Institutes of Health (NIH) embarked on studies to determine whether the H5N1 virus — a bird flu virus that has caused a relatively small number of human deaths — could be made to transmit between people.

This was an interesting scientific question; to date, almost all confirmed human infections with this deadly pathogen have been due to direct contact with an infected animal. Through genetic engineering and the passaging of viruses through ferrets, the best animal model for human infection, the labs successfully isolated virus variants that could transmit by aerosol from one ferret to another and, potentially, between humans.

When the labs submitted the work for publication, the United States government asked the National Science Advisory Board for Biosecurity (NSABB), a 22-member group that advises the government on dual-use research issues, whether the studies raised concern. Using tools and criteria developed and presented to the government in 2007, the advisory board performed a risk-benefit analysis and concluded that the detailed results could be misused by an individual, group, or nation-state that wished to cause harm, while the benefits of the research were limited. The board therefore recommended that the general result be communicated but not the specific mutations that allowed transmission. Realizing that there were potential benefits to the specific results of the research, however, the board also recommended that the details be shared with those who had a legitimate need to know, such as public health officials involved in surveillance.

The recommendations led to a spirited debate, and the World Health Organization (WHO) convened a meeting of influenza experts, who read the manuscripts and recommended full publication. The authors stated publicly around the same time that the viruses they isolated were not as lethal as originally reported. Earlier this year, revised papers were submitted and considered by the NSABB. The NIH also informed the board that its original solution — selectively sharing the information — was not workable. Faced with the binary choice for or against full publication, the advisory board recommended that the new manuscripts be published; one recommendation was made by unanimous decision, and the other in a split vote. Both manuscripts have now appeared, one in Nature and the other in Science.

Regardless of the side of the debate on which they fall, most of those involved agree that this situation uncovered serious flaws in the way that the dual-use research issue is thought about and dealt with. To effectively oversee dual-use research in the life sciences, three main questions need to be addressed: What is dual-use? Who should conduct the risk-benefit analysis that determines whether dual-use research is funded and published? And what should be done if a serious risk related to dual-use research is identified?

To date, these basic questions have not been answered in a way that allows for effective oversight of dual-use research of concern taking place in countries around the world.

Defining the problem. As it relates to the life sciences, the term “dual-use” has a broader meaning than in other fields, where it signifies civilian technologies that can have military uses. As defined in the National Research Council report, Biotechnology Research in an Age of Terrorism, dual-use life sciences research is legitimate research that might be misused. If one considers this definition carefully, one realizes that it can refer to almost any biological research. For example, polymerase chain reaction (PCR) technology has revolutionized research and medical diagnostics, but it can certainly be misused to recreate the sequence of a dangerous pathogen.

Realizing this lack of clarity, in its 2007 report the NSABB attempted to define a threshold at which dual-use research becomes a concern. The advisory board proposed that research of concern would be “research that, based on current understanding, can be reasonably anticipated to provide knowledge, products, or technologies that could be directly misapplied by others to pose a threat to public health and safety, agricultural crops and other plants, animals, the environment, or materiel.” Unfortunately, even this definition is very subjective because it is often difficult to assess the potential for misuse. About whom are we concerned? Nation-states? Terrorist groups? Lone wolves? Each has different motives. And how likely is it that a given agent or technology or result can be misused?

Who should decide. Because dual-use is such a slippery but crucial concept in the life sciences, the question of who should determine the relative benefits and risks of research outcomes is an important one. Most life science research has an obvious legitimate purpose, the potential benefits of which are usually well-defined. The time frames in which these benefits may be realized can differ, of course, depending on whether the research involves “pure” science or is a more applied type of project. In either case, though, the potential benefit is usually clear to see.

Sometimes, however, the benefits of research are debatable, or at least not clearly recognizable. One of the most contentious issues about the H5N1 publications was whether the viral sequence information could be of any positive utility: Some argued that public health authorities could use the data to aid in surveillance; others posited that the surveillance system is not robust enough to take advantage of this knowledge.

The assessment of the possible risk of dual-use research is even more difficult. Society may know an actor’s intentions in the broadest sense: For example, Al Qaeda has made it known that it would like to use bioterror. But specifics are hard to come by, and the democratization of life sciences technology means that there are many more individuals and groups that have the capability to do research that might be misused than one can even begin to identify. As such, the risks are often theoretical, and there is certainly a danger of confusing true risks with what is simply imaginable.

Given the extent of the unknown (and perhaps even the unknowable) risks of life science research, it seems prudent to follow the precautionary principle in determining how widely particular research results should be disseminated. That is, the burden should be placed upon those who wish to engage in research and publish their results to make a convincing argument as to why the risks of disseminating those results are insignificant. There seems to have been relatively little emphasis on risk analysis on the front end, when the NIH reviewed and funded the H5N1 projects, and the NSABB and WHO deliberations about publication therefore came late in the game.

The US government issued a stopgap policy at the end of March that allows restrictions on communication of experiments that fall under defined categories and that involve Tier 1 agents — those pathogens and toxins deemed to be the most dangerous — but this type of list-based risk assessment has serious flaws, as has been elegantly described elsewhere. Notably, such an approach ignores other agents that, while less lethal, may still be misused. In addition, it does not distinguish highly virulent from attenuated versions of a given listed pathogen and can discourage scientists from working on important pathogens because of the added regulatory burden.

Given that the life sciences research enterprise is global, and the entire world stands to benefit from the progress being made (or possibly suffer from its misuse), it seems obvious that the risk-benefit assessment process for dual-use research of concern must be an international undertaking. Any such system must also involve all the stakeholders, including scientists, funding agencies, experts in biosecurity, and the public, to name just a few. The H5N1 experience illustrated how a group composed largely of scientists (those at the WHO meeting) can come to a different conclusion than a more broadly constituted group (the NSABB). Creating such a multidisciplinary, global assessment body is easier said than done, of course, and international politics will certainly rear its ugly head. But the global risks of dual-use research cannot be properly addressed by one country alone, no matter how influential or scientifically developed it might be.

How might dangerous research be controlled? Regardless of what body makes the determination, once agreement is reached that the result of a particular study poses risks to the public, various paths forward could be followed. One is censorship or classification of the data. This option is distasteful to scientists for a number of reasons, including the importance of open dissemination of results to the scientific process itself. Just the same, such a solution is standing policy, at least in the United States. National Security Decision Directive 189, first issued by the Reagan administration in 1985 and reaffirmed by the Bush administration after the 9/11 attacks, states, “It is the policy of this administration that, to the maximum extent possible, the products of fundamental research remain unrestricted. It is also the policy of this administration that, where the national security requires control, the mechanism for control of information generated during federally funded fundamental research in science, technology and engineering at colleges, universities and laboratories is classification.”

A second path of control — and one originally recommended by the NSABB for the bird flu studies — would keep certain details of research results from being widely disseminated; they would be shared only with those who have a legitimate need to know. The lesson from the H5N1 episode is that, at present, there is no mechanism in place to implement such a recommendation. To many scientists, this second option is every bit as distasteful as the first. But the ease of access to life science technology and the intention to cause harm that terror groups have expressed make for a dangerous combination. It has been argued that once information is released even to a limited number of researchers, keeping it under control is difficult. This is surely true, but it might be better to develop a new paradigm for how potentially dangerous results are published now than to face the draconian restrictions almost certain to be issued in the aftermath of the next bioterror event.

Participation in the discussion of dual-use research controls needs to be broadened. Given the rapid progress of the research enterprise, projects that are similar to the H5N1 transmissibility studies, or that build on those studies, are likely in the planning stages, if not already underway. And the discussion should not focus narrowly on influenza virus. Experiments involving other pathogens, or in other rapidly advancing fields, including synthetic biology, also warrant thoughtful consideration. The way the two recent H5N1 manuscripts were handled is not sustainable for many reasons, some of them practical. The review involved thousands of hours of work by NSABB members, taking them away from their primary employment responsibilities. Moreover, while many other countries are looking to the United States for guidance on this topic, there is also a fair amount of distrust of the United States in other quarters. This is another argument for international dialogue and a globally accepted review system for dual-use life science research of concern.

Discussion of dual-use research controls also needs to be pressed forward with much greater urgency. Although the NSABB made its original recommendations regarding oversight of dual-use research in 2007, the NIH seemed to be caught off guard when the two H5N1 manuscripts came to light four years later. The lack of preparation is evident from the fact that the work had not been identified earlier, during the grant review or administration processes. The NSABB is administered through the Office of the NIH Director, and the advisory board has had ex officio members from the National Institute of Allergy and Infectious Diseases, which funded the laboratories that conducted the controversial H5N1 studies. There was ample time and opportunity for engagement with the scientists who conducted the research and their institutions, but that engagement apparently did not occur.

The life science research enterprise holds tremendous promise for continued improvements to human, animal, and plant health, and to the future of the world. Society acknowledges the importance of that enterprise through substantial public funding that supports basic and applied life science research. In return, life scientists have a responsibility to the public. One positive outcome of the recent debate has been increased awareness of the dual-use issue. The global life sciences community must build on this awareness, taking the lead in developing viable solutions that ensure a vigorous research agenda is pursued in a manner that demonstrates systematic, accountable, and global attention to potential risks.

