The biosecurity risks not yet addressed

By Jens H. Kuhn, February 15, 2008

It is a relief to me as a bench scientist, but a positive surprise as a biodefense professional, that all of the discussants in this roundtable argued in favor of publicly available pathogen sequences. This indicates that the gap in thinking between life scientists and policy experts is closing; in the past, that gap made it difficult for either group to understand and accept the viewpoints of the other. Pathogen sequences
need to be available to every scientist, argue the scientists, because otherwise research
would come to a screeching halt or at least be impeded dramatically. Pathogen sequences
can be available to every scientist, argue the policy experts, because the information is
impossible to control or to withdraw, and transparency in research trumps secrecy. Both groups largely agree that the availability of pathogen sequences is of no great help to terrorists. It is the
weaponization of agents, rather than their acquisition through in vitro synthesis, that is
most difficult for terrorists to achieve, although how difficult it really is will remain a focus
of future debate.

This roundtable has not adequately addressed the risks that publicly available sequences may
pose in the future once scientists create artificial organisms, resurrect extinct agents, or make known pathogens more virulent or more transmissible, either accidentally or deliberately. Iris
Hunger correctly argues that we need transparency in these areas of research first and foremost and
that the more dangerous a research agent (or project), the greater the concern should be. While I
am absolutely in favor of the public deposition of any sequence derived from any finished research
project, I also favor a priori oversight, and consequently the possible disapproval, of planned research in areas where it is foreseeable that “dangerous” sequences may be created in the absence of a clear public benefit. The emphasis here lies on foreseeable.

I disagree with Gigi Kwik Gronvall’s view that the CISSM oversight system would not be a net
gain for biosecurity. She argued that it “would slow down science a great deal.” However, I
demonstrated in a working paper that the system would affect only a very small percentage of researchers and institutions. Only high- and maximum-containment facilities, which are already subject to other oversight requirements, and researchers who plan a very limited set of experiments deemed to be risky (making an agent more transmissible or multidrug-resistant, for example) would fall under the system.

The issue of what makes an agent dangerous, which set of experiments could be considered risky,
and what should be overseen is, of course, contentious, as Gronvall points out. But this did not
stop the Fink Committee of the U.S. National Academy of Sciences from outlining what it considered
to be dual-use “experiments of concern.” Moreover, researchers can and should be able to modify the
list of covered research activities to keep pace with changes in science and technology, as has
been done for over 30 years for recombinant DNA research. I also don’t think that the CISSM system
would necessarily require scientists “to devote many hours to the review process [of planned
experiments],” as the system is tiered and requires oversight at the local (i.e., within an institution, for instance through already existing institutional biosafety committees), national, or international level, depending on the planned project. Some oversight could also be
integrated into the grant-review process, thereby further involving bench scientists in the review
system.

Finally, that there might not be 100 percent participation in a dual-use review system is not a
legitimate argument for dismissing it out of hand. I suggest that whether and how all of this could work be the focus of a follow-up roundtable discussion.
