By Laura H. Kahn, August 27, 2012
When scientist Ron Fouchier of Erasmus Medical Center in the Netherlands presented his research at a conference in Malta last year, he described how he and his colleagues introduced mutations into the H5N1 avian influenza virus, ultimately giving the deadly virus the ability to spread through the air and transmit infection as efficiently as seasonal flu. Fouchier was ostensibly trying to learn more about the virus in order to protect humanity from its dangers, but his work also carried the risk that the virus he created would escape the lab or be recreated by a rogue scientist with terrorist ties.
This is what is known as a “dual-use” dilemma: research pursued for peaceful ends could also be used in warfare or terrorism.
The foundation of bioethics. The field of bioethics largely began after Nazi physicians committed crimes against humanity in the name of science during World War II. The Nuremberg trials brought these “experiments” to light and led to the Nuremberg Code, which established 10 directives for experiments involving human subjects, among them: the risks should never exceed the benefits, voluntary consent is essential, physical and mental suffering should be avoided, and an experiment should be terminated at any stage at which its continuation would likely lead to the injury, disability, or death of the subject.
In 1964, the World Medical Association adopted guidelines even broader in scope with the Declaration of Helsinki: Recommendations Guiding Medical Doctors in Biomedical Research Involving Human Subjects, which set out 20 principles for medical research and an additional five for medical research combined with patient care. One of these principles states that research protocols must be submitted to independent ethics committees before a study begins, so the committees can determine whether the study meets international norms and standards.
But neither the Nuremberg Code nor the Declaration of Helsinki was enough to get the US Congress to pass laws mandating the ethical treatment of human subjects. For that, the outrage had to hit closer to home. Embarrassment over the infamous Tuskegee experiments, conducted from 1932 to 1972 by government physicians who withheld lifesaving antibiotics from poor black men infected with syphilis, was finally enough to prompt legislators to pass the National Research Act of 1974. The act established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which developed its own fundamental ethical principles for research involving human subjects. (Current ethics policies for federally funded research can be found in the Code of Federal Regulations.)
For basic science research not involving human subjects, however, there are no equivalent ethical principles. Instead, Institutional Biosafety Committees oversee the safety of experiments, with the goal of protecting people and the environment from potentially harmful genetically modified microbes. Their focus, in other words, is on safety and containment, not ethics.
Ethics in life sciences research. While Institutional Biosafety Committees focus on safety over ethics, organizations that do examine ethics, like the National Institutes of Health and Institutional Review Boards, zero in on two main areas: the conduct of the researchers themselves and the morals and values of experiments conducted on human subjects. No entity currently provides the kind of ethical oversight for basic life sciences research that exists for research on human subjects.
Researchers. All researchers in laboratories and clinical settings are expected to adhere to strict codes of conduct and to be held accountable when they breach the public’s trust by misrepresenting or falsifying data. In 1989, the National Institutes of Health began requiring all graduate students on training grants to receive education in the responsible conduct of research, and the National Academy of Sciences has issued a number of publications dedicated to responsible science and conduct.
Unfortunately, despite these and other efforts, more and more papers in scientific journals are being retracted for ethics violations. This troubling trend is likely due to increasing pressure to publish in an environment of diminishing jobs and grant funding.
Basic Life Sciences Experiments. Then there is the challenge of the ethics of the experiments themselves, arguably more difficult to address than the researchers’ behavior. This brings us to the debate surrounding the H5N1 influenza research conducted by Fouchier and other scientists, as well as the subsequent flare-up over the publication of their research in Nature and Science.
The H5N1 controversy centered on biosecurity and biosafety concerns, including whether to publish the results in part or in full. What has not been discussed is whether this research should have been done in the first place. It’s easy to see why: Scientists will always argue that their research is justified because good science merits support and publication. Concerns about safety and security are acknowledged but frequently treated as secondary to the need for unimpeded scientific inquiry. This was the case with the H5N1 brouhaha; Time magazine even named Fouchier one of its “100 Most Influential People in the World,” declaring him part of a “new breed of virologists” willing to tackle risky work. Scientists largely defended the H5N1 research, claiming it was good science.
But security experts were not so laudatory. Of course, they weren’t aware of the project until after it had already been conducted.
Limiting the risks. The National Institutes of Health has biosafety guidelines for research involving recombinant DNA molecules. And the National Academy of Sciences has delineated “seven experiments of concern,” framed primarily from a biosecurity perspective. But research proposals typically don’t undergo ethics and risk-assessment reviews; no one is vetting experiments for ethical and security concerns from the beginning.
Was the H5N1 research, for example, ethical, and did its benefits outweigh its risks? The answer depends on whom you ask. Scientists insist that the benefits outweigh the risks, while security experts believe the opposite. But it’s difficult to judge how strong a case the security experts have, because right now the power to decide the value of an experiment rests almost entirely with the scientists.
Even if some security risks are worth taking in the name of life sciences research, that doesn’t mean dual-use experiments couldn’t be overseen better. After all, societal interest in ethically challenging science has never been higher. Issues such as human embryonic stem-cell research, cloning, nanotechnology, and genetically modified food, to name a few, generate intense debate. And now there’s enthusiasm about the ethics of synthetic biology. In 2010, though, the Presidential Commission for the Study of Bioethical Issues released a study on synthetic biology and emerging technologies stating, “While many emerging technologies raise ‘dual use’ concerns — when new technologies intended for good may be used to cause harm — these risks alone are generally insufficient to justify limits on intellectual freedom.” (Emphasis mine.)
The question is: At what point do the risks justify limits on intellectual freedom? I’d argue that the point comes before scientists create airborne versions of H5N1.
Just as there are ethical principles and guidelines to protect individual human subjects from undue harm, there must be ethical principles and risk-assessment guidelines to protect societies and ecosystems from the potential undue harm of enhanced or synthetic pathogens. The National Science Advisory Board for Biosecurity has proposed a framework for the oversight of dual-use research that emphasizes engaging scientists in issues of biosecurity. And Britain’s Royal Society has been working to develop new approaches to biological risk assessment. But these issues should not be left to the scientists themselves. Bioethicists and security experts should get more involved, too, by holding seminars, classes, and debates on dual-use research. And dual-use research proposals must be evaluated before the work proceeds.
Ethical principles and guidelines, including risk assessments, for the oversight of dual-use research must be fully developed. There has been movement toward a more engaged bioethics community, but it’s time for that community to take on the dual-use dilemma in earnest.
Editor’s note: This column was updated on August 28, 2012.