Lessons not learned: Insider threats in pathogen research

By Derrin Culp | April 3, 2013

In the classic film Dr. Strangelove, Brig. Gen. Jack D. Ripper was the ultimate insider threat. As the nuclear-armed B-52s that Ripper unilaterally dispatched proceeded toward their Soviet targets, the American president confronted Air Force Gen. Buck Turgidson in exasperation: “When you instituted the human reliability tests, you assured me there was no possibility of such a thing ever occurring.” To which Turgidson replied, “Well, I don’t think it’s quite fair to condemn a whole program because of a single slip-up, sir.”

Turgidson’s rejoinder is similar to the response of much of the US microbiology community — scientists, funding agencies, and regulators alike — to the Justice Department’s conclusion that the infamous 2001 anthrax mailings were the work of an insider. Since 2008, when investigators led by the FBI’s Washington Field Office identified Bruce E. Ivins, an Army civilian research scientist, as the sole perpetrator, the collective response has been to minimize discussion of the problem, indulge in wishful thinking, and enact cosmetic changes.

Within a year of the Justice Department’s finding, a US National Research Council (NRC) committee and the federal government’s standing biosecurity advisory panel both had examined the issue of insider threats in pathogen research. Those panels concluded that existing procedures to keep tabs on scientists who work with dangerous pathogens are sufficient. Both determined that intrusive monitoring of microbiologists engaged in unclassified research would not necessarily increase protection against insider threats and rejected broad adoption of procedures that scientists and military personnel who work with nuclear weapons and fissile material must endure, such as random testing for alcohol, marijuana, cocaine, or amphetamines; observation of off-duty behavior; video monitoring of laboratory activity; annual psychological assessments; or mandatory privacy waivers to allow supervisors to review mental health treatment records. Neither panel’s report discussed the risk of mass casualties from an insider threat or considered that such risks might lead to different conclusions about intrusive screening and monitoring.

The panels also concluded that laboratories can successfully manage insider threat risk primarily by creating a supportive environment that somehow will induce emotionally or mentally troubled researchers with malevolent intentions to voluntarily give up their lab privileges before they do harm. Likewise, this supportive environment will enable managers with minimal mental health training and expertise to identify such researchers even if they don’t reveal themselves.

Since the conclusion of these two major reviews, all but a handful of the articles and opinion pieces addressing pathogen research in leading science and biosecurity journals and blogs and in major US newspapers, as well as all biosecurity pronouncements from US government regulators and research funding agencies, have been silent on the issue of insider threats or ignored the dissenting perspective that aggressive monitoring of scientists’ mental and emotional health could have prevented the anthrax mailings and should be part of routine supervision.

In October 2012, the government updated the regulations — put in place in 2002 after the anthrax mailings — that control the possession, use, and transfer of dangerous biological agents and toxins known as Select Agents. Although the new regulations expressly address the issue of insider threats, they embrace the expert panels’ recommendations and preserve a largely hands-off posture by regulators.

In 2012, the federal government also issued a new biosecurity policy designed to increase routine scrutiny of risky pathogen research by federal funding agencies. The policy says nothing about insider threats. Likewise, a companion policy on institutional oversight of potentially risky research, as well as a framework for guiding US government funding decisions on H5N1 research — both released in February of this year — ignored the insider issue altogether. On this subject, a March 18, 2013, Congressional Research Service report said only that “a deliberate release by a disgruntled or disturbed laboratory worker” is a concern to “some experts.”

A culture of responsibility. In 2011, the Center for Biosecurity, a leading biosecurity think tank, summarized the microbiology profession’s “right answer” to insider threats: “enlightened leadership, trust, and openness … making sure that laboratory leaders have the time, responsibility, and training to be able to observe and evaluate what is happening in their laboratories day to day.” This approach relies almost entirely upon researcher self-policing and gives lab managers few rights to explore researchers’ emotional and mental health. It’s no surprise that scientists would favor it.

Along the same lines, reports in 2009 and 2011 by the National Science Advisory Board for Biosecurity (NSABB) and a 2009 book by a National Research Council panel exhorted laboratories to cultivate a “culture of responsibility with respect to biosecurity,” meaning a work environment that encourages discussion of researchers’ security obligations, reliance on self- and peer reporting of physical and emotional health issues, and non-punitive, non-stigmatizing mechanisms for troubled researchers to temporarily surrender their access to pathogens.

The NSABB and NRC also reviewed the scientific evidence on the efficacy of credit checks, polygraphs, high-level security clearances, examination of pharmacy and psychiatric records, psychological tests, and random drug and alcohol testing in detecting insider threats. With the exception of random drug testing and investigation of “sudden unexplained affluence” (which the NRC panel deemed worth considering where research involves the most transmissible and lethal organisms), the panels did not find enough evidence to justify what they perceived as the potential negative consequences of employing those techniques. Ironically, the two panels provided little evidence of the effectiveness of the “culture of responsibility” that they embraced enthusiastically.

The reports speculated that expanded use of intrusive techniques might put a chill on US infectious disease research by discouraging talented microbiologists from pursuing such work or inducing them to relocate to countries with more lenient regulation. Such techniques would, they feared, impose additional costs on laboratories, run afoul of various federal and state privacy laws, and create an unacceptable risk of incorrectly flagging innocent researchers. Finally, the NRC panel asserted that monitoring that may make sense for nuclear research doesn’t make sense in microbiological research because the two types of science are fundamentally different.

Unexamined alternatives. Neither the NRC panel nor the NSABB adequately challenged its own assumptions. For example, the privacy laws currently on the books are not immutable. With sufficient political will, Congress and state legislatures could relax those constraints in order to permit more intrusive screening and monitoring. Although the NRC panel conceded that polygraphs might have a deterrent effect regardless of the tests’ accuracy, it did not attempt to assess the magnitude of such an effect. NSABB noted that “some psychological profiling is conducted for certain elite military units” and that “psychological tests are also routinely used as a component of the employment screening process … for airline pilots or within the nuclear industry,” but it did not discuss why those sectors employ psychological profiling or explain why those rationales don’t apply to pathogen research.

The NRC panel did not consider that it might not be desirable to provide innocent scientists absolute protection from false suspicion when the negative consequences of failing to identify a genuine risk are great. Nor did it make a compelling case that the inherent differences between pathogen and nuclear research necessitate much less intense and intrusive screening and monitoring of microbiologists. Nobody has explained why it is acceptable for the US Army and Lawrence Livermore National Laboratory to subject their microbiologists to periodic random drug tests and annual psychological assessments, respectively, but unacceptable to do the same with university and private-sector scientists conducting high-risk pathogen research.

Finally, there is hardly any evidence to justify concerns about a “brain drain” away from US infectious disease research or to indicate that the onerous restrictions imposed on nuclear weapons scientists for decades have significantly impeded recruitment or retention. A 2009 report on biological safety and security from the Defense Science Board asserted that although the US government agencies that administer the nuclear weapons complex and gather electronic intelligence engage in “extremely intrusive monitoring,” those efforts are widely accepted by the people who work there.

The Ivins case. Within a year of Bruce Ivins’ death in 2008, a federal judge authorized an independent panel of six psychiatrists and three other experts to review the scientist’s court-sealed psychiatric records. This Expert Behavioral Analysis Panel concluded that had the Army examined those records, as it was legally empowered to do, it would have determined — long before 2001 — that Ivins should not have access to anthrax. According to the panel, the anthrax mailings “could have been anticipated — and prevented.” The panel’s review found that over the 20 years preceding the anthrax mailings, Ivins “had committed repeated acts of breaking and entering as well as burglary without having been caught,” and that he had disclosed this only to his personal psychiatrists. Furthermore, Ivins had “cultivated a persona of benign eccentricity that masked his obsessions and criminal thoughts” and was “exploitive and manipulative.”

Ivins repeatedly authorized the Army to obtain and review his medical and psychiatric treatment records. According to the panel, however, the Army neither examined Ivins’ mental health records nor paid close attention to his daily behavior. The expert panel urged organizations to retain the right to examine such records, to keep that access as broad as possible, to use it even in the “absence of specific symptoms or diagnoses,” and to withhold access to pathogens from scientists who don’t renew privacy waivers. However, the national press and microbiology journals paid little attention to the audacious conclusions.

The H5N1 controversy. During the winter of 2011-2012, Americans witnessed a prime-time discussion about research on the avian flu virus, known to scientists as H5N1. This virus kills millions of birds annually but, unlike the seasonal flu that makes so many people miserable every winter, H5N1 rarely infects humans. When it does, however, it is extraordinarily lethal; the World Health Organization estimates that 59 percent of all human cases end in death.

The US National Institutes of Health funded two unclassified studies to better understand the likelihood that the H5N1 virus might naturally mutate in ways that would make it more transmissible among humans and, therefore, much more dangerous. When it appeared that at least one of the studies had created in the lab a strain of H5N1 that might be able to spread easily among humans, numerous commentators weighed in on whether publishing the studies would be tantamount to giving terrorists the blueprints for a biological weapon of mass destruction. Scientists and scholars not prone to hyperbole or histrionics indicated that, under certain conditions, the intentional release of a similarly modified virus could cause deaths in the tens or even hundreds of millions. The NSABB, which historically has been strongly opposed to publication restrictions, recommended unanimously that science journals limit what they published, arguing that “the deliberate release of a transmissible highly pathogenic influenza A/H5N1 virus would be an unimaginable catastrophe.” The controversy was so intense that virus researchers around the world adopted an open-ended moratorium on similar research, which they maintained for a year.

The risk from “terrorists” dominated the H5N1 discussion, while the potential for scientists themselves to do harm barely registered — as if that hadn’t happened in spectacular fashion just a decade earlier. One of the few people who thought it germane to worry about researchers using their own findings in malevolent ways was Australian immunologist Ian Ramshaw: “I’m not so worried about bioterrorism. It’s the disgruntled researcher who is dangerous.” Rutgers microbiologist Richard Ebright, commenting at the time on the proposed Select Agent updates, wrote that failure to mandate video monitoring, a two-person rule, and psychological assessments for scientists working with the most dangerous pathogens “would represent a failure to learn lessons from the 2001 anthrax mailings [and] to address the ‘insider threat’ responsible for the 2001 anthrax mailings.” But that perspective was virtually invisible in the H5N1 debate.

When a major journal finally published one of the H5N1 studies, it included six related commentaries, but the closest any of them came to mentioning insider threats was to note that “each additional laboratory and individual worker adds to the risk of accidental or malicious release.” For the rest of 2012, as the moratorium continued, the dozens of articles that considered the future of H5N1 research focused overwhelmingly on the risks of accidental release of modified microbes and occasionally on the deliberate release by terrorists. None raised the risk of deliberate release by an insider or invoked the 2001 anthrax mailings.

New rules. The federal agencies that administer the Select Agent Regulations announced changes to those rules in October 2012, following a yearlong public comment process. Besides designating a new list of Select Agents — including the most dangerous pathogens, known as Tier 1 — the updates imposed new security requirements ostensibly intended to enhance labs’ ability to stymie insider threats. In reality, though, they strongly favor researchers’ preferences for a light touch by regulators.

The updates don’t mandate any additional screening or monitoring. As originally proposed, they would have required the compliance manager to have “the appropriate training and expertise” to fulfill his or her responsibilities, which include managing insider threat risks. But that requirement did not survive the comment period. Instead, the final rule instructs federal regulators to continue judging performance based on efficacy in implementing the regulations; in effect, as long as nothing goes seriously wrong, regulators can conclude that the compliance manager is qualified. This is a glaring abdication of regulatory responsibility, given the potential harm from a compliance manager’s failure to deal properly with insider threats.

The updates require laboratory security plans to explicitly address how the compliance manager will learn of and report potentially criminal activity to law enforcement agencies. For labs handling the most dangerous pathogens, the security plans also must describe how the compliance manager will decide when to grant, suspend, or terminate researcher access to Select Agents. To help regulated labs comply with these new requirements, the federal government issued a guidance document on how to screen and monitor researchers and other employees to determine who may have access to Tier 1 pathogens, and how to identify insider threats. The guidance document is extremely long on process but short on actual advice. It lists a number of factors that might warrant concern, but offers no guidance on how to interpret or react to them. It suggests that the lab official who approves researchers for access to these select agents have “human resources expertise and experience,” but it does not suggest any role for medical or mental health professionals. It cedes most critical decisions to the sole judgment and discretion of laboratory management.

The guidance document also attempts to remove from consideration the idea that an insider threat could arise from a researcher’s chronic emotional or mental health issues and that psychological screening might be worthwhile. It states: “The FBI Amerithrax investigation identified a US scientist as the most likely perpetrator” (emphasis added) of the anthrax mailings, when in fact the Justice Department was unequivocal: “Ivins, alone, mailed the anthrax letters.” Its examples of insider threats include someone who pretends to be a legitimate researcher; someone who is the victim of coercion or manipulation; and someone who does harm after experiencing a “significant life-changing event.” None of these archetypes is consistent with what the expert psychiatric panel’s report revealed about Bruce Ivins and the factors — including a lifelong preoccupation with revenge — that may have motivated him to mail anthrax.

No exemptions. What if, instead of mailing anthrax spores, a microbiologist had released an aerosolized and highly transmissible pathogen near the ticket counters and security lines at Washington’s Reagan National Airport, ultimately causing 5,000 deaths instead of five? Would the prescription for addressing the insider threat risk be the same as the current approach? We don’t have to get anywhere near the seven-digit fatality numbers mentioned during the H5N1 controversy to be fairly certain that a “culture of responsibility” and regulatory delegation of screening and monitoring choices to scientists and their laboratories would be deemed a naïve and utterly inadequate level of protection. Something akin to the Department of Energy’s Human Reliability Program — one of those “extremely intrusive” regimes cited by the Defense Science Board — would be much more likely.

Microbiologists’ claims to an exemption from intrusive personal scrutiny in unclassified research are motivated by sincere (and perhaps even correct) beliefs that such scrutiny would impede scientific progress and unnecessarily constrain the abundant benefits that their work otherwise would deliver to humankind. But those claims also arise from understandable concerns for personal privacy and dignity.

Identifying the extremely high-risk types of pathogen research (both classified and unclassified) for which the government should mandate more oversight — and picking the right mix of screening and monitoring techniques — would undoubtedly be a complex and imperfect undertaking. Even with decades of evidence about what has and has not worked in nuclear research, intelligence gathering, and classified microbiological research environments, mistakes would be made and some blameless scientists might be faulted. Ultimately, though, microbiologists can’t be exempt from such scrutiny. They lost that privilege when they acquired the ability — or merely the potential — to generate mass casualties.

