By Sonja Schmid, May 10, 2016
Preparedness for and response to a nuclear emergency are in large part technical, physical undertakings. They involve equipment such as pumps, generators, containment vessels, and helicopters. But they aren't just technical. Preparedness and response also involve, as my roundtable colleague Manpreet Sethi has pointed out, sophisticated legal instruments and organizational capacities. Moreover, as noted in the readers' comments section, they depend on regulatory and training capabilities as well.
But no matter how many international conventions a nation has ratified, and no matter how flexible (or inflexible) its approach to beyond-design-basis accidents, critical decisions during an actual emergency ultimately rest with a plant's staff—and with the staff's judgment of the emergency's magnitude. As a disaster progresses, that judgment evolves. Experts shift their views on the disaster's severity and its potential consequences. They discover entanglements between systems that had previously been thought unrelated.
All this hampers the sort of communication for which Sethi called in Round One—that is, effective communication during an emergency. Sethi, discussing the delays in communicating accurate information that accompanied both Chernobyl and Fukushima, wrote that "public officials [during an emergency] must have quick access to informed scientific opinion and expert judgment so they can make good decisions in extreme time pressure." Officials must be able, Sethi argues, to classify an accident's severity swiftly and correctly. But one problem with severe accidents is that it's typically very difficult, for officials and experts alike, to assess quickly and accurately just how bad things really are. Accurate classification may be impossible while a disaster is still unfolding—or rather, a disaster might deserve different classifications as it evolves. Fukushima, for example, was initially rated Level 4 on the International Nuclear and Radiological Event Scale; only about a month into the crisis was it raised to Level 7, the scale's maximum.
The real challenge, then, may lie in communicating information that is incomplete, or imperfectly understood, and in making decisions based upon such information. Though Sethi's "informed scientific opinion and expert judgment" are absolutely critical in an emergency, they are not infallible.
Legitimate consequences. An area in which expert judgment is truly crucial—and at the same time hotly contested—concerns the medical consequences of nuclear disasters. In Round One Augustin Simo reported Chernobyl's death toll to date as about 56. Sethi provided a similar figure (though she acknowledged that deaths may go higher in the long term). Both authors presented these figures as evidence that the nuclear industry is safe enough. Implicit in their arguments were comparisons to fatalities associated with more mundane activities—for example, about 90 people in the United States die every day in car, truck, and motorcycle crashes. It's always tempting to make comparisons when high-risk technologies such as nuclear energy are examined. But comparisons of this kind suffer from a twofold problem.
First, a nuclear disaster's death toll will typically be a hugely controversial subject. A 2005 report on Chernobyl fatalities by the World Health Organization and several other agencies predicted that "up to 4,000 people could eventually die of radiation exposure." But a 2006 Greenpeace report challenged these numbers, estimating that the disaster would turn out to cause 250,000 cancers, nearly 100,000 of them fatal. What these vastly divergent estimates demonstrate is how incredibly tricky it is to attribute delayed deaths to specific causes. Indeed, the international community has accepted only a single direct causal connection between Chernobyl and cancer—involving thyroid cancer in children. Even this limited acceptance, argues the University of Pittsburgh's Olga Kuchinskaya in her brave book The Politics of Invisibility, is due only to the efforts of Belarusian scientists, who are increasingly marginalized in their own country.
Second, deaths are not the only consequences of disasters, whether nuclear or otherwise. The trauma of experiencing an emergency, the stress of undergoing (temporary) evacuation, or even Simo's "phobia about nuclear energy" (whether or not the phobia is justified)—all these can cause physiological and mental health effects no less debilitating than cancer. That is, fatalities are not the only legitimate negative consequence of nuclear accidents. Even cancers in remission can be more consequential than they sound—"curing" thyroid cancer often subjects patients to surgical removal of the thyroid and a life-long regimen of substitution therapy.
As the uncertainty over Chernobyl's death toll illustrates, "scientific opinion" is not always unanimous, and neither is "expert judgment." Scientific expertise is not immune to controversy. And expert judgment changes over time.