What Three Mile Island, Chernobyl, and Fukushima can teach about the next one

By Edward Moore Geist | April 28, 2014

Following the accident at the nuclear power plant, government authorities realized to their horror that their existing plans for such an emergency were too vague to address the challenges now facing them. Making matters worse, technical experts disagreed about the state of the crippled reactor and what might happen next. Some confidently asserted that events were under control, while others warned that ongoing radioactive emissions might portend an imminent release of catastrophic proportions. More worryingly still, no one could predict the likelihood or timing of such a development confidently enough to inform decisions about ordering evacuations. Should the local population be evacuated, or would that measure only incite unnecessary panic? Proximity to the capital gave the situation extra urgency. Might it, too, have to be evacuated, with all the unfathomable costs that might entail? Without reliable measurements of the total radioactivity released to the environment or estimates of how large it might grow, policymakers had no choice but to answer these fraught questions on the basis of guesswork.

These events played out three times—at the US state of Pennsylvania’s Three Mile Island Nuclear Power Plant in 1979, at then-Soviet Ukraine’s Chernobyl Nuclear Power Plant in 1986, and at Japan’s Fukushima Daiichi in 2011. During the accident at Three Mile Island, only authorities' overly optimistic assessment of the damage to the reactor kept them from ordering a general evacuation of the surrounding area, which might have included the state capital, Harrisburg. Several days after the explosion of Chernobyl Nuclear Power Plant Unit 4, the sudden acceleration of radioactivity releases led the Soviet government to fear that the Ukrainian capital of Kyiv, 100 kilometers (62 miles) distant, might have to be evacuated. In the course of the crisis at Fukushima Daiichi, the Japanese government grappled with a similar dilemma: Unable to predict how far serious contamination might extend from the crippled plant, it secretly pondered the prospect of evacuating Tokyo even as official pronouncements assured the public that events were under control.

In all three of these cases, uncertainty about source terms—the quantities and characteristics of the radioactive isotopes released in a nuclear event—hindered efforts to formulate an effective emergency response. Source terms determine populations' ultimate radiation exposure, and therefore decisions about shelter and evacuation necessarily depend on assessments of them. Experience has demonstrated that source terms from nuclear reactor accidents are extremely difficult to measure even after the fact, much less predict in advance. While plant operators are generally aware of the radiological inventory contained in the reactor core, the myriad means and processes by which radioisotopes could escape during an accident are much more difficult to understand and predict. Many aspects of the source terms, including which radioisotopes are released, their amounts, their chemical forms, and the rate and altitude at which they are released, interact with the surrounding environment to determine the total radiation doses received by the population. While means for detecting even minuscule amounts of radioisotopes in the atmosphere are well developed thanks to international efforts to detect illicit nuclear testing, these measures are not designed to determine either the quantity or distribution of radioisotopes released by a damaged reactor—a problem without a readily available technical solution.

The releases from both Chernobyl and Fukushima defied, both quantitatively and qualitatively, the source-term estimates used as the basis for emergency planning for similar accidents in the United States, but they were not as bad as the most pessimistic predictions of the radiological consequences of reactor meltdowns. Incorporating insights from these examples, along with improved practices in the remote monitoring of nuclear plants, into emergency planning for nuclear accidents could offer a promising means of making such events more manageable.

Reliance on guesswork. When the first nuclear power plants were built in the United States, no one could predict the source terms that would result from a serious accident, encouraging a regulatory culture that sought safety through prevention. During the 1950s and 1960s, the US Atomic Energy Commission assumed that serious accidents would have immense radiological consequences, but that with adequate engineering precautions they could be prevented with a very high degree of assurance. Lacking a better analog, the Atomic Energy Commission used assumptions from studies of nuclear weapons fallout to estimate what the effects of a nuclear power plant accident would be. In 1957 it issued a report estimating what would happen if the contents of a power reactor core were released in a manner analogous to a nuclear weapon, which is extraordinarily efficient at dispersing its radiological contents. The first attempt to analyze the potential risks of a nuclear power plant accident, the report offered a horrifying prediction: 3,400 people dead of radiation exposure, 43,000 injured, the possible need to evacuate the population from an area of up to 8,200 square miles (21,238 square kilometers), and as much as 150,000 square miles (388,500 square kilometers) of land placed under agricultural restrictions due to long-lived radioactive contamination. To forestall such outcomes, the Atomic Energy Commission mandated that nuclear power plants incorporate engineered safety systems to limit the probability of fuel damage, and containment buildings to prevent the release of radioactive material should those systems fail. Confident that a serious accident would be prevented, in the 1960s and early 1970s the Commission did not require reactor operators or local governments to plan for a nuclear accident with off-site consequences.

During the 1970s the potential perils of this overconfident attitude became apparent, leading to the development of the assumptions that still undergird planning for nuclear accidents in the United States. Widely publicized concerns originating from within the Atomic Energy Commission itself that important engineered safety systems might not perform as intended, as well as a continuing lack of assurance that containment buildings would prevent a major radiological release in an extreme accident, helped encourage major reforms in the regulation of nuclear power. These included the formation in 1975 of an independent agency, the Nuclear Regulatory Commission, to oversee the civilian nuclear industry, and the increasing adoption of probabilistic risk-assessment techniques to estimate the likelihood of a serious nuclear accident and its effects. During the late 1970s, the Nuclear Regulatory Commission worked with the Environmental Protection Agency to develop a theoretical basis for emergency management in nuclear accidents.

The EPA quickly recognized that different radiological releases called for radically different emergency management tactics, but the absence of confident source-term estimates made comparing the relative merits of shelter and evacuation difficult. In many circumstances, evacuating populations from areas around a damaged reactor appeared most effective at reducing individuals' radiation exposure, but this measure also had numerous downsides. For instance, a particularly extreme accident that dispersed large amounts of radioactivity quickly might not provide nearby populations with sufficient time to evacuate. Presciently, EPA analysts also recognized that evacuation stood to be expensive and disruptive, and that in less-extreme accidents these considerations might outweigh the benefits of lowered radiation doses.
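The tradeoff the EPA analysts weighed can be made concrete with back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical numbers—the dose rate, plume duration, shielding factor, and egress time are illustrative assumptions, not regulatory values—to show how sheltering can beat evacuation in a modest release, and why the comparison hinges on quantities that source-term uncertainty makes hard to know:

```python
# Illustrative comparison of projected doses under sheltering versus
# evacuation. All numbers are hypothetical assumptions for illustration.

def projected_dose(outdoor_dose_rate_msv_h, hours, reduction_factor):
    """Projected dose (mSv) given an outdoor dose rate, an exposure
    duration, and a shielding/reduction factor (1.0 = fully exposed)."""
    return outdoor_dose_rate_msv_h * hours * reduction_factor

outdoor_rate = 0.5    # mSv/h near the plume (assumed)
plume_duration = 12   # hours the plume persists (assumed)

# Sheltering indoors for the whole release: a masonry building might
# cut the dose rate substantially (reduction factor assumed).
shelter_dose = projected_dose(outdoor_rate, plume_duration, 0.4)

# Evacuating: assume 2 hours of unshielded exposure in traffic during
# egress, then no further dose.
evacuation_dose = projected_dose(outdoor_rate, 2, 1.0)

print(f"shelter: {shelter_dose:.1f} mSv, evacuate: {evacuation_dose:.1f} mSv")
# -> shelter: 2.4 mSv, evacuate: 1.0 mSv
```

Change the assumed release to be larger or faster-moving, or the egress to be slower, and the ranking flips—which is exactly why a confident source-term estimate is a prerequisite for choosing between the two measures.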

Although the Nuclear Regulatory Commission and EPA attempted to set conservative standards for emergency management planning around nuclear power plants, the lack of practical experience with nuclear accidents and expert disagreement about their possible consequences forced the two agencies to make policy based on only approximate estimates. In 1978, the Nuclear Regulatory Commission and EPA agreed on the concept of “emergency planning zones,” which remain a prominent feature of US plans to ameliorate the consequences of reactor accidents. The agencies recommended two sizes of zones in anticipation of qualitatively different radiation hazards: one with a radius of 10 miles (16.1 kilometers) to address whole-body radiation exposure, and another with a radius of 50 miles (80.5 kilometers) aimed at preventing ingestion of radioactivity in food and water. On the basis of the most sophisticated analysis then available, the Nuclear Regulatory Commission and EPA concluded that an accident creating radiation hazards dire enough to require evacuation more than 10 miles from a plant was extremely unlikely, and recommended that relocation plans only address the 10-mile zone.

The minimal external impact of the accident at Three Mile Island Unit 2 in 1979 helped dispel some of the worst fears about the consequences of nuclear accidents, but also demonstrated America’s unpreparedness for a radiation emergency. Although only relatively minor radiological releases occurred, Three Mile Island revealed both the inability of the Nuclear Regulatory Commission to handle a crisis situation and the weakness of US emergency management in general. At the time of the accident the Commission was directly responsible for overseeing plants’ emergency management planning, and had only just begun implementing the recommendations it had developed in conjunction with the EPA. In the course of the crisis, it became apparent that no usable plans were available to evacuate the area around the plant, and Pennsylvania enlisted the assistance of federal civil defense officials to redress this inadequacy as quickly as possible. Three Mile Island also stoked demands for reforms to nuclear safety regulation, one of which was transferring oversight of nuclear plants’ emergency management plans from the Nuclear Regulatory Commission to the newly-established Federal Emergency Management Agency (FEMA). The new agency was required to approve nuclear power reactors’ emergency management plans before the reactors could receive operating licenses.

Despite these administrative changes, US emergency planning for nuclear accidents continued to follow the framework developed prior to Three Mile Island, even after information emerged challenging it. The discovery several years after the accident that much of the reactor’s fuel had melted without causing either the reactor vessel or the containment building to fail, contrary to the predictions of many theoretical studies, demonstrated that engineered safety measures could sometimes prevent even extreme nuclear accidents from becoming catastrophes. Some analysts concluded that the source terms from a nuclear accident would be limited, making emergency planning relatively manageable. At the same time, the fact that this type of accident occurred at all, when most analyses prior to Three Mile Island considered it infinitesimally improbable, challenged existing assumptions about nuclear safety.

Learning from the past. The explosion of Chernobyl Nuclear Power Plant Unit 4 on April 26, 1986 and its disastrous aftermath both demonstrated the challenges of protecting populations from the consequences of nuclear accidents and the complexity of the source-term problem. In the days after the accident, the Soviet government found itself unable to gauge the amount of radioactive material escaping the destroyed reactor, much less predict how much more might be released and how far it might spread. Moreover, as at Three Mile Island seven years before, no usable evacuation plans were available—only this time, they were desperately needed. Forced to improvise, the Soviet authorities first evacuated the city of Pripyat, a few kilometers from the damaged plant, and then progressively expanded the evacuation to encompass areas within 10 kilometers (6.2 miles) and finally 30 kilometers (18.6 miles). Even beyond this zone, isolated pockets of contamination necessitated evacuations of populations as far away as western Russia. Nor were the chosen evacuation zones optimal—people were removed from some areas with only light contamination, while they remained in others where it was quite heavy. Less severe radiological contamination afflicted large areas of the western USSR, threatening to expose Soviet citizens to radiation via food and drink. Effective protection of the population from radiation hazards demanded accurate assessments of the radioactivity released from Chernobyl, but the Soviet state discovered through hard experience the extreme difficulty of this task. To this day the Chernobyl source term, and consequently estimates of the radiation doses received by surrounding populations, remains the subject of acrimonious expert debate.

Even with the benefit of a quarter century of technological progress, during the crisis at Fukushima Daiichi in 2011 the Japanese government also found itself without the information necessary to make emergency management decisions, due to the problem of source-term uncertainty. Deprived by the devastating tsunami of the power essential to run their emergency cooling systems, the cores of three of the plant’s six units experienced extensive fuel damage in the following days. With pressure building up within the stricken reactors’ containment buildings, the plant operators faced a terrible choice between intentionally venting an uncertain amount of radioactivity, or risking the failure of the containment buildings and the loss of all control. They chose the former, only to have hydrogen released along with contaminated steam, producing dramatic explosions in Units 1 and 3 of the plant. Furthermore, the uncertain status of Unit 4’s spent fuel pool caused considerable concern. The pool contained many years’ worth of irradiated fuel assemblies and a much greater quantity of long-lived radionuclides than the failing reactors. In theory, if the level of water in the pool fell below the fuel, conditions might arise in which the assemblies’ zirconium cladding could burn, threatening to spread an immense amount of radioactivity over Japan. Without any experience with such a scenario, however, the Japanese government could neither ascertain how likely it was to happen, nor determine how best to protect its citizens. It chose a precautionary approach to the releases from the damaged reactors, evacuating the area within 20 kilometers (12.4 miles) of the plant before substantial radiation releases took place, but elected to “wait and see” about the possible spent fuel pool fire.

As happened after Chernobyl, these arrangements both evacuated large numbers of people from relatively uncontaminated territory and left some in areas with substantial radiation hazards. While favorable wind conditions blew most of the radioactive material released from Fukushima Daiichi over the Pacific, an area of serious contamination extended outside the evacuated zone (even after it was expanded to a 30-kilometer radius) to encompass the village of Iitate, 39 kilometers (24.2 miles) from the plant. Following protests from the IAEA, the Japanese government recommended the evacuation of this area in late April 2011, more than a month after the disaster. Meanwhile, the evacuation caused the population immense stress, which may have outweighed the health benefits it produced through lowered radiation doses. Although even during the accident’s immediate aftermath some argued that computer models of the spread of radiological contamination should be used to plan evacuations and other protective measures, these programs can only make realistic predictions when provided with accurate source-term estimates. Not only did the Japanese government struggle to produce real-time estimates of the radiological releases, but the available information about conditions within the damaged reactors was too sparse to permit confident predictions about possible future developments. Once again, source-term uncertainty proved a critical obstacle to emergency planning.
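The dependence of such dispersion models on the source term can be seen in even the simplest of them. The sketch below implements the textbook Gaussian plume formula for a continuous point release—not the specific model any government agency ran during the crisis, and with assumed power-law dispersion coefficients—to show that predicted ground-level concentrations, and hence projected doses, scale directly with the release rate Q:

```python
# Minimal Gaussian plume sketch. The dispersion coefficients below are
# assumptions for illustration (roughly neutral atmospheric stability),
# not values from any operational model.
import math

def ground_level_concentration(Q, u, x, y, H):
    """Ground-level concentration (Bq/m^3) from a continuous point release.
    Q: release rate (Bq/s) -- the source term
    u: wind speed (m/s); x, y: downwind/crosswind distances (m)
    H: effective release height (m)"""
    sigma_y = 0.08 * x ** 0.9    # crosswind spread (assumed power law)
    sigma_z = 0.06 * x ** 0.85   # vertical spread (assumed power law)
    return (Q / (2 * math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2 * sigma_y ** 2))
            * 2 * math.exp(-H ** 2 / (2 * sigma_z ** 2)))  # ground reflection

# The prediction is directly proportional to Q: a factor-of-ten error in
# the source-term estimate produces a factor-of-ten error in the result.
c1 = ground_level_concentration(Q=1e12, u=5, x=10_000, y=0, H=50)
c10 = ground_level_concentration(Q=1e13, u=5, x=10_000, y=0, H=50)
print(c10 / c1)  # -> 10.0
```

However sophisticated the downstream meteorology, that proportionality means the model's output can be no better than the source-term estimate fed into it.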

Although the experiences of Chernobyl and Fukushima seriously challenge the assumptions that have remained central to emergency planning for nuclear accidents in the United States since the late 1970s, they offer few easy lessons for emergency managers. While Chernobyl and Fukushima failed to fulfill the worst fears about the consequences of nuclear accidents, their source terms differed qualitatively from those that formed the rationale for 10-mile and 50-mile emergency planning zones. Such simplistic geographic categories failed badly in both cases, as evacuations proved necessary in areas well beyond a 10-mile radius. Despite the clear example set by these cases, the Nuclear Regulatory Commission continues to insist that evacuation and communication planning for areas more than 10 miles from nuclear plants is unnecessary. At the same time, Chernobyl and Fukushima also demonstrated that the costs of evacuation sometimes outweigh the benefits, and suggest that if possible, populations should only be relocated as a means of last resort. Weighing the relative benefits of evacuation and possible alternatives, however, is extremely difficult in the absence of reliable source-term estimates.

Fortunately, measures exist that demonstrate how emergency managers can reduce the impact of source-term uncertainty. Illinois, which hosts more nuclear reactors than any other state, possesses its own Division of Nuclear Safety within the Illinois Emergency Management Agency. Established after Three Mile Island, the Division of Nuclear Safety operates a Radiological Emergency Assessment Center colocated with the State Emergency Operations Center, which aims to help Illinois make the hard decisions about shelter and evacuation during a nuclear accident. The Radiological Emergency Assessment Center’s computers are directly linked to the instrumentation of all the nuclear plants in the state, as well as to radiological monitoring systems in the areas surrounding them—maximizing the information available to decision-makers during a crisis.

Illinois demonstrates the feasibility and affordability of these measures, and emulating them could provide emergency managers in other parts of the country with the means to make the best possible decisions to protect the public following a nuclear accident. This type of integration between remote monitoring and emergency management could be made even more effective with improved instrumentation and analytical techniques to produce better source-term estimates. Although in the past research in this area was hobbled by a lack of experience with serious accidents at light-water reactors, ongoing technical studies of the Fukushima releases should help alleviate this problem. Should the United States ever face a radiological emergency, investments in these areas could pay for themselves many times over.


