
The human element

By Hugh Gusterson | September 1, 2011

The discussions about the safety of nuclear reactors in the new post-Fukushima world have focused on technical questions: Is it possible to make reactors earthquake-proof? What is the best way to ensure that spent fuel remains safe? What is the optimal design for coolant systems? Can reactors be made “inherently safe”?

Sometimes these discussions make it sound as though the reactors operate all by themselves — both when they run smoothly and when accidents occur. But that omits the human element. Nuclear reactors are operated by fallible human beings, and at least two meltdowns have been caused by poor human decisions: the 1961 meltdown of an experimental military reactor in Idaho, which killed three operators when one of them withdrew a control rod six times as far as he was supposed to (carrying out a high-tech murder-suicide over a love triangle, according to some accounts), and the Chernobyl accident, which was caused by an ill-conceived experiment conducted outside approved protocols.

So, if nuclear safety is a matter of human behavior as well as sound technical infrastructure, we should look to the social sciences in addition to engineering to improve reactor safety. After all, the machines don’t run themselves. The social sciences have five lessons for us here:

  • The blind spot. In what we might call the frog-in-boiling-water syndrome, human cognition is such that, in the absence of a disaster, individuals often filter out accumulating indications of safety problems that look like obvious red flags in retrospect — just as frogs do not jump out of a pot of water on a stove as long as the temperature goes up slowly. Diane Vaughan’s award-winning book on the Challenger disaster demonstrates a clear pattern in earlier space shuttle launches of O-ring performance degrading in proportion to declining launch temperatures — the problem that would ultimately kill Challenger’s ill-fated crew. Some shuttle engineers had become concerned about this, but the organizational complex responsible for the space shuttle could not bring this problem into full cognitive focus as long as the missions were successful. Operational success created a blinding glow that made this safety issue hard to see.
  • The whistle-blower’s dilemma. The space shuttle program provides another example of human fallibility, explored in William Langewiesche’s account of the Columbia space shuttle accident: Large, technical organizations tend to be unfriendly to employees who harp on safety issues. The NASA engineers who warned senior management — correctly, as it turned out — that the Columbia shuttle was endangered by the foam it lost on takeoff were treated as pests. (The same is true of Roger Boisjoly, the Morton Thiokol engineer who was ostracized and punished for having warned correctly that the Challenger shuttle was likely to explode if launched at low temperature.) Large technical organizations prioritize meeting deadlines and fulfilling production targets, and their internal reward structures tend to reflect these priorities. This is especially true if the organizations operate in a market environment where revenue streams are at stake. In such organizations, bonuses tend not to go to those who cause the organization to miss targets and deadlines or spend extra money to prevent accidents that may seem hypothetical. It is not the safety engineers, after all, who become CEOs. Those with safety concerns report that they often censor themselves unless they are deeply convinced of the urgency of their cause. Indeed, there is — sadly — a substantial literature on the various forms of mistreatment of engineers who do come forward with such concerns.
  • The politics of oversight. Regulatory apparatuses tend to degrade over time — especially in political systems such as America’s, which tend to facilitate the corporate capture of government functions. Thanks to the leverage afforded by campaign donations and the revolving door between public and private employment, industries have become extremely skillful at inserting their former employees, future employees, and other allies into the very regulatory agencies that oversee them. A brilliant piece of investigative journalism on the Securities and Exchange Commission in the latest issue of Rolling Stone shows how this can reduce a regulatory agency to an empty husk. Whether it’s the Nuclear Regulatory Commission, the Securities and Exchange Commission, or the Food and Drug Administration, the story is the same: Government agencies that started off as aggressive watchdogs have become absorbed over time by those over whom they have titular oversight. Americans recently saw the dire consequences of this trend in the banking meltdown of 2008.
  • Overwhelmed by speed and complexity. As Charles Perrow argues in his influential book Normal Accidents, which was inspired by the Three Mile Island accident, human operators function well in environments of routinized normality; but, when highly complex technical systems function in unpredicted ways — especially if the jagged interactions between subsystems unfold very rapidly — then the human capacity for cognitive processing is quickly overwhelmed. In other words, if a reactor is veering toward an accident caused by the failure of a single system in a way that operators have been trained to handle, then they are likely to retain control. But, if the accident-in-the-making involves unforeseen combinations of failures unfolding quickly and requires improvised responses rather than routinized ones, the outcome is far less hopeful.
  • The wild card. Finally, human nature being what it is, there are always the wild cards: people who kill romantic rivals via nuclear meltdown, freelance experimenters, terrorists, operators who should never have made it through personnel screening, operators who are drunk on the job, operators whose performance has declined through laziness, depression, boredom, or any of a host of other reasons.

Many of these factors came together to produce the Deepwater Horizon disaster in 2010. As Gregory Button demonstrates in his book, Disaster Culture, the US Minerals Management Service had slackened its oversight of offshore drilling as many former regulators went on to work for the oil industry; in the absence of a major blowout, incipient safety problems remained unseen; those who did warn against cutting corners were marginalized; and the accident unfolded with such speed and ferocity that those aboard the rig were quickly overwhelmed.

The bottom line: Nuclear safety is threatened by human as well as technical malfunctions, and the risk of disaster can only be attenuated through attention to the principles of social engineering as well as nuclear engineering. While human behavior can always overflow the bounds of our plans for its containment, there are measures that can at least lower the risk of a nuclear disaster caused by human factors: First, the nuclear industry needs to do more to both protect and reward whistle-blowers; and, second, the industry needs regulators with a genuine desire to exercise oversight — rather than people hoping to increase their income by later going to work for the very companies that they were regulating. Unfortunately, this goes against the ethos of the contemporary United States, where the trend-lines are going in the wrong direction.

