A year ago, I wrote a critique of fusion as an energy source, titled “Fusion reactors: Not what they’re cracked up to be.” That article generated a lot of interest, judging from the more than 100 reader comments it drew. Consequently, I was asked to write a follow-up and continue the conversation with Bulletin readers.
But first, some background, for the benefit of those just coming into the room.
I am a research physicist who worked on nuclear fusion experiments for 25 years at the Princeton Plasma Physics Lab in New Jersey. My research interests were in the areas of plasma physics and neutron production related to fusion-energy research and development. Now that I have retired, I have begun to look at the whole fusion enterprise more dispassionately, and I feel that a working, everyday, commercial fusion reactor would cause more problems than it would solve.
So I feel obligated to dispel some of the gee-whiz hyperbole that has sprung up around fusion power, which has been regularly heralded as the “perfect” energy source and touted all too often as the magic bullet solution to the world’s energy problems. Last year’s essay made the case that the endlessly proclaimed features of energy perfection (usually “inexhaustible, cheap, clean, safe, and radiation-free”) are all debunked by harsh realities—and that a fusion reactor would actually be close to the opposite of an ideal energy source. But that discussion largely involved the characteristic drawbacks of conceptual fusion reactors, which fusion proponents continue to insist will somehow, someday, be surmounted.
Now, however, we are at a point where, for the first time, we can investigate a prototypical fusion reactor facility in the real world: the International Thermonuclear Experimental Reactor (ITER), under construction in Cadarache, France. Even if actual operation is still years away, the ITER project is sufficiently advanced that we can examine it as a test case for the doughnut-shaped design known as the tokamak—the most promising approach to achieving terrestrial fusion energy based on magnetic confinement. In December 2017, the ITER project directorate announced that 50 percent of the construction tasks had been accomplished. This important milestone offers considerable confidence in the eventual completion of what will be the only installation on Earth that even remotely resembles what is supposed to be a practical fusion reactor. As The New York Times wrote, this facility “is being built to test a long-held dream: that nuclear fusion, the atomic reaction that takes place in the sun and in hydrogen bombs, can be controlled to generate power.”
Plasma physicists regard ITER as the first magnetic confinement device that can possibly demonstrate a “burning plasma,” where heating by alpha particles generated in fusion reactions is the dominant means of maintaining the plasma temperature. That condition requires that the fusion power be at least five times the external heating power applied to the plasma. Although none of this fusion power will actually be converted to electricity, the ITER project is mainly touted as a critical step along the road to a practical fusion power plant, and that claim is our preoccupation here.
Let us see what can be deduced about some possibly irremediable drawbacks of fusion facilities by observing the ITER endeavor, concentrating on four areas: electricity consumption, tritium fuel losses, neutron activation, and cooling water demand. The physical layout of this $20-to-30 billion project is displayed in the photograph below.
A misguided motto. On the ITER website one is greeted by the proclamation “Unlimited Energy,” which is also the battle cry of fusion enthusiasts everywhere. The irony of this slogan is apparently lost on project staff and not suspected by the public. But anyone following the construction at the ITER site in the last five years—and it is easily followed by detailed photographs and descriptions on the project website—would have been struck by the tremendous amount of invested energy.
The website implicitly boasts of this massive energy investment, depicting every one of the ITER subsystems as the most stupendous of its kind. For example, the cryostat, or liquid-helium refrigerator, is the world’s largest stainless steel vacuum vessel, while the tokamak itself will weigh as much as three Eiffel towers. The total weight of the central ITER facility is around 400,000 tons, of which the heaviest components are 340,000 tons for the foundations and buildings of the tokamak complex, and 23,000 tons for the tokamak itself.
But boosters should be distressed rather than ecstatic, because biggest and greatest means big capital outlay and great energy investment, which must appear on the negative side of the energy accounting ledger. And this energy has been largely provided by fossil fuels, leaving an unfathomably large “carbon footprint” for site preparation and construction of all the supporting facilities, as well as the reactor itself.
At the reactor site, fossil-fuel-powered machines excavate huge volumes of earth to a depth of 20 meters and manufacture and install countless tons of concrete. Some of the world’s largest trucks (powered by fossil fuels) convey mammoth reactor components to the assembly site. Fossil fuels are burned in the extracting, transporting, and refining of the raw materials needed to make fusion reactor components and possibly in the manufacturing process itself.
One may wonder how that expended energy could ever be paid back—and of course it won’t. But the very visible embodiment of the tremendous energy investment represents only the first component of the ironic “Unlimited Energy.”
Adjacent to these buildings is a 10-acre electrical switchyard with massive substations handling up to 600 megawatts of electricity, or MW(e), from the regional electric grid, which is enough to supply a medium-sized city. This power will be needed as input to supply ITER’s operating needs; no power will ever flow outward, because ITER’s internal construction makes it impossible to convert fusion heat to electricity. Remember that ITER is purely a proof-of-concept test facility, designed to show that engineers can mimic the inner workings of the sun and join atoms together in a controlled manner; ITER is not intended to generate electricity.
The electrical substation hints at the vast amount of energy that will be expended in operating the ITER project—and indeed every large fusion facility. As pointed out in my previous Bulletin story, fusion reactors and experimental facilities must accommodate two classes of electric power drain: First, a host of essential auxiliary systems such as cryostats, vacuum pumps, and building heating, ventilation and cooling must be maintained continuously, even when the fusion plasma is dormant. In the case of ITER, that non-interruptible power drain varies between 75 and 110 MW(e), wrote J.C. Gascon and his co-authors in their January 2012 article for Fusion Science & Technology, “Design and Key Features for the ITER Electrical Power Distribution.”
The second category of power drain revolves directly around the plasma itself, whose operation is in pulses. For ITER, at least 300 MW(e) will be required for tens of seconds to heat the reacting plasma and establish the requisite plasma currents. During the 400-second operating phase, about 200 MW(e) will be needed to maintain the fusion burn and control the plasma’s stability.
Even during the next eight years of plant construction and shakedown, the on-site power consumption will average at least 30 MW(e), adding to the invested energy and serving as a forerunner of the non-interruptible site power drain.
But much of the information about power drains—and the distinction between the heat ITER is expected to generate and the electricity it will not—has gotten lost when the project was described to the public.
Energy enlightenment. Recently, the website New Energy Times presented a well-documented account, “The ITER power amplification myth,” about how the facility’s communications department disseminated poorly worded information about the ITER power balance and misled the news media. A typical widespread statement is that “ITER will produce 500 megawatts of output power with an input power of 50 megawatts,” implying that both numbers refer to electric power.
New Energy Times makes it clear that the expected 500 megawatts of output refers to fusion power (embodied in neutrons and alphas)—which has nothing to do with electric power. The input of 50 MW referred to here is the heating power injected into the plasma to help sustain its temperature and current, and it’s only a small fraction of the overall electric input power to the reactor. The latter varies between 300 and 400 MW(e), as explained earlier.
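The arithmetic behind the confusion is worth spelling out. The back-of-envelope sketch below (purely illustrative, using the figures quoted in this article) shows the two very different ratios at stake:

```python
# Power-balance sketch using the figures quoted in the article (all in MW).
fusion_power = 500          # thermal power carried away by neutrons and alphas
plasma_heating_input = 50   # external heating power injected into the plasma
site_electric_input = 300   # low end of total electric draw during a burn

physics_q = fusion_power / plasma_heating_input      # the ratio usually advertised
thermal_vs_electric = fusion_power / site_electric_input  # thermal out vs. electric in

print(f"Plasma (physics) Q: {physics_q:.1f}")
print(f"Thermal output per unit of electric input: {thermal_vs_electric:.2f}")
# Since ITER converts none of its 500 MW of fusion heat to electricity,
# the electric-out versus electric-in ratio is exactly zero.
```

The advertised “500 out for 50 in” is the first ratio; the economically relevant comparison is closer to the second, and for ITER itself the electric output is nil.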
The New Energy Times technical critique is essentially valid and draws attention to the colossal electrical power demanded by any fusion facility. In fact, it has always been recognized that a huge amount of energy is required to start up any fusion system. But tokamak fusion systems also require an unceasing supply of hundreds of megawatts of electric power just to keep them going. In an apparent response to criticism from New Energy Times, the ITER website and other outlets such as Eurofusion have corrected some misleading statements with regard to power flow.
Yet there are far more serious issues with ITER’s advertised operation than the misleading labeling of projected input and output powers. While the input electric power of 300 MW(e) and more is indisputable, a fundamental question is whether ITER will produce 500 MW of anything, a query that revolves around the vital tritium fuel—its supply, the willingness to use it, and the campaign needed to optimize its performance. Other misconceptions involve the actual nature of the fusion product.
Tritium tribulations. The most reactive fusion fuel is a 50-50 mixture of the hydrogen isotopes deuterium and tritium; this fuel (often written as “D-T”) has a fusion neutron output 100 times that of deuterium alone, along with a correspondingly spectacular increase in radiation consequences.
Deuterium is abundant in ordinary water, but there is no natural supply of tritium, a radioactive nuclide with a half-life of only 12.3 years. The ITER website states that the tritium fuel will be “taken from the global tritium inventory.” That inventory consists of tritium extracted from the heavy water of CANDU nuclear reactors, located mainly in Ontario, Canada, and secondarily in South Korea, with a potential future source from Romania. Today’s “global inventory” is approximately 25 kilograms, and increases by about one-half kilogram per year, note Muyi Ni and his co-authors in their 2013 journal article, “Tritium Supply Assessment for ITER,” in Fusion Engineering and Design. The inventory is expected to peak before 2030.
While fusioneers blithely talk about fusing deuterium and tritium, they are in fact intensely afraid of using tritium for two reasons: First, it is somewhat radioactive, so there are safety concerns connected with its potential release to the environment. Second, there is unavoidable production of radioactive materials as D-T fusion neutrons bombard the reactor vessel, requiring enhanced shielding that greatly impedes access for maintenance and introducing radioactive waste disposal issues.
In 65 years of research involving hundreds of facilities, only two magnetic confinement systems have ever used tritium: the Tokamak Fusion Test Reactor at my old stomping grounds at the Princeton Plasma Physics Lab, and the Joint European Torus (JET) at Culham, UK, way back in the 1990s.
ITER’s present plans call for the acquisition and consumption of at least 1 kilogram of tritium annually. Assuming that the ITER project is able to acquire an adequate supply of tritium and is brave enough to use it, will 500 MW of fusion power actually be achieved? Nobody knows.
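A quick decay calculation, my own illustration using the figures above, shows why the inventory must peak. Tritium’s 12.3-year half-life means any stockpile loses about 5.5 percent of itself every year, and ITER’s planned consumption exceeds the inventory’s net annual growth:

```python
# Tritium decay arithmetic (illustrative; figures from the article).
annual_decay_fraction = 1 - 2 ** (-1 / 12.3)   # ~5.5% of any stockpile decays per year
inventory_kg = 25.0                             # approximate global inventory today
decay_loss = inventory_kg * annual_decay_fraction
print(f"Decay loss from a 25 kg stockpile: {decay_loss:.2f} kg/year")

# The inventory grows by a net ~0.5 kg/yr, while ITER plans to consume
# at least 1 kg/yr; the net change then turns negative, consistent
# with the projected peak before 2030:
net_change = 0.5 - 1.0
print(f"Net inventory change once ITER operates: {net_change:+.1f} kg/year")
```

In other words, the stockpile is a wasting asset even before ITER starts drawing it down.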
“First plasma” at ITER is supposed to occur in 2025. That will be followed by a relatively subdued 10 years of continued machine assembly and periodic plasma operations with hydrogen and helium. These gases produce no fusion neutrons, and thereby permit the resolution of shakedown problems and optimization of plasma performance with minimal radiation hazards. Plasma instabilities must be kept at bay to ensure adequate energy confinement, so the reacting plasma can be heated and maintained at high temperature. Influxes of non-hydrogenic atoms must be curtailed.
ITER’s schedule calls for deuterium and tritium use beginning in the late 2030s. But there’s no guarantee of hitting the 500 MW target; generating fusion power in large quantities depends, among other things, on developing the optimal recipe of deuterium and tritium injection by frozen pellets, particle beams, gas puffing, and recycling. During the unavoidable teething stage through the early 2040s, it’s likely that ITER’s fusion power will be only a fraction of 500 MW, and that more injected tritium will be lost by non-recovery than burned (i.e., fused with deuterium).
Analyses of D-T operation in ITER indicate that only 2 percent of the injected tritium will be burned, so that 98 percent of the injected tritium will exit the reacting plasma unscathed. While a high proportion simply flows out with the plasma exhaust, much tritium must be continually scavenged from the surfaces of the reaction vessel, beam injectors, pumping ducts, and other appendages for processing and re-use. During their several dozen traverses of the Tritium Trail of Tears around the plasma, vacuum, reprocessing and fueling systems, some tritium atoms will be permanently trapped in the vessel wall and in-vessel components, and in plasma diagnostic and heating systems.
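Simple bookkeeping shows how a tiny per-pass trapping loss, compounded over many recirculations, can rival the amount actually burned. In this sketch the 2 percent burn-up per pass comes from the analyses cited above; the 2 percent per-pass trapping loss is a hypothetical parameter of my own, chosen only for illustration:

```python
# Tritium recirculation bookkeeping (illustrative).
# The 2% burn-up per pass is from ITER analyses cited in the article;
# the 2% per-pass trapping loss is a hypothetical assumption.
BURN_FRACTION = 0.02
TRAP_FRACTION = 0.02   # hypothetical: fraction of each exhaust permanently trapped

def circulate(tritium_kg, passes):
    """Track one batch of injected tritium through repeated passes."""
    burned = lost = 0.0
    for _ in range(passes):
        b = tritium_kg * BURN_FRACTION      # fused with deuterium this pass
        exhaust = tritium_kg - b            # the 98% that exits unscathed
        t = exhaust * TRAP_FRACTION         # permanently trapped in walls, ducts, etc.
        burned += b
        lost += t
        tritium_kg = exhaust - t            # recovered, reprocessed, re-injected
    return burned, lost

burned, lost = circulate(1.0, passes=50)
print(f"burned {burned:.3f} kg, trapped/lost {lost:.3f} kg")
```

With these (assumed) numbers the trapped tritium nearly equals the burned tritium, which is the point made about future reactors below.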
The permeation of tritium at high temperature in many materials is not understood to this day, as R. A. Causey and his co-authors explained in “Tritium barriers and tritium diffusion in fusion reactors.” The deeper migration of some small fraction of the trapped tritium into the walls and then into liquid and gaseous coolant channels will be unpreventable. Most implanted tritium will eventually decay, but there will be inevitable releases into the environment via circulating cooling water.
Designers of future tokamak reactors commonly assume that all the burned tritium will be replaced by absorbing the fusion neutrons in lithium completely surrounding the reacting plasma. But even that fantasy totally ignores the tritium that’s permanently lost in its globetrotting through reactor subsystems. As ITER will demonstrate, the aggregate of unrecovered tritium may rival the amount burned and can be replaced only by the costly purchase of tritium produced in fission reactors.
Radiation and radioactive waste from fusion. As noted earlier, ITER’s anticipated 500 MW of thermal fusion power is not electric power. But what fusion proponents are loath to tell you is that this fusion power is not some benign solar-like radiation but consists primarily (80 percent) of streams of energetic neutrons whose only apparent function in ITER is to produce huge volumes of radioactive waste as they bombard the walls of the reactor vessel and its associated components.
Just 2 percent of the neutrons will be intercepted by test modules for investigating tritium production in lithium, but 98 percent of the neutron streams will simply smash into the reactor walls or into devices in port openings.
In fission reactors, at most 3 percent of the fission energy appears as neutrons. But ITER is akin to an electrical appliance that converts hundreds of megawatts of electric power into neutron streams. A peculiar feature of D-T fusion reactors is that the overwhelming preponderance of thermal energy is not produced in the reacting plasma, but rather inside the thick steel reactor vessel as the neutron streams smash into it and gradually dissipate their energy. In principle, this thermalized neutron energy could somehow be converted back to electricity at very low efficiency, but the ITER project has opted to avoid addressing this challenge. That is a task deferred to delusions called demonstration reactors that fusion proponents hope to deploy in the second half of the century.
A long-recognized drawback of fusion energy is neutron radiation damage to exposed materials, causing swelling, embrittlement and fatigue. As it happens, the total operating time at high neutron production rates in ITER will be too small to cause even minor damage to structural integrity, but neutron interactions will still create dangerous radioactivity in all exposed reactor components, eventually producing a staggering 30,000 tons of radioactive waste.
Surrounding the ITER tokamak, a monstrous concrete cylinder 3.5 meters thick, 30 meters in diameter and 30 meters tall called the bioshield will prevent X-rays, gamma rays and stray neutrons from reaching the outside world. The reactor vessel, as well as the non-structural components both inside the vessel and outside it up to the bioshield, will become highly radioactive through activation by the neutron streams. Downtimes for maintenance and repair will be prolonged because all maintenance must be performed by remote handling equipment.
For the much smaller Joint European Torus experimental project in the United Kingdom, the radioactive waste volume is estimated at 3,000 cubic meters, and the decommissioning cost will exceed $300 million, according to the Financial Times. Those numbers will be dwarfed by ITER’s 30,000 tons of radioactive wastes. Fortunately, most of this induced radioactivity will decay in decades, but after 100 years some 6,000 tons will still be dangerously radioactive and require disposal in a repository, says the “Waste and Decommissioning” section of ITER’s Final Design Report.
Periodic transport and off-site disposal of radioactive components as well as the eventual decommissioning of the entire reactor facility are energy-intensive tasks that further expand the negative side of the energy accounting ledger.
Water world. Torrential water flows will be needed to remove heat from ITER’s reactor vessel, plasma heating systems, tokamak electrical systems, cryogenic refrigerators and magnet power supplies. Including fusion generation, the total heat load could be as high as 1,000 MW, but even with zero fusion power the reactor facility consumes up to 500 MW(e) that eventually becomes heat to be removed. ITER will demonstrate that fusion reactors would be much greater consumers of water than any other type of power generator, because of the huge parasitic power drains that turn into additional heat that needs to be dissipated on site. (By “parasitic,” we mean consuming a chunk of the very power that the reactor produces.)
Cooling water will be taken from the Canal de Provence formed by channeling the Durance River, and most heat will be discharged into the atmosphere by cooling towers. During fusion operations, the combined flow rate of all the cooling water will be as large as 12 cubic meters per second (180,000 gallons per minute), or more than one-third the flow rate of the Canal. That level of water flow can sustain a city of 1 million residents. (But the actual demand on the Canal’s water will be only a very small fraction of that value because ITER’s power pulse will be just 400 seconds long with at most 20 such pulses daily, and ITER’s cooling water is recirculated.)
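The parenthetical point follows from the duty cycle, as a quick check shows (and recirculation cuts the actual canal draw further still):

```python
# Peak versus day-averaged cooling-water throughput (figures from the article).
peak_flow_m3_s = 12.0        # combined coolant flow during a fusion pulse
pulse_seconds = 400          # length of one power pulse
pulses_per_day = 20          # stated daily maximum

circulated_per_day = peak_flow_m3_s * pulse_seconds * pulses_per_day
avg_flow = circulated_per_day / 86_400   # averaged over 24 hours

print(f"Water circulated on a maximum-pulse day: {circulated_per_day:,.0f} m^3")
print(f"Day-averaged flow: {avg_flow:.2f} m^3/s versus {peak_flow_m3_s} m^3/s peak")
```

Even on a maximum-pulse day, the time-averaged flow is roughly a tenth of the peak, which is why the plant can live off a canal it could never match at full throttle.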
Even while ITER is producing nothing but neutrons, its maximum coolant flow rate will still be nearly half that of a fully functioning coal-burning or nuclear plant that generates 1,000 MW(e) of electric power. In ITER as much as 56 MW(e) of electric power will be consumed by the pumps that circulate the water through some 36 kilometers of nuclear-grade piping.
Operation of any large fusion facility such as ITER is possible only in a location such as the Cadarache region of France, where there is access to many high-power electric grids as well as a high-throughput cool water system. In past decades, the great abundance of freshwater flows and unlimited cold ocean water made it possible to implement large numbers of gigawatt-level thermoelectric power plants. In view of the decreasing availability of freshwater and even cold ocean water worldwide, the difficulty of supplying coolant water would by itself make the future wide deployment of fusion reactors impractical.
ITER’s impact. Whether ITER performs poorly or well, its most favorable legacy is that, like the International Space Station, it will have set an impressive example of decades-long international cooperation among nations both friendly and semi-hostile. Critics charge that international collaboration has greatly amplified the cost and timescale, but the $20-to-30 billion cost of ITER is not out of line with the costs of other large nuclear enterprises, such as the power plants that have been approved in recent years for construction in the United States (Summer and Vogtle) and Western Europe (Hinkley and Flamanville), and the US MOX nuclear fuel project at Savannah River. All these projects have experienced a tripling of costs, and construction timescales that ballooned from years to decades. The underlying problem is that all nuclear energy facilities—whether fission or fusion—are extraordinarily complex and exorbitantly expensive.
A second invaluable role of ITER will be its definitive influence on energy-supply planning. If successful, ITER may allow physicists to study long-lived, high-temperature fusioning plasmas. But viewed as a prototypical energy producer, ITER will be, manifestly, a havoc-wreaking neutron source fueled by tritium produced in fission reactors, powered by hundreds of megawatts of electricity from the regional electric grid, and demanding unprecedented cooling water resources. Neutron damage will be intensified while the other characteristics will endure in any subsequent fusion reactor that attempts to generate enough electricity to exceed all the energy sinks identified herein.
When confronted by this reality, even the most starry-eyed energy planners may abandon fusion. Rather than heralding the dawn of a new energy era, it’s likely instead that ITER will perform a role analogous to that of the fission fast breeder reactor, whose blatant drawbacks mortally wounded another professed source of “limitless energy” and enabled the continued dominance of light-water reactors in the nuclear arena.