Climate Change

National Academies calls for a fusion pilot plant

By Robert J. Goldston, April 14, 2021

The National Academies of Sciences, Engineering, and Medicine recently completed two studies that together map out a strategy for the development of fusion energy. The first, issued in 2019 and titled “Final Report of the Committee on a Strategic Plan for US Burning Plasma Research,” endorsed a new goal for US fusion energy research and development: a fusion pilot plant. It recommended that the United States focus its fusion R&D on a minimum-cost device capable of putting electricity on the grid and of qualifying the technologies required for economically competitive fusion energy.

A second National Academies’ report, just issued, is called “Bringing Fusion to the US Grid.” It delves into the specific requirements for a pilot plant and reflects the strong impetus to accelerate fusion development so that fusion can better help battle climate change by contributing soon to the decarbonization of electrical energy sources in the United States. At the same time, it notes that worldwide demand for low-carbon electrical power will grow by a further factor of 5 to 6 during the second half of this century, providing a large market for fusion electricity.

The drive for a near-term fusion pilot plant comes from three directions. First, the international ITER project—the world’s largest fusion experiment, a collaboration of 35 nations—is now moving forward smoothly, led by its energetic and savvy director general, Bernard Bigot. Low-power operation is to begin later this decade, and high-power operation with ITER’s planned fusion fuel—deuterium and tritium, the heavy isotopes of hydrogen—is to begin in the following decade. ITER’s goal is to produce 400 to 500 megawatts of fusion power, 10 times the power that will be injected into the hot fuel.
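In round numbers, that target corresponds to a fusion gain, conventionally written Q, of about 10. Using ITER’s baseline figures (and noting that this Q compares power flows at the plasma, not the electricity drawn to run the whole facility):

$$
Q \;=\; \frac{P_{\text{fusion}}}{P_{\text{heating}}} \;\approx\; \frac{500\ \text{MW}}{50\ \text{MW}} \;=\; 10.
$$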

It is important to note, however, that ITER will not convert the fusion power it produces into electricity. Instead, ITER’s goal is to demonstrate that a real-world fusion device can maintain high power production for extended periods of time—and test the integration of fusion physics with key materials and technologies along the way. Both China and the United Kingdom have plans to commission net electricity-producing fusion devices to run in parallel with ITER’s high-power operation. Meanwhile, the European Union has a strong program keyed to obtaining results from ITER. This strategic environment lends a sense of urgency to answering the question of what the United States should do.

The second new item adding momentum to fusion’s development is the engagement of 24 industrial enterprises around the world that have collectively made investments of about $1.5 billion in the competition to put fusion electricity on the grid. Some are pursuing technological innovations and variations based upon the current front-runners for fusion power production, the “tokamak” and the “stellarator”—most notably, the use of new high-temperature superconductors. (More about tokamaks, stellarators, and other configurations for fusion below.) These new superconductors are capable of delivering much higher magnetic fields than were available at the time of the design of ITER—a tokamak—or the major stellarator experiments in Japan and Germany. Other enterprises are pursuing scientific innovations to invigorate less well-developed configurations. The success of public-private partnerships, such as NASA’s with SpaceX, suggests that industrial partnering may be key to accelerating fusion development. And industries, of course, are interested in winning the race to fusion and providing low-carbon electricity to complement renewable energy sources in combating climate change.

The third new driver for the development of fusion is that fusion researchers have made dramatic progress in understanding the physics of the hot, ionized gas, called plasma, that needs to be contained and controlled for practical fusion power production. Remarkably, fusion scientists can now accurately calculate the turbulence in 100-million-degree plasmas—much hotter than the center of the sun—and the resulting transport of heat and particles from the core of a fusion device to its edge. Scientists also have descriptive models for how heat and particles escape from the edge of the plasma, and for how to mitigate their impact on the material surfaces they encounter. Major challenges surely remain, but the National Academies’ reports argue persuasively that the time has come to focus research on the goal of net electricity production.

One important new insight is that while net electric power production requires significant technological development, the effort is not as great as that required for a first-of-a-kind commercial fusion power plant. For example, existing materials are more than adequate to support the production of continuous net electricity for extended periods, even though materials able to withstand a higher fluence of fusion neutrons will be needed for a deuterium-tritium fusion power plant. Consequently, a pilot plant can first put electricity on the grid using existing materials, and then test new materials and technologies for a power plant while gaining practical experience that accelerates learning by doing.

Nonetheless, it will be critical that certain technologies be developed for the pilot plant itself. For example, if the pilot plant runs on deuterium and tritium fuel, it will be necessary to “breed” at least a large fraction of the required tritium in neutron-absorbing “blankets” around the plasma, by converting lithium to tritium.
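The underlying breeding reactions are well established. Schematically:

$$
n + {}^{6}\mathrm{Li} \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV},
\qquad
n + {}^{7}\mathrm{Li} \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + n \;-\; 2.5\ \mathrm{MeV}.
$$

The engineering challenge is a blanket whose tritium breeding ratio (the number of tritium atoms produced per triton consumed in the plasma) comfortably exceeds one, which is why many blanket designs add a neutron multiplier such as beryllium or lead.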

The “Bringing Fusion to the US Grid” report provides the requirements for a pilot plant to accomplish the two goals of net electricity production and technological development for a first-of-a-kind commercial fusion power plant. To define these goals, the panel consulted with utilities to understand the projected needs of the US grid over the decades ahead. It found that a renewables-only path to very low carbon emissions would be much more expensive than one that included cost-effective, firm, low-carbon energy sources that can be “dispatched” to compensate for the intermittency of renewable energy, such as wind and solar, as well as to support the variation in demand for electricity over time. For example, on summer late afternoons, when the demand for electricity to power air conditioners is very high, the wind is less likely to blow and the sun is beginning to set, or the sky may even be overcast.

The panel judged that large-scale energy storage for times up to four hours was likely feasible as a means to balance some variations of supply and demand, but issues of cost and the difficulty of siting near markets could preclude storage for much longer periods of time. As a result, it would be desirable for fusion systems to be able to ramp up in power in less than four hours and also be able to provide electricity to the grid for much longer periods as required.
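The logic can be made concrete with a toy calculation. The sketch below uses entirely invented numbers (none come from the report) to show why a four-hour battery can smooth daily swings but cannot cover a multi-day renewable lull, which is the gap a firm, dispatchable source would fill:

```python
# Toy sketch with invented numbers (not from the NASEM report): a "four-hour"
# battery smooths daily swings but cannot cover a multi-day renewable lull.
import numpy as np

rng = np.random.default_rng(0)
hours = 14 * 24                                                  # two illustrative weeks
demand = 1.0 + 0.2 * np.sin(np.arange(hours) * 2 * np.pi / 24)   # GW, daily cycle
renewables = rng.uniform(0.0, 1.6, hours)                        # GW, crude wind + solar stand-in
renewables[5 * 24:8 * 24] *= 0.2                                 # a three-day calm, overcast stretch

battery_power = 0.5                                              # GW
battery_capacity = battery_power * 4.0                           # GWh: four hours at full power
stored, unmet = 0.0, 0.0

for d, r in zip(demand, renewables):
    shortfall = d - r                                            # positive: deficit; negative: surplus
    if shortfall <= 0.0:
        stored = min(battery_capacity, stored + min(-shortfall, battery_power))
    else:
        discharge = min(shortfall, battery_power, stored)
        stored -= discharge
        unmet += shortfall - discharge                           # GWh a dispatchable source must supply

print(f"Energy left for dispatchable sources: {unmet:.0f} GWh over {hours} hours")
```

The simulated lull shows up as a block of unmet energy far too large for four hours of storage to bridge, illustrating why the panel asked for sources that can ramp up within four hours and then run for as long as required.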

With such considerations in mind, the panel concluded that a fusion pilot plant should aim to provide net electricity of 50 to 100 megawatts—similar to the Shippingport, Pennsylvania, fission pilot plant that first put power on the grid in 1957 and then functioned as a test-bed for fission technologies. Such a plant should provide the basis for a cost-effective, first-of-a-kind fusion power plant capable of producing firm and dispatchable, economically competitive, low-carbon electricity.
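To give a sense of scale, here is a rough illustration with assumed round numbers (the conversion efficiency and recirculating power below are illustrative figures, not values from the report): with a thermal-to-electric efficiency of about one-third and on the order of 100 megawatts recirculated to run heating systems, magnets, cryogenics, and pumps, 50 megawatts of net electricity implies several hundred megawatts of fusion thermal power:

$$
P_{\text{net}} \;=\; \eta_{\text{th}}\,P_{\text{thermal}} \;-\; P_{\text{recirc}}
\quad\Rightarrow\quad
P_{\text{thermal}} \;\approx\; \frac{50\ \text{MW} + 100\ \text{MW}}{0.33} \;\approx\; 450\ \text{MW}.
$$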

The National Academies’ panel was not charged with selecting a configuration for the pilot plant’s plasma. The tokamak configuration, with a shape like a smooth doughnut, is the most advanced and is embodied in ITER. But the tokamak faces some major challenges as a fusion power system, perhaps the most severe of which is that the large electrical current required in the plasma can be difficult to control; if that current is lost abruptly in a “disruption,” the result can be a sudden, potentially damaging release of energy to the device’s internal structures. Methods of addressing this issue are being aggressively pursued both experimentally and theoretically, such as using machine learning techniques to steer away from disruptions and rapidly densifying the plasma to dissipate the energy should a disruption occur. These will be given a full-scale, real-world test in ITER.

The second-most advanced concept is the stellarator, which uses external magnets to confine a plasma in a shape more reminiscent of a cruller. Because a stellarator does not require a plasma current, it does not suffer from disruptions; instead, it relies on complex magnets to generate the plasma configuration needed for confinement. More optimization is required, however, to find a configuration that delivers the required plasma confinement and is as easy as possible to manufacture and operate.

There are many other plasma configurations that are well behind the tokamak and stellarator in plasma performance, but for which new ideas and approaches are being implemented by industrial groups, with the hope of overtaking the front runners. In general, they promise configurations that are much simpler to build and maintain. To select just one example, the plasma “pinch” carries a high current in a simple linear geometry, with its self-created magnetic field pinching the plasma to high pressure. It has been known for many years that such a configuration is prone to kinking up and tearing itself apart, but researchers recently have found in tokamaks that “sheared” flows, where one region of the plasma flows at a different speed from its neighbor, can be used to quiet instabilities. This concept, as applied to the much more unstable pinch configuration, has resulted in remarkable advances, including the recently published detection of neutrons from fusion. While this approach is a very long distance from significant fusion power production, there are plans to scale it up rapidly. Another dark-horse concept, the “field-reversed configuration,” has recently reported significant progress—again with a long way to go. These results remain to be published.

The National Academies’ panel, looking at the overall landscape, recommended that two to four teams be assembled—including participants from industry, national labs, and universities—to develop conceptual and then preliminary designs for fusion pilot plants that would both produce net electricity and support further technological development for a cost-effective, first-of-a-kind fusion power plant. The panel set an aggressive deadline of 2028 for the preliminary designs to be completed, with the goal of operating a fusion pilot plant in the 2035–2040 timeframe. This would put its operation in parallel with ITER’s high-power phase, but such overlapping steps are the hallmark of aggressive and successful technological development. Both of the recent National Academies’ reports, while endorsing a pilot plant, have also strongly underscored the value of full US participation in ITER.

The National Academies has not been alone; the American Physical Society led a broad effort by the fusion and plasma research community that resulted in a “Community Plan for Fusion Energy and Discovery Plasma Sciences.” This plan also strongly endorsed a pilot plant with the goals of net electricity production and the advancement of fusion technology. Subsequently, the Fusion Energy Sciences Advisory Committee (a federal advisory committee to the US Energy Department) developed “a long-range plan to deliver fusion energy and to advance plasma science” under a range of budget scenarios, in a report entitled “Powering the Future, Fusion & Plasmas.” It provided a more detailed development path, and also set a net electricity-producing pilot plant as a central goal for the US fusion program. Like the National Academies’ reports, these two reports strongly supported full US participation in ITER.

This excitement and sense of urgency about a fusion pilot plant to put electricity on the grid are welcome. Although fusion has its critics, it is widely viewed as an attractive energy source, as gauged in part by recent industrial interest. Climate change is upon us. New electrical energy sources that can complement renewable systems such as wind and solar by providing firm, dispatchable, low-carbon electricity will provide practical means to sustain a vibrant US economy while drastically cutting back carbon emissions. And combatting climate change is a central pillar of the Biden administration’s new infrastructure plan. It behooves the United States to develop fusion energy as quickly as possible, and a pilot plant is a very attractive step toward this goal.

Editor’s note: The author, Robert Goldston, worked closely with Richard Hawryluk—chair of the recent National Academies of Sciences, Engineering, and Medicine report on fusion—when Hawryluk was deputy director of the Princeton Plasma Physics Laboratory (1997–2009) and Goldston was director.

Comments

  • Fusion’s severe technical drawbacks have been reviewed in the Bulletin itself (https://thebulletin.org/2017/04/fusion-reactors-not-what-theyre-cracked-up-to-be/), but its biggest problem is that nobody needs it. 

    Dr. Goldston tells us that “the world-wide demand for low-carbon electrical power will grow by a further factor of 5 to 6 during the second half of this century, providing a large market for fusion electricity,” but that's conclusion-jumping: a market for low-carbon power is only a market for fusion if fusion can compete in that market, and there is no prospect that fusion will ever do so. Equally low-carbon PV and wind are already cheaper per kWh than natural gas, coal, and fission and getting inexorably cheaper (https://www.lazard.com/perspective/lcoe2020). In some settings, PV is already “the cheapest source of electricity in history” (IEA World Energy Outlook 2020). On that far-off day when fusion finally produces net electricity, it will have to compete in $/kWh not with renewables’ already disruptively low costs but with the even lower costs they will have achieved by then, including the cost of storage (also cheaper) to firm a fraction of capacity. Even fission, after 70+ years of development, can no longer compete with renewables: the chance that a completely novel nuclear technology for boiling water in cyclopean centralized plants can ever compete is zero.  Even a thermal plant with a free nuclear island -- zero-cost fusion -- couldn’t compete with today's renewables, never mind tomorrow's renewables (the balance of plant, ~80% of cost, is too expensive: 2019 World Nuclear Industry Status Report, p. 24). The fusion tortoise has lost to the renewables hare. In fact the hare is already back at the office, taking calls from investors, while the tortoise is boldly announcing that it will get to the starting line in just another decade or two -- if we will just give it another $X billion.

    Fusion bids to do elaborately and expensively what we can already do elegantly and cheaply. Its pursuit is a zombie enterprise driven by the inertia of committed careers and the fallacy of sunk cost. As a hyper-futuristic approach to boiling water, it is not and never will be relevant to the world’s energy and climate needs. It most certainly does not “behoove[] the United States to develop fusion energy as quickly as possible.” 

     

    • I did reference Dan Jassby's piece in this one. The Academies' panel set some cost requirements for a Pilot Plant, and argued that firm, economic, dispatchable power is required to complement renewables, not replace them.

  • It is difficult to avoid responding to campaigns like this (enabled by taxpayer-funded studies) without ridicule. I would never suggest what may be possible or not. However, for fusion scientists to suggest that they are in a position to specify the "requirements for a pilot plant" defies credulity. Longer plasma confinement times, higher temperatures, and higher plasma density certainly give rise to encouragement. But no fusion reactor has yet produced a single watt from fusion in excess of the total reactor power consumed. ITER is not designed to produce electricity, but neither is it designed for overall net reactor power if we compare apples-to-apples power values, normalizing the thermal output to the electric input. We still have a long way to go, fellas.

    Steven B. Krivit
    Publisher, New Energy Times

    • A key point of a pilot plant is to do just what you suggest, produce net electricity. If the designs can't provide good confidence that this can be done, the pilot plant won't be built.

      • You would like us to believe that fusion researchers would spend the time, effort and money for conceptual design activity of an electricity producing fusion plant, and that there is even a remote chance that, after doing so, those researchers would conclude that success of such a design is unlikely? This seems to run counter to the law of self-preservation.

        Moreover, it appears to run counter to history. According to Bill Weston, GA completed its design for a DEMO-class reactor in 1978. PPPL and ORNL completed their designs for DEMO-class reactors in 1973. Then there was STARFIRE, in 1980.

        Also, if memory serves me, the scientific method says we perform experiments and then, based on the results, design new experiments. This means we wait until 2045 to see whether ITER's DT reaches Qfus = 10. Or are you saying that the scientific method can be bypassed? Or are you saying that there is a better path than ITER to validate that real (not extrapolated) Qfus >= 1 is possible?

        Steven B. Krivit, Producer
        ITER, The Grand Illusion: A Forensic Investigation of Power Claims

        • Regarding paragraph #2 above: Creating designs for electricity-producing fusion plants has been done before. They don't seem to have led to electricity-producing fusion plants.

          • All of those designs were for first-of-a-kind power plants. This new idea is to put some electricity on the grid and get some learning-by-doing experience earlier.

          • You have no experimental evidence that a fusion reactor can produce power from fusion at a greater rate than the injected heating power (scientific breakeven/scientific feasibility). You have no experimental evidence that a fusion reactor can produce power from fusion at the same rate as it consumes electrical power (engineering breakeven).

            The most well documented and most credible fusion reactor design, ITER, if it works correctly, will achieve engineering breakeven sometime around 2045. That still won't produce enough thermal power from fusion to provide one net Watt of electricity.

            Yet you imagine that now, after 70 years of trial and error, you can by some miracle skip over the intermediate steps and go right to designing a reactor that would produce net electricity to put on the grid. And you imagine that you can do this by more Edisonian trial-and-error "learning-by-doing." Wow.

        • I don't think that the advocates for a given pilot plant design would be given the right to judge if it will work. I imagine another NASEM panel for this, with a balanced membership like this one. As to waiting for first results from ITER, that is essentially the EU strategy. The NAS recommended a more aggressive approach.

  • One key point not mentioned by our distinguished colleague R. J. Goldston is the unsustainability of D-T nuclear fusion. He did not mention the frightening amount of beryllium that will be needed in the first wall of the fusion machines. It takes 12 tons in ITER. Probably more than 100 tons in DEMO. The total annual world production is 300 tons. It will be impossible to produce enough beryllium to build a series of fusion reactors. And let us not forget the extreme toxicity of this element. Other solutions for energy production exist; they must be developed. It is important to finally move towards a complete and sincere communication on the subject of nuclear fusion for energy.

    • If you want more unpleasant materials, you could add the tons of lithium breeding compounds and the throughput of kilograms of tritium a week. How do nuclear regulators view the tritium?

      • Nuclear regulators are aware of the tritium. To avoid any issue at the site boundary in a worst-case accident, it is estimated that you need to keep the inventory in the vacuum vessel below ~ 1 kg. ITER will have up to 700g, and this does not seem to be a problem with the French regulators. IMHO, one needs very strict accountancy on T.

  • We do not need it; we are transitioning from large centralized facilities that require security to distributed power systems, which can be ubiquitous.

    My household and two electric cars are now powered primarily by the grid-connected PV system on our roof. It offers benefits to us and the power company.

    • I was a little surprised myself at the < 4 hour dispatchability requirement, but I pretty much quoted what they said in the report about storage, and they got this from utilities.

  • This is a case of: If we had bacon we could have bacon and eggs, if we had eggs.

    Compare with fission. The underlying science was discovered in 1939. By 1942 Fermi had a working reactor. Then it took about 30 years to have sizeable commercial reactors. The fusion reaction has been known longer than fission, but it hasn't gotten to the "1942" point yet. If that point ever comes, it will be a very large multiple of the three years from 1939 to 1942. Nature does not yield easily, so we can expect that commercialization would take a comparable multiple of the 30 years it took for commercialization of fission. Not real promising. Earth-based fusion, that is. Sun-based fusion is great.

  • It is really insane to think that the world needs "new electrical energy sources that can complement renewable systems such as wind and solar by providing firm, dispatchable, low-carbon electricity [that] will provide practical means to sustain a vibrant US economy while drastically cutting back carbon emissions." Why? If you use global warming as an excuse, then you do have to consider the whole world, not only the US economy (which is by far the main cause of global warming), and all renewable sources are already capable of providing firm, dispatchable, carbon-neutral electricity via storage.

    • I am aware of this argument, but if you look at various detailed studies, they tend to come up with a need for dispatchable power capability comparable to the time-average power needs, due to the variability of wind and solar. The utilities consulted by the NASEM panel evidently said this as well.

  • The idea seems to be to spend about $5-6 billion for a plant in the 50-100MW range, which would be ready in about 15 years. For the same amount, about 5GW of solar could be built in about 2 years. This would generate about the same amount of electricity per year as a 1GW power plant running 24/7. There's no risk here. The US already has about 100GW of solar installed, including about 19GW installed in 2020. We would then get 10 to 20 times the amount of greenhouse gas-free electricity for 13 years before the fusion plant even comes into operation, if it works as hoped. The extra 5GW of solar would not even disrupt the solar installation market too much, as it is expected already that over 20GW of solar will be installed in each of the coming years.

    • The NASEM report explains how they think fusion research should proceed, but certainly doesn't say that this should be done instead of installing wind and solar. Indeed it is looking for dispatchable power to complement a lot of wind and solar.

  • Dear Professor Goldston, in addition to my comment below about beryllium, we know that DEMO is designed with a beryllium first wall. On the other hand, the Tritium Breeding Modules contain a large amount of Be. Probably more than a hundred tons inside this reactor. DEMO is a demonstration reactor, but you tell us that the DEMO technology will not be used in the pilot plant? This is surprising. Finally, perhaps a liquid lithium wall could be considered in a pilot plant, but this is not really a technology that can be seriously considered on a large scale. In conclusion, my comment on the non-sustainable character of D-T fusion in tokamaks remains relevant, I think.

    • In the US we are not considering using Be in either capacity. My understanding is that the first wall in the EU DEMO is ferritic steel with W cladding, but the breeding blanket is supposed to include Be. In the US we tend to think of using Pb as the neutron multiplier in the blanket.

    • Many fusion reactor conceptual designs specify a molten-salt blanket of FLIBE (fluorine-lithium-beryllium). In recent years personnel at MIT & Commonwealth Fusion—also very active in formulating the Academies’ pilot plant report—have been promoting the ARC power reactor design that utilizes a FLIBE blanket. Commonwealth intends to populate the world with countless ARC reactors, but the beryllium in a single ARC blanket would cost many hundreds of millions of dollars at today’s prices.

      Many FLIBE and Pb-Li blanket designs also specify Li-6, another pricey component because it is surprisingly expensive to separate Li-6 from the dominant Li-7 with the presently used isotope separation technique. The Li-6 blanket component would also cost hundreds of millions of dollars per reactor.

  • In contrast to solar fusion reactions, which produce zero neutrons, 80% of the output of terrestrial deuterium-tritium fusion consists of barrages of neutron bullets. Any magnetic fusion facility consumes tens to hundreds of megawatts continually, but in 70 years of fusion R&D no one has tried to simultaneously generate even one watt of electric power from those neutron barrages, and even the ITER project will not attempt it.

    Now the faltering but ever hubristic U.S. magnetic fusion program proposes to achieve, IN A SINGLE STEP, the generation of at least tens of megawatts of net electrical power and replenishment of all the tritium burned and lost in a fusion device operating 15 to 20 years hence.

    As a practical matter, while effective fusion generators such as tokamaks can turn electrical energy into neutron streams, it’s unlikely that they will ever turn those neutron streams into net electrical energy at any cost.

    If it has any application at all, terrestrial fusion energy must be regarded only as a neutron source, not an electrical power source. Indeed, the neutron is the basis of fusion’s present-day non-weapons applications: the use of small fusion neutron generators for radiography, activation analysis, and radioisotope production.

  • I've followed the fusion power efforts during the last half-century. Most of today's advocates have never encountered articles such as the following. They tend to assume we have 20-30 years to ramp up such technologies to replace much of our addiction to fossil fuel use, even for mobile users such as cars, buses, trucks, heavy equipment, trains, ships and aircraft.

    ITER is a showcase … for the drawbacks of fusion energy
    https://thebulletin.org/2018/02/iter-is-a-showcase-for-the-drawbacks-of-fusion-energy

    UN chief: World has less than 2 years to avoid 'runaway climate change'
    https://thehill.com/policy/energy-environment/406291-un-chief-the-world-has-less-than-2-years-to-avoid-runaway-climate

    UN Chief warns countries that the 'point of no return' on climate change is fast approaching
    https://www.msn.com/en-gb/news/environment/un-chief-warns-countries-that-the-point-of-no-return-on-climate-change-is-fast-approaching/ar-BBXCJHl

    UN warns that world risks becoming 'uninhabitable hell' for millions unless leaders take climate action
    https://www.cnn.com/2020/10/13/world/un-natural-disasters-climate-intl-hnk/index.html