In his new book What We Owe the Future, William MacAskill outlines the case for what he calls “longtermism.” That’s not just another word for long-term thinking. It’s an ideology and movement founded on some highly controversial ideas in ethics.
Longtermism calls for policies that most people, including those who advocate for long-term thinking, would find implausible or even repugnant. For example, longtermists like MacAskill argue that the more “happy people” who exist in the universe, the better the universe will be. “Bigger is better,” as MacAskill puts it in his book. Longtermism suggests we should not only have more children right now to improve the world, but also ultimately colonize the accessible universe, even creating planet-size computers in space in which astronomically large populations of digital people live in virtual-reality simulations.
Backed by an enormous promotional budget of roughly $10 million that helped make What We Owe the Future a bestseller, MacAskill’s book aims to make the case for longtermism. Major media outlets like The New Yorker and The Guardian have reported on the movement, and MacAskill recently appeared on The Daily Show with Trevor Noah. Longtermism’s ideology is gaining visibility among the general public and has already infiltrated the tech industry, governments, and universities. Tech billionaires like Elon Musk, who described longtermism as “a close match for my own philosophy,” have touted the book, and a recent article in the UN Dispatch noted that “the foreign policy community in general and the United Nations in particular are beginning to embrace longtermism.” So it’s important to understand what this ideology is, what its priorities are, and how it could be dangerous.
Other scholars and I have written about the ethical hazards of prioritizing the future potential of humanity over the lives of Earth’s current inhabitants. As the philosopher Peter Singer put it: “The dangers of treating extinction risk as humanity’s overriding concern should be obvious. Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth.”
MacAskill sees nuclear war, engineered pathogens, advanced artificial intelligence, and global totalitarianism as “existential risks” that could wipe out humans altogether or cause the irreversible collapse of industrial civilization. However, he is notably less concerned about climate change, which many longtermists believe is unlikely to directly cause an existential catastrophe, although it may increase the probability of other existential risks. MacAskill also makes several astonishing claims about extreme global warming that simply aren’t supported by today’s best science, and he is overly optimistic about the extent to which technology can fix climate change. In my view, policy makers and the voting public should not make decisions about climate change based on MacAskill’s book.
Defining existential risk. For longtermists, an existential risk is any event that would prevent humanity from fulfilling its “long-term potential” in the universe, which typically involves colonizing space, using advanced technologies to create a superior posthuman species, and producing astronomical amounts of “value” by simulating huge numbers of digital people. Avoiding existential risks is a top priority for longtermism.
The most obvious existential risk is human extinction, but there are many survivable scenarios as well. For example, technological “progress” could stall, leaving humans Earth-bound. Or, more controversially, Nick Bostrom—a philosopher at the University of Oxford’s Future of Humanity Institute who has been called “the father of longtermism”—argued in a 2002 seminal paper that if less “intellectually talented” individuals outbreed their more intelligent peers, the human species could become too unintelligent to develop the technologies needed to fulfill our potential (although he assured readers that humans will probably develop the ability to create super-smart designer babies before this happens).
Longtermists typically don’t regard climate change as an existential risk, or at least not one that’s as worrisome as superintelligent machines and pandemics. Bostrom’s colleague and fellow philosopher Toby Ord, for example, concluded in his 2020 book The Precipice that there’s only about a 1-in-1,000 chance that climate change will cause an existential catastrophe in the next 100 years, compared with about a 1-in-10 chance of superintelligent machines doing this. The first figure is based in part on unpublished research by Ord’s former colleague, John Halstead, who examined ways that climate change might directly compromise humanity’s long-term potential in the universe. Halstead, an independent researcher who until recently worked at the Forethought Foundation for Global Priorities Research directed by MacAskill, argued that “there isn’t yet much evidence that climate change is a direct [existential] risk; it’s hard to come up with ways in which climate change could be.”
It’s impossible to read the longtermist literature published by the group 80,000 Hours (co-founded by MacAskill), Halstead, and others without coming away with a rosy picture of the climate crisis. Statements about climate change being bad are frequently followed by qualifiers such as “although,” “however,” and “but.” There’s lip service to issues like climate justice—the fact that the Global North is primarily responsible for a problem that will disproportionately affect the Global South—but ultimately what matters to longtermists is how humanity fares millions, billions, and even trillions of years from now. In the grand scheme of things, even a “giant massacre for man” would be, in Bostrom’s words, nothing but “a small misstep for mankind” if some group of humans managed to survive and rebuild civilization.
One finds the same insouciant attitude about climate change in MacAskill’s recent book. For example, he notes that there is a lot of uncertainty about the impacts of extreme warming of 7 to 10 degrees Celsius but says “it’s hard to see how even this could lead directly to civilisational collapse.” MacAskill argues that although “climatic instability is generally bad for agriculture,” his “best guess” is that “even with fifteen degrees of warming, the heat would not pass lethal limits for crops in most regions,” and global agriculture would survive.
Assessing MacAskill’s climate claims. These claims struck me as dubious, but I’m not a climate scientist or agriculture expert, so I contacted a number of leading researchers to find out what they thought. They all told me that MacAskill’s climate claims are wrong or, at best, misleading.
For example, I shared the section about global agriculture with Timothy Lenton, who directs the Global Systems Institute and is Chair in Climate Change and Earth System Science at the University of Exeter. Lenton told me that MacAskill’s assertion about 15 degrees of warming is “complete nonsense—we already show that in a 3-degree-warmer world there are major challenges of moving niches for human habitability and agriculture.”
Similarly, Luke Kemp, a research associate at the University of Cambridge’s Centre for the Study of Existential Risk who recently co-authored an article with Lenton on catastrophic climate change and is an expert on civilizational collapse, told me that “a temperature rise of 10 degrees would be a mass extinction event in the long term. It would be geologically unprecedented in speed. It would mean billions of people facing sustained lethal heat conditions, the likely displacement of billions, the Antarctic becoming virtually ice-free, surges in disease, and a plethora of cascading impacts. Confidently asserting that this would not result in collapse because agriculture is still possible in some parts of the world is silly and simplistic.”
I also contacted Gerardo Ceballos, a senior researcher at the Universidad Nacional Autónoma de México’s Institute of Ecology and a member of the National Academy of Sciences, who described MacAskill’s claim as “nonsense.”
The renowned climatologist and geophysicist Michael Mann, Presidential Distinguished Professor in the Department of Earth and Environmental Science at the University of Pennsylvania, said MacAskill’s “argument is bizarre and Panglossian at best. We don’t need to rely on his ‘best guess’ because actual experts have done the hard work of looking at this objectively and comprehensively.” For example, Mann said, recent assessments by the Intergovernmental Panel on Climate Change have reported that at even 2 to 3 degrees of warming, “we are likely to see huge agricultural losses in tropical and subtropical regions where cereal crops are likely to decrease sharply—and more extreme weather disasters, droughts, floods, and interruptions of distribution systems and supply chains will offset the once-theorized benefit of longer growing seasons in mid-to-high latitude regions.”
The experts I consulted had similar responses to another claim in MacAskill’s book, that underpopulation is more worrisome than overpopulation—an idea frequently repeated by Elon Musk on social media. Ceballos, for example, replied: “More people will mean more suffering and a faster collapse,” while Philip Cafaro, an environmental ethicist, told me that MacAskill’s analysis is “just wrong on so many levels. ... It’s very clear that 8 billion people are not sustainable on planet Earth at anything like our current level of technological power and per-capita consumption. I think probably one to two billion people might be sustainable.”
Feedback and advice. Why, then, did MacAskill make these assertions? In the first few pages of the book’s introduction, MacAskill writes that it took more than a decade’s worth of full-time work to complete the manuscript, two years of which were dedicated to fact-checking its claims. And in the acknowledgments section, he lists 30 scientists and an entire research group as having been consulted on “climate change” or “climate science.”
I wrote to all but two of the scientists MacAskill thanked for providing “feedback and advice,” and the responses were surprising. None of the 20 scientists who responded to my email said they had advised MacAskill on the controversial climate claims above, and indeed most added, without my prompting, that they very strongly disagree with those claims.
Many of the scientists said they had no recollection of speaking or corresponding with MacAskill or any of the research assistants and contributors named in his book. The most disturbing responses came from five scientists who told me that they were almost certainly never consulted.
“There is a mistake. I do not know MacAskill,” replied one of the scientists.
“This comes as something of a surprise to me, because I didn’t consult with him about this issue, nor in fact had I heard of it before,” wrote another.
“I was contacted by MacAskill’s team to review their section on climate change, though unfortunately I did not have time to do so. Therefore, I did not participate in the book or in checking any of the content,” a third scientist told me.
[Editor’s note: The Bulletin contacted MacAskill to ask about the acknowledgements. In an email, he replied that acknowledging one scientist who declined to participate was “an administrative error” that will be corrected in the book’s paperback edition. MacAskill said the other climate experts he listed were contacted by a member of his research team and “did provide feedback to my team on specific questions related to climate change. Regrettably, the researcher who reached out to these experts did not mention that the questions they were asking were for What We Owe The Future, which explains why they were surprised to be acknowledged in the book.” MacAskill also said he “would never claim that experts who gave feedback on specific parts of the book agree with every argument made.”]
It is troubling that MacAskill did not ask all of his climate consultants to vet his bold claims about the survivability of extreme warming and the importance of increasing the human population. Over the past five years, longtermism has become immensely influential and could shape the policies of national and international governing bodies (though its influence may wane following the collapse of the cryptocurrency exchange FTX, which was run by an ardent longtermist, Sam Bankman-Fried, who may have committed fraud). Yet some longtermist claims about climate change lack scientific rigor.
Humanity faces unprecedented challenges this century. To navigate these, we will need guiding worldviews that are built on robust philosophical foundations and solid scientific conclusions. Longtermism, as defended by MacAskill in his book, lacks both. We desperately need more long-term thinking, but we can—and must—do better than longtermism.
Correction: Owing to an editing error, the original version of this piece identified John Halstead as the leader of an applied research team at the Founder’s Pledge. Halstead left that position in 2019. Also, the original version of this article said the author had written to all 30 climate experts MacAskill thanked.