
Three key misconceptions in the debate about AI and existential risk

By Jack Kelly | July 15, 2024

Adapting to the massive opportunities and disruptions of artificial intelligence (AI) will be one of the defining challenges of this generation. Succeeding will require clear priorities and a healthy awareness of potential pitfalls. Should mitigating the risk of extinction from AI be a global priority? Many leading AI experts have publicly said so, including two of the three “godfathers of AI,” Yoshua Bengio and Geoffrey Hinton. This weighty claim has understandably provoked skepticism from both the public and other AI experts, including the third “godfather,” Yann LeCun.

While experts disagree on the precise nature of AI's risks, the public has grown vastly more concerned. In recent years, ever more capable AI models have been released at a breakneck pace. OpenAI's GPT-5, for example, is already in development and may be released later this year.

An online poll conducted in 2023 by the research organization Rethink Priorities estimated that 59 percent of US adults agree that mitigating the risk of extinction from AI should be a global priority, while 26 percent disagree. While the majority of the public is concerned about AI, there is persistent skepticism about the importance of AI safety and regulation. This skepticism largely arises from three understandable, but mistaken, beliefs that were identified in Rethink Priorities' analysis of the survey data.

Among the survey respondents who disagreed that mitigating the risk of extinction from AI should be a global priority, Rethink reported that “[t]he dominant theme, by quite a wide margin, was the claim that ‘other priorities’ were more important, which was mentioned by 36 percent of disagreeing respondents. The next most common theme was ‘not extinction,’ mentioned in 23 percent of responses, which simply involved respondents asserting that they did not believe that AI would cause extinction. The third most commonly mentioned theme was ‘not yet,’ which involved respondents claiming that AI was not yet a threat or something to worry about.”

Here’s why these widespread beliefs about AI and existential risk are mistaken.

Other priorities are more important. The most common reason respondents gave for not prioritizing AI was that other issues—such as climate change, nuclear war, and pandemics—are more pressing than AI as an existential risk.

Arguing about the relative importance of existential threats ignores a fundamental truth: any credible existential threat is one too many, and all of them must be addressed in parallel. If you accept that AI might pose an existential threat, then addressing that threat should be a societal priority, even if you are more concerned about another issue.

These risks also intersect. For example, AI could exacerbate pandemic risk by enabling terrorists to create biological weapons. The main limiting factor in designing highly infectious and lethal diseases is not expense, but expertise. Existing AI models are restricted to prevent misuse and are not yet capable enough to help create bioweapons, but these conditions are unlikely to last as models rapidly improve and companies like Meta release their cutting-edge models publicly. Leaving the most important decisions about AI in the hands of the very companies that profit from building these models is neither wise nor sustainable.

It is reasonable that people feel compelled to focus on issues that are affecting people now over those that might not come to pass. AI researcher Meredith Whittaker, for example, recently told Fast Company that the alarm coming from AI pioneers like Geoffrey Hinton "is a distraction from more pressing threats." But the dichotomy between recognizing the existing harms of AI and safeguarding against potential future harms is a false one. Treating AI regulation as a tradeoff, in which government can either address existing misuse such as deepfakes or govern the development of potentially dangerous future models, ignores the fact that it is critical to do both. The October 2023 White House executive order on AI does exactly that: It addresses existing harms like bias and discrimination, data privacy, and workers' rights, and it also institutes forward-looking principles to reduce existential risk by testing and evaluating models for dangerous capabilities. This is a clear example of how near-term and longer-term concerns can be addressed together.

The AI threat is not extinction. A second recurring theme identified in the survey was that advanced AI will never threaten human existence. Researcher and author Jeff Caruso put it this way in a recent Bulletin essay: “Considering the lack of any substantive evidence supporting the existential risk theory, it’s puzzling why many in the fields of computer science and philosophy appear to believe AI is an existential threat. It turns out, there aren’t that many who have bought into the theory. A recent poll of more than 2,000 working artificial intelligence engineers and researchers by AI Impacts put the [median] risk of human extinction by AI at only five percent.”

If a technology has "only" a five percent chance of causing human extinction, that's unacceptably high. Would you feel safe boarding an airplane with a five percent chance of crashing? Even more concerning was the average risk from that same poll: "Mean responses indicated an even higher risk, suggesting a nearly one-in-six (16 percent) chance of catastrophic outcomes—the same odds as dying in a game of Russian roulette." The possibility of AI being catastrophically harmful must be taken extremely seriously, not because we are confident that it will occur, but because the consequences of being wrong are too disastrous to leave to chance.

You don't have to be an expert to see the fundamental concern. Creating tools that match or exceed humans in intelligence (and are therefore able to deceive humans and hack infrastructure), and then asking those tools to autonomously achieve real-world objectives, can easily lead to unintended consequences, such as an AI seizing control of servers to avoid being turned off. AI expert Max Tegmark argues that "the real risk with artificial intelligence isn't malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

It's not hard to imagine a world in which a profit-seeking AI holds an entire nation's critical infrastructure hostage in a ransomware attack, much as human hackers have already done to hospitals. Whether or not AIs would pursue this avenue depends on how well we can align them with human values, a complex technical problem for which AI researchers and even AI companies agree we do not yet have a solution. As Stephen Hawking put it, ignoring the possibility that AI could be catastrophic for humanity "would be a mistake, and potentially our worst mistake ever."

AI is not yet an extinction risk. The third most common theme among respondents who disagreed with prioritizing AI as an extinction risk was that AI won't become extremely powerful or exceed human capabilities soon enough to be worth worrying about. Yann LeCun, a professor at New York University and Meta's chief AI scientist, told journalists at the World Economic Forum's annual meeting in Davos earlier this year that "asking for regulations because of fear of superhuman intelligence is like asking for regulation of transatlantic flights at near the speed of sound in 1925."

AI does not currently exceed human intelligence across the board, but GPT-4 is already capable of feats that look strikingly like intelligence, such as passing a simulated bar exam with a score near the top 10 percent of test takers. Today's models are limited in their competency and ability to perform complex tasks, as described in a May 2023 paper by a team of researchers who reported that "recent studies have shown that large language models can perform poorly on planning and reasoning tasks." However, the team also noted that "the pace of development and deployment are still rapidly increasing, not decreasing. We anticipate that current limitations and barriers will be surpassed or ignored in the pressure to deploy."

AI capabilities have improved at an exponential rate over the past decade, and the industry has attracted hundreds of billions of dollars in investment. Especially concerning is the possibility that AI capabilities will themselves speed up AI development, creating a positive feedback loop that could rapidly, and unexpectedly, accelerate the pace of technological progress. This isn't merely a distant possibility but a real goal of companies like Google DeepMind, which has already developed sophisticated AI code-writing systems such as AlphaCode, a system that performed at roughly the level of the median human competitor in coding competitions in 2022.

That same year, a survey of thousands of AI researchers found that the median estimate for when AI would have at least a 50 percent chance of outperforming humans at every possible task was 2060. A year later, that timeline had dropped to 2047. There is now broad expert consensus that human-level artificial intelligence is probable within the lifetimes of most people alive today. Expert predictions should be used to alert us to potential future threats, just as the climate movement has rallied in response to scientists' warnings about carbon dioxide emissions. The world's collective failure to heed those warnings offers a painful lesson that should not be repeated. Regardless of the exact timeline, the hard problem of how to properly regulate and control this extremely powerful technology (both technically and politically) must be tackled now, rather than waiting until we are faced with a crisis.

That's why I support the proposal by the grassroots movement PauseAI to pause the development of "frontier" AI, the most highly capable general-purpose models, at its current state, so that citizens and governments can develop comprehensive regulation that adequately addresses both present and future harms. Doing so would restrain tech companies from exacerbating existing harms and introducing new ones, and it would give the world's governments a chance to collectively stop the tech industry from moving so fast that it breaks society.


