The authoritative guide to ensuring science and technology make life on Earth better, not worse.

Another warning from industry leaders on dangers posed by AI

By Sara Goudarzi | May 30, 2023

Large Language Models. Credit: Unsplash/Tim West

On Tuesday, in a one-sentence statement, industry professionals issued yet another warning regarding the dangers posed by artificial intelligence. The statement, which read "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," was signed by OpenAI CEO Sam Altman; Demis Hassabis, chief executive of Google DeepMind; Geoffrey Hinton, emeritus professor of computer science at the University of Toronto (also known as a godfather of AI); and more than 350 other researchers, executives, and professionals.

In March, more than 1,000 researchers and tech leaders signed an open letter urging AI labs to pause the training of systems more powerful than GPT-4 for six months, citing "profound risks to society and humanity."

Since the release of OpenAI's ChatGPT last November, there's been growing concern about large language and image models. The concerns range from obvious effects—such as spreading misinformation and disinformation, amplifying biases and inequities, copyright issues, plagiarism, and influencing politics—to more hypothetical, science-fiction-like possibilities, such as the systems developing human-like capabilities and using them for malign ends.

The latter concerns are often floated by those creating the technology, which raises the question: Why release, and continue to improve, a technology that is cause for such grave fears? Artificial intelligence isn't a natural disaster, like a tsunami, over which humans have little control. If AI is causing existential worry, then maybe it's time to put the brakes on.

Or perhaps the voices that are the loudest in this arena are not the ones describing the technology’s current abilities with the most clarity and transparency.

RELATED:
Apathy and hyperbole cloud the real risks of AI bioweapons

In response to today’s statement, Emily Bender, director of the Professional MS Program in Computational Linguistics (CLMS) at the University of Washington, tweeted: “When the AI bros scream ‘Look a monster!’ to distract everyone from their practices (data theft, profligate energy usage, scaling of biases, pollution of the information ecosystem), we should make like Scooby-Doo and remove their mask.”

Large language models might be fun to manipulate, but they aren’t very good at innovating. They predict text based on pattern analysis—not based on actual understanding or knowledge—and therefore quite often produce content that contains errors. That their output sometimes sounds authoritative does not mean their falsehoods should be believed. It’s a rule that could also be profitably applied to some human communication.
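The point that these models predict text from statistical patterns rather than understanding can be made concrete with a toy sketch. The miniature "bigram" model below (the corpus, function names, and code are illustrative inventions, not from the article or any real system) picks each next word purely by how often it followed the previous word in its training text—so a false continuation that is statistically common comes out just as fluently as a true one:

```python
import random
from collections import defaultdict

# Tiny illustrative training corpus (invented for this sketch).
corpus = ("the moon is made of rock the moon is made of cheese "
          "the moon orbits the earth").split()

# Record which words follow each word in the corpus.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a word that followed the
    previous one in training—pattern matching, with no notion of truth."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation; stop
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 6))
```

Depending on the sampling, this model is as happy to produce "the moon is made of cheese" as "the moon is made of rock"; the statistics alone cannot tell the two apart, which is the mechanism behind fluent-sounding falsehoods.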



3 Comments
Jerry Burman
1 year ago

Having trained and used ANNs for years, including LLMs, I've found there are multiple moguls (local minima) where the technology gets trapped in the search for minimum-cost or reliable responses to queries. The search process can at times lead to interesting results, but by and large they are mostly spurious. The character of the training data can also lead to erroneous conclusions. If the training gets stuck in a suboptimal cost mogul, the resulting query responses may also be poor. When the training is expanded, different results may arise depending on whether or not optimal cost can be obtained via search…
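The commenter's "moguls" are what optimization texts call local minima, and the effect is easy to demonstrate numerically. In the sketch below (the cost function and starting points are invented purely for illustration), plain gradient descent on a surface with two basins settles into whichever minimum its starting point falls toward—so a run that starts in the shallower basin ends with a worse cost even though training "converged":

```python
def cost(x):
    # An invented cost surface with two basins:
    # a deeper minimum near x = -1 and a shallower one near x = +1.
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    # Derivative of cost(x).
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent from starting point x."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-0.8)   # starts in the deeper basin
right = descend(0.8)   # starts in the shallower basin
print(cost(left), cost(right))  # the right-hand run ends at higher cost
```

Both runs reach a point where the gradient is essentially zero, yet only one finds the better minimum—the numeric analogue of training that "gets stuck in a suboptimal cost mogul."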

Ravens
1 year ago

Honestly, the barn door cannot be closed after the horses have broken free. I'm not, and never have been, a fan of AI. Didn't any of these nerds ever read R.U.R. by Karel Čapek?

Donald Maclean
1 year ago

I am not competent in this field, but it seems to me that this situation is already out of control. How might brakes be applied, when research is ongoing in many nations around the world? It's like trying to tell people around the world to be nice, when people are by nature competitive and ambivalently directed, and when some are angry.