Analysis

Another warning from industry leaders on dangers posed by AI

By Sara Goudarzi, May 30, 2023

On Tuesday, in a one-sentence statement, industry professionals issued yet another warning regarding the dangers posed by artificial intelligence. The statement, which read “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” was signed by OpenAI CEO Sam Altman; Demis Hassabis, chief executive of Google DeepMind; Geoffrey Hinton, professor emeritus of computer science at the University of Toronto (also known as a godfather of AI); and more than 350 researchers, executives, and other professionals.

In March, more than 1,000 researchers and tech leaders signed an open letter urging AI labs to pause the training of systems more powerful than GPT-4 for six months, citing “profound risks to society and humanity.”

Since the release of OpenAI’s ChatGPT last November, there’s been growing concern about large language and image models. The concerns range from obvious effects—such as spreading misinformation and disinformation, amplifying biases and inequities, copyright issues, plagiarism, and influencing politics—to more hypothetical, science fictionish possibilities, such as the systems developing human-like capabilities and using them for malign ends.

The latter concerns are often floated by those creating the technology, which raises the question: Why release, and continue to improve, a tech that is cause for such grave fears? Artificial intelligence isn’t a natural disaster, like a tsunami, over which humans have little control. If AI is causing existential worry, then maybe it’s time to put the brakes on.

Or perhaps the voices that are the loudest in this arena are not the ones describing the technology’s current abilities with the most clarity and transparency.

In response to today’s statement, Emily Bender, director of the Professional MS Program in Computational Linguistics (CLMS) at the University of Washington, tweeted: “When the AI bros scream ‘Look a monster!’ to distract everyone from their practices (data theft, profligate energy usage, scaling of biases, pollution of the information ecosystem), we should make like Scooby-Doo and remove their mask.”

Large language models might be fun to manipulate, but they aren’t very good at innovating. They predict text based on pattern analysis—not based on actual understanding or knowledge—and therefore quite often produce content that contains errors. That their output sometimes sounds authoritative does not mean their falsehoods should be believed. It’s a rule that could also be profitably applied to some human communication.
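To make the “pattern analysis, not understanding” point concrete, here is a minimal, hypothetical sketch of next-token prediction: a toy bigram model that simply repeats the continuation it has seen most often in its training text. The corpus and code are invented for illustration only; real large language models use neural networks trained on vastly larger corpora, but the statistical principle is the same.

    from collections import Counter, defaultdict

    # Toy "training corpus": the model only ever sees these patterns.
    corpus = (
        "the moon is made of rock . "
        "the moon is made of cheese . "
        "the moon is made of cheese . "
    ).split()

    # Count which word follows which (bigram statistics).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the statistically most likely continuation; there is no notion of truth here.
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    # The model confidently completes "of" with "cheese", because that pattern
    # dominated its training data: fluent, authoritative-sounding, and wrong.
    print(predict_next("of"))

Fluency here tracks frequency rather than fact, which is why authoritative-sounding output from far larger statistical models still needs verification.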


Comments

  • Having trained and used ANNs, including LLMs, for years, I can say there are multiple “moguls” (suboptimal local minima of the cost function) where the technology gets trapped while searching for solutions to queries that approximate minimum cost or reliable responses. The search process can at times lead to interesting results, but by and large they are mostly spurious. The character of the training data can also lead to erroneous conclusions. If the training gets stuck in a suboptimal cost mogul, the resulting query responses may also be poor. When the training is expanded, different results may arise depending on whether or not an optimal cost can be reached via search as the neural parameters change (see the sketch after these comments).

    Moreover, how the algorithm arrives at its results is very poorly understood, because it requires interpreting how millions of two-parameter neurons (weights and biases) change over time and space, depending on the size of the network and the extent of the training data. With such poor explanation capability, it is unknown how reliable the results are versus a crapshoot.

    Using tensors and graph matching for training and queries only serves to complicate and add subterfuge to the results. Large-scale graph matching is known to be next to impossible to perform exactly (it is an NP-hard problem, with no known polynomial-time algorithm).

    Consequently, the risk that AI will cause human extinction is low unless people believe what are mostly false or untrustworthy results and interpret them as truths. It is really a battle between the somewhat random behavior of the algorithms and the gullible mind, the real human neural net. After all, it is not really known how real neurons work and how responses are achieved. We can at times provide an explanation for our reasoning, which is missing from ANN models.

  • Honestly. The barn door cannot be closed after the horses have broken free. I'm not, and never have been, a fan of AI. Didn't any of these people ever read R.U.R. by Karel Capek?

  • I am not competent in this field, but it seems to me that this situation is already out of control. How might brakes be applied, if research is ongoing in many nations around the world? It's like trying to tell people around the world to be nice, when people are by nature competitive and ambivalently directed, and when some are angry.
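The first comment's point about training getting trapped in suboptimal cost “moguls” (local minima) can be illustrated with a minimal sketch: plain gradient descent on a small non-convex cost function. The function, learning rate, and starting points are assumptions chosen purely for illustration; depending on where the search starts, it settles in a different basin.

    def cost(w):
        # A simple non-convex "loss landscape" (invented for illustration) with two basins:
        # a shallow local minimum near w = -1 and a deeper one near w = 2.
        return (w + 1) ** 2 * (w - 2) ** 2 - 0.5 * w

    def grad(w, eps=1e-6):
        # Numerical derivative of the cost.
        return (cost(w + eps) - cost(w - eps)) / (2 * eps)

    def descend(w, lr=0.01, steps=2000):
        # Plain gradient descent: follow the slope downhill from the starting point.
        for _ in range(steps):
            w -= lr * grad(w)
        return w

    # Two starting points, two different answers: the run that starts on the left
    # settles in the shallower basin and never finds the better solution.
    for start in (-2.0, 3.0):
        w = descend(start)
        print(f"start={start:+.1f}  ->  w={w:+.3f}  cost={cost(w):+.3f}")

Real network training operates in millions or billions of dimensions rather than one, but the underlying issue the commenter describes, that where the optimization ends up depends on where and how it searches, is the same.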