On Tuesday, in a one-sentence statement, industry professionals issued yet another warning regarding the dangers posed by artificial intelligence. The statement, which read “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” was signed by OpenAI CEO Sam Altman; Demis Hassabis, chief executive of Google DeepMind; Geoffrey Hinton, emeritus professor of computer science at the University of Toronto (often called a godfather of AI); and more than 350 researchers, executives, and other professionals.
In March, more than 1,000 researchers and tech leaders signed an open letter urging AI labs to pause the training of systems more powerful than GPT-4 for six months, citing “profound risks to society and humanity.”
Since the release of OpenAI’s ChatGPT last November, there has been growing concern about large language and image models. The concerns range from demonstrable effects—such as spreading misinformation and disinformation, amplifying biases and inequities, copyright issues, plagiarism, and influencing politics—to more hypothetical, science-fictional possibilities, such as the systems developing human-like capabilities and using them for malign ends.
The latter concerns are often floated by those creating the technology, which raises the question: Why release, and continue to improve, a technology that is cause for such grave fears? Artificial intelligence isn’t a natural disaster, like a tsunami, over which humans have little control. If AI is causing existential worry, then maybe it’s time to put the brakes on.
Or perhaps the voices that are the loudest in this arena are not the ones describing the technology’s current abilities with the most clarity and transparency.
In response to Tuesday’s statement, Emily Bender, director of the Professional MS Program in Computational Linguistics (CLMS) at the University of Washington, tweeted: “When the AI bros scream ‘Look a monster!’ to distract everyone from their practices (data theft, profligate energy usage, scaling of biases, pollution of the information ecosystem), we should make like Scooby-Doo and remove their mask.”
Large language models might be fun to manipulate, but they aren’t very good at innovating. They predict text based on pattern analysis—not based on actual understanding or knowledge—and therefore quite often produce content that contains errors. That their output sometimes sounds authoritative does not mean their falsehoods should be believed. It’s a rule that could also be profitably applied to some human communication.
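That “prediction from patterns” can be made concrete with a deliberately crude sketch: a bigram model that simply counts which word follows which in a training text and always emits the most frequent follower. (The corpus, function names, and structure here are invented for illustration; real large language models use neural networks over tokens and billions of parameters, but the principle—continuing text from statistical patterns rather than from understanding—is the same.)

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which: a toy stand-in for the
    pattern analysis a language model performs at vastly larger scale."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical training text, chosen only for illustration.
corpus = "the moon is made of rock and the moon orbits the earth"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "moon" — the most frequent follower
```

Nothing in this model knows what a moon is; it only knows what tends to come next. Fluency, at any scale, is not the same thing as knowledge.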