
Why California Gov. Gavin Newsom should sign an AI model regulation bill

By Anthony Aguirre | September 27, 2024

US President Joe Biden (L) and California Governor Gavin Newsom at an event discussing the opportunities and risks of artificial intelligence at the Fairmont Hotel in San Francisco, California in June 2023. (Photo by ANDREW CABALLERO-REYNOLDS/AFP via Getty Images)

Editor’s note: On Sunday, two days after the publication of this article, Governor Newsom vetoed California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, indicating he did “not believe this is the best approach to protecting the public from real threats posed by the technology.”

Seven years ago, my colleagues and I convened hundreds of the world’s foremost experts to explore how to realize the best possible futures with artificial intelligence. Guests included leaders from the largest AI corporations, including Google, Meta, and OpenAI. At a meeting on the Monterey Peninsula, where a groundbreaking conference on the regulation of genetic research was held in 1975, they all committed to 23 “Asilomar Principles”—rules they deemed critical for keeping AI safe and beneficial. One of the principles reads:

“Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.”

California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), currently awaiting Governor Newsom’s signature, embodies this promise. It requires that developers of the most powerful AI systems write and follow safety and security protocols to prevent critical harm. If they don’t, they can be held liable. By focusing only on the most capable (and therefore dangerous) AI systems, the bill aims to prevent the worst harms while leaving room for smaller developers to innovate freely. It balances the need to keep people safe with granting the freedom to prosper—the goal of all laws to some degree. This balance is crucial to “having our cake” with AI.

This balance is also why the bill is so popular. It swept through California’s Senate 32-1, and its Assembly 49-16. The world’s foremost AI experts are behind it, lauding it as light-touch legislation that will “protect consumers and innovation.” Preeminent technologists have expressed their support, calling it a “good compromise.” World leaders have praised it as a step towards urgently needed AI governance. Crucially for Newsom, 59 percent of Californians support it (including 64 percent of tech workers), with only 20 percent against.

Yet Google, Meta, and OpenAI have spoken out against SB 1047. Why? They continue to warn about the enormous risks of ungoverned AI—“lights out for all of us,” says OpenAI’s Sam Altman. By OpenAI’s own admission, their latest model has a “medium” risk of enabling the creation of bioweapons. They have restated their commitment to safety. They have repeatedly called to be regulated. But when concrete legislation is proposed that merely codifies their existing promises, they cry overreach. What gives?

Perhaps these companies object to the bill’s other provisions, like those that protect whistleblowers who speak out about irresponsible and unscrupulous corporate behavior. These brave individuals are absolutely critical to delivering accountability. Or perhaps companies fear legal action if, under competitive pressures, they cut corners to rush products to market.

The underlying explanation is simpler—Big Tech is gonna Big Tech. The leaders of large tech companies resist any actual constraints, in their unwavering belief that they always know best. Calling for regulation in general and then lobbying furiously against specific laws is straight out of their playbook: just look at data privacy or social media.

But their resistance is precisely why we need lawmakers to step in. In the heat of AI’s frantic arms race, companies may not keep their word. Voluntary commitments will never be enough when the stakes are this high. When private companies are making decisions about so many lives and livelihoods, the public must have a say. In the fight against climate change, Governor Newsom has shown the leadership and foresight to combat escalating threats in the face of intense corporate pressure. By not caving to Big Tech now, he can help keep tech leaders honest, and the public safe.

For AI to be sustainable, it must be safe. As with any transformative technology, the risks imperil the benefits. Beyond the massive harm that an AI-enabled catastrophe would cause—be it bioweapons, cyberattacks, or a financial crash—the subsequent shuttering of the industry would deny millions the incredible benefits AI could bring about. By signing SB 1047, Newsom can help prevent those catastrophes and preserve future benefits. He can set a global standard for sensible AI regulation and help safeguard our future with it.


