California AI bill becomes a lightning rod—for safety advocates and developers alike

By Owen J. Daniels | June 17, 2024

Illustration by Erik English; Tarik Gok, TAW4 via Adobe

The California State Senate passed a bill last month to regulate the development and training of advanced, cutting-edge AI models, aiming to ensure they can’t be used by bad actors for nefarious purposes. The passage of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act is generating uproar among developers, many of whom run their operations out of the state and argue that it could critically stifle innovation. Supporters of the bill, however, believe the rapidly evolving technology requires guardrails and that the legislation is a reasonable—if limited and imperfect—first step toward legally enshrining safety best practices.

State Sen. Scott Wiener introduced SB 1047 in the California Senate in February, and the bill is now being amended in the Assembly. June amendments clarified several clauses that had created definitional confusion but also introduced language that essentially guarantees the law would apply primarily to the largest AI model developers, including Anthropic, Google DeepMind, OpenAI, and Meta, among others. The bill faces an Assembly vote in August that will determine whether it becomes law.

At its core, SB 1047 aims to hold developers of large-scale AI models legally accountable for providing reasonable assurances that their models do not have hazardous capabilities that could cause critical harms. The bill’s definition of critical harms includes the creation of chemical, biological, radiological, or nuclear weapons that could lead to mass casualties; cyberattacks on critical infrastructure that cause at least $500,000,000 in damages; or actions by an autonomous AI model that cause the same level of damages, harm humans, or result in theft, property damage, or other threats to public safety. To avoid such harms, developers must be able to fully shut down their models, effectively building in kill switches.

The proposed legislation applies to AI models “trained using a quantity of computing power greater than 10^26 integer or floating-point operations [FLOPS], and the cost of that quantity of computing power would exceed one hundred million dollars ($100,000,000).” In other words, the bill primarily targets the largest future AI models (likely larger than the majority of even today’s advanced models) built by companies that can afford some of the costliest model training. It uses the same metric for training computing power (10^26 FLOPS) referenced in the Biden Administration’s AI Executive Order to delineate models covered by the bill. Under the law, unless a model receives an exemption, its developers would need to submit an annual certification of model compliance, and to report AI safety incidents involving their models, to a newly created Frontier Model Division within California’s Department of Technology.
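To make the dual threshold concrete, here is a minimal sketch in Python of the coverage test as the bill text quoted above describes it. The numerical thresholds come from that quoted language; the function and variable names are illustrative assumptions, not anything drawn from the bill or an official compliance tool.

```python
# Toy sketch of SB 1047's coverage test as quoted in this article:
# a model is covered if its training run exceeds BOTH the compute
# threshold and the compute-cost threshold. Names here are illustrative.

COMPUTE_THRESHOLD_FLOP = 1e26        # integer or floating-point operations
COST_THRESHOLD_USD = 100_000_000     # cost of that computing power, in dollars

def is_covered_model(training_flop: float, training_cost_usd: float) -> bool:
    """Return True if a training run exceeds both thresholds in the quoted bill text."""
    return (training_flop > COMPUTE_THRESHOLD_FLOP
            and training_cost_usd > COST_THRESHOLD_USD)

# Hypothetical examples: a frontier-scale run versus a smaller open model.
print(is_covered_model(training_flop=3e26, training_cost_usd=250_000_000))  # True
print(is_covered_model(training_flop=4e25, training_cost_usd=60_000_000))   # False
```

Because both conditions must hold, a highly efficient model trained below the dollar threshold, or a cheap run below the compute threshold, would fall outside the bill as written.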

The debate around Senate Bill 1047 highlights an important point: Regulators will need to walk a fine line to craft legislation that protects against the future risks of frontier models—loosely defined as general-purpose AI at the cutting edge of advancement—without negatively affecting innovation. How the development and potential enactment of the bill plays out may foreshadow wider regulatory debates.

Pros and cons. SB 1047 was always likely to receive scrutiny for trying to regulate global leaders in AI on their home turf, and California’s regulation could influence other state or federal policies in a year when over 600 AI bills are reportedly being considered in the United States. That said, from a policy perspective, SB 1047 does not necessarily read as a controversial attempt to rein in big tech at the expense of AI advancement. It requires developers to take reasonable steps to prevent their models from causing societal damage, such as basic safety testing and other (nascent) industry best practices. It does not impose strict liability on developers, meaning developers are legally liable for damages caused by their models only if they fail to adopt precautionary measures or if they commit perjury in reporting model capabilities or AI incidents. Developers will not be punished for paperwork accidents. Nor will they be penalized if a model causes damage, provided the developer made a good-faith effort to report its risks to the Frontier Model Division.

Prominent voices in the AI safety community have offered support, though some argue that the bill does not go far enough to protect the public from AI risks and is merely a basic first step toward mitigating large-scale risks, since it would simply enforce consistent safety practices across large AI developers who are presently left to police themselves. AI safety advocates insist that AI firms are no different from other industries in requiring regulation.

Developers, however, are concerned that the bill could stifle innovation by limiting open foundation model development and fine-tuning (though few, if any, of today’s open models would be affected by the bill). Foundation models are large-scale, general-purpose models trained on huge datasets that can perform many different tasks (think generative AI), while open models are those whose weights (algorithmic parameters) can be adjusted by any user with the expertise and resources to produce different model outputs. Open-model development draws on contributions from many users to improve performance and uncover security flaws.

Proponents of open models worry that developers could be held legally responsible when third parties adjust the weights of their models in ways that result in harms, and argue that the bill would threaten the beneficial open-model ecosystem. While this could indeed be true for AI systems covered under the proposed legislation, given the size and capabilities of the foundation models involved and the harm levels laid out in the bill, caution in releasing and modifying the weights of such powerful models is likely warranted. Limiting the opportunities for malicious actors, including state actors, to manipulate powerful models in politically destructive or socially harmful ways is important. Critics also worry that developers cannot reasonably anticipate or guard against every harmful misuse of their models, and that it is unreasonable to expect them to do so. Some also argue that state regulations could clash with eventual mandatory federal regulatory frameworks and create process burdens for AI firms.

Perhaps unsurprisingly, SB 1047’s opponents include TechNet, a network of technology companies that counts Anthropic, Apple, Google, Meta, and OpenAI among its members, as well as the California Chamber of Commerce, the Civil Justice Association of California, and the California Manufacturers and Technology Association. That said, the number of companies likely to be affected by the legislation in the short term is small, and its near-term chilling effects on innovation seem limited. Much of the private sector’s concern about the bill is future-oriented and relates to the regulatory burden smaller companies might one day bear (since smaller companies are unlikely at present to have access to the training funding or compute specified in the bill).

The challenge of creating agile policy. An under-discussed aspect of the debate around the bill is what it could mean to move from a voluntary, self-regulatory approach to a mandated approach to model safety for tech companies. At the federal level, AI regulation to date has relied on a combination of so-called soft law mechanisms, which entail voluntary compliance on the part of companies with legally non-binding policies, as well as agency guidelines for using AI. As such, when it comes to big-picture questions around AI safety, voluntary compliance and good-faith commitments to responsible practices on the part of AI developers have largely been the name of the game in the United States. See, for example, President Biden’s summit with tech executives at the White House, or the National Institute of Standards and Technology’s publicly available and voluntary AI risk management framework.

Soft law mechanisms can be a sensible approach when government capacity in areas like resourcing, human capital, and expertise is a limiting factor. Congresspeople and their staffers face a pacing problem: Regulatory and ethical frameworks often struggle to keep up with advancements in emerging technology because of their complexity and the technical expertise they demand, and AI often strikes at politically sensitive and socially complex issues. Members of the U.S. Congress themselves have acknowledged that they and their colleagues can lack understanding of artificial intelligence and have struggled to reach consensus on the most pertinent types of risks to address, leading to slower rollouts of AI regulatory initiatives. Federal agencies are introducing guidance for AI or indicating how existing policies may cover applications in specific sectors; this approach offers the advantage of not needing to cut new AI regulations from whole cloth, but it is, generally speaking, a narrower regulatory approach.

As SB 1047 represents a move from a soft law approach to a legal liability approach, the industry’s oppositional response could portend similar challenges to such a transition at the federal level. Major AI developers have already begun to implement safety practices and reporting in model development to varying degrees and largely seem intent on avoiding major AI risks, yet they have still opposed the California legislation. It is possible that tech firms would have supported differently framed safety regulation and simply found this bill’s particular approach unfavorable. Nonetheless, walking the tightrope between soft and hard policies for AI regulation, particularly in the face of private sector resistance, appears to remain difficult for policymakers.

The episode also highlights the difficulty of creating agile, forward-looking legislation. SB 1047 attempts to keep the gap between technology development, public adoption, and policy from widening by using looser definitions that do not circumscribe risks through an overly prescriptive approach to risk mitigation. Yet definitional uncertainty has allowed the bill’s opponents to speculate, sometimes incorrectly, that the bill will be harshly enforced to the detriment of business. Perhaps as a result, a clause that originally would have covered more efficient future models performing as well as models trained on 10^26 FLOPS in 2024 was removed in the latest round of amendments. This decreases the likelihood that the bill’s safety requirements would apply to small businesses, startups, or academic researchers who might eventually develop and train powerful models with less compute and at lower training costs than major companies, thanks to future improvements in algorithmic efficiency.

Ultimately, SB 1047 is likely to remain a lightning rod for some AI safety supporters and industry developers alike as it works through legislative channels. Although the bill was drafted to keep from becoming obsolete, it generated pushback that led to changes which may shorten its useful life. As wider regulatory conversations take place at the federal level and across the United States, would-be regulators should take note.

