Governor Gavin Newsom has signed into law SB 53, a landmark statute governing the development of artificial intelligence in California, home to major players in the field: Google, Meta, OpenAI and Anthropic.
This new law comes a year after the governor vetoed a previous bill backed by Democratic Senator Scott Wiener. The original proposal had deeply divided Silicon Valley, with some industry leaders warning that it risked stifling innovation at the outset of the “AI revolution.”
The initial bill would have required companies spending more than $100 million on their AI models to undergo annual third-party audits, with penalties reaching into the hundreds of millions of dollars.
Key Provisions of SB 53
The enacted version of the law imposes transparency obligations on companies developing the most advanced AI models.
Companies generating more than $500 million in revenue must now:
- Publicly disclose their safety protocols
- Assess the risks that their technology could slip from human control or facilitate the development of biological weapons
- Report serious incidents within 15 days
- Disclose deceptive dangerous behaviors detected during testing (for example, if a model conceals the ineffectiveness of safeguards meant to prevent weapon fabrication)
- Protect whistleblowers
Massive Investments and Growing Concerns
The law arrives as tens of billions of dollars flood into Silicon Valley for AI development, even as concerns about potential abuses of the most advanced models intensify.
In June, a think tank comprising renowned experts, including Fei-Fei Li of Stanford University, warned that “the very reports from leading AI companies reveal troubling progress across all threat categories.”
Unlike the 2024 initial bill, SB 53 does not require annual third-party auditors to review risk assessments. That requirement, present in the earlier text, had been a major point of industry opposition.
Even before enactment, several California AI giants had already committed to voluntary safety testing and establishing robust protocols. SB 53 codifies and extends these commitments. Jack Clark, cofounder of Anthropic, described the measure as “a solid framework that balances public safety with ongoing innovation.”
The new California law follows the Trump administration’s failed attempt to block state AI regulation, arguing it would slow American competitiveness against China. The U.S. Senate rejected that bid by 99 votes to 1.
Gavin Newsom presents SB 53 as a template for a federal framework in the absence of nationwide legislation. However, the tech industry and some lawmakers advocate for a unified federal framework to preempt state laws enacted in California, Colorado and New York. Republican Representative Jay Obernolte is currently working on federal legislation that could preempt certain state laws.
SB 53 carries more limited enforcement powers than the rejected 2024 proposal. It provides fines of up to $1 million per violation, significantly less than the previous bill’s potential penalties in the hundreds of millions.
Comparative Assessment: SB 53 (California) vs AI Act (European Union)
| Criterion | SB 53 (California) | AI Act (European Union) |
|---|---|---|
| Scope | Large AI companies (annual revenue > $500M) | All AI systems deployed or marketed in the EU, by risk level (minimal, limited, high, forbidden) |
| Main objective | Transparency and risk management for advanced models (extreme risks: loss of control, biological weapons, etc.) | Comprehensive governance of AI by use case: protection of fundamental rights, safety, health, environment |
| Transparency | Obligation to publish safety protocols | Transmission of safety protocols to authorities (not necessarily public) |
| Incident reporting | Requirement to report serious incidents within 15 days | Obligation to notify national supervisory authorities in cases of serious incidents |
| Deceptive behaviors | Mandatory disclosure if a model bypasses safeguards (e.g., concealing dangerous capabilities) | No specific provision on model “deception,” but testing and compliance assessments are required |
| Whistleblower protection | Explicit protection for employees who report flaws | Not explicitly specified in the AI Act |
| Sanctions | Fines up to $1M per violation | Fines up to €35 million or 7% of global turnover (depending on the violation) |
| Economic thresholds | Primarily targets large tech companies | Applicable to all firms, with obligations scaled to the risk level of the systems |
| Timeline | Promulgated in September 2025, immediate effect | Phase-in beginning 2024-2026, depending on risk categories |
| Policy scope | State-led initiative, in the absence of U.S. federal regulation | Supranational regulation, uniform across EU member states |