California Lawmakers Approve AI Safety Bill SB 53 — But Governor Newsom’s Signature Is No Sure Thing

California lawmakers have passed a wide-reaching artificial intelligence safety bill, SB 53, clearing its final votes and sending it to Governor Gavin Newsom for his signature, setting up what could prove to be a high-stakes fight over how the nation’s tech capital treats AI. Supporters describe it as a landmark step to shield the public from powerful algorithms, while critics call it a potential innovation-stifling albatross for startups.
A Landmark Vote in Sacramento
Following months of committee hearings and last-minute amendments, the California Legislature voted in early September to pass SB 53.
The bill would require companies developing advanced AI systems, particularly those capable of making autonomous decisions in key sectors like finance, health care, and public safety, to undergo robust safety testing and demonstrate that their products are safe before they are sold.
Key Provisions
- Pre-Deployment Testing: AI developers must test their systems for potential harms such as bias, privacy risks, and the prospect of malicious misuse.
- Independent Audits: Third-party experts would conduct annual audits of high-risk AI tools to ensure they meet safety standards.
- Transparency Requirements: Companies must disclose data sources, training and testing methods, and any weak spots in their models.
- Enforcement and Penalties: The state attorney general would be empowered to fine companies or obtain injunctions against them for noncompliance.
During floor debate, the bill’s author, State Senator Scott Wiener, said:
“California has an obligation to lead on AI safety just as we have on climate change and consumer privacy. We can’t wait for the next tragedy to act.”
Supporters See a Guardrail for a Fast-Moving Industry
Supporters of SB 53 contend that AI systems are evolving too rapidly for voluntary guidelines alone to suffice. Examples of generative AI producing misleading medical advice, discriminatory hiring recommendations, or fabricated legal citations were cited as evidence of the need for enforceable rules.
The bill has been supported by civil rights groups, consumer protection advocates, and some tech workers.
“This bill establishes a floor of accountability,” said Maritza Lopez, policy analyst at the nonprofit Digital Fairness Coalition. “We need ways to make sure these powerful systems are tested and transparent before they affect millions of people.”
Several large labor unions also backed the initiative, citing concerns that unregulated AI could automate jobs without giving workers protections or safety nets.
Industry Pushback: Innovation at Risk?
Not everyone is celebrating. Silicon Valley’s influence in California is formidable, and its tech industry has pushed back hard against SB 53, cautioning that it could stymie innovation and push AI research outside the state.
Industry groups say that the definitions of “high-risk AI” in the bill are so broad as to potentially capture everything from chatbots to recommendation engines.
“This bill, while well-meaning, could lead to a patchwork of state regulations that hold startups back,” said Maria Chen, spokesperson for the California Technology Council. “We want responsible AI, but the compliance costs and legal uncertainty may drive away startups.”
A number of the world’s leading artificial intelligence companies—not just those in Silicon Valley but also in other tech hubs—wrote letters asking legislators to slow down. They contend that federal standards, not piecemeal state rules, would be clearer for companies and keep the United States competitive in the global AI race.
Newsom’s Pivotal Decision
Now all eyes are on Governor Gavin Newsom, who has cast himself as both a tech ally and an advocate for consumer safeguards. In recent months, Newsom has described AI as “one of the most transformative technologies of our time” and announced a state-funded partnership with major universities to conduct AI research. But he has also expressed concern about “the profound risks of unregulated AI.”
SB 53 is pending final action by the governor, who has two weeks to sign or veto it. His decision is far from a foregone conclusion.
Newsom has previously vetoed technology-related bills that he felt could chill economic growth or create regulatory conflicts. For example, he vetoed a 2022 bill that would have imposed strict privacy requirements on digital advertising, saying it duplicated federal efforts.
Political observers note that the governor faces a delicate balancing act. Signing SB 53 could burnish his national credentials on tech accountability and elevate his stature beyond California. But a veto would align him with the state’s influential tech lobby and business community, key constituencies for California’s economy.
National and Global Implications
The stakes extend beyond California. The state is home to many of the world’s leading AI companies, from Silicon Valley giants to cutting-edge research labs and startups.
If SB 53 becomes law, it could effectively establish a de facto national standard, as companies often adopt California’s regulations to avoid maintaining separate systems.
Legal scholars point out that California’s privacy law, the California Consumer Privacy Act (CCPA), served as a model for other states and influenced federal proposals. SB 53 might trigger a similar ripple effect at a time when Congress has stalled on passing comprehensive AI legislation.
Internationally, the bill shares similarities with aspects of the European Union’s AI Act, which introduces risk-based rules and hefty penalties for noncompliance. Some analysts believe California’s decision may encourage greater transatlantic cooperation on AI governance.
Public Opinion and Next Steps
Polling suggests a desire for at least some form of AI regulation. Recent surveys show a majority of Americans support government action to prevent AI misuse, particularly in critical areas like health care, policing, and elections. Yet voters disagree on how strict those rules should be and whether they should come from federal or state governments.
- If Newsom signs SB 53, state agencies would have one year to draft detailed regulations such as safety standards and audit procedures.
- Companies developing high-risk AI systems would then have another year to meet the requirements.
- Enforcement would begin in 2027, giving companies a few years to adjust.
If the governor vetoes the bill, lawmakers may try to bring it up again next session, possibly with changes aimed at alleviating industry concerns. Advocacy groups are already signaling they will keep pushing for strong safeguards.
A Moment of Decision
The approval of SB 53 highlights California’s dual identity as the birthplace of technological innovation and a testing ground for progressive regulation.
Over the next few weeks, we’ll learn whether the state chooses to impose some of the country’s strictest AI safety regulations—or whether economic interests will outweigh calls for caution.
As the AI revolution sweeps across the economy, the debate in Sacramento serves as a test case for governments everywhere: how to both promote and control transformative technology when few precedents exist.



