
California Senator Introduces Bill Requiring AI Safety Reporting With SB 53

California State Senator Scott Wiener discussing the SB 53 AI safety bill in the state assembly. Image credit: medial.app

On Wednesday, California State Senator Scott Wiener introduced updated amendments to his bill, SB 53, which would require the world's largest AI companies to publish their safety protocols and to report incidents in which their products pose potential harm.

The proposed law would be the first of its kind in the country, requiring transparency from some of the foremost AI developers (such as OpenAI, Google, Anthropic, and xAI), as defined by the bill.

Wiener’s earlier AI bill, SB 1047, contained similar provisions: developers would have been required to share safety documentation. But that bill met fierce resistance from Silicon Valley and was vetoed by Gov. Gavin Newsom. In response, Newsom asked AI leaders, including Stanford researcher and AI lab co-founder Fei-Fei Li, to form a policy working group to advise the state on how best to keep AI safe.

That group of AI policy experts recently released its final recommendations, which emphasized operating in a “robust and transparent evidence environment” and called for leading AI developers to disclose certain information about their systems. The latest changes to SB 53 were closely modeled on this report, according to a press release from Wiener’s office.

“This bill is in no way perfect, and I welcome feedback on how to make the legislation stronger, but with the future of science under attack, we must put forward the best, strongest bill possible,” Wiener said in the release.

SB 53 seeks to strike the balance that Governor Newsom criticized SB 1047 for missing: it would impose transparency requirements on AI developers without putting California’s fast-growing AI industry out of commission.

“While I think there will be a general agreement (at least on the organized labor side) that we can only trust AI decision making in decisions that have general human consensus, and can’t be trusted on divisive issues, there will be inevitable disagreements on where to draw those lines,” Nathan Calvin, VP of State Affairs at nonprofit AI safety group Encode told TechCrunch.

“These are things that other organizations, my organization in particular, have been talking about for a little while. Forcing companies to tell the public and the government what they’re doing to mitigate these risks is the least we should do — and it’s fair.”

The bill would also offer whistleblower protections to employees of AI labs who believe their company’s technology poses a “serious risk to society,” defined as contributing to the death or injury of more than 100 people, or to financial damage greater than $1 billion.

SB 53 also includes a plan to establish CalCompute, a public cloud computing cluster that would provide computing power to startups and researchers who lack the means to build large AI systems of their own.

Unlike SB 1047, this bill does not impose legal liability on AI model developers for damages resulting from their systems. SB 53 is also designed not to burden smaller AI startups and researchers who build upon or use open-source models released by the major AI companies.

With the new amendments in place, SB 53 now heads to the California State Assembly’s Privacy and Consumer Protection Committee. If approved there, the bill would still need to clear several more legislative hurdles before landing on Governor Newsom’s desk.

Elsewhere in the country, New York Governor Kathy Hochul is weighing a similar AI safety bill, dubbed the RAISE Act, which would also require large AI developers to publish safety reports.

The future of state-level AI regulation along the lines of SB 53 and the RAISE Act briefly looked uncertain after federal legislators considered a 10-year moratorium on state AI legislation. That proposal, intended to spare companies from navigating a patchwork of regulations, was rejected in the Senate by a 99-1 vote in early July.

“Making sure [AI] is developed safely and is used for good shouldn’t be a controversial decision—it should be a fundamental part of our work,” said Y Combinator president Geoff Ralston in a statement forwarded to TechCrunch.
“Congress should be leading the way and it should be calling for transparency and accountability from companies developing preeminent models. But until there’s real leadership from the federal government, states must lead the way. California’s SB 53 is a smart and reasonable model for state actions.”

To date, lawmakers have struggled to win AI companies over to state-mandated transparency rules. While Anthropic has tended to favor greater transparency, and has indicated that it’s open to the California AI policy group’s recommendations, OpenAI, Google, and Meta have leaned in the opposite direction.

Leading AI developers do occasionally publish safety reports, but they have been less consistent about it of late. Google, for example, did not publish a safety report for Gemini 2.5 Pro, its most advanced model, until after the technology was already on the market. OpenAI likewise declined to publish a safety report for GPT-4.1; a third-party study later suggested that the model might be less aligned than its predecessors.

While SB 53 is a watered-down version of the AI safety bills that came before it, it could still force companies to disclose more information than they currently do. For now, all eyes are on Senator Wiener, who is once again testing how far state regulation can reach in the rapidly changing realm of AI governance.



Prabal Raverkar
I'm Prabal Raverkar, an AI enthusiast with strong expertise in artificial intelligence and mobile app development. I founded AI Latest Byte to share the latest updates, trends, and insights in AI and emerging tech. The goal is simple — to help users stay informed, inspired, and ahead in today’s fast-moving digital world.