California Governor Newsom Signs First-in-Nation AI Safety Disclosure Law

Sacramento, CA – Sept 30, 2025
Marking a turning point in the regulation of artificial intelligence (AI), Governor Gavin Newsom has signed SB 53 into law, establishing the nation’s first comprehensive safety disclosure requirements for frontier AI systems. Called the Transparency in Frontier Artificial Intelligence Act, the law requires large AI developers to disclose their safety procedures and report serious failures of their AI systems — making it one of the earliest AI-specific safety regulations enacted in the U.S.
A Hands-On Approach to AI Safety
The new law applies to AI developers with more than $500 million in annual revenue, targeting some of the largest firms in the industry. These companies must adopt and disclose safety procedures that limit, where appropriate given their technology’s capabilities, the potential harm from frontier AI systems, especially those that could be misused to:
- Assist malicious actors
- Create dangerous substances
- Carry out other unsafe actions
Governor Newsom emphasized the balance between innovation and safety:
“The law balances two goals: encouraging technological innovation and ensuring public safety. California has long been a leader in technology and innovation, and there’s no reason why we can’t continue that tradition while also ensuring that our future is clean, sustainable and just.”
Key Provisions of the Law
SB 53 introduces multiple measures to increase transparency and accountability in the AI industry:
1. Safety Disclosures
- Developers must detail how their AI systems comply with safety rules, industry best practices, and recognized national standards.
- The goal is to provide researchers, policymakers, and the public with insights into how companies address AI risks.
2. Critical Incident Reporting
- Companies must report critical safety incidents to California regulators within 15 days.
- This allows officials to respond effectively and swiftly to emerging threats.
3. Whistleblower Protections
- Employees reporting safety concerns or infractions are protected from retaliation.
- Encourages a culture of accountability and risk reporting within AI organizations.
4. Public Research Computing Initiative
- Establishes CalCompute, a state-backed public computing cluster to support AI research and development.
- Aims to democratize AI innovation and reduce concentration of power in a few major tech companies.
Learning from Past Efforts
SB 53 follows Governor Newsom’s veto of a similar bill, SB 1047, last year, which faced criticism for being overly restrictive and potentially stifling innovation.
The updated bill incorporates input from AI experts, industry leaders, and policymakers, striking a balance between regulation and technological progress. The law aims to foster practical safety measures without hindering industry growth.
Industry Reactions
Reactions to the law have been mixed:
- Supportive companies: Praise the legislation as a step toward harmonized AI safety norms. Clear disclosure requirements and reporting mechanisms can help build public trust and prevent misuse of powerful AI systems.
- Concerned companies: Express worry about navigating state-level regulation, as differing laws across states could complicate national and global operations. Some advocate for a federal framework to maintain consistency and reduce regulatory fragmentation.
Implications for the Future
California’s AI safety disclosure law positions the state as a global leader in regulating emerging technology. SB 53 sets explicit expectations for transparency and accountability, potentially influencing other states and countries in shaping their own AI regulations.
The law reflects the growing recognition that AI is not just an innovation tool but a technology with significant societal risks, particularly in sectors like healthcare, finance, and transportation.
Schydlo, UK Public Sector Research Director for AI at Accenture, explains:
“It can have social and scientific benefits — not just by improving cross-collaboration between researchers and helping people understand what’s going on, but also by encouraging companies to investigate hazards themselves.”
By fostering transparency and providing a safety net for whistleblowers, California is creating an environment that responsibly nurtures AI innovation.
Challenges and Opportunities
Challenges:
- Determining what qualifies as a “critical incident”
- Establishing robust reporting systems
- Ensuring deployed AI systems meet high safety standards
- Monitoring compliance and guiding companies through new requirements
Opportunities:
- Access to state-provided computational infrastructure can level the playing field for researchers, startups, and smaller AI developers
- Creates room for responsible experimentation in safer AI development
- Strengthens California’s role as a global leader in AI research, development, and policy
Governor Newsom’s Vision
Governor Newsom emphasizes that the law is part of a broader strategy to maintain California’s leadership in technology and public safety.
- Focuses on public-private collaboration, transparency, and forward-looking policy
- Ensures AI development aligns with social values and safety priorities
- Mandates disclosure and reporting to curtail misuse, minimize unintended harm, and incentivize industry accountability
Conclusion
By signing SB 53, California became the first state to enact comprehensive AI safety disclosure legislation. The law establishes a model of transparency, accountability, and innovation, demonstrating the state’s commitment to proactive AI governance.
As AI technology evolves, other regions may follow California’s lead, balancing innovation with safety. The law highlights the importance of forward-thinking governance, public engagement, and partnerships among regulators, industry, and researchers.
In the fast-moving world of artificial intelligence, California is positioning itself not only as a hub for innovation but also as an exemplar for responsible, ethical, and safe AI development.



