California has once again pushed an ambitious AI safety bill to the forefront of debate in the form of SB 53. Its broader predecessor, SB 1047, was vetoed by Governor Gavin Newsom last year on the grounds that it applied highly “stringent standards to even the most basic functions.” The significantly amended new draft was expected to address those concerns about rigidity and the narrow scope it left for innovation. While the revised bill, passed by the Senate, appears to have been reshaped around expert recommendations and industry input, its fate is far from guaranteed.
As the bill awaits Newsom’s approval, critics warn that it could still burden startups and create conflicts with emerging federal and international frameworks. So, let’s examine what the newly passed bill is about, why it is so controversial, and how businesses should prepare for what comes next.
Breaking Down SB 53
Supporters of the bill argue that California, much like it did with privacy, can lead the nation by setting clear rules for AI safety and transparency. Senator Scott Wiener’s (D-San Francisco) bill is said to expand disclosure obligations to smaller developers while retaining its core provisions, including strong AI safety requirements and public investment in AI infrastructure.
- Frontier AI Models: To align with the recommendations of Governor Newsom’s Joint Policy Working Group on Frontier AI Models, the bill requires companies with less than $500 million in annual revenue to disclose basic, high-level details about their safety testing. Businesses above that threshold must file more detailed reports.
- Safety Disclosures: The bill simplifies and clarifies reporting obligations, making them easier to follow. Companies will need to provide meaningful detail about how they test their models and what safeguards are in place, with the emphasis on practical, usable disclosures rather than exhaustive documentation.
- Oversight Structure: The Attorney General no longer has the authority to issue regulations adjusting definitions. Instead, the California Department of Technology (CDT) will produce an annual report recommending changes to the legislature.
- Accountability: The bill requires developers to revisit their safety frameworks regularly, reviewing and updating them at least once a year. This acknowledges that AI is not static and that new risks emerge as models evolve.
- Whistleblower Protections: The bill also retains protections for whistleblowers at AI labs who disclose significant risks. These protections apply regardless of model size.
Split Opinions Around California’s AI Bill
When Governor Newsom vetoed SB 1047 in September 2024, he praised the bill’s intent but argued that it failed to acknowledge nuances, such as whether an “AI system is deployed in high-risk environments” involving critical decision-making and the use of sensitive data. The new draft was therefore expected to be more risk-based and deployment-sensitive. With that understanding, it becomes clear why SB 53 might not be out of the woods yet.
- Risk of Prompting a Federal Revisit of the Moratorium: The proposed 10-year moratorium on state AI regulation was stripped out by the U.S. Senate earlier this year. However, if SB 53 is still viewed as particularly aggressive, the federal administration might be prompted to revisit the ban. In other words, overreach in the California bill could provoke a reaction that limits the regulatory space across other states as well.
- Focus on “Large Developers”: The bill’s revenue and operational-size thresholds for AI models remain among its most controversial aspects. Business groups like the California Chamber of Commerce and TechNet have pointed out that limiting regulation to big firms may ignore smaller teams building advanced models that could still cause serious harm. This was also one of the objections that led to the veto of SB 1047.
- Burdens on Startups & Open-Source Developers: Compliance costs present another challenge to the bill’s approval. Startups and open-source developers lack the resources of large tech firms, yet they may be subject to some of the same reporting and testing demands. Investors like Andreessen Horowitz argue that such administrative overhead does not meaningfully improve safety but can make it harder for smaller actors to compete.
- Commerce Clause Risks: Critics also warn that the bill must contend with concerns about burdening interstate commerce. If California’s bill is seen as dictating how AI is developed or deployed outside its borders, it could invite litigation or federal intervention.
- National Competitiveness: Another pressure point comes from fears that California regulation could undermine U.S. competitiveness in the global AI race, particularly against China. Both Silicon Valley investors and the Trump administration have argued that strict state rules could raise costs, slow innovation, and even push companies to relocate. Since California is home to many of the world’s leading AI firms, Newsom will have to weigh carefully whether SB 53 strengthens the state’s leadership position or risks driving innovation elsewhere.
Preparing for Uncertainty
For businesses, now is the time for close monitoring and preparation. Startups, in particular, should assess what documentation they can reasonably generate without derailing innovation, while larger players should plan for the likelihood of tiered reporting obligations. Legal and compliance teams should also track federal movement on AI regulation, since overlapping or conflicting standards are a real possibility. Even after passage by the Senate, the future of SB 53 remains uncertain given Governor Newsom’s veto history; his stated preference for nuanced, risk-based rules means that approval is far from guaranteed.
Balancing Innovation and Safety
The takeaway from California’s case is that AI governance regulations must strike a balance between mitigating the risks of AI systems and preserving the flexibility for companies to experiment, scale, and compete globally. The bottom line for businesses, however, is clear: proactive risk management and transparent practices are the justified costs of building trust in the age of AI. Companies that adopt clear internal safeguards, document their safety measures, and stay ahead of disclosure requirements will be better positioned to navigate not only California’s evolving rules but also the broader wave of AI governance. Viewed that way, complying with the law’s mandates is less a burden than an investment in credibility, resilience, and long-term competitiveness.