Artificial Intelligence

California Safe and Secure Innovation for Frontier AI Models Act Foreshadows Legislative Landscape

The trajectory of AI innovation and adoption is unprecedented, and the legislative landscape is striving to keep up. The recent passage of California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) is a testament to the growing urgency of addressing these challenges. The bill, passed by the California State Assembly on August 28, 2024, and now awaiting signature by Governor Newsom, sets out to regulate the development and deployment of high-capability AI models. Because such models have the potential to produce both revolutionary advancements and novel threats, the legislation seeks to strike a balance between fostering innovation and ensuring public safety.

SB 1047 is one of the first significant legislative efforts in the United States aimed at regulating AI on such a comprehensive scale. Its primary goal is to impose stringent safety, testing, and enforcement standards on developers of large AI models, especially models capable of enabling catastrophic harms such as large-scale cyberattacks or the creation of weapons of mass destruction. The bill has sparked intense debate, particularly within Silicon Valley, as it imposes new responsibilities on AI companies and threatens to alter the landscape of AI development in California and beyond.

Who Will Be Affected by the Bill? 

The bill specifically targets developers of “covered models,” which are defined as AI models that surpass certain thresholds in computing power and cost. These thresholds are significant, reflecting the bill’s focus on the most powerful AI models: 

  • Pre-2027 Definition: Covered models are those trained using computing power exceeding 10^26 floating-point operations (FLOP) with development costs over $100 million, or models fine-tuned using computing power of at least 3 × 10^25 FLOP at a cost over $10 million. 
  • Post-2027 Definition: The cost thresholds remain the same, adjusted for inflation, while the computing-power threshold will be set by California’s Government Operations Agency. (A simple check against the pre-2027 thresholds is sketched below.) 
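
To make these thresholds concrete, here is a minimal Python sketch of the pre-2027 covered-model test. The function names and structure are illustrative assumptions for this article, not language from the bill:

    # Hypothetical helpers illustrating the pre-2027 thresholds described above.
    def is_covered_model(training_flop: float, training_cost_usd: float) -> bool:
        """A model trained from scratch is covered above both thresholds."""
        return training_flop > 1e26 and training_cost_usd > 100_000_000

    def is_covered_fine_tune(fine_tune_flop: float, fine_tune_cost_usd: float) -> bool:
        """A fine-tuned model is covered at 3e25 FLOP and over $10 million in cost."""
        return fine_tune_flop >= 3e25 and fine_tune_cost_usd > 10_000_000

    # Example: a frontier-scale run of 2e26 FLOP costing $150 million is covered.
    print(is_covered_model(2e26, 150_000_000))  # True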

This narrow definition ensures that the bill primarily impacts the largest AI companies, such as OpenAI and Anthropic, rather than smaller startups or academic research. However, it also means that as AI capabilities advance, more models may fall under these regulations. Notably, the bill’s requirements apply to any AI developers offering their services in California, regardless of where they are headquartered.  

Key Requirements and Safety Measures 

SB 1047 outlines several critical safety and compliance requirements for developers of covered models, aiming to prevent the misuse of powerful AI technologies: 

  • Shutdown Capabilities: Developers must implement the ability to promptly shut down covered models if necessary, halting all operations, including training. However, the bill does not specify what constitutes “prompt” action. 
  • Safety Assessments and Testing: Before deployment or public release, developers must conduct thorough safety assessments to identify potential critical harms, such as mass casualties or damages exceeding $500 million. Test results must be recorded and retained, and appropriate safeguards must be implemented. 
  • Computer Cluster Policies: Operators of computing clusters meeting certain thresholds must have policies in place to monitor and control the use of resources by customers training covered models. This includes the capability to promptly shut down operations if necessary. 
  • Auditing and Reporting: Starting in 2026, covered AI developers must undergo annual independent audits to ensure compliance, with redacted audit reports made public. Unredacted versions must be provided to the California Attorney General (AG) upon request. Developers are also required to report safety incidents within 72 hours; the shutdown and reporting obligations are sketched after this list. 
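
To give a sense of how the shutdown and 72-hour reporting duties might translate into engineering practice, the following Python sketch is a loose illustration; CoveredModelController, full_shutdown, and report_deadline are hypothetical names, not terms from the bill:

    from datetime import datetime, timedelta, timezone

    REPORTING_WINDOW = timedelta(hours=72)  # incident-reporting window in the bill

    class CoveredModelController:
        """Hypothetical control plane for a covered model's workloads."""
        def __init__(self, jobs):
            self.jobs = jobs  # handles to training and inference workloads

        def full_shutdown(self) -> None:
            """Halt all operations of the model, including training runs."""
            for job in self.jobs:
                job.stop()  # assumes each workload handle exposes a stop() hook

    def report_deadline(incident_time: datetime) -> datetime:
        """Latest time a safety incident must be reported."""
        return incident_time + REPORTING_WINDOW

    # Example: an incident observed now must be reported within 72 hours.
    print(report_deadline(datetime.now(timezone.utc)))

Because the bill does not define “prompt,” how quickly a full shutdown must complete is a judgment call each developer would need to document.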

These provisions are intended to provide robust oversight and accountability, but they also introduce new compliance costs and operational complexities for AI companies. 

Enforcement and Amendments 

The bill grants the California Attorney General substantial enforcement authority, including the power to bring civil actions against companies that violate the bill’s provisions. Violations resulting in significant harm, such as death, bodily harm, or substantial property damage, can lead to civil penalties capped at 10% of the cost of computing power used to train the covered model. 
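
As a worked example of that cap, with hypothetical figures rather than numbers from the bill:

    # Illustrative arithmetic only: the cap is 10% of the cost of the
    # computing power used to train the covered model.
    def penalty_cap(training_compute_cost_usd: float, cap_rate: float = 0.10) -> float:
        return training_compute_cost_usd * cap_rate

    # Example: $200 million of training compute implies a $20 million cap.
    print(f"${penalty_cap(200_000_000):,.0f}")  # $20,000,000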

Key amendments to the bill have refined its focus and reduced the potential burden on AI developers: 

  • Exclusion of Criminal Penalties: The legislation was amended to remove criminal penalties, focusing instead on civil penalties where actual harm or imminent threats exist. 
  • Whistleblower Protections: The bill includes protections for employees who report noncompliance or dangers associated with AI models. 
  • Focus on Catastrophic Risks: Amendments have clarified the bill’s focus on catastrophic harms, such as those affecting public safety, rather than more general or speculative risks. 

Industry Reaction and Broader Implications

The passage of SB 1047 has provoked mixed reactions within the AI industry. Supporters, including industry leaders like Elon Musk, argue that the bill is necessary to ensure the safe development of AI technologies. However, others, such as Meta’s chief AI scientist Yann LeCun and numerous open-source advocates, express concerns that the legislation could stifle innovation, particularly for smaller companies and open-source projects. Critics argue that the stringent requirements could deter investment in AI startups, complicate the open-source ecosystem, and push developers to relocate outside of California. 

Furthermore, the bill’s global implications cannot be ignored. As California often leads the way in tech regulation, SB 1047 may set a precedent that influences AI governance beyond state lines. This could lead to a patchwork of state-level regulations or prompt a push for federal legislation, as some lawmakers, including Rep. Nancy Pelosi, have suggested. 

A New Bill for a New Age  

California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act represents a significant step in the regulation of AI technologies. By imposing rigorous safety, testing, and enforcement standards on developers of high-capability AI models, the bill seeks to mitigate the risks associated with the most powerful AI technologies. While it has sparked considerable debate about the balance between innovation and regulation, SB 1047 underscores the urgent need for thoughtful governance in the rapidly evolving AI landscape. As AI continues to advance, legislation like this may play a crucial role in shaping a safer and more responsible future for AI development. 


Author

Dan Clarke
President, Truyo
September 4, 2024