
Who Sets the Rules for AI? California Makes Its Move

California Governor Gavin Newsom’s recent executive order tightens the state’s rules for AI. The move appears to be a direct challenge to the direction recently outlined by the White House and is likely to have a cascading effect on AI regulation across U.S. states. The order prioritizes enforceable standards for AI safety, accountability, and procurement over voluntary frameworks, requiring companies that do business with the state to meet a higher bar for safety and responsibility. It is also likely to re-energize waning counter-pressure on the federal government from the states, sharpening the structural tension over who defines AI rules in the U.S.

The executive order will almost certainly draw a challenge from the White House, and California’s move may push the federal government either to assert preemption or to formalize stricter nationwide rules. In either case, businesses need to be prepared for what’s coming.

California’s AI Playbook

California regulations often become de facto national standards, and early signals suggest other states may adopt similar guardrails, especially where federal direction remains ambiguous. Here’s what Governor Newsom’s executive order brings:

  • Stricter vendor requirements: The order raises the baseline for who qualifies to do business with the state by embedding AI governance directly into procurement. This means companies can no longer rely solely on technical performance or pricing; they must demonstrate structured policies around safety, security, accountability, and internal oversight.
  • Risk and misuse safeguards: Vendors are expected to proactively show that their systems are resilient against harm through testing and controls. This includes mitigating risks like biased outputs, data leakage, adversarial misuse, and security vulnerabilities.
  • Transparency expectations: California is pushing back against opaque (“black box”) AI by requiring vendors to explain how their systems function, how they are evaluated, and what limitations they carry. This could include disclosures around training data practices, testing methodologies, known risks, and mitigation strategies (see the illustrative disclosure sketch after this list). The broader goal is to build trust and accountability, ensuring that the state, and by extension the public, understands the systems being deployed rather than blindly relying on vendor claims.
  • Procurement leverage: Rather than regulating AI across the board, California is using its position as a massive buyer to enforce compliance indirectly. If a company wants access to state contracts, it must align with these guardrails, making procurement a powerful enforcement mechanism. This approach allows California to shape industry behavior without waiting for comprehensive legislation.
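To make the transparency bullet concrete, here is a minimal Python sketch of the kind of structured disclosure record a vendor might maintain. The field names and the to_report helper are illustrative assumptions on our part, not a schema taken from the executive order.

    from dataclasses import dataclass

    # Illustrative sketch only: the field names below are assumptions,
    # not the executive order's actual disclosure schema.
    @dataclass
    class ModelDisclosure:
        """The kinds of facts a transparency requirement might ask a
        vendor to document for a deployed AI system."""
        system_name: str
        intended_use: str
        training_data_summary: str     # data provenance and collection practices
        evaluation_methods: list[str]  # how the system was tested
        known_limitations: list[str]   # failure modes and out-of-scope uses
        risk_mitigations: list[str]    # controls for bias, leakage, and misuse

        def to_report(self) -> str:
            """Render the disclosure as a plain-text summary for reviewers."""
            lines = [
                f"System: {self.system_name}",
                f"Intended use: {self.intended_use}",
                f"Training data: {self.training_data_summary}",
            ]
            lines += [f"Evaluated via: {m}" for m in self.evaluation_methods]
            lines += [f"Known limitation: {k}" for k in self.known_limitations]
            lines += [f"Mitigation: {m}" for m in self.risk_mitigations]
            return "\n".join(lines)

A record like this can serve double duty: the same document that satisfies a state reviewer also gives internal teams a single source of truth about what the system does and where it breaks.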

Preparing for the New AI Regulatory Reality

Businesses are in a difficult position: they must balance innovation speed against rising compliance expectations, often without clear, unified rules. But despite the uncertainty between state and federal directions, companies cannot afford to wait. Here’s how they should prepare:

  • Build formal AI governance early: Companies should move quickly to establish internal structures like AI review boards, documented risk frameworks, and escalation protocols. This is no longer optional: regulators are signaling that governance maturity will be a prerequisite for market access. Getting this in place early also reduces future retrofit costs when stricter rules inevitably arrive.
  • Shift from speed to risk-aware development: AI development needs to evolve from “ship fast” to “anticipate failure.” Businesses should embed risk assessments, misuse testing, and safety checks directly into product design and deployment cycles, treating bias, security, and abuse scenarios as core engineering problems rather than post-launch fixes (see the bias-gate sketch after this list).
  • Balance transparency with IP protection: Firms must start preparing for increased disclosure expectations while carefully managing proprietary information. This may involve developing standardized ways to explain models, risks, and safeguards without exposing sensitive IP, essentially learning how to be transparent without losing competitive advantage.
  • Standardize for scale, not just compliance: Even if rules currently apply to specific markets like California, businesses should consider adopting these standards globally. Fragmented compliance is inefficient, and aligning early with stricter regimes can create consistency across products, teams, and geographies.
  • Use compliance as a competitive differentiator: Companies that can demonstrate strong safety, accountability, and transparency practices will likely gain trust with governments and enterprises. Instead of treating regulation purely as a burden, businesses can position responsible AI as a selling point in an increasingly scrutiny-driven market.
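As one way to act on the risk-aware development point above, here is a minimal sketch of a bias gate wired into an automated test suite. The audit data, the demographic-parity metric, and the 0.25 threshold are all illustrative assumptions, not values prescribed by California; a real gate would use the organization’s own metrics and audit sets.

    # A minimal pre-deployment bias gate, runnable with pytest or on its own.
    # All data and thresholds below are hypothetical illustrations.

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rates across groups."""
        by_group = {}
        for pred, group in zip(predictions, groups):
            by_group.setdefault(group, []).append(pred)
        rates = sorted(sum(p) / len(p) for p in by_group.values())
        return rates[-1] - rates[0]

    def test_bias_gate():
        # Hypothetical audit set: model outputs (1 = approved) per group.
        predictions = [1, 0, 1, 1, 1, 0, 1, 0]
        groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
        gap = demographic_parity_gap(predictions, groups)
        # Fail the build if disparity exceeds an internally chosen bound.
        assert gap <= 0.25, f"parity gap {gap:.2f} exceeds policy threshold"

    if __name__ == "__main__":
        test_bias_gate()
        print("bias gate passed")

The design choice that matters is where the check lives: inside the test suite, so a regression in fairness blocks a release the same way a failing unit test would.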

The AI Tug-of-War

California’s latest move is less about any single procurement rule than about who sets the future of AI governance in the U.S. By tying market access to accountability, the state is shaping industry behavior in real time, regardless of federal pace or preference. This creates short-term friction and uncertainty, but also a clear direction of travel.

For businesses, the takeaway is simple. The regulatory landscape may still be fragmented, but the standard is already rising. Those who prepare early won’t just stay compliant but will be better positioned to compete in the market.


Author

Dan Clarke
President, Truyo
April 15, 2026

Let Truyo Be Your Guide Towards Safer AI Adoption

Connect with us today