As artificial intelligence continues to weave itself into every corner of modern life, regulators are racing to keep up. Nowhere is this tension more pronounced than in California, where the state’s privacy watchdog—the California Privacy Protection Agency (CPPA)—has sparked a legal and political showdown. At the heart of the debate: whether the CPPA is overstepping its mandate by pursuing aggressive AI rulemaking under the banner of protecting consumer privacy.
Recent developments signal significant revisions ahead for the CPPA’s draft regulations on automated decision-making technology (ADMT). Meanwhile, California legislators are sounding the alarm, warning that the agency may be encroaching on legislative authority. What follows is a breakdown of the key issues, current status, and what organizations should be watching for next.
In an unusual development, a group of California legislators has formally challenged the CPPA’s proposed ADMT regulations. Their central argument? The agency is stretching its statutory authority under the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) to regulate areas the Legislature never intended it to reach.
This confrontation raises important questions about the limits of agency rulemaking in the AI space, especially when legislation lags behind technological change.
Faced with mounting criticism from industry stakeholders and legislators alike, the CPPA has taken a step back. In March 2025, the agency revised its proposed ADMT regulations, trimming down several key provisions and signaling that further revisions are likely.
Rather than finalizing the rules, the CPPA has signaled an intent to publish a new version for public comment. This move acknowledges the complexity of regulating AI while trying to maintain consumer rights protections.
Although the CPPA has pulled back some of its most controversial provisions, significant elements of the proposed regulations remain. The rules continue to cover the use of technology that makes or materially assists in decisions that could significantly impact consumers.
These retained requirements indicate that the CPPA is not abandoning its AI governance ambitions—it’s merely recalibrating.
The clash in California offers a preview of what other states and federal regulators may soon face: how to responsibly regulate powerful and evolving AI technologies without smothering innovation. It also illustrates the political and legal limits of administrative authority in shaping the future of tech policy.
The CPPA’s attempt to expand its regulatory authority over AI and automated decision-making has ignited a broader debate about the appropriate roles of agencies versus legislatures. While consumer protection is vital in the age of AI, so too are clarity and restraint in rulemaking. The CPPA’s decision to revise and revisit its draft ADMT rules is a welcome development for many—but it’s not the end of the story.
Despite the turbulence, remarkable consistency is emerging across regulations and governing bodies, including the new presidential administration. Common threads include an emphasis on the overlap between AI and privacy, use case-based risk assessment, adequate consent, adherence to existing laws, and transparent communication of AI usage and governance efforts. Organizations using AI should prepare for a regulatory environment that is both dynamic and demanding. Transparency, accountability, and flexibility will be key pillars for navigating what comes next in California—and likely beyond.