While We Wait: How State Privacy Laws Already Govern AI Use Cases


The recent memorandum from Connecticut Attorney General William Tong is a significant development for U.S. states working out AI governance requirements while still in a tug-of-war with the federal posture. The memorandum extends the application of existing privacy and consumer protection laws (among others) to AI systems, signaling a pragmatic alternative: rather than waiting for new legislation, regulators can apply existing laws and standards to AI-driven systems today. 

With the future of comprehensive AI legislation uncertain, and only a few states, such as California and Colorado, moving toward formal AI frameworks, most states lack standalone AI statutes. Existing privacy laws, however, give these states real leverage over some of the more concerning AI use cases. Below, we examine which AI use cases existing state privacy laws can reach and how businesses should prepare for them. 

Privacy Laws Governing AI 

Businesses increasingly rely on third-party or in-house AI tools that process large volumes of personal data. Even where states struggle to control and audit how these tools are deployed and operated, privacy laws can still govern how those tools use that data. 

Large Language Models (LLMs)
Large language models are fundamentally data systems. If personal data is used in training or fine-tuning, state privacy laws can trigger notice, purpose limitation, and data minimization obligations. User prompts may themselves constitute personal data, especially in enterprise or customer-facing deployments. Retaining prompts for model improvement or repurposing them for secondary uses can raise consent and transparency issues.  

Automated Decision-Making Systems (ADM)
Many state privacy laws regulate automated processing that produces legal or similarly significant effects, such as credit approvals, housing eligibility, or access to services. Where AI systems make or materially influence such decisions, individuals may have opt-out rights or rights to meaningful information about the logic involved. Some states also require risk or data protection assessments for high-risk processing.  

Profiling for Personalization
AI systems that analyze behavior to generate targeted ads, pricing strategies, or personalized recommendations often rely on profiling. State privacy laws increasingly regulate behavioral targeting and the creation of inferences about individuals, especially when sensitive data is involved. If personalization crosses into decisions with significant effects, additional rights may attach. 

Third-Party AI Vendors
Organizations that procure external AI solutions cannot outsource compliance. Under most state privacy laws, businesses remain responsible as controllers for ensuring that processors or service providers use personal data only for specified purposes. Contracts must restrict secondary use, mandate appropriate safeguards, and align with data minimization principles. If a vendor uses data to further train its own models or commingles it across clients, that may conflict with state requirements. Vendor governance therefore becomes central to AI compliance. 

AI in Healthcare and Insurance Underwriting
AI systems that assess medical risk, determine coverage eligibility, or influence underwriting decisions frequently process sensitive personal data. State privacy laws typically impose stricter requirements for sensitive data, including explicit consent in some jurisdictions. When automated systems influence access to healthcare or insurance, they may trigger profiling-related rights and scrutiny under anti-discrimination laws. 

Aligning AI Governance with State Privacy Laws 

To stay compliant with these privacy laws while advancing their AI initiatives, businesses must recognize that their AI systems function as extensions of their data processing systems. The following practices help align AI governance with data privacy obligations. 

  • Map AI Data Flows: Identify what personal data enters the system (training data, prompts, outputs), where it is stored, who accesses it, and whether it is reused. AI inventories should align with existing data mapping requirements under state privacy laws. 
  • Conduct Risk / Impact Assessments: For high-risk processing such as profiling or automated decisions with significant effects, perform documented assessments evaluating necessity, proportionality, bias risk, and consumer rights implications. 
  • Strengthen Vendor Contracts: Ensure third-party AI providers are contractually restricted from secondary use, model training beyond agreed purposes, or data commingling. Clearly define controller vs. processor roles. 
  • Apply Data Minimization & Purpose Limitation: Limit AI systems to only the data necessary for defined business purposes and prevent repurposing without proper notice or consent. 
  • Enhance Transparency & Consumer Rights Workflows: Update privacy notices to reflect AI-driven processing and ensure mechanisms exist to respond to access, deletion, correction, and opt-out requests. 

Instead of Waiting for the AI Act 

Connecticut’s memorandum reinforces a simple but powerful reality: AI does not exist outside the law. Even in the absence of comprehensive AI statutes, state privacy and consumer protection laws already create enforceable boundaries around how AI systems collect, process, infer, and act on personal data. For businesses, this shifts the focus from waiting on future AI Acts to strengthening present-day data governance. Organizations that treat their AI systems as data processing systems will be far better positioned as enforcement intensifies. 

Platforms like Truyo, which offer data privacy management with AI governance in the same platform, can help operationalize these controls across both traditional data processing and emerging AI systems. In this environment, proactive AI governance is not just a best practice but a strategic necessity. 


Author

Dan Clarke
President, Truyo
March 5, 2026

Let Truyo Be Your Guide Towards Safer AI Adoption

Connect with us today