The recent memorandum by Connecticut Attorney General William Tong is a significant development for U.S. states trying to establish AI governance requirements amid an unsettled federal posture. The memorandum extends the application of existing privacy and consumer protection laws (among others) to AI systems. This signals a pragmatic alternative: rather than waiting for new legislation, regulators can apply existing laws and standards to AI-driven systems today.
With the future of comprehensive AI laws uncertain and only a few states, such as California and Colorado, moving toward more formal AI frameworks, many states are left without standalone AI statutes. By applying existing privacy laws, these states can still reach some of the more concerning AI use cases. Below, we examine which AI use cases existing state privacy laws can cover and how businesses should prepare for them.
Businesses increasingly rely on third-party or in-house AI tools that process large volumes of personal user data. While states find it difficult to audit the deployment and operation of these tools directly, privacy laws can still govern how those tools use personal data.
Large Language Models (LLMs)
Large language models are fundamentally data systems. If personal data is used in training or fine-tuning, state privacy laws can trigger notice, purpose limitation, and data minimization obligations. User prompts may themselves constitute personal data, especially in enterprise or customer-facing deployments. Retaining prompts for model improvement or repurposing them for secondary uses can raise consent and transparency issues.
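The data minimization concern around retained prompts can be illustrated with a minimal sketch. The function name and regex patterns below are hypothetical and are no substitute for a dedicated PII-detection service; they simply show the idea of redacting common personal identifiers before a prompt is stored for model improvement.

```python
import re

# Hypothetical patterns for two common identifier types. A production
# system would use a dedicated PII-detection tool, not regexes alone.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace common personal identifiers before a prompt is retained."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com or 555-123-4567."))
```

Redacting at the point of retention, rather than after the fact, keeps the stored corpus aligned with the purpose for which the prompt was originally collected.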
Automated Decision-Making Systems (ADM)
Many state privacy laws regulate automated processing that produces legal or similarly significant effects, such as credit approvals, housing eligibility, or access to services. Where AI systems make or materially influence such decisions, individuals may have opt-out rights or rights to meaningful information about the logic involved. Some states also require risk or data protection assessments for high-risk processing.
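One way an opt-out right might be honored in practice is to route opted-out individuals to human review rather than letting the model decide. The sketch below is illustrative only: the `Applicant` record, field names, and score threshold are assumptions for the example, not anything a statute prescribes.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    user_id: str
    opted_out_of_adm: bool  # has the individual opted out of automated decisions?

def decide(applicant: Applicant, model_score: float) -> str:
    # Honor the opt-out: never auto-deny or auto-approve an opted-out user.
    if applicant.opted_out_of_adm:
        return "human_review"
    # Otherwise apply the (illustrative) automated threshold.
    return "approved" if model_score >= 0.7 else "denied"
```

Logging which path each decision took also supports the "meaningful information about the logic involved" that some state laws require.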
Profiling for Personalization
AI systems that analyze behavior to generate targeted ads, pricing strategies, or personalized recommendations often rely on profiling. State privacy laws increasingly regulate behavioral targeting and the creation of inferences about individuals, especially when sensitive data is involved. If personalization crosses into decisions with significant effects, additional rights may attach.
Third-Party AI Vendors
Organizations that procure external AI solutions cannot outsource compliance. Under most state privacy laws, businesses remain responsible as controllers for ensuring that processors or service providers use personal data only for specified purposes. Contracts must restrict secondary use, mandate appropriate safeguards, and align with data minimization principles. If a vendor uses data to further train its own models or commingles it across clients, that may conflict with state requirements. Vendor governance therefore becomes central to AI compliance.
AI in Healthcare and Insurance Underwriting
AI systems that assess medical risk, determine coverage eligibility, or influence underwriting decisions frequently process sensitive personal data. State privacy laws typically impose stricter requirements for sensitive data, including explicit consent in some jurisdictions. When automated systems influence access to healthcare or insurance, they may trigger profiling-related rights and scrutiny under anti-discrimination laws.
To comply with these privacy laws while advancing their AI initiatives, businesses will need to identify where their AI systems function as extensions of their data processing systems. That mapping is the foundation of privacy-compliant AI governance.
Connecticut’s memorandum reinforces a simple but powerful reality: AI does not exist outside the law. Even in the absence of comprehensive AI statutes, state privacy and consumer protection laws already create enforceable boundaries around how AI systems collect, process, infer, and act on personal data. For businesses, this shifts the focus from waiting on future AI Acts to strengthening present-day data governance. Organizations that treat their AI systems as data processing systems will be far better positioned as enforcement intensifies.
Platforms like Truyo, which combine data privacy management and AI governance in a single platform, can help operationalize these controls across both traditional data processing and emerging AI systems. In this environment, proactive AI governance is not just a best practice but a strategic necessity.