As artificial intelligence (AI) continues to permeate the public and private sectors, U.S. states are establishing guidelines that protect citizens while enabling innovation. One such state, Kentucky, has just enacted what may be the most comprehensive state-level AI governance law to date. But Kentucky is not alone. States like Vermont and Connecticut have also moved forward with AI legislation, and common threads are beginning to emerge. Together, these efforts signal the rise of a coherent, risk-based model for AI regulation grounded in transparency, accountability, and public trust.
On March 24, 2025, Governor Andy Beshear signed Kentucky Senate Bill 4 (SB 4) into law, marking a significant milestone in AI oversight. Championed by Senate Republicans Amanda Mays Bledsoe and Brandon Storm, the legislation passed with overwhelming bipartisan support—30-3 in the Senate and 86-10 in the House.
While Kentucky’s SB 4 stands out for its depth and scope, it shares notable features with AI laws in other states, particularly Vermont and Connecticut. Collectively, these state laws reveal several emerging commonalities that may shape a broader regulatory template.
Like Kentucky, both Vermont and Connecticut identify and regulate high-risk AI systems. These typically include technologies used in decision-making processes that affect health, housing, education, employment, or legal status.
Shared Characteristics:
AI governance committees or similar oversight bodies are becoming standard practice.
Core Responsibilities:
Transparency is the bedrock of these emerging laws, which emphasize clear communication about when and how AI is used.
Common Transparency Requirements:
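The statutes phrase these requirements differently, but a recurring obligation is telling people when an AI system is involved in an interaction or decision. The sketch below is a purely hypothetical illustration of that idea, not statutory language; the notice wording, function name, and human-review contact are all assumptions.

```python
# Hypothetical illustration of an AI-use disclosure, in the spirit of the
# transparency provisions described above. The notice wording and the idea
# of a human-review contact are assumptions, not statutory requirements.

AI_USE_NOTICE = (
    "Notice: This response was produced with the assistance of an artificial "
    "intelligence system. You may request review by a human employee."
)

def respond_with_disclosure(ai_answer: str) -> str:
    """Attach the AI-use notice to any AI-generated response before delivery."""
    return f"{AI_USE_NOTICE}\n\n{ai_answer}"

print(respond_with_disclosure("Your permit application is under review."))
```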
All three states require that agencies assess whether AI systems could lead to unlawful discrimination and document safeguards accordingly.
Evaluation Metrics Include:
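These laws generally leave the choice of metrics to implementing agencies. One widely used quantitative screen, borrowed from U.S. employment-discrimination practice rather than from any of these statutes, is the four-fifths (disparate impact) rule: the favorable-outcome rate for any protected group should be at least 80% of the rate for the most favored group. A minimal sketch, with hypothetical group names and counts:

```python
# Minimal sketch of a disparate impact screen (the "four-fifths rule").
# Group names, counts, and the 0.8 threshold are illustrative; none of the
# state laws discussed here mandate this particular metric.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its favorable-outcome rate.

    `outcomes` maps group name -> (favorable_decisions, total_decisions).
    """
    return {group: fav / total for group, (fav, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]],
                      threshold: float = 0.8) -> dict[str, bool]:
    """True if a group's rate is at least `threshold` times the highest rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

# Hypothetical audit of an AI screening tool's decisions:
decisions = {
    "group_a": (480, 1000),  # 48% favorable
    "group_b": (330, 1000),  # 33% favorable; 0.33 / 0.48 ≈ 0.69, below 0.8
}
print(four_fifths_check(decisions))  # {'group_a': True, 'group_b': False}
```

A screen like this would typically accompany, not replace, the documented safeguards the laws call for.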
Kentucky SB 4 is particularly forward-looking in addressing the dangers of AI in elections—a topic gaining national relevance.
Provisions Include:
One of the most compelling statements during the Kentucky Senate hearing came from Senator Amanda Mays Bledsoe, who called on Congress to develop similar regulations for the private sector. This sentiment echoes a growing consensus that while states are setting the pace, a unified federal framework is essential for comprehensive oversight—especially given the interstate nature of data and commerce.
Moreover, state laws are increasingly influencing corporate behavior. As companies interact with state governments or operate within their jurisdictions, they are being nudged toward compliance with these emerging standards. This is particularly true for vendors of high-risk AI systems, who may need to meet different but overlapping requirements across states.
The wave of AI legislation emerging from states like Kentucky, Vermont, and Connecticut is anything but chaotic. In fact, it reveals a growing convergence around core principles of risk-based governance, transparency, accountability, and human oversight. While each state law has its own nuances, the direction is clear: public trust in AI will depend on consistent safeguards, clearly defined responsibilities, and mechanisms for redress.
With Kentucky setting a new benchmark through SB 4, it’s likely that more states—and perhaps eventually Congress—will adopt similar frameworks. What once looked like a regulatory patchwork is beginning to resemble a blueprint. And that’s good news for policymakers, technologists, and citizens alike.