Virginia Governor Vetoes AI Bias and Transparency Bill

As artificial intelligence (AI) rapidly weaves itself into the fabric of daily life—from hiring systems to law enforcement tools—states across the U.S. are scrambling to establish guardrails to prevent misuse. But in Virginia, Governor Glenn Youngkin has taken a different route. In a controversial move, he recently vetoed House Bill 2094, a bipartisan proposal designed to regulate the use of AI by state agencies, particularly with regard to algorithmic bias and transparency. The veto has sparked debate among lawmakers, tech experts, and civil rights advocates alike. So why did the bill fail, and what does it mean for AI oversight in the Commonwealth?

What the Bill Aimed to Do

House Bill 2094, introduced by Del. Patrick Hope (D-Arlington), sought to place some basic yet important checks on the use of AI in government operations. The legislation would have required:

  • Bias Assessments: State agencies would need to evaluate their AI systems to determine if they produce discriminatory outcomes.
  • Transparency Measures: Agencies would be required to disclose where and how AI is used in decision-making processes.
  • Oversight Mechanisms: The state’s chief data officer would oversee AI implementations and publish public reports on AI usage by government entities.

The bill passed with broad bipartisan support in the legislature, signaling a shared concern about the potential for AI to perpetuate bias or make opaque decisions that affect citizens’ lives.

Governor Youngkin’s Rationale

Despite its legislative momentum, Governor Youngkin vetoed the bill on March 21, 2025. His veto message emphasized several key points:

  • Too Bureaucratic: Youngkin criticized the bill for creating an “overly bureaucratic” system that could slow innovation.
  • Duplication of Efforts: He argued that the legislation would replicate work already underway by the state’s Chief Data Officer and AI Advisory Council.
  • Existing Protections: He asserted that current laws already govern the most critical elements of algorithmic fairness and anti-discrimination.
  • State-Level Executive Action: Youngkin pointed to his own executive order on AI, which includes guidelines for ethical use of AI across all state agencies and educational institutions. He noted that a task force is already in place to monitor and support implementation.

Political calculations may also have been at play. While not stated outright, the timing of the veto suggests Youngkin was hesitant to become the first Republican governor to sign such legislation. Other states that have passed or are preparing similar laws—like Colorado, California, and Connecticut—are all led by Democrats. That context may have made the optics of signing HB 2094 more complicated, even if the policy itself had bipartisan backing.

Pushback from Lawmakers and Advocates

Youngkin’s veto was met with immediate pushback from those who supported the bill. Critics argue that the legislation was a necessary first step in ensuring AI is used fairly and responsibly in government.

Delegate Patrick Hope, the bill’s sponsor, emphasized that:

  • The bill focused only on public sector AI use, not private enterprise.
  • It was a “very modest” proposal intended to promote transparency and accountability.
  • State agencies should not be using opaque algorithms to make impactful decisions without public oversight.

Other lawmakers and civil rights advocates also warned that AI used in areas like criminal justice, child welfare, and public benefits could reinforce systemic biases if left unchecked.

A Missed Opportunity for Leadership in AI Governance

Virginia has been positioning itself as a hub for technology and innovation. The state established an AI Advisory Council in 2021, and it has made moves to modernize its data systems. However, critics say the veto undermines those efforts.

Here’s what Virginia is missing out on by not passing HB 2094:

  • Public Trust: Citizens are more likely to trust government systems when there is transparency around how decisions are made.
  • Accountability: Without required bias assessments, there is no structured way to identify or fix harmful outcomes.
  • Leadership: States like California and Illinois are taking proactive steps to regulate AI. Virginia could have joined them in setting the national standard.

Instead, the state risks falling behind in the ethical deployment of technology.

Where Do We Go From Here?

Governor Youngkin’s veto of HB 2094 sends a clear signal: Virginia will take a cautious, hands-off approach to AI regulation, at least for now. While the administration claims to support responsible AI through internal efforts, critics argue that public accountability and legal guardrails are essential, especially when algorithms can profoundly impact people’s lives.

The failure of the bill underscores a growing tension in tech policy: how to balance innovation with civil rights, and how to govern complex technologies without smothering them. As AI continues to evolve, so too will the calls for transparency, fairness, and oversight. For now, Virginia remains on the sidelines of that conversation—watching, but not leading.


Author

Dan Clarke
President, Truyo
March 26, 2025
