Virginia's Pioneering Step: The High-Risk AI Developer and Deployer Act

In a significant move towards regulating artificial intelligence (AI), the Virginia General Assembly has passed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094). This legislation positions Virginia as the second state, following Colorado, to enact comprehensive laws addressing algorithmic discrimination arising from AI applications.  

As the bill awaits Governor Glenn Youngkin’s decision, it has sparked discussions about its potential impact and the future of AI governance in the Commonwealth. While Governor Youngkin supports AI regulation and guidelines, as evidenced by the high-profile task force he formed last year, the compromises made during the legislative process have left his endorsement uncertain. 

Understanding the High-Risk AI Developer and Deployer Act 

The Act aims to establish a framework ensuring that AI systems, particularly those influencing critical decisions, operate transparently and without bias. Key components of the legislation include: 

Definition of High-Risk AI Systems 

The Act categorizes certain AI systems as “high-risk,” especially those integral to making consequential decisions in areas such as: 

  • Employment: AI tools used in hiring or promotions. 
  • Education: Systems determining student admissions or placements. 
  • Financial Services: AI applications in lending or credit assessments. 
  • Healthcare: Tools influencing patient diagnoses or treatment plans. 
  • Legal Services: Systems affecting legal advice or case outcomes. 

These classifications underscore the necessity for stringent oversight to prevent unintended biases that could adversely affect individuals’ lives. 

Obligations for Developers 

Entities that create or significantly modify high-risk AI systems are mandated to: 

  • Risk Disclosure: Provide clear information about the system’s intended use, potential limitations, and any foreseeable risks of algorithmic discrimination. 
  • Performance Evaluation: Offer summaries detailing how the AI system was tested for effectiveness and measures taken to mitigate biases. 
  • Documentation: Maintain records covering: 
    • Data used during training phases. 
    • Known limitations and potential biases. 
    • System objectives and anticipated benefits. 
    • Evaluation methodologies for performance and bias mitigation. 
    • Data governance strategies addressing data source suitability and bias reduction. 
    • Guidelines for appropriate system use and monitoring. 
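To make the documentation duties concrete, the record-keeping items above can be sketched as a simple data structure a developer might maintain. This is an illustrative sketch only; the field names and the completeness check are assumptions for demonstration, not terms or tests defined by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperDocumentation:
    """Illustrative record of the documentation HB 2094 expects developers
    of high-risk AI systems to maintain (field names are hypothetical,
    not statutory language)."""
    training_data_summary: str                      # data used during training
    known_limitations: list[str] = field(default_factory=list)
    potential_biases: list[str] = field(default_factory=list)
    system_objectives: str = ""
    anticipated_benefits: str = ""
    evaluation_methodology: str = ""                # performance / bias-mitigation testing
    data_governance_measures: str = ""              # data-source suitability, bias reduction
    intended_use_guidelines: str = ""               # appropriate use and monitoring

    def is_complete(self) -> bool:
        """Return True only if every required narrative field is filled in."""
        required = [
            self.training_data_summary,
            self.system_objectives,
            self.evaluation_methodology,
            self.data_governance_measures,
            self.intended_use_guidelines,
        ]
        return all(required)
```

A compliance team could instantiate this record per system and gate release on `is_complete()`, though the Act itself prescribes the content of the documentation, not its format.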

These requirements aim to foster transparency and accountability, ensuring that AI systems are developed with a conscious effort to identify and reduce biases. 

Responsibilities for Deployers 

Organizations implementing high-risk AI systems are required to: 

  • Risk Management Policies: Establish comprehensive programs aligned with recognized standards, such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework or the ISO/IEC 42001 standard. 
  • Impact Assessments: Conduct thorough evaluations before deploying AI systems, focusing on: 
    • System purpose and intended applications. 
    • Data categories processed and outputs generated. 
    • Potential risks of algorithmic discrimination and mitigation strategies. 
    • Post-deployment monitoring and user safeguards. 
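The pre-deployment evaluation above is essentially a checklist, and could be screened programmatically before sign-off. The section keys below are illustrative assumptions mirroring the bullets, not statutory language.

```python
# Hypothetical pre-deployment checklist mirroring the impact-assessment
# topics listed above; the keys are illustrative, not statutory terms.
REQUIRED_ASSESSMENT_SECTIONS = (
    "purpose_and_intended_use",
    "data_categories_and_outputs",
    "discrimination_risks_and_mitigations",
    "post_deployment_monitoring",
)

def missing_sections(assessment: dict[str, str]) -> list[str]:
    """Return the checklist sections that are absent or left blank."""
    return [
        section
        for section in REQUIRED_ASSESSMENT_SECTIONS
        if not assessment.get(section, "").strip()
    ]
```

A deployer might run such a check as part of an internal review gate; the Act defines what an assessment must cover, not how it is tracked.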

These measures are designed to ensure that AI systems are not only effective but also equitable in their decision-making processes. 

Legislative Journey and Current Status 

Introduced by Delegate Michelle Lopes Maldonado, HB 2094 reflects over a year of collaboration with various stakeholders. Despite its passage in both legislative chambers, the bill’s future remains uncertain as it awaits action from Governor Youngkin, who has until March 24 to sign, veto, or propose amendments. The Governor’s previous initiatives, including the formation of an AI advisory task force and the establishment of standards for AI use in state agencies, indicate a recognition of AI’s significance. However, his stance on HB 2094 has not been publicly disclosed. 

Diverse Perspectives on the Legislation 

The Act has elicited varied reactions: 

  • Supporters argue that it introduces essential accountability and clarity for AI developers and users, potentially setting a precedent for responsible AI governance. 
  • Critics, including public interest groups and innovation advocates, express concerns that the legislation may be overly restrictive, potentially hindering technological advancement and innovation. 

The R Street Institute, for instance, has urged Governor Youngkin to veto the bill, suggesting that its regulatory approach could undermine Virginia’s leadership in digital innovation. They advocate for alternative methods to address AI-related concerns without imposing burdensome regulations. 

Next Steps and Implications 

Should Governor Youngkin sign HB 2094 into law, it is slated to take effect on July 1, 2026. This timeline provides a window for developers and deployers to align their practices with the new requirements. The Act’s implementation could serve as a model for other states contemplating AI regulation, potentially influencing national standards and practices. 

How Does HB 2094 Stack Up to Other AI Regulations? 

Connecticut, which hosted the original 29-state consortium that led to Colorado’s Artificial Intelligence Act (CAIA) and Virginia’s HB 2094, is making progress toward its own version. Connecticut’s SB 2 has been revised and enjoys bicameral, bipartisan support, with 30 co-sponsors representing 20% of the state’s legislators. Because it is no longer poised to be the first such bill, it may see a smoother passage and mark the start of a new wave of like-minded bills following in Colorado’s footsteps. 

By far the biggest difference in Virginia’s HB 2094 is its “principal basis” requirement for applicability, which leaves an opening for companies to seek exemption. Colorado’s corresponding language, “substantial factor,” is a broader trigger that leaves little wiggle room for exemptions. Virginia’s HB 2094 also includes a 45-day cure period. 

Balancing Innovation and Ethical Practices 

As AI continues to permeate various sectors, Virginia’s legislative approach highlights the ongoing balancing act between fostering innovation and ensuring ethical, unbiased applications of technology, a balance that is the foundation of the Truyo AI Governance Platform. The outcome of HB 2094 will be a critical indicator of how states might navigate this complex landscape in the years to come. 


Author

Dan Clarke
President, Truyo
February 26, 2025

Let Truyo Be Your Guide Towards Safer AI Adoption

Connect with us today