Texas Responsible AI Governance Act is Thorough, but Is It Flawed?

While federal AI regulation in the United States is still under development, states have taken the initiative to craft their own AI policies. Among these, Texas has introduced the Texas Responsible AI Governance Act (TRAIGA), positioning itself at the forefront of state-level AI regulation. This blog delves into the key aspects of TRAIGA, its implications for stakeholders, and how it compares to AI legislative efforts in other U.S. states. 

Understanding the Texas Responsible AI Governance Act (TRAIGA) 

Introduced by Texas State Representative Giovanni Capriglione on December 23, 2024, TRAIGA aims to establish a comprehensive framework for the development and deployment of AI systems within the state. The act is designed to balance innovation with accountability, ensuring that AI technologies are developed and used responsibly. 

Key Provisions of TRAIGA 

  1. Risk-Based Approach to AI Governance

TRAIGA adopts a risk-based framework, similar to the European Union’s AI Act, categorizing AI systems based on their potential impact: 

  • High-Risk AI Systems: These include AI applications that influence consequential decisions in areas such as: 
    • Employment 
    • Finance 
    • Education 
    • Healthcare 
    • Housing 
    • Insurance 

Developers and deployers of high-risk AI systems must implement measures to prevent algorithmic discrimination and ensure compliance with the act’s standards. 

  • Unacceptable Risk AI Systems: TRAIGA prohibits AI systems that pose unacceptable risks, such as those that: 
    • Identify emotions or capture biometric identifiers without explicit consent. 
    • Use biometric data to categorize consumers based on sensitive attributes without consent. 

  2. Obligations for Stakeholders

The act delineates specific responsibilities for different parties involved in the AI ecosystem: 

  • Developers: Must conduct thorough risk assessments and mitigate potential risks before deployment. 
  • Deployers (Organizations Using AI Systems): 
    • Ensure human oversight by individuals with adequate training and authority. 
    • Report algorithmic discrimination risks to the newly established Artificial Intelligence Council within 10 days of discovery.
    • Regularly assess AI systems to prevent bias and ensure compliance. 
    • Suspend use of non-compliant AI systems and notify developers of concerns. 
    • Conduct semi-annual impact assessments, especially after modifications to AI systems. 
    • Disclose to individuals when they are interacting with an AI system, including its purpose and potential impact on decision-making. 
  • Distributors: Required to exercise reasonable care to prevent algorithmic discrimination when distributing high-risk AI systems. 

  3. Enforcement and Penalties

TRAIGA empowers the Texas Attorney General to enforce its provisions. The act provides for civil penalties in cases of non-compliance and offers a limited private right of action, allowing individuals to seek remedies under specific circumstances. 

The Role of the Artificial Intelligence Council 

A pivotal component of TRAIGA is the establishment of the Artificial Intelligence Council. This body is tasked with overseeing the implementation of the act, providing guidance to state agencies, and ensuring that AI technologies serve the public interest. The council’s responsibilities include: 

  • Reviewing and approving risk assessments submitted by developers and deployers. 
  • Issuing guidelines and best practices for AI system development and deployment. 
  • Facilitating public awareness and education on AI-related issues. 

What Makes TRAIGA Stand Out? 

While many states are taking steps toward AI governance, TRAIGA stands out due to: 

  • Its comprehensive risk-based framework, aligning with global AI governance trends (such as the EU AI Act). 
  • Clear obligations for developers, deployers, and distributors, ensuring accountability at multiple levels. 
  • The creation of the Artificial Intelligence Council, providing a structured oversight mechanism. 
  • A focus on preventing algorithmic discrimination, making it one of the strongest AI anti-bias laws in the country. 

Implications for Businesses and Consumers 

For Businesses: 

  • Increased Compliance Requirements: Companies operating in Texas must adjust their AI policies to comply with TRAIGA’s provisions. 
  • Potential Liabilities: Failing to adhere to TRAIGA could result in civil penalties and enforcement actions by the Texas Attorney General. 
  • More Responsible AI Development: Businesses must prioritize fairness, transparency, and oversight in their AI applications. 

For Consumers: 

  • Greater Transparency: Individuals will be informed when and how AI is used in decision-making. 
  • Stronger Protections Against Discrimination: AI systems must be designed to prevent biased outcomes. 
  • Increased Accountability: Consumers can report AI-related concerns, holding organizations accountable for non-compliant AI use. 

Issues Identified by the Center for Data Innovation 

Hodan Omaar, a senior policy manager at the Center for Data Innovation focusing on AI policy, sees two potential flaws in the Texas Responsible AI Governance Act. “First, TRAIGA’s approach is flawed because it hinges on transparency of process, assuming that exhaustive reports, risk documentation, and assessments will translate into meaningful accountability. But paperwork alone doesn’t ensure progress. The Attorney General is tasked with scrutinizing these materials and enforcing compliance, but given the scope of oversight required, expecting this office to have the resources or expertise to tackle such a monumental task is fanciful. It would turn compliance into a hollow ritual: developers churn out paperwork, but no one meaningfully interrogates it or ensures it leads to progress,” says Omaar (Omaar, 2025).  

Omaar points out a second potential flaw in the Act as it stands today, saying, “Second, TRAIGA’s plan to create a centralized Texas AI Council recycles an idea that has repeatedly fallen flat. Proposals to vest AI oversight in a single body, from the city level to the national, have run aground for both practical and conceptual reasons. For instance, New York City’s Automated Decision Systems Task Force, designed to guide government use of AI in areas like policing and education, collapsed in 2019 after years of bureaucratic delays and limited access to critical data, leaving it without any actionable recommendations. If a narrowly scoped initiative like New York City’s couldn’t overcome these challenges, it’s hard to see how the state of Texas, with a far more ambitious mandate, expects to succeed” (Omaar, 2025). 

Read how Hodan Omaar proposes that Texas avoid these pitfalls: Texas’s AI Law Won’t Deliver the Accountability It Promises 

Final Thoughts: The Future of AI Regulation in the U.S. 

The Texas Responsible AI Governance Act is another significant step toward comprehensive AI regulation, balancing innovation with accountability. As more states introduce AI laws, businesses must navigate an increasingly complex regulatory landscape. 

Looking ahead, federal AI regulation may eventually unify these state-level efforts. Until then, we will keep you apprised of the evolution of AI governance regulation in the United States and abroad.   

Omaar, H. (2025, January 26). Texas’s AI law won’t deliver the accountability it promises. Center for Data Innovation. https://datainnovation.org/2025/01/texass-ai-law-wont-deliver-the-accountability-it-promises/ 


Author

Dan Clarke
President, Truyo
March 12, 2025
