AI Is Typing. Regulators Are Replying.

Chatbot regulation is quickly emerging as a focal point in AI governance. Last week, we covered Oregon’s SB 1546; this week, Washington delivered its own bill to the governor’s office. Many other states, including Georgia, Hawaii, and New York, are advancing chatbot bills through various stages. A common approach is maturing: mandates for transparency, user disclosures, and safeguards against self-harm and threats to minors.

Conversational AI tools are increasingly viewed as influential, human-like systems that can shape behavior, especially for vulnerable users. In the absence of comprehensive federal rules, states are acting as testing grounds for AI oversight. For businesses, this means an increasingly fragmented regulatory landscape in which chatbot design, risk management, and user protections are becoming core compliance priorities rather than optional features.

Shaping the AI Conversation 

Across states, chatbot bills focus on concerns over emotional dependency, manipulation, and unsafe guidance (e.g., mental health or self-harm scenarios). At the same time, the absence of federal standards is pushing states to act quickly and independently. Here’s how different states are legislating conversational AI.

  • Oregon [Passed legislature]: SB 1546 is one of the strongest chatbot bills moving anywhere in the country. It applies to AI companions and companion platforms, requires notice when a reasonable person could think they are interacting with a human, mandates suicide/self-harm response protocols, adds special protections for minors, and creates a private right of action with statutory damages. The bill has cleared both chambers and is now awaiting Governor Kotek’s decision.
  • Washington [Passed legislature]: HB 2225 has passed the legislature and was delivered to Governor Bob Ferguson on March 12, 2026. The bill regulates AI companion chatbots and is built around transparency and youth safety: users must be told when they are interacting with AI, and providers must address risks tied to self-harm and emotionally manipulative design. Washington is one of the clearest signs that chatbot regulation has moved from theory to near-enactment.
  • Georgia [Crossed chambers]: SB 540 is Georgia’s main chatbot bill. It targets conversational AI services, especially where minors are involved, and would require clear disclosure that the user is interacting with AI, restrict manipulative or sexualized chatbot behavior toward minors, require tools for parents, and require suicide/self-harm response protocols. It also authorizes attorney general enforcement and civil penalties. Georgia’s approach closely mirrors the broader national trend toward child-safety-centered chatbot governance. 
  • Hawaii [Crossed chambers]: Hawaii is moving two notable chatbot bills. HB 1782 focuses on protecting minors from harms tied to AI companion systems and conversational AI services, including oversight and penalties. SB 3001 is broader: it would require disclosures, suicide/self-harm protocols, protections for minors, annual reporting, and attorney general enforcement for conversational AI services. Hawaii stands out because it is testing both a minor-protection model and a consumer-disclosure/compliance model at the same time. 
  • New Jersey [Newly introduced]: A4732 was introduced on March 10, 2026 and referred to the Assembly Science, Innovation and Technology Committee. It would require AI companion operators to notify users that they are not communicating with a human. New Jersey is also considering separate legislation aimed at stopping AI from being marketed as a licensed mental health professional. Together, those bills show New Jersey moving on two familiar themes: anti-deception disclosure and mental-health-related guardrails. 
  • New York [Active]: New York has become a major state to watch. The newly introduced S9051 would prohibit AI chatbots from using unsafe features for minors and is already in the Senate Finance Committee. Separately, S7263 would impose liability where a chatbot impersonates certain licensed professionals, and S5668 targets harmful or misleading chatbot outputs. In other words, New York is building a broader liability-and-safety framework around chatbot conduct. 

Navigating Chatbot Governance 

Conversational AI tools like chatbots offer businesses a powerful mix of scalability, personalization, and cost efficiency. But the same qualities that make them appealing, such as 24/7 customer engagement, faster support, and deeper user insights, can complicate governance efforts in the face of these bills. Here’s how businesses can navigate:

  • AI inventory: An AI inventory can help control compliance and risk prioritization. By mapping each chatbot to its purpose, data inputs, user segments (e.g., minors), and risk exposure, businesses can quickly determine which systems fall under stricter state laws. This enables targeted safeguards instead of blanket controls, reduces compliance blind spots, and creates a defensible structure when regulators ask, “Where is AI used and how is it governed?” 
  • Standardize disclosure and user transparency: Consistent disclosure frameworks help businesses preempt the core concern behind most bills: user deception. By embedding disclosures contextually (not just at entry), companies reduce legal exposure while maintaining user trust. A standardized approach also avoids fragmented UX across states, ensuring that compliance does not degrade customer experience but instead becomes a trust signal. 
  • Recordkeeping and audit trails: As states move toward enforcement and potential liability, businesses must shift from “we comply” to “we can demonstrate compliance.” Detailed logs of interactions, disclosures, and interventions provide evidence during audits or litigation. More importantly, they enable internal monitoring, helping teams identify risky chatbot behaviors early and continuously improve controls. 
  • Safety-by-design controls: Rather than relying on post-incident fixes, safety-by-design ensures risk mitigation is built into the system architecture. This includes real-time detection of sensitive topics, escalation pathways, and restrictions on high-risk outputs. Such controls directly address legislative concerns around harm and manipulation, while reducing operational and reputational risk before it materializes. 
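To make the inventory idea above concrete, here is a minimal sketch of how a chatbot inventory entry might be mapped to the safeguard themes common to these bills (disclosure, self-harm protocols, minor protections). All field names, the `ChatbotRecord` class, and the state list are illustrative assumptions, not requirements drawn from any specific statute; real compliance logic would need legal review per jurisdiction.

```python
from dataclasses import dataclass, field

@dataclass
class ChatbotRecord:
    """Hypothetical AI-inventory entry: purpose, data inputs, user segments, exposure."""
    name: str
    purpose: str                       # e.g., "customer support", "companion"
    data_inputs: list[str]             # categories of data the bot ingests
    serves_minors: bool = False
    deployed_states: set[str] = field(default_factory=set)

# Illustrative set of states with companion-chatbot bills advancing (per the list above).
STRICTER_STATES = {"OR", "WA", "GA", "HI", "NJ", "NY"}

def required_safeguards(bot: ChatbotRecord) -> list[str]:
    """Map an inventory entry to the recurring safeguard themes in state bills."""
    safeguards = []
    if bot.deployed_states & STRICTER_STATES:
        safeguards.append("AI disclosure at entry and contextually during the session")
        safeguards.append("suicide/self-harm detection and response protocol")
    if bot.serves_minors:
        safeguards.append("minor-specific protections and parental tools")
    return safeguards

bot = ChatbotRecord(
    name="support-bot",
    purpose="customer support",
    data_inputs=["chat text"],
    serves_minors=True,
    deployed_states={"OR", "TX"},
)
print(required_safeguards(bot))
```

Even a toy mapping like this gives teams the "which systems fall under stricter laws" view described above, and the same record can anchor disclosure logs and audit trails.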

The Fine Print Behind the Chat 

Chatbot regulation is rapidly becoming a defining pillar of AI governance. As states continue to move independently, businesses must operate in an environment where compliance expectations evolve faster than formal standards. The winners in this space will not be those who react to each new law, but those who build scalable, proactive governance frameworks that align innovation with accountability.

Platforms like Truyo AI Governance enable organizations to operationalize this shift by bringing visibility, control, and compliance together as conversational AI continues to scale. 


Author

Dan Clarke
President, Truyo
March 18, 2026
