New York’s RAISE Act and the Future of U.S. AI Governance

New York’s Responsible AI Safety and Education (RAISE) Act was signed on December 19, 2025, days after President Trump issued an executive order aimed at deterring or challenging state AI laws. That sequencing is the story: regardless of federal discomfort with “patchwork” rules, AI governance is still advancing at the state level, especially through disclosure duties, auditable safety plans, and enforceable accountability. Signed into law by Governor Kathy Hochul, the Act is being framed as one of the strongest state AI safety regimes because it aims squarely at the highest-impact developers.

Let’s break down what the RAISE Act requires, why it matters beyond New York, and what businesses should do now as U.S. AI governance increasingly moves through documentation and disclosure, even when federal policy rhetoric points in the opposite direction.

New York’s AI Accountability Model 

New York is set to turn “responsible AI” from a principles poster into a compliance surface. The RAISE Act focuses on large developers, pushes governance obligations into public commitments (published safety plans), and backs them with enforcement mechanisms and fines. It also creates a dedicated state capability to track frontier model development, an explicit move toward ongoing oversight rather than one-time rulemaking.

  • Who it applies to: Large companies meeting the law’s scope thresholds (including a revenue threshold widely reported as $500M+) and developing frontier/large AI systems.  
  • Safety plan publication: Covered developers must publish safety plans describing how they identify and manage key risks and incidents.  
  • Incident reporting clock: The law retains a 72-hour incident disclosure expectation for serious safety/security events, a major “accountability clock” feature (see the sketch after this list).  
  • Enforcement infrastructure: Establishes a new AI office within the state financial regulator to monitor AI development and produce oversight reporting, including an annual report.  
  • Penalties: Creates monetary consequences for violations, with public summaries describing reduced, but still meaningful, fine ranges in the final enacted version.  
  • What “strong” really means here: The core strength is not maximal restriction but operationalization: published plans, incident clocks, an oversight body, and enforceable penalties. 
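
To make the 72-hour clock concrete, here is a minimal sketch of how a covered developer might track a disclosure deadline internally. Everything here is illustrative: the SafetyIncident record, its field names, and the assumption that the clock runs from detection are ours, not the statute’s; the actual trigger and timing rules come from the enacted text.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical 72-hour disclosure window, matching the incident clock above.
DISCLOSURE_WINDOW = timedelta(hours=72)

@dataclass
class SafetyIncident:
    """Illustrative record for one reportable safety/security event."""
    incident_id: str
    detected_at: datetime  # when the event was identified (UTC)
    summary: str
    disclosed_at: datetime | None = None

    @property
    def disclosure_deadline(self) -> datetime:
        # Assumes the clock starts at detection; the deadline is 72 hours later.
        return self.detected_at + DISCLOSURE_WINDOW

    def hours_remaining(self, now: datetime | None = None) -> float:
        now = now or datetime.now(timezone.utc)
        return (self.disclosure_deadline - now).total_seconds() / 3600

    def is_overdue(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.disclosed_at is None and now > self.disclosure_deadline

incident = SafetyIncident(
    incident_id="INC-2026-001",
    detected_at=datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc),
    summary="Attempted exfiltration of model weights detected",
)
print("Disclose by:", incident.disclosure_deadline.isoformat())
print("Hours remaining:", round(incident.hours_remaining(), 1))
```

The design choice worth copying is not the class itself but the habit: the moment an incident is logged, the deadline is computed and attached to it, so the accountability clock stays visible in every downstream report.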

Operating AI Governance Amid Conflicting Signals 

The executive order signed on December 11, 2025, signals an attempted federal strategy: discourage or challenge state AI laws through litigation posture and federal funding leverage. New York’s signing of RAISE immediately afterward is a direct countersignal: states aren’t waiting for a single national framework before raising the floor on AI accountability. The practical reality for businesses is not “state AI laws disappear.” It’s messier: legal uncertainty, continued state momentum, and governance requirements that move into procurement and market expectations even when statutes are contested. 

  • Assume volatility, not voids: Even if parts of state AI laws get challenged, the direction of travel is toward documented safety practices and defensible disclosures, the exact mechanics that RAISE prioritizes. 
  • Treat state laws as “governance prototypes”: Frontier-model transparency laws are converging. New York explicitly builds on patterns similar to other large-state frameworks, signaling an emerging benchmark among major like-minded states. 
  • Prepare for two enforcement channels: direct legal compliance (state mandates, such as safety plan publishing and incident reporting), and indirect compliance (customers, partners, and public-sector buyers requesting proof of controls). 
  • Build an “audit-ready safety narrative”: If you can’t show (1) what risks you track, (2) how you test, (3) how you respond, and (4) what you disclose, your governance posture will look weak in any forum: regulators, procurement, investor diligence, or litigation discovery (a sketch of such a record follows this list). 
  • Don’t confuse pro-innovation policy with “no governance”: Even the executive-order approach leans on steering the market rather than banning AI outright; that still pressures organizations to demonstrate reliability, accountability, and control. 
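
As one way to keep that audit-ready narrative consistent across forums, the sketch below models it as a single machine-readable record. This is an assumption for illustration: the SafetyNarrative class and its fields simply mirror the four questions in the list above, not any format the RAISE Act prescribes.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SafetyNarrative:
    """Illustrative, hypothetical structure; field names mirror the four
    questions above, not any schema prescribed by the RAISE Act."""
    risks_tracked: list[str]         # (1) what risks you track
    testing_methods: list[str]       # (2) how you test
    response_procedures: list[str]   # (3) how you respond
    disclosure_channels: list[str]   # (4) what you disclose, and where

    def to_json(self) -> str:
        # One exportable artifact for regulators, procurement, or diligence.
        return json.dumps(asdict(self), indent=2)

narrative = SafetyNarrative(
    risks_tracked=["model misuse", "security breach", "loss-of-control scenarios"],
    testing_methods=["pre-deployment red-teaming", "third-party evaluation"],
    response_procedures=["incident triage runbook", "72-hour disclosure workflow"],
    disclosure_channels=["published safety plan", "regulator incident reports"],
)
print(narrative.to_json())
```

The point is not these specific fields but that one artifact can answer a regulator, a procurement questionnaire, and investor diligence without being reassembled each time.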

The Direction of U.S. AI Governance 

The New York RAISE Act is a clean signal about where U.S. AI governance is headed: not necessarily toward blanket restrictions, but toward enforceable expectations that AI developers can explain what they built, how they manage risk, and what happens when something goes wrong. And by signing it right after a federal executive order aimed at chilling state AI regulation, New York underscored a deeper truth: AI governance will advance through documentation, disclosure, and accountability, regardless of federal discomfort. 


Author

Dan Clarke
President, Truyo
January 8, 2026
