New York’s Responsible AI Safety and Education (RAISE) Act was signed on December 19, 2025, days after President Trump issued an executive order aimed at deterring or challenging state AI laws. That sequencing is the story: regardless of federal discomfort with “patchwork” rules, AI governance is still advancing at the state level, especially through disclosure duties, auditable safety plans, and enforceable accountability. Signed by Governor Kathy Hochul, the law is being framed as one of the strongest state AI safety regimes because it targets the largest, highest-impact frontier developers.
Let’s break down what the RAISE Act requires, why it matters beyond New York, and what businesses should do now, as U.S. AI governance increasingly moves through documentation and disclosure even when federal policy rhetoric points in the opposite direction.
New York is set to turn “responsible AI” from a principles poster into a compliance surface. The RAISE Act focuses on large developers, pushes governance obligations into public commitments (published safety plans), and backs them with enforcement mechanisms and fines. It also creates a dedicated state capability to track frontier model development, an explicit move toward ongoing oversight rather than one-time rulemaking.
The executive order signed on December 11, 2025, signals an attempted federal strategy: discourage or challenge state AI laws through litigation posture and federal funding leverage. New York signing RAISE immediately afterward is a direct countersignal: states aren’t waiting for a single national framework before raising the floor on AI accountability. The practical reality for businesses is not “state AI laws disappear.” It’s messier: legal uncertainty, continued state momentum, and governance requirements that move into procurement and market expectations even when statutes are contested.
The New York RAISE Act is a clean signal about where U.S. AI governance is headed: not necessarily toward blanket restrictions, but toward enforceable expectations that AI developers can explain what they built, how they manage risk, and what happens when something goes wrong. And by signing it right after a federal executive order aimed at chilling state AI regulation, New York underscored a deeper truth: AI governance will advance through documentation, disclosure, and accountability, regardless of federal discomfort.