
2025 AI Governance in Review: The Year AI Governance Transitioned From Principles to Practice

In 2025, AI governance stopped being just strategy decks and summit communiqués and started to look like a real operating environment for businesses. Overlapping moves unfolded around the world: the EU began enforcing key parts of its AI Act, federal and state authorities tussled in the US, and new frameworks emerged in Asia and the Middle East. Obligations were phased in, timelines renegotiated, and standards drafted in parallel. The message was clear: AI systems with real impact on people and markets are moving into a regulated, documented, and auditable space.

Throughout 2025, AI governance stayed at the forefront as global regulators signalled that accountability, transparency, and risk management would sit at the core of enterprise AI programmes. So, as the year draws to a close, let’s look back at the major events in AI governance that are likely to shape future AI implementation decisions.

The Year AI Governance Found Its Edges

The general mood of AI governance in 2025 was cautious on risk, nervous about competitiveness, and split between hard and soft law. As different AI laws emerged across the globe, we saw a tug of war between business interests and consumer rights. Here’s how the regulatory landscape looked throughout the year:

United States

At the federal level, 2025 brought strategy rather than statute. The White House’s “America’s AI Action Plan” laid out dozens of policy actions on innovation, infrastructure, and global AI diplomacy. The plan also signalled that regulators would continue to use existing consumer-protection, civil-rights, and sectoral laws to police harmful AI.

But it was the standoff between federal and state postures on AI governance that made more headlines. Federal policymakers came close to blocking the enforcement of state AI laws: although the limited-time moratorium provision was stripped from the One Big Beautiful Bill, President Trump has since signalled that he may issue a similar executive order. Here are some notable state-level AI laws that, for now, get to see the light of day:

  • California: Governor Newsom signed the much-anticipated (and much-debated) SB 53, the Transparency in Frontier Artificial Intelligence Act, into law at the end of September 2025. The law focuses on transparency, especially from companies building the most powerful AI models, and adds protections for whistleblowers inside AI labs. A similar bill was vetoed by Governor Newsom the year before.
  • Texas: Texas enacted HB 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), in June, with an effective date of 1 January 2026. The act regulates the use of AI systems in the state, with a notable focus on banning the non-consensual capture of biometric data and prohibiting deepfakes and child exploitation, among other things.

European Union

The EU AI Act effectively began its journey at the start of 2025, when bans on prohibited AI practices became applicable in February. Practices such as manipulative systems, social scoring, and certain real-time biometric identification were deemed unacceptable under the Commission’s guidelines. The clock also started ticking for general-purpose AI (GPAI) when transparency and risk-management obligations came into force in August. However, developments in the second half of the year left the Act’s future looking rather uncertain. Tech-sector groups like CCIA Europe, whose members include major US platforms, publicly urged EU leaders to pause implementation of the AI Act. And in November came the Digital Omnibus proposal, a targeted package proposing to delay high-risk AI obligations.

Italy

Italy became the first EU member state with a comprehensive domestic AI law, adopting Law No. 132/2025 on Artificial Intelligence in September; it entered into force in October. The law explicitly complements the EU AI Act while introducing sector-specific governance (e.g. for public administration, health, and justice). It also adds criminal and administrative penalties, including sanctions for abusive deepfakes and unauthorized AI-based reproductions.

India

Though India is yet to present concrete laws on AI governance, it did offer some guidelines in 2025. The Ministry of Electronics and IT (MeitY), under the IndiaAI Mission, released the India AI Governance Guidelines 2025, which signalled a comprehensive framework aimed at “safe, trusted, and inclusive AI adoption”. The ministry suggested the establishment of an AI Governance Group and related bodies to oversee implementation. There was also emphasis on risk-based oversight, sectoral governance, deepfake controls, and alignment with India’s DPDP Act and IT rules. For multinationals, India sits in the “soft-law but serious signalling” bucket where regulators are defining expectations now, even if formal AI-specific penalties will come later.

Japan

In May 2025, Japan passed the Act on the Promotion of Research and Development and the Utilization of AI-Related Technologies, also called the AI Promotion Act. The act establishes a national AI Strategic Headquarters chaired by the Prime Minister, bringing all cabinet ministers into AI policy coordination. It frames AI explicitly as a foundational driver of economic and social development, focusing on promotion rather than penalties. Japan envisions governance through central planning, guidelines, and sectoral oversight instead of detailed, prescriptive obligations aimed at individual companies. The country is positioning itself as a major economy with a pro-innovation, centrally coordinated AI framework, one that still expects responsible deployment but prefers standards and policy to hard enforcement language.

A World Negotiating Its AI Rules

While Xi Jinping’s proposal at the World Artificial Intelligence Cooperation Organization (WAICO) for a global AI governance body is still up for debate, several efforts toward collective AI governance frameworks are underway across the globe. 2025 saw some of these efforts which, while regional in scope, echo the aspiration for global symmetry.

Europe: AI Act Equivalency and Adequacy

The European Union has taken the most advanced step toward international harmonization through its AI Act, which functions not just as a domestic rulebook but as a de facto global benchmark. Much like GDPR’s adequacy decisions for data protection, the EU is expected to extend AI Act equivalency to nations and companies that demonstrate comparable standards in transparency, risk management, and accountability. This mechanism effectively exports European AI values, i.e., human oversight, fairness, and traceability, to trading partners and multinational enterprises seeking interoperability.

Middle East: Future Investment Initiative (FII)

Meanwhile, the Middle East, led by Saudi Arabia’s FII Institute, is positioning itself as a bridge between Western regulation and Eastern innovation. The FII’s AI Global Council and its initiatives under the Global AI for Good agenda aim to establish shared ethical standards and investment-driven collaboration across borders. Unlike traditional regulatory frameworks, this approach is driven by public-private cooperation, focusing on responsible deployment, sustainability, and inclusivity. While it lacks enforcement mechanisms, it represents one of the most pragmatic efforts to align diverse economies around AI’s responsible use.

Paris AI Action Summit

In February 2025, France and India co-hosted the AI Action Summit in Paris, where over 60 countries, the EU, and the African Union Commission signed the Statement on Inclusive and Sustainable AI for People and the Planet. The declaration called for AI that is human-rights-based, human-centric, ethical, safe, secure, trustworthy, and sustainable, and it linked AI governance to the Sustainable Development Goals and to closing digital divides.

The Strategic Lesson of 2025

For businesses, 2025 should not be read as a year of deregulation or a pause. It is better understood as the year regulators started adjusting AI governance to reality: deadlines were re-sequenced, definitions were clarified, and different levels of strictness were tested. The companies best placed for 2026 and beyond will be the ones that absorb 2025’s lessons: balancing innovation with user rights, aligning AI use cases with emerging national laws, and building a mindful AI governance framework.

This is where Truyo can help strengthen your governance posture. Our AI governance capabilities, from system inventories and risk-assessment workflows to vendor oversight, documentation, and audit-ready evidence capture, give organizations a way to operationalize this messy, multi-regime reality. As 2025 closes, the question is no longer whether AI will be governed, but whether your governance can keep pace.


Author

Dan Clarke
President, Truyo
December 11, 2025

Let Truyo Be Your Guide Towards Safer AI Adoption

Connect with us today