EU AI Act Phase 3: How to Ensure Governance for AI You Didn’t Build

“Across the organizations we work with, the greatest AI risk right now is the AI embedded in third-party tools and services. Vendors are rapidly integrating AI capabilities, often without enterprises having full visibility into how those systems use data or make decisions.” 

While Phases 1 and 2 of the EU AI Act focused on discovering and categorizing AI risks within the enterprise, Phase 3 recognizes the role of third-party AI systems. For over a decade, businesses have engaged third-party vendors to embed AI capabilities into their operations. In fact, growing AI pressure in the market has pushed many organizations to outsource their AI-enabled offerings to external systems. Limited visibility into these systems, combined with contractual obligations such as the “product enhancement” clause, can put an enterprise’s AI infrastructure at risk.

The EU AI Act therefore holds enterprises responsible when they deploy third-party systems for AI enablement. Phase 3 extends AI governance to vendor attestations, contractual leverage, and continuous oversight. Let’s look at the risks Phase 3 focuses on and how your AI governance framework can be equipped to manage risks that originate outside your enterprise boundary.

Governing the AI You Didn’t Build 

Competitive pressures are encouraging businesses to outsource their AI adoption to third-party vendors. However, faster time-to-value often leaves enterprises unaware of regulatory, security, and operational exposures. Across the organizations we work with, the greatest AI risk right now is the AI embedded in third-party tools and services. Vendors are rapidly integrating AI capabilities, often without enterprises having full visibility into how those systems use data or make decisions. Here’s why the EU AI Act makes third-party risk management an AI governance imperative:

  • Invisible AI: Vendors increasingly embed AI into routine product features and continuously update SaaS offerings, making AI capabilities difficult to detect. Under competitive pressure, enterprises rarely pause to reassess each update. The harm is unintentional deployment of high-risk or even prohibited use cases without governance, logging, or oversight.  
  • Hybrid Data Chains: Many AI tools combine pretrained foundation models with enterprise data. This creates exposure around sensitive data access, biased or inaccurate outputs in regulated contexts, and blurred accountability between model provider and deployer.  
  • Disclosure Breakdown: Traditional vendor oversight mechanisms collapse under speed. Massive documentation dumps and bespoke questionnaires that don’t scale create friction. The harm is either unchecked risk acceptance or governance bottlenecks that teams work around. 
  • Leverage Deficit: Enterprises are accountable for AI outcomes, yet vendors control design details, training data context, and technical evidence. Limited transparency leaves organizations unable to explain decisions or demonstrate compliance when challenged. 
  • Contractual Exposure: Standard “product enhancement” clauses can permit vendors to use enterprise data to improve or train models. This can lead to confidentiality loss, regulatory violations, intellectual property leakage, and irreversible model training on proprietary data.  
  • Fragmented Ownership: AI procurement often moves faster than governance alignment. Procurement, legal, security, and business teams operate in silos, and no one owns end-to-end risk. This fragmentation leads to gaps in monitoring, contractual safeguards, and incident response. 

Embedding Accountability into Third-Party AI

For many organizations, advancing AI without third-party vendors is neither practical nor economically viable. Phase 3 simply recognizes this dependency and ensures it is governed responsibly. Here’s how businesses can comply:

  • Build Continuous AI Inventory: Establish and maintain a living inventory of all AI-enabled capabilities embedded in vendor products, including feature updates. Require structured vendor attestations for each AI use case and for every material modification. Treat SaaS AI updates as compliance events, not routine upgrades. 
  • Map Data & Model Flows: Document how foundation models, vendor systems, and enterprise data interact before deployment. Require visibility into training practices, fine-tuning methods, access controls, and human oversight mechanisms. Ensure high-risk use cases are governed by formal risk management procedures, with logging, monitoring, and data governance safeguards. 
  • Standardize Vendor Disclosures: Replace ad hoc questionnaires with structured, updateable AI risk attestations aligned to EU AI Act categories. Create automated triage rules to quickly clear low-risk systems and escalate flagged use cases (a minimal sketch follows this list). Ensure disclosures are comparable across vendors so oversight scales efficiently as AI adoption expands. 
  • Revise AI Contract Controls: Audit existing contracts for data-use and product enhancement clauses. Explicitly restrict model training on enterprise data unless clearly defined, consented to, and auditable. Add AI-specific contractual addenda covering disclosure of subcontractors, model updates, incident notification, and regulatory cooperation. 
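
To make the attestation and triage steps above concrete, here is a minimal sketch in Python. It assumes a simplified attestation record and illustrative risk-tier labels; the field names, categories, and triage thresholds are hypothetical and would need to be mapped to your own EU AI Act classification and vendor risk tooling.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskCategory(Enum):
    """Simplified EU AI Act risk tiers (illustrative labels, not legal definitions)."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class VendorAIAttestation:
    """One structured attestation per AI-enabled vendor capability or material update."""
    vendor: str
    capability: str                      # e.g. lead scoring added to a CRM module
    risk_category: RiskCategory
    trains_on_customer_data: bool        # flags "product enhancement" style data use
    human_oversight_documented: bool
    last_attested: date
    subcontractors_disclosed: bool = True
    notes: list[str] = field(default_factory=list)


def triage(attestation: VendorAIAttestation) -> str:
    """Illustrative triage rule: auto-clear low-risk entries, escalate the rest for review."""
    if attestation.risk_category is RiskCategory.PROHIBITED:
        return "block"                   # prohibited use cases must not be deployed
    if attestation.risk_category is RiskCategory.HIGH:
        return "escalate"                # high-risk systems need formal risk management review
    if attestation.trains_on_customer_data or not attestation.human_oversight_documented:
        return "escalate"                # contractual or oversight gaps need human review
    return "auto-clear"                  # limited/minimal risk with clean disclosures


# Example: a routine SaaS feature update that quietly adds model training on customer data
update = VendorAIAttestation(
    vendor="ExampleCRM",
    capability="lead scoring",
    risk_category=RiskCategory.LIMITED,
    trains_on_customer_data=True,
    human_oversight_documented=True,
    last_attested=date(2026, 3, 1),
)
print(triage(update))  # -> "escalate"

In practice, rules like these would live inside a vendor risk management platform rather than a standalone script, but the point stands: once vendor disclosures are captured as structured data instead of free-form questionnaires, low-risk systems can be cleared automatically and only the flagged ones consume reviewer time.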

Vendor Risk and Enterprise Responsibility 

Phase 3 of the EU AI Act ensures that governance extends beyond your enterprise walls in the age of outsourced intelligence. As businesses rely on third-party vendors to accelerate AI adoption, risk emerges as speed outpaces visibility and dependency outpaces control. Phase 3 does not restrict AI ambition; it disciplines it. By requiring transparency, contractual clarity, and continuous oversight, it transforms third-party AI from a blind spot into a governed asset.

Truyo’s AI Governance platform, with its dedicated Vendor Risk Management capabilities, enables enterprises to operationalize the EU AI Act by centralizing attestations, enforcing contract controls, and continuously monitoring third-party AI risk at scale.


Author

Dan Clarke
President, Truyo
March 5, 2026

Let Truyo Be Your Guide Towards Safer AI Adoption

Connect with us today