EU AI Act Part 2: Achieving GPAI compliance
The EU AI Act introduces a sweeping compliance framework that transforms AI governance from manual processes into systematic infrastructure.

Part 1 of our series explored the EU AI Act's training data disclosure requirements and the August 2025 deadline. In Part 2, we examine the comprehensive obligations that extend far beyond transparency requirements.
First, we'll explore the documentation foundations that organizations must build to satisfy regulators. Second, we'll analyze the two critical risks facing enterprises in downstream provider relationships—accidentally becoming a provider yourself, or inheriting compliance responsibilities from your AI deployments. Third, we'll examine the additional “systemic risk” obligations for high-capability models. Fourth, we'll discuss how to operationalize these requirements within existing engineering workflows.
Together, these four areas create an interconnected compliance framework that will fundamentally reshape how enterprises approach AI governance. Finally, we’ll go through what we see as four key pillars of GPAI compliance to help you get started.
Documentation: Building your compliance foundation
The EU AI Act's technical documentation requirements create new categories and practices of record-keeping that many organizations either do not have—or, if they do, will struggle to systematically maintain. For Chief Privacy Officers, these requirements represent compliance obligations, of course, but will also create new opportunities to strengthen AI governance at the infrastructure level.
Required documentation
The regulation mandates comprehensive documentation of training and testing processes, including data preprocessing methods, model architecture decisions and evaluation methodologies.
Organizations must also document: computational resources consumed during training; energy consumption metrics; and performance benchmarks across a number of evaluation criteria. Evaluation results documentation will extend beyond simple accuracy metrics to include bias assessments, testing for robustness and capability limitations. Organizations will have to maintain records of how models perform across different demographic groups, languages and use cases.
Dual documentation streams
Technical documentation must be oriented to two audiences at once—regulatory authorities and downstream providers—creating new requirements for different depths and formats that organizations with AI models approaching the threshold must satisfy. Regulatory documentation will focus on compliance demonstration and risk assessment, while downstream provider documentation must emphasize practical integration guidance and limitation awareness.
For downstream providers, model integration guidance documentation must enable informed decision-making about capability boundaries, known failure modes, recommended use case parameters and safety considerations.
10-year retention and version control
While the precise scope of the regulation will be open to legal interpretation, it appears that for all high-capability models, regulation requires 10-year retention of technical documentation, which would create significant data management obligations. Organizations must implement version control systems that will track several things across extended timeframes, including model iterations, documentation updates and compliance artifacts.
The challenge for senior enterprise executives lies in embedding governance requirements into development without creating the type of friction that might slow performance or innovation and corrode competitive advantage.”Ethyca Team
Downstream provider risks: When you become the provider
The relationship between GPAI providers and downstream organizations will establish new categories of risk and responsibility that many enterprises might not anticipate, and there are two primary risks that enterprise executives must take on board.
Risk 1: Becoming a provider
Fine-tuning existing GPAI models beyond a “substantial modification” threshold can transform your organization from downstream user to GPAI provider, making it subject to the full compliance obligations.
For example, a fintech company uses a foundation model for customer service—say OpenAI's latest GPT model via API—but their engineering team finetunes it extensively on proprietary financial data to create a specialized custom model which they then package into their mobile app. Under EU AI Act definitions, this “substantial modification” means the fintech just became a GPAI provider and are now placing a model on the market, so they need full GPAI compliance themselves—including training data summaries, technical documentation and documentation retention.
Risk 2: Inheriting compliance responsibilities
When you integrate GPAI models into your systems, you can become responsible for AI Act compliance in your specific use context. Even if your GPAI vendor handles their obligations, you must ensure your deployment meets AI Act requirements for your use case.
An example here would be the same fintech, which also uses Anthropic’s Claude, without extensive modification, for internal research analysis or to make credit scoring decisions. In these cases, even though Anthropic handles GPAI provider obligations under the Act, the fintech can still inherit EU AI Act compliance for their specific use cases.
Meeting new obligations
To meet these obligations, therefore, privacy leaders must demand comprehensive compliance support from GPAI providers: detailed technical documentation, ongoing risk assessments and clear guidance on modification thresholds. Vendor contracts will likely have to specify compliance responsibilities and documentation delivery obligations.
Practically, the immediate tasks facing executives will include:
- maintaining detailed registers of all AI systems and their modifications;
- assessing each system to determine whether your organization is a provider of the AI model, a deployer, or both;
- establishing governance to evaluate whether any customization might cross the “substantial modification” threshold;
- and implementing measures to ensure compliance for all specific use cases.
Additional obligations for frontier AI
Organizations developing or deploying high-capability AI models will face additional obligations under the EU AI Act's “systemic risk” provisions.
The 10^25 FLOPs threshold
Models trained with computational resources exceeding 10^25 floating-point operations (FLOPs) face presumed systemic risk designation. This threshold captures the most capable current models and will likely include a large number of future enterprise AI deployments.
The imperative of “red-teaming”
Organizations approaching this threshold must implement enhanced risk management processes designed to identify potential misuse vectors, security vulnerabilities, and unintended capability emergence. This will require a systematic “red-teaming” approach. “Red-teaming”—in this context, proactively simulating adversarial attacks on AI applications to identify potential weaknesses before malicious actors can exploit them—had been seen as voluntary best practice but now becomes a structured regulatory requirement, with significant attendant costs.
Operationalizing compliance: Embedding requirements in engineering workflows
The EU AI Act's comprehensive requirements demand integration with existing engineering workflows rather than parallel compliance processes. The challenge for senior enterprise executives lies in embedding governance requirements into development practices without creating the type of friction that might slow performance or innovation and corrode competitive advantage.
Organizations can use the EU’s Code of Practice frameworks to demonstrate systematic risk management and technical documentation practices. Effective GPAI compliance requires operationalized integration with the “continuous integration and continuous deployment” (CI/CD) pipelines used in modern AI development—meaning compliance checks, documentation generation and storage and risk assessments must operate alongside technical testing and deployment processes.
Fides, the set of tools built by Ethyca for enterprise-grade engineered data governance, powers automated monitoring and evaluation frameworks that enable system-level data integration and mitigate risk for regulatory compliance across the whole of an organization.
Getting started: Four key pillars of GPAI compliance
For enterprise leaders navigating enterprise AI deployment, understanding four key elements is essential for building compliance strategies that can scale with organizational AI adoption:
- Training data summary publication: As detailed in our previous post, providers must publish sufficiently detailed summaries of training datasets used for model development. This transparency requirement should create new competitive dynamics around proprietary data advantages.
- Technical documentation for authorities and downstream providers: Organizations must maintain detailed documentation of a wide range of practices, including training and testing processes, evaluation results, model architecture, design specs, data sources, computational resources and energy consumption.
- Copyright compliance policy implementation: GPAI providers must establish and implement policies ensuring respect for copyright and related rights throughout the AI development lifecycle, including systematic identification and removal of protected content.
- Cooperation with regulatory authorities: Organizations must establish processes for ongoing cooperation with EU regulators, including response to information requests, compliance demonstrations and participation in regulatory oversight activities.
These four cornerstones create interconnected obligations that require organizations to systematically integrate their compliance processes across technical, legal, governance and operations teams.
The most successful organizations will treat the EU AI Act not as yet another compliance burden, but as an opportunity to strengthen their AI governance infrastructure. Rather than viewing such requirements as external obligations (i.e. a cost), they will integrate systematic risk management, documentation and oversight into core AI development capabilities, transforming obligations into opportunity.
This transformation requires treating governance as systematic infrastructure rather than manual process. Leaving behind manual procedures that create development friction, and instead building systematic capabilities that scale seamlessly with the pace of enterprise AI development and deployment, will become a critical foundation for enterprise success.
Ready to understand how these compliance requirements can transform into automated governance infrastructure? Read Part 3 of our EU AI Act series: "Machine-Readable Governance: How the EU AI Act Will Force Live System Compliance."
About Ethyca: Ethyca is the trusted data layer for enterprise AI, providing unified privacy, governance and AI oversight infrastructure that enables organizations to confidently scale AI initiatives while maintaining compliance across evolving regulatory landscapes.
.jpeg?rect=270,0,2160,2160&w=320&h=320&fit=min&auto=format)


.jpeg?rect=1050,0,2700,2700&w=320&h=320&fit=min&auto=format)