EU AI Act GPAI Code of Practice: The Compliance Playbook Has Arrived
The European AI Office released the first draft of the General-Purpose AI Code of Practice — the implementation document that translates the EU AI Act's GPAI provisions into specific compliance procedures. This is not a policy statement or a set of guiding principles. It is an operational playbook with specific requirements, standardized documentation templates, and testing protocols that define what compliance actually looks like in practice. For the many organizations that have been waiting for regulatory clarity before beginning AI governance work, that clarity has now arrived. The waiting period is over.
What the First Draft Actually Requires
The GPAI Code of Practice draft specifies four categories of obligation that go beyond what most enterprise AI governance frameworks currently address:

1. Mandatory capability evaluations covering reasoning, autonomy, persuasive capability, and cybersecurity potential must be completed before model deployment, not after.
2. Technical documentation with standardized templates must be maintained for all GPAI deployments, covering training data sources, capability assessments, and known limitations.
3. Incident reporting protocols require 72-hour notification timelines for high-impact incidents, with a definition of "high-impact" that is broader than most legal teams have been assuming.
4. Upstream supply chain transparency obligations require organizations that build on third-party foundation models to document and disclose those dependencies.
What Changes for Enterprise Legal and Compliance Teams
The Code of Practice moves GPAI compliance from a principle-based interpretive exercise to a documentation exercise with specific deliverables. Legal teams can now audit against defined requirements rather than attempting to interpret broad regulatory language. Compliance gaps are identifiable — organizations can map current AI governance practices against the specific checklist the Code of Practice creates, and identify precisely what needs to change. The organizations that have been building AI governance frameworks proactively over the past 18 months will find their existing work maps reasonably well to the Code of Practice requirements. The organizations that have been waiting for clarity will find they have meaningful work to do quickly.
Timeline Pressure Is Real
The EU AI Act's GPAI provisions are not hypothetical future requirements. The Act entered into force in August 2024, and its GPAI obligations become applicable in August 2025, a deadline that does not move regardless of when the Code of Practice is finalized. The Code of Practice is the implementation document that tells organizations what compliance looks like, but the obligations themselves flow from the Act, not from the Code. Organizations that interpret the publication of the Code of Practice as the start of their compliance timeline, rather than the clarification of a timeline already set, are materially misreading the regulatory calendar. The European AI Office has signaled that enforcement activity will intensify through 2025, with early enforcement attention likely to fall on organizations that have made no visible compliance progress.
The Supply Chain Dimension Most Organizations Are Missing
The upstream supply chain transparency requirements in the Code of Practice create an obligation that extends beyond an organization's own AI deployments to the AI components embedded in software they purchase and deploy. Enterprise software vendors are already beginning to receive requests from customers for GPAI compliance documentation for AI features embedded in their products. Organizations that have not yet assessed their AI supply chain — the foundation models, AI APIs, and AI-embedded software that their operations depend on — have a compliance gap that is not visible in any internal AI governance review.
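A supply-chain assessment of this kind reduces, at its simplest, to an inventory with a per-component flag for whether GPAI compliance documentation is on file. The sketch below assumes a hypothetical internal record format; the vendor and component names are illustrative placeholders.

```python
# Hypothetical AI supply-chain inventory. Vendor/component names and the
# record format are illustrative assumptions, not real compliance data.
inventory = [
    {"component": "foundation-model-api", "vendor": "VendorA",
     "gpai_docs_received": True},
    {"component": "crm-ai-assistant", "vendor": "VendorB",
     "gpai_docs_received": False},
    {"component": "code-completion-plugin", "vendor": "VendorC",
     "gpai_docs_received": False},
]

def missing_documentation(items: list[dict]) -> list[str]:
    """Components with no GPAI compliance documentation on file.
    Each entry returned is a gap that no internal-only AI governance
    review would surface, because the AI lives in purchased software."""
    return [i["component"] for i in items if not i["gpai_docs_received"]]
```

Even a flat list like this makes the gap visible: the review question changes from "what AI have we built?" to "what AI have we bought, and can its vendor document it?"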
ZeroForce Perspective
The board directive is straightforward: commission a gap analysis against the GPAI Code of Practice requirements within the next 60 days. The analysis should cover three areas: internal AI systems subject to GPAI obligations, third-party AI deployed internally, and AI-embedded software in the enterprise vendor stack. Its output should be a prioritized remediation roadmap with cost and timeline estimates. This is not optional forward-looking governance work; it is compliance risk assessment for obligations whose application date is fixed in the Act. The organizations that complete this analysis in Q1 will be in a materially better position than those that complete it after receiving an enforcement inquiry.
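One way to turn the gap analysis into a prioritized roadmap is to rank findings by severity against remediation effort. This is purely a sketch: the scoring scheme (severity 1-5, effort in weeks) and the example entries are invented for illustration, not a ZeroForce methodology.

```python
# Hypothetical gap-analysis findings across the three areas named above.
# Severity and effort values are illustrative assumptions.
gaps = [
    {"area": "internal GPAI systems", "item": "pre-deployment capability evaluations",
     "severity": 5, "effort_weeks": 8},
    {"area": "third-party AI deployed internally", "item": "vendor GPAI documentation",
     "severity": 4, "effort_weeks": 4},
    {"area": "AI-embedded vendor software", "item": "supply-chain inventory",
     "severity": 3, "effort_weeks": 2},
]

def prioritize(findings: list[dict]) -> list[dict]:
    """Highest severity first; among equal severity, quickest wins first."""
    return sorted(findings, key=lambda g: (-g["severity"], g["effort_weeks"]))
```

The ordering rule is deliberately simple: regulators ask about the highest-risk gaps first, so the roadmap should close those first even when cheaper items are available.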
How does your organization score on AI autonomy?
The Zero Human Company Score benchmarks your AI readiness against industry peers. Takes 4 minutes. Boardroom-ready output.
Take the ZHC Score →