
EU AI Act General Purpose AI model rules now apply, with new Code of Practice designed to support compliance

As of 2 August 2025, several provisions of the EU’s Artificial Intelligence Act relating to General Purpose AI models now apply. Providers of GPAI models in the EU market will need to disclose information to authorities and customers on the data used to train their models, and on their compliance with EU copyright law - and some may also need to manage and mitigate systemic risk at the EU level. While some providers have already begun self-reporting to a degree, the introduction of mandatory reporting and a more streamlined process is a major milestone in the regulation of AI in the EU, and will provide greater visibility of what sits behind a number of GPAI models.

To support compliance with these new requirements, the European Commission has developed the GPAI Code of Practice. This voluntary framework is designed to help AI providers “reduce their administrative burden” and give them “more legal certainty”. The Commission has also released guidelines on the scope of the GPAI aspects of the AI Act.

Compliance obligations for GPAI model providers 

The obligations under the EU AI Act apply in tiers based on the purpose for which the AI system is intended to be used. The Act establishes specific requirements for providers of GPAI models, which are AI models trained on extensive datasets and include some of the most advanced models, such as OpenAI’s GPT-4 and Google’s Gemini.

Definition: What is a GPAI model 

The AI Act defines a GPAI model as an “AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.

The AI Act specifies that the generality of a model may also be assessed based on factors such as:

  • containing at least one billion parameters, and
  • being trained on a large dataset using self-supervision at scale.

The AI Act sets out the following requirements for providers of GPAI models: 

  • Technical documentation: Providers must maintain up-to-date technical documentation detailing the model’s development, training, testing and the results of its evaluation and make this available to the AI Office and national authorities upon request.
  • Information sharing: Providers must supply downstream providers who integrate the GPAI model into their own systems with sufficient documentation, including documentation that enables those providers to understand the capabilities and limitations of the GPAI model.
  • Copyright compliance: Providers must implement policies to comply with EU copyright law.
  • Transparency of the training data: Providers must publish a public summary of the training data used (see below).

Definition: Downstream provider

This refers to a provider of an AI system that integrates an AI model, whether the AI model is provided by the same entity and vertically integrated or supplied by another entity under contractual arrangements.

The AI Act imposes additional, more stringent obligations on GPAI models that pose a systemic risk because of their capabilities (which is presumed in the case of GPAI models trained with more than 10^25 floating point operations, although this threshold is currently under review by the Commission). These obligations include conducting risk assessments and mitigating possible systemic risks at the EU level, reporting incidents and corrective measures, and applying cybersecurity protection. 
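The compute-based presumption described above can be expressed as a simple check. This is an illustrative sketch only: the 10^25 FLOP threshold comes from the AI Act (and is under review by the Commission), but the example compute figures below are hypothetical.

```python
# Illustrative sketch of the AI Act's systemic-risk presumption: a GPAI
# model trained with more than 1e25 floating point operations is presumed
# to pose systemic risk. The threshold is currently under Commission review.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model with this training compute is presumed
    to pose systemic risk under the AI Act."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical training-compute figures, for illustration only
print(presumed_systemic_risk(5e24))  # below threshold -> False
print(presumed_systemic_risk(3e25))  # above threshold -> True
```

Note that the presumption is rebuttable and compute is only a proxy: a provider whose model exceeds the threshold may still argue it does not pose systemic risk, and the Commission can designate models below it.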

Providers of GPAI models, including those posing systemic risk, can rely on codes of practice to demonstrate compliance with the relevant provisions of the AI Act. 

GPAI Code of Practice 

The voluntary Code of Practice has had a mixed reception from the major AI providers: some, like Google, Microsoft and OpenAI, have committed to sign the Code, while others, notably Meta, have decided not to, citing legal uncertainties. However, given that the EU has confirmed there will be no pause in the implementation of the AI Act, GPAI model providers should familiarise themselves with the Code and its complementary documents to facilitate compliance with the applicable provisions of the AI Act. Failure to comply could attract significant fines of 3% of annual total worldwide turnover or EUR 15 million, whichever is greater.
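The "whichever is greater" fine rule is simple arithmetic, sketched below for illustration. Only the 3% rate and the EUR 15 million floor come from the AI Act; the turnover figures in the example are hypothetical.

```python
# Illustrative calculation of the maximum fine for GPAI non-compliance
# under the AI Act: 3% of annual total worldwide turnover or EUR 15
# million, whichever is greater.

FINE_FLOOR_EUR = 15_000_000

def max_gpai_fine(annual_worldwide_turnover_eur: int) -> float:
    """Upper bound of the fine for a given annual worldwide turnover,
    in EUR. Integer multiplication before division keeps the result exact."""
    return max(annual_worldwide_turnover_eur * 3 / 100, FINE_FLOOR_EUR)

# Hypothetical turnovers, for illustration only
print(max_gpai_fine(100_000_000))    # 3% = EUR 3m, so the EUR 15m floor applies
print(max_gpai_fine(2_000_000_000))  # 3% = EUR 60m, which exceeds the floor
```

The floor means that smaller providers face the same EUR 15 million maximum regardless of turnover; above EUR 500 million in annual turnover, the 3% figure takes over.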

The Code of Practice is the result of a long and fraught multi-stakeholder process which included participants from industry, academia, civil society, rightsholders, and EU Member States. Prepared by independent experts, the Code consists of three chapters which address and support compliance with the obligations set out in the AI Act for providers of GPAI models. 

The three chapters and their commitments are as follows:

Transparency

  • Draw up and keep up to date model documentation containing at least all the information referred to in the Model Documentation Form provided in the Transparency chapter
  • Provide relevant information contained in the Model Documentation Form to the AI Office and downstream providers
  • Ensure the quality, integrity and security of that information

Copyright

  • Draw up, keep up-to-date and implement a copyright policy
  • Reproduce and extract only lawfully accessible copyright-protected content when crawling the World Wide Web
  • Identify and comply with rights reservations when crawling the World Wide Web
  • Mitigate the risk of copyright-infringing outputs
  • Designate a point of contact and enable the lodging of complaints

Safety and security

NB: this chapter applies only to providers of systemic risk GPAI models

  • Conduct ongoing systemic risk assessment and mitigation, proportionate to the risks posed
  • The longest and most detailed of the three chapters, it contains multiple commitments, including adoption of a state-of-the-art Safety and Security Framework, identification and analysis of the systemic risks stemming from the model, and implementation of appropriate safety and security mitigations and reporting throughout the model lifecycle

Code of Practice voluntary and non-binding

Providers may voluntarily adhere to the Code of Practice, which is intended to help them demonstrate compliance with the AI Act’s obligations for GPAI models by clarifying what they need to do to comply. The Commission notes that “providers who sign and adhere to the Code will benefit from reduced burden and increased legal certainty”. The Code of Practice is not legally binding, and providers may choose not to adhere, in which case they must be able to demonstrate compliance through alternative adequate means.

The Code of Practice can be used to demonstrate compliance until the Commission issues harmonised standards (timing yet to be confirmed). Unlike those standards, however, the Code of Practice does not grant providers a “presumption of conformity” with the obligations set by the AI Act.

GPAI model guidelines and training data template

The Code of Practice is complemented by the GPAI model guidelines which aim to clarify the obligations of GPAI model providers under the AI Act and outline the Commission’s interpretation of the relevant provisions for GPAI systems. In particular, the guidelines consider: 

  • Procedures for transmitting information to providers of AI systems wishing to incorporate the model into their own AI systems
  • Clarification of the definitions of certain technical terms (for example, "adaptability" and "autonomy")
  • Assessment and mitigation of risks to fundamental rights and safety posed by AI models with systemic risk
  • Clarification of the concepts of "provider" and "placing on the market"

The guidelines also specify the boundaries of the definition of a GPAI model, relying on the amount of computational resources used to train the model.
The Commission has also recently published a training data template to help GPAI model providers comply with the transparency requirements of the AI Act. Use of the template is mandatory; it aims to provide a minimum common basis for the information that must be made publicly available in the training content summary for GPAI models, and will ultimately help copyright holders exercise their rights more effectively by giving them greater visibility into whether their protected works were included in the training data.

Providers of GPAI models must make the following information available to the public:

  • General information: identification of the provider, model type
  • List of data sources: public data, private data, and so on
  • Details about data processing and compliance with third-party rights

For a modified AI model, these obligations are limited to the training data used for the modification. Each section of the template contains the information that must be supplied, but the Commission also encourages providers to voluntarily supply additional information. 

Looking ahead 

Member States are currently assessing the adequacy of the Code of Practice and its alignment with the AI Act. Irrespective of the outcome of this assessment, the GPAI model rules have now taken effect and must be complied with by GPAI model providers (although an additional two-year grace period applies to GPAI models already on the market before 2 August 2025).

Given the risk of enforcement and the draconian fines that can be imposed under the AI Act, regulatory readiness is key. Please reach out to us to discuss how we can support you in this process.

Tags

general purpose ai, gpai, ai