After extensive negotiations, members of the European Parliament have reached a political agreement on new rules to regulate general purpose artificial intelligence (AI) systems such as ChatGPT.

Key areas covered by the AI Act include general purpose AI systems, prohibited practices, high-risk classification, identification of biases, general principles, and sustainability of high-risk AI systems.

General Purpose AI

While the European Commission's original proposal did not contain any specific provisions on general purpose AI, the meteoric rise of ChatGPT has significantly disrupted the debate in the Parliament. The Parliament’s proposal essentially sets out two regulatory layers: “general purpose AI” and a sub-category of “foundation models”. The latter will be subject to stricter rules.

General purpose AI technologies are rapidly transforming the way AI systems are developed. Such technologies are expected to bring significant benefits and foster innovation in many sectors, such as healthcare, pharma and finance. However, their disruptive nature also raises ethical and legal questions around privacy, intellectual property, liability and accountability, as well as the rule of law.

EU lawmakers need to strike a delicate balance between innovation and regulation. This challenge is nothing new for emerging technologies, but in our experience tracking tech sector legislative developments worldwide, it is a difficult tightrope currently being walked by lawmakers across the globe.

Foundation Models

General purpose AI is defined as an AI system that can be used for, and adapted to, purposes for which it was not intentionally designed. Such systems will be subject to transparency requirements under Article 52 of the AI Act.

By contrast, a foundation model is an AI system trained on broad data at scale to accomplish a wide range of downstream tasks. Foundation models are developed from algorithms designed to optimise for generality and versatility of output. They include generative AI systems such as ChatGPT, which is trained on a vast quantity of data scraped from the web.

The difference between the two concepts relates to how the system is trained, its adaptability, and whether it can be used for purposes it was not designed for. The distinction has practical implications, as foundation models will be subject to stricter rules under the AI Act.

For instance, under a new Article 28b, providers of foundation models will be required to take measures to mitigate risks to health, safety, fundamental rights, the environment, democracy and the rule of law. They will also be subject to data governance requirements, such as examining the suitability of data sources and possible biases. Providers will further need to make publicly available a summary of any use of copyright-protected training data.

The Parliament’s proposal on foundation models has a clear political goal: to regulate ChatGPT-like AI systems. Interestingly, the breadth of legislative objectives in the draft rules – from pure legal obligations to social norm-making to ESG-esque principles – is reminiscent of recent proposals in other markets such as China. Nevertheless, such regulation will be hard to put into practice and may face opposition from the Commission.

Next steps

The proposal might still be subject to minor changes at the technical level. The Parliament aims to adopt its position on the AI Act in Committee on 11 May. A Plenary vote will likely follow in mid-June.

Please reach out to us if you have any questions. Stay tuned for further updates as we see them.