Yesterday, the Internal Market Committee and the Civil Liberties Committee of the European Parliament adopted their draft negotiating position on the first-ever rules regulating artificial intelligence (AI). The EU thus comes a step closer to passing new rules governing AI systems such as ChatGPT.

Adoption of the proposal expected in June

The next step in the legislative process is the adoption of the text by the full Parliament in a plenary vote in mid-June.

Once the Parliament finalises its position, the proposal will enter the last stage of the process: trilogue negotiations between the Parliament, the Council and the Commission.

General purpose AI

In our previous post, we discussed in detail the new obligations the Parliament would impose on providers of general purpose AI and foundation models. These models, which underpin systems such as ChatGPT, have developed rapidly in recent years.

The providers of such AI models would be required to take measures to assess and mitigate risks to health, safety, fundamental rights, the environment, democracy and the rule of law. They would also be subject to data governance requirements, such as examining the suitability of data sources and possible biases.

High-risk AI

The Parliament also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. For instance, the high-risk list now covers AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms with more than 45 million users (those designated as very large online platforms under the Digital Services Act).

The text also introduces a new obligation for users of high-risk AI systems to conduct a fundamental rights impact assessment before putting the system into use.

Risk-based approach and prohibited AI practices

The proposal follows a risk-based approach, setting out obligations for providers and users depending on the level of risk the AI system can generate.

AI systems that pose an unacceptable risk to people’s safety are prohibited. This includes systems that deploy manipulative techniques, exploit people’s vulnerabilities, are used for social scoring or enable intrusive and discriminatory uses.

Stay tuned for more developments!