Today, the European Parliament adopted its position on the AI Act. The EU is now a step closer to passing the first-ever comprehensive rules regulating artificial intelligence systems such as ChatGPT.
The AI Act will now enter trilogue negotiations.
A risk-based framework
As discussed in our previous posts, the EU is in the process of adopting the world's first comprehensive legal framework for regulating AI. The proposed AI Act imposes potentially significant obligations on providers and users based on the level of risk associated with the AI system: the higher the risk, the stricter the rules.
Expansion of prohibited AI practices
The risk-based approach sets out obligations for both providers and those deploying AI systems, depending on the level of risk the AI system can generate.
AI systems that pose an “unacceptable level of risk” would be prohibited outright. The Parliament has expanded the classification of prohibited areas to cover discriminatory and intrusive AI uses. These include emotion recognition systems used in law enforcement, predictive policing systems, and real-time remote biometric identification systems in publicly accessible spaces.
“Post” remote biometric identification systems would only be allowed when strictly necessary for the prosecution of serious crimes and after judicial authorisation.
Expansion of high-risk AI
The Parliament also expanded the classification of “high-risk” areas to include harm to people’s health, safety, fundamental rights, or the environment. For instance, AI systems that influence voter behaviour and the recommender systems used by very large social media platforms were added to the high-risk list.
The text also introduces new requirements that high-risk AI systems undergo a fundamental rights impact assessment and an environmental impact assessment.
Generative AI
In our previous post, we discussed in detail the new obligations imposed by the Parliament on providers of general purpose AI and foundation models.
Providers of foundation models would have to assess and mitigate possible risks and register their models in the EU database before releasing them in the EU.
Generative AI systems based on such models, like ChatGPT, would have to comply with transparency requirements, including disclosing that content was AI-generated. They would also have to make publicly available detailed summaries of the copyrighted data used for their training.
The EU is currently working with international counterparts, including the US and the G7 countries, to align on a voluntary code of conduct for AI.
Next steps
Trilogue negotiations with the Council, brokered by the Commission, are expected to start imminently.
The aim is to reach a political agreement on the AI Act by November 2023. Stay tuned for further updates.