With its new legislative proposal for a regulation on AI, the EU is seeking to tackle the challenge of Artificial Intelligence head on: balancing the technology's potential to transform our world for the better against its potential to breach our fundamental rights, for example through gender-based or other kinds of discrimination. What is the key to the EU approach? Putting humans at the centre of the development of AI.
A clear and safe framework
The first draft of the regulation, which aims to lay down harmonised rules on AI, is now available. It shows that the EU is looking to create a clear and safe framework within which companies can develop and deploy AI, facilitating innovation and building trust among customers and users.
Trust is a must
Striking a balance between the protection of citizens’ freedoms and the EU’s economic competitiveness is difficult and requires complex evaluations. As Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, recently put it: “On Artificial Intelligence, trust is a must, not a nice to have”.
Trust requires transparency; transparency requires accountability.
The human-centric approach
The Regulation on AI is part of a wider effort to develop human-centric AI, eliminating mistakes and biases to ensure that the technology is safe and trustworthy.
More specifically, the regulation includes requirements to minimise the risk of algorithmic discrimination, in particular in relation to the quality of the data sets used to develop AI systems. It also introduces human oversight of certain AI systems to prevent or minimise risks to health, safety or fundamental rights that may emerge when an AI system is used.
Read more about this in our DigiLinks post What the EU is doing to foster human-centric AI.
Pursuing the twin objective of promoting the development – and deployment – of AI and of addressing the risks associated with certain uses of this new technology implies complex evaluations and difficult ethical and regulatory choices, especially when the risk of discrimination arises.