Building on the UK’s vision for a “pro-innovation approach” to regulating AI, the government's new AI white paper sets out more details. A consultation inviting active stakeholder engagement is now running until 21 June 2023. In marked contrast to the EU with its dedicated AI Act, the UK is proposing a regulatory framework, but no specific legislation, as it seeks to avoid "unnecessary burdens for businesses”. It is essentially taking a “bottom-up” approach, while the EU prefers the more prescriptive “top-down” route. We take a look at the impact of the UK proposals versus the EU’s.
The ambition
The UK has identified AI as one of five critical technologies and notes that it presents risks to human rights (e.g. deepfakes), safety (e.g. chatbots recommending a dangerous activity), fairness (e.g. algorithmic bias in credit applications), privacy and agency (e.g. from surveillance via connected home devices), societal wellbeing (e.g. AI-generated disinformation) and security (e.g. automation and magnification of cyberattacks). Through its “common sense, outcomes-oriented” approach to regulation, the government is aiming to build trust in the technology in order to accelerate adoption, and to maintain the UK’s position as a global AI leader.
The EU has also identified that AI can bring important economic and societal benefits, but links the promotion of its uptake to the need to address associated risks by protecting individuals and building trust in AI systems. To achieve this double objective, the proposed regime extends beyond regulating the use of the technology - and includes the introduction of an AI specific liability regime focused on compensating potential harms caused by AI.
The UK approach
A new framework to be built out
As we discussed in this Linklaters report, AI technologies are currently regulated through a complicated patchwork of law and regulation. The UK is now proposing a new framework to support “responsible innovation”.
Responses to the consultation will inform how this framework is built out, but we know that the government is keen to take a “pragmatic, proportional” approach. Much like the UK financial services regulatory regime, it is proposing a “principles-based” framework, which is intended to be context specific and to regulate the use of the technology, not the technology itself.
Regulator led
The UK government will issue a non-statutory definition of AI for regulatory purposes and a set of high-level overarching principles. It will then be up to existing regulators to determine how to work the principles into their existing regimes and provide guidance specific to their sectors. These will include the Information Commissioner's Office, Competition and Markets Authority, Ofcom, Financial Conduct Authority, Bank of England and Medicines and Healthcare products Regulatory Agency.
Central government support will monitor and evaluate the emerging regulatory landscape around AI, with a view to identifying any gaps in regulatory coverage and supporting coordination and collaboration between the regulators.
The UK has been a leader in digital regulatory cooperation and some good work has already been done by its unique Digital Regulation Cooperation Forum, for example with respect to shared perspectives on the impact of algorithms. However, how successfully “gaps” will be identified and managed remains to be seen, and is arguably one of the weakest points in the UK approach.
A principles-based, agile and iterative approach
The UK framework will be built around five general principles and is “deliberately designed to be flexible” in order to adjust as the technology develops. The proposed framework will be aligned with, and supplemented by, a variety of tools for trustworthy AI - such as assurance techniques, voluntary guidance and technical standards.
There is nothing much new in these principles, which are a condensed form of the six principles originally proposed by the Department for Digital, Culture, Media and Sport last year – and we have had soft law principles for many years now based around ethical objectives (see chapter 2 of our report).
However, arguments for a flexible approach are perhaps supported by the explosion of recent concerns around generative AI such as ChatGPT, which has led, for example, to a ChatGPT ban in Italy. The government will be doing further work to explore risks and regulatory needs in the context of foundation models.
Greater regulatory clarity?
The intention is to provide “industry with the clarity needed to innovate”, although at this stage we have little detail on how that will be achieved. The five principles will not be put on a statutory footing until an “initial period of implementation” [as yet undefined] has passed and “when parliamentary time allows”.
When introduced, this statutory duty will require regulators to have “due regard to the principles”. It will be interesting to see if industry considers this sufficient “clarity” on what is currently a complicated legal and regulatory landscape to navigate.
The EU’s approach
Pro-individual and prescriptive
While most regimes consider themselves to be “pro-innovation”, there is real concern that the more prescriptive EU approach could stifle AI adoption. The regime extends across many sectors, is wide-ranging with a very broad definition of AI (although this will likely be brought more in line with the OECD definition in the final version of the text) and prescriptive for high-risk use cases. Penalties for breaches are severe, and new regulatory bodies are being set up to enforce the regime.
Predictable and world leading?
However, the upside to the more prescriptive approach is greater predictability. In particular, EU officials often insist that the regime also provides legal certainty for non-high-risk use cases. As the front runner in digital regulation, the EU could be setting the de facto standard for others to follow, much as it did with the GDPR.
Looking ahead
The UK is still in the early stages of addressing the challenge of how to regulate a fast-evolving frontier technology. It is certainly taking plenty of steps to actively engage with industry and regulators, and to present itself as supportive of the tech sector. The EU is further down the line with its regime as a standard setter. However, its prescriptive approach raises the risk of discouraging AI investment and innovation in the EU, and may prevent it from adapting quickly to technological developments.
We will follow with interest as the journey to regulate AI on either side of the Channel continues.