The DRCF has this week provided an update on the launch of a new AI and Digital Hub, initially announced in September 2023. Now expected to launch in the first half of 2024 as a 12-month pilot, the Hub will provide a forum for innovators looking to deploy AI models and other digital tools. It will allow a broad range of innovators to test their ideas and put complex cross-regulatory questions to the four key digital regulators through a single access point, and to receive tailored support in response.

The UK’s approach: “more agile AI regulation”

As examined in our recent thought leadership report, the challenge of how to regulate the explosion of AI, driven by developments in generative AI (read more), is front of mind for regulators globally. 

The UK has proposed taking a ‘pro-innovation’, principles-based rather than rules-based approach to regulating AI. Rather than following the EU’s “horizontal approach” with its cross-sectoral AI Act, the Government set out, in a White Paper on AI regulation published in March 2023 (read more), a vision for a more “vertical”, sector-by-sector approach to be led by key UK regulators. Specifically, the Government proposed five cross-sectoral principles for existing regulators to interpret and apply within their remits.

Last week, in its response to the consultation that accompanied the AI White Paper, the Government confirmed that it will “back regulators with the skills and tools they need to address the risks and opportunities of AI”, with plans to spend over £100m on research and training on AI.

Digital regulatory cooperation

With the recent growth in consumers’ use of, and familiarity with, AI foundation models such as ChatGPT, regulators are conscious of the need to address possible competition, consumer, data protection and online harms at source and rapidly. But how can regulators shape regulation to prevent unintentional harmful effects of technology and protect competition, consumers and citizens, whilst remaining sufficiently flexible to encourage innovation and technological development? Sarah Cardell, CEO of the CMA, has emphasised the need for regulators to be proactive and “put ourselves at the forefront of that thinking, rather than waiting for problems to emerge and only then stepping in with corrective measures”.

We have long discussed the desire for regulatory cooperation in digital markets, both the need for business to navigate various regulations and the reality that rules can, at times, conflict with one another. The UK has a unique approach to this challenge through the DRCF, which was designed to overcome these hurdles and provide authorities (the Competition and Markets Authority, Information Commissioner’s Office, Ofcom and Financial Conduct Authority) with an avenue to jointly address key cross-regulatory issues. And the DRCF has taken up the challenge on AI, announcing in its 2023/24 work plan that joint work on AI is a key theme, with each DRCF member regulator having an active programme of work on AI and algorithms.

Competition regulator's response

The CMA and ICO issued a joint position paper in August 2023 on the design and use of harmful online choice architecture, swiftly followed by the CMA’s initial report on AI foundation models in September, which set out a principles-based approach to AI regulation. The CMA has begun engagement in the UK, US and elsewhere to gather views on the competition and consumer protection principles announced in the report, and we expect an update on the CMA’s approach to the regulation of AI foundation models in March 2024.

We have seen ongoing discussions around how AI will fit into the new Digital Markets, Competition and Consumers Bill (currently at the House of Lords Committee stage), which will also regulate the design of online choice architecture through consumer protection regulations. In addition, we expect an upcoming joint statement between the CMA and ICO, considering areas of regulatory overlap between competition and consumer protection objectives, to be published in spring of this year. So this is an extremely busy and dynamic space in the regulatory landscape.

Data regulator's response

The ICO has been focusing on AI for some time. It issued its first guidance on the interaction between data protection and AI back in 2014 and has been refining and developing that guidance ever since, with the most recent guidance issued in March 2023.

The ICO has also operated a Regulatory Sandbox for innovative technology (including AI) for many years, and that acts as a form of precursor to the Hub. The sandbox process has been used by many organisations, though it can take time to submit a project to the sandbox and complete the sandbox plan, which is sometimes difficult to square with the rapid pace of technological change.

Financial regulators’ response 

Even prior to the AI White Paper, the Bank of England and the Financial Conduct Authority were engaging actively with industry through an AI Public-Private Forum to consider whether AI-specific regulation is needed in the financial services sector. That resulted in a joint Bank of England, Prudential Regulation Authority and FCA discussion paper published in October 2022, which posed a number of questions to help inform policy making in this area, and in October 2023 a feedback statement providing insights on where the industry would appreciate further guidance on AI adoption (read more).

How the Hub will work and who will be eligible

During the 12-month pilot phase, innovators will be able to apply to the pilot AI and Digital Hub via a dedicated page on the DRCF website. Businesses will apply to pose questions to the Hub, and successful applicants will be able to engage directly with the four DRCF members through a single touch-point and receive tailored advice on their queries. A wider pool of stakeholders will indirectly benefit from the Hub through an accompanying case study archive, which will include anonymised examples of support provided to individual firms, as well as other relevant materials from across DRCF members which innovators may find helpful. Applicants’ proposals must meet the following eligibility criteria to be considered by the Hub:

  • Have an AI and/or digital focus;
  • Fall within the remit of at least two DRCF member regulators;
  • Be innovative; and
  • Benefit consumers, businesses and/or the UK economy.

The DRCF has indicated that it will interpret the term “innovative” broadly, catching both radical and incremental advances to business models, services, products or processes, to ensure that the Hub is attractive and accessible to a broad range of innovators.

The pilot will allow DRCF members to discuss a coordinated route forward, and wrestle with potentially divergent priorities and ambitions at source and collaboratively – for example, balancing the desire to provide transparency in the workings of AI models and fair competition against protecting consumers’ data privacy.

Catching up - watch this space

The latest development marks a step forward in the UK’s regulatory approach, bringing it closer to other regulators such as the European Commission, which already has a proposed trio of inter-related legal initiatives to help build trustworthy AI (through its AI Act (read more) and proposed Product Liability Directive and AI Liability Directive (read more)), and the US, which through Biden’s Executive Order has charged a swathe of US regulators with developing AI-related safety standards, tools and tests (read more).

How effective the Hub will be in supporting innovators will hinge on how quickly the DRCF can onboard applicants and answer their questions, and how pragmatic that advice proves to be, particularly given how tricky it can be to apply broad existing laws to new technology.

More details about the Hub service, including how to apply, will be published on the DRCF website closer to launch.