Yesterday Jennifer Calver participated in a panel chaired by Mohammed Gharbawi, Head of the Fintech Hub at the Bank of England, at a conference on Regulation and Risk Management of AI in Financial Services. Following the recent publication of the Bank of England and FCA's joint Discussion Paper on this topic, one of the main questions discussed was whether our existing regulatory regimes are sufficient to tackle the novel risks presented by AI, or whether a standalone, more bespoke regime is needed.

This question has become more pressing since the EU put forward its bold proposal for the first cross-sectoral regulatory regime for AI. This comprises specific legislation in the form of an AI Act - potentially imposing burdensome restrictions on what it considers "high risk" AI - and a directive on AI liability to make it easier to bring claims for AI-related harm (read more in our DigiLinks blog). Both pieces of legislation could impact financial services.

We heard from Jessica Rusu, Chief Data, Information and Intelligence Officer at the FCA, who agreed with our view that many of the rules needed to govern such technology in financial services are already in place. We consider that the UK is well placed on the world stage to tackle the complex regulatory challenge in this area by continuing its approach of deep and transparent engagement with industry, open debate and cross-sectoral regulatory cooperation (for example, through the Digital Regulation Cooperation Forum).

The multidisciplinary community around AI in UK financial services is highly engaged with the question of how to ensure safe and ethical adoption, and how best to achieve this through regulation and risk management. There seemed to be general consensus in the room, from regulators and industry participants alike, that the UK can carve its own "pro-innovation" path in AI regulation. We should look to leverage our less prescriptive, more adaptive, principles-based approach to financial regulation. The next step is to agree what practical guidance is needed to help the industry achieve compliance when using AI, and to identify whether there are regulatory gaps that need filling.

At Linklaters we have spent the last few years considering the compliance challenge for firms adopting AI tools in finance, both at a global level (see our report) and specifically with respect to the UK (see our article). There is significant regulatory complexity, including data protection and competition considerations. Looking ahead, any new guidance must focus on delivering the regulatory clarity and certainty that the industry needs to thrive.

The Bank of England and the FCA are inviting responses to the Discussion Paper by February 2023. We look forward to seeing the outputs from that work and to continuing to be part of the conversation.