The key digital regulators continue to support the UK’s “pro-innovation” approach to regulating AI, according to recent updates. There is consensus that – at this point – they can mitigate existing AI risks through their current frameworks and forthcoming powers. There is, however, plenty of work still to do to understand the impact of this transformative technology on consumers and markets.
In the financial services sector, the regulators are keeping close tabs on third party risks. This is one of the messages from FCA and Bank of England updates which set out their latest thinking on regulating AI in financial services. The papers can help firms map AI risks against existing rules but they stop short of providing practical guidance for the industry on best practice for adopting AI.
The regulatory ask
In contrast to the EU’s AI Act, the UK is taking a guidance-based, sector-by-sector approach to AI regulation. Last year the government released a white paper on AI regulation, encouraging regulators to align their AI strategies with five core principles based on fairness, transparency and accountability (read more).
This year the UK government published its response to the consultation on its AI white paper, reaffirming the flexible, principles-based approach and calling on regulators to outline their strategic approach to AI by the end of April.
The regulatory AI workstream
The four members of the UK’s Digital Regulation Cooperation Forum (DRCF) have been collaborating on AI and pursuing their own AI workstreams over the past year:
- The DRCF published a paper on Fairness in AI: A View from the DRCF and has now launched its AI and Digital Hub to support digital innovators. It is hoped that this will increase regulatory understanding of industry challenges on AI, particularly for SMEs which do not have the same capacity for regulatory engagement as larger companies.
- The Competition and Markets Authority has published a recent update paper on foundation models, noting concerns about the risks posed by AI to competition and consumer protection (read more), and has issued its strategic update on AI. This considers how the CMA is addressing AI risks to competition and consumers – across all sectors – both through its existing powers and those it will assume under the Digital Markets, Competition and Consumers Bill.
- The Information Commissioner’s Office has also issued its response to the AI white paper, which confirms its long-held view that existing data protection legislation, being flexible and principles-based, can work to regulate AI. It confirms that the principles set out in the AI regulation white paper consultation mirror, to a large extent, the statutory principles that the ICO already oversees. The ICO expects to consult on updates to its Guidance on AI and Data Protection, and to its guidance on automated decision-making and profiling, to reflect changes to data protection law following the passage of the Data Protection and Digital Information Bill.
- Ofcom has issued its Strategic approach to AI 2024/25 which draws parallels between the five principles the government has put forward on AI and the key outcomes it would like to see with respect to online safety. The regulator highlights a number of existing powers, including those under the new Online Safety Act, which can be used to regulate services that use AI technologies.
- In its AI update, the Financial Conduct Authority explains its current approach to AI and what it plans to do in the year to come. The letter from the Bank of England / Prudential Regulation Authority confirms that it plans to work together with the DRCF on upcoming AI projects.
As well as confirming their approach to managing AI risks, a consistent theme from all the responses is that AI is changing the way that regulators regulate. They are investing in the technology to improve how they supervise markets and some are exploring use cases for GenAI.
Impact for AI in financial services
We have been following the evolving regulatory landscape of AI in financial services. Here we take a closer look at the updates from the finance regulators.
The FCA’s approach: Eyeing Big Tech
The FCA released its AI update alongside policy announcements relating to Big Tech. Building on its work on data asymmetry, the FCA plans to gauge the value of Big Tech firms’ data sets to financial services. It will assess the competition risks arising from any concentration of third party technology services. It is also working with the Bank of England to set operational resilience standards for “critical third parties” to the financial sector.
Specifically on AI, the FCA’s fundamental approach is that many of the risks relating to AI can be mitigated within its existing rulebook. In the table below we have mapped out how key aspects of the FCA’s rules relate to the principles in the government’s white paper, which demonstrates coverage across all five. The FCA also highlights that other areas of law and regulation should be considered, including data protection standards under the GDPR.
| | Safety, security & robustness | Fairness | Transparency & explainability | Accountability & governance | Contestability & redress |
| --- | --- | --- | --- | --- | --- |
| FCA Principles | ✔ | ✔ | ✔ | ✔ | |
| Consumer Duty | | ✔ | ✔ | | ✔ |
| Licence threshold conditions | ✔ | ✔ | | ✔ | |
| Senior Managers Regime (SMCR) | | | | ✔ | |
| Systems and controls rules, e.g. on operational resilience | ✔ | | | ✔ | |
| Complaints-handling procedures | | | | | ✔ |
This underscores the flexibility of the FCA’s outcomes-based approach to regulation. Obligations on regulated firms – such as having adequate risk management systems and paying due regard to the information needs of clients – can be applied to AI use cases today, even in the absence of a broader AI regulatory framework.
The Bank of England’s approach: Technology-agnostic but not technology-blind
Like the FCA, the Bank of England (including in its role as the PRA) does not mandate or prohibit specific technologies in its rulebook. Its response to the government notes, however, that it needs to understand and address risks that may arise relating to specific technologies. For example, according to the Bank of England, AI providers may in future take on systemic importance to the financial sector and be brought into the critical third parties regime as a result.
Given that its regulatory remit extends beyond consumer protection to financial markets, the Bank of England is also exploring the impact of AI on financial stability (read more). Its Financial Policy Committee will undertake deeper analysis of the potential financial stability implications of AI this year.
Reaching out
Both the FCA and Bank of England recognise that they need to continue to collaborate with a diverse set of stakeholders. The Bank of England is considering establishing a new AI Consortium as a follow-up to the AI Public-Private Forum (AIPPF). It is also working with the FCA to run a third instalment of its survey on machine learning in financial services. Meanwhile, both regulators are engaging with DRCF projects, for example to conduct research into deepfakes and to better understand the adoption of generative AI technology.
Looking ahead
For those in the UK financial services sector, the latest papers confirm that no new regulation is on the immediate horizon. Regulated firms should therefore focus on ensuring they are in line with current rulebooks when deploying AI and should take note of the ICO’s guidance with respect to personal data and AI/automated decision making and profiling.
Companies in all sectors should stay abreast of developments in this space, as it is clear that regulators are hard at work and the regulatory landscape is very much evolving.
Some big questions still remain: not least, the extent to which the UK’s AI rules will harmonise with the EU’s more prescriptive regime under the AI Act, and whether a change in government could alter the course of the UK’s regulatory approach. Our cross-disciplinary tech sector team are monitoring with interest and will keep you updated.