With Biden's Executive Order on AI published earlier this week, and the UK's global AI summit underway at Bletchley Park today, AI is topping the news agenda. Meanwhile, regulators continue the hard work of grappling with how best to regulate this rapidly evolving technology. As addressed in our recent thought leadership report AI in Financial Services 3.0: Managing machines in an evolving legal landscape, financial regulators are particularly focused on the use of AI in finance - a data-rich industry in which the deployment of machine learning is advanced.

Work by the UK financial regulators

Last week the Bank of England, FCA and PRA issued a feedback statement in response to their joint discussion paper on artificial intelligence and machine learning. The latest update does not include policy proposals but instead summarises the responses to the discussion paper. These responses include valuable insights into where the financial services industry would appreciate further guidance on AI adoption.

The discussion paper (DP5/22), published last year, invited comments on how best to define AI, on the benefits, risks and harms of AI, and on how regulation can support the safe and responsible adoption of AI. Now, in a feedback statement (FS2/23), the regulators share the themes from the responses they received.

Regulatory definition and objectives

  • No need for a regulatory definition: Respondents agreed with the regulators that a strict regulatory definition of AI is unnecessary and unhelpful, not least because of the pace of technology development. Instead, a technology-neutral, principles-based or risk-based approach that focuses on AI's specific characteristics or risks could better support the safe and responsible adoption of AI in financial services.
     
  • Dynamic guidance for dynamic AI: Regulators should consider creating and maintaining ‘live’ guidance, including periodically updated best practices, which can adapt to the rapidly changing AI landscape.
     
  • Ongoing industry engagement: Respondents praised initiatives like the AI Public Private Forum and recommended using it as a template for future public-private collaboration.

Potential risks and benefits

  • Consumer outcomes: Respondents emphasised that regulation and supervision should prioritise consumer protection, particularly in terms of fairness and other ethical dimensions. Though AI could benefit consumers, it also creates risks such as bias, discrimination and a lack of explainability and transparency, which could lead to the exploitation of consumers. This is especially relevant given the regulators’ current focus on firms’ implementation of the Consumer Duty.
     
  • Governance structures: Respondents suggested that the most salient risk for firms is insufficient oversight. Respondents felt that current firm governance structures, and regulatory frameworks like the SMCR, could be adequate for addressing AI risks but sought actionable guidance on how to interpret the ‘reasonable steps’ element of the SMCR in an AI context.
     
  • Third party risks: According to the feedback, third-party providers of AI software/tools do not always provide sufficient information to allow for effective oversight of their services. Third-party exposure could increase systemic risks too. Given this, and the increased complexity of models, respondents asked for further guidance and also mentioned the relevance of the incoming operational resilience critical third party regime.
     
  • Model risk management: Respondents deemed the principles proposed in PRA SS1/23 sufficient for AI model risk but suggested certain areas could be strengthened or clarified to address AI-specific issues.
     
  • Joined-up approach: Respondents suggested firms adopt a joined-up approach across business units and functions, including closer collaboration between data management and model risk management teams.

Regulatory barriers and concerns 

  • Complex landscape: Respondents expressed concerns about the complexity and fragmentation of AI regulations, calling for better coordination and alignment between regulators (both domestic and international). Regulatory barriers around data protection and privacy could hinder the adoption of AI, and respondents argued that an industry-wide standard for data quality should be developed. They also asked for more practical and actionable guidance, such as illustrative case studies covering the more complex and confusing areas of AI regulation.
     
  • Data regulation concerns: The majority of respondents highlighted the fragmented nature of data regulations and emphasised the need for regulatory alignment to address data risks associated with fairness, bias, and the management of protected characteristics. Further guidance was sought on interpreting the Equality Act 2010 and the FCA Consumer Duty vis-à-vis fairness in the context of AI models.

Work by the UK competition regulator

It's not just the financial regulators who are considering the complex issues relating to the regulation of AI. See our earlier post on the work that the Competition and Markets Authority has done with respect to foundation models. The CMA's initial report follows the UK Government’s white paper on AI, published in March 2023, and the CMA’s response to that white paper, published in June 2023.