As readers of our report on AI in financial services will know, the financial services sector has been experimenting with AI for many years. The rise of large language models and generative AI tools like ChatGPT has given momentum to firms experimenting with AI in new areas. This has prompted a range of regulators to warn of the potential financial stability implications of AI.
01| Concentrating on concentration risks
The Bank for International Settlements has just released a bulletin on AI in central banking. It explores the opportunities and challenges for central banks created by AI and the new generation of machine learning techniques.
One interesting point for financial services firms to note is the connection the paper draws between AI, concentration risk and financial stability risk. This is relevant to regulators and financial firms alike as they increasingly rely on external LLMs or GenAI models supplied by a handful of providers.
The BIS draws attention to the risks that arise if the same few best-in-class algorithms are used by many institutions. In stress situations, for example, those institutions' behaviour might look increasingly alike and lead to undesirable phenomena such as liquidity hoarding, interbank runs and fire sales.
02| Staying ahead of the risks
The BIS is not the only body drawing attention to the financial stability risks raised by AI.
In the US, the Financial Stability Oversight Council has for the first time identified the use of AI in financial services as a vulnerability in the financial system. In its 2023 annual report, the FSOC recognises the potential benefits offered by AI in reducing costs and improving efficiencies, but also notes that AI can introduce risks, including safety and soundness risks such as cyber and model risks.
The FSOC has recommended keeping a watchful eye on the rapid developments in AI, including GenAI, to ensure that oversight structures keep up with or stay ahead of emerging risks to the financial system.
03| Herd concerns
The Bank of England is also looking at the risks posed by AI to UK financial stability.
In its latest Financial Policy Summary and Record, the Bank of England’s Financial Policy Committee considers the recent developments in AI technologies and their adoption by the financial system. The FPC emphasises the importance of ensuring that firms have appropriately addressed the risks associated with the AI models they employ.
The FPC suggests that wider adoption of AI could pose system-wide financial stability risks, for example due to herding or broader procyclical behaviours. It commits to keeping AI and machine learning on its agenda in 2024.
What next?
One way financial regulators are mitigating these risks is by taking powers to supervise the operational resilience of the technology companies that provide critical services to the financial system. The EU’s Digital Operational Resilience Act and the UK’s critical third party regime are notable examples. As more financial services firms come to rely on their AI models, the providers of those models are increasingly likely to attract scrutiny from financial regulators and central banks.