A parliamentary committee has called on the UK Financial Conduct Authority to give more clarity on how financial regulation applies to AI. The Treasury Committee’s report on AI in financial services says that the FCA, along with the Bank of England and HM Treasury, should do more to manage the risks presented by AI.
AI rules
The UK does not yet impose AI-specific regulation on financial services firms. According to the Treasury Committee, this wait-and-see approach exposes consumers and the financial system to potentially serious harm.
The FCA has argued that its existing rulebook is flexible enough to apply to AI. For example, the Consumer Duty requires firms to act to deliver good outcomes for retail customers. This could steer firms towards being transparent with customers about their use of AI or ensuring that any AI applications do not discriminate against vulnerable customers or those with protected characteristics.
Committee recommendations
The report, which the Treasury Committee published on 20 January 2026, recommends that:
The FCA give comprehensive and practical guidance on the application of consumer protection and individual accountability rules to firms’ use of AI by the end of 2026,
The FCA and Bank of England conduct AI-specific stress testing to help firms prepare for AI-driven market shocks, and
HM Treasury designate major AI and cloud service providers under the critical third parties regime before the end of the year.
Industry view
The industry has generally supported the UK’s sector-specific approach to AI regulation, which has allowed for rapid adoption of the technology.
There is also concern that any new standards would overlap with existing regulation. For example, the Consumer Duty and the Senior Managers and Certification Regime already comprise extensive rules and guidance. In practice, firms would have to treat any new AI-specific guidelines as binding rules, adding to regulatory complexity.
Whether or not more guidance is forthcoming, firms are best served by integrating AI compliance into their existing frameworks. This will help them be clear, both internally and externally, about how they manage AI-specific risks. As well as acknowledging those risks, they should continue to highlight the ways in which AI can improve customer outcomes, for example through measures to prevent fraud.

