At a time when all eyes have been on the US, a few recent developments indicate how the UK government and regulators aim to balance AI innovation with AI safety. As well as announcing a new AI Assurance Platform, the government has suggested we can expect AI legislation next year.
DSIT’s AI Assurance Platform
As AI adoption accelerates, there is a growing market for technologies designed to mitigate the multifaceted risks associated with AI and GenAI. The UK government anticipates that demand for AI assurance tools and services will grow exponentially, surpassing £6.53 billion by 2035. These “tools for trustworthy AI” aim to help monitor and control AI systems in operation.
To ensure these opportunities materialise and that all British businesses have access to the critical resources needed to address AI risks, the government has announced a one-stop shop for information on the actions that businesses can take to identify and mitigate those risks.
The AI Assurance Platform aims to support UK businesses in the development and use of trustworthy AI products and services. The platform will enable businesses to, for example:
- carry out impact assessments and evaluations
- review data used in AI systems to check for bias
- ensure trust in AI as it is used in day-to-day operations
Businesses, especially SMEs, will be supported in using a self-assessment tool to implement responsible AI practices and make better decisions when developing and using AI systems.
The new AI Assurance Platform will bring together in one place existing guidance and tools from the Department for Science, Innovation & Technology (DSIT), plus newly developed resources such as the AI Management Essentials (AIME) tool, which aims to support private sector organisations in developing ethical, robust, and responsible AI.
To ensure the new AIME tool is fit for purpose, DSIT has launched a consultation which will run until 29 January 2025 and will help further refine the tool to meet the needs of organisations across different sectors.
Commitment to regulate frontier AI in this parliamentary session
AI was among the priorities of the Labour manifesto, which pledged to introduce binding regulation on the handful of companies developing the “most powerful AI models”. This was confirmed in the King’s Speech, but AI was not among the list of bills announced at the time.
In a recent interview at the FT’s Future of AI Summit, Peter Kyle, the Secretary of State for Science, Innovation and Technology, confirmed the government’s intention to make Britain’s voluntary agreement on AI testing legally binding and that new legislation for “frontier AI” will be tabled next year.
Financial services regulators moving ahead with AI initiatives
When it comes to AI adoption, financial services is one of the more advanced sectors. This means that AI safety is high on the agenda for the sector’s regulators but so is demonstrating the benefits that AI adoption can bring.
For example, the Bank of England has recently announced the establishment of an Artificial Intelligence Consortium. The platform will enable stakeholders to discuss AI capabilities, development, deployment, and use in UK financial services, thereby informing the Bank’s approach in this area.
The Consortium aims to:
- identify how AI is or could be used in UK financial services
- discuss the benefits, risks and challenges arising from the use of AI
- inform the Bank of England’s (Bank’s) ongoing approach to addressing any identified risks and challenges, and to promoting the safe adoption of AI
The Financial Conduct Authority’s recently opened AI Lab seeks to achieve a similar goal. The Lab aims to help firms address challenges in developing and implementing AI solutions.
The FCA’s AI Lab comprises four elements:
- The AI Input Zone is an online feedback platform. Similar to the European Commission’s recent consultation on AI in financial services, the FCA wants to hear from the industry about the potential AI risks and opportunities within financial services.
- AI Spotlight will showcase examples of how financial services firms are experimenting with AI. Accepted projects will take part in a Showcase Day on 28 January 2025.
- An AI Sprint will bring together industry, academia, regulators, technologists and consumer representatives to explore how the FCA enables the safe adoption of AI in financial services. The sprint will take place on 29-30 January 2025.
- A Supercharged Sandbox will leverage the FCA’s existing Digital Sandbox infrastructure to support experimentation with, for example, enriched datasets and increased AI testing capabilities.
Looking ahead
Since Brexit, the UK has taken a contrasting path to the EU when it comes to regulating AI. The previous government set out a less prescriptive, more agile approach to regulating this rapidly advancing technology. For now, the new government appears to be treading a similar path. Even with some AI regulation on the horizon, the latest developments suggest that the government and regulators want to encourage the safe and responsible take-up of AI.