2023 has been the year that artificial intelligence finally hit the mainstream, and it shows no sign of slowing down. Last week, the UK government hosted the inaugural global AI Safety Summit at Bletchley Park, attracting many senior players from the tech industry and bringing diverse nations together in one forum. As the dust settles, we reflect on the summit's impact on the global stage.
Reflections on the summit
The delegates brought many different political and corporate agendas to the summit, and the UK opted to focus on the macro issue of safety, an issue around which all parties could coalesce and reach a degree of consensus.
Aside from getting the global players around the table, the summit has also delivered some concrete outcomes. These include the signing of the Bletchley Declaration by 28 countries, signalling a consensus on the need for sustained international cooperation on AI, and a commitment to host further summits in six months' time (South Korea) and twelve months' time (France).
The UK also announced the establishment of its AI Safety Institute to work closely with its intended US counterpart.
Delivering momentum and focus
Perhaps the greatest success of the event is the sense of momentum and focus it has brought to the question of how best to regulate AI.
In the same week, the UN announced the creation of a new AI advisory body, due to provide recommendations by the end of 2023, and the G7 published the Hiroshima Process Guiding Principles, providing a framework for the safe use of AI systems. Leaders from many of the attending countries have also agreed to collaborate on safety testing of frontier AI models against a range of critical national security, safety and societal risks.
The US response
The most striking example of this momentum was the US government's publication of an Executive Order on AI safety two days ahead of the UK summit. Billed as “the most significant actions ever taken by any government to advance the field of AI safety”, it is a major development whose impact our US team have been unpacking.
We have been mapping global regulatory developments in the AI space for several years and, to date, the US has been slower to regulate AI than China or the EU. Biden's Executive Order on AI now represents a significant step towards a framework for regulating AI safety in the US.
The Order builds on the work done by the National Institute of Standards and Technology (NIST) in providing an AI Risk Management Framework, and will be bolstered by the US's new AI Safety Institute, which will sit within NIST. Ambitious and comprehensive, the Order sets out eight guiding principles intended to serve as a roadmap for industry and regulators alike, and charges a swathe of US agencies with developing AI-related standards, tools and tests.
Looking ahead
AI is perhaps the most complicated area of digital regulation that the world has had to tackle to date. Given the level of international concern about the risks presented by widespread AI adoption, it is not surprising that there is such an array of international initiatives seeking to impose much-needed guardrails.
We expect the area to continue to develop apace, with geopolitical interests increasingly driving the agenda. Will the US's latest move inspire the EU to double down on its AI Act and get it over the line? Meanwhile, China has its own AI Act on the cards for 2024, having already passed AI-specific laws on deepfakes and generative AI this year.
For more from our US team on the Biden Executive Order, read our latest client alert, A North Star for AI? The White House’s Ambitious AI Executive Order, and listen to our podcast, Untangling the Spiderweb: Biden’s Executive Order on AI, National Security, and Digital Assets // Fintech.