
Mid-Year Update 2025: Round-up of developments in AI regulations in key regions

Given the fast-moving pace of all things AI-related, and ahead of the launch of our Tech Legal Outlook 2025 Mid-Year Update, we take stock of developments in AI regulation in four key regions.

United States 

There is consensus among U.S. lawmakers and industry on the need for federal legislation establishing a comprehensive framework for AI. However, diverging views on how far restrictions should go, and on which players they should apply to, have prevented Congress from moving forward. President Trump held true to his promise to rescind President Biden’s landmark AI Executive Order and replace it with his own Executive Order on AI, marking a clear deregulatory shift focused on encouraging economic competitiveness, national security and AI innovation.

In the absence of federal regulation, U.S. states have taken the lead in legislating on AI. The most significant measure is Colorado’s AI Act, set to come into force in February 2026, which focuses primarily on high-risk AI systems. In 2024 alone, there were almost 700 state-led legislative proposals on AI, including California’s AI safety bill, SB 1047, ultimately vetoed by California Governor Gavin Newsom after strong opposition from AI companies and politicians on both sides. State-led AI legislation has made it difficult for companies to operate at a national level, as crossing state borders often means facing additional or different compliance requirements.

However, state regulation could come to an abrupt stop if Congress approves Trump’s controversial One Big Beautiful Bill Act. This includes a proposal to introduce a 10-year moratorium on states enforcing most forms of state-level AI regulation. The outcome of the Bill is still uncertain given strong pushback from U.S. states, including Republican-led ones, but if enacted, the moratorium would override a wide range of existing state laws regulating deepfakes, algorithmic bias and election security.

Asia 

While the launch of China’s DeepSeek R1 model created ripples in the West, disrupting markets and knocking down the valuations of some of the major tech companies, in Asia it sparked significant attention for the AI sector.

In China, recent trade tensions with the U.S. have spurred a renewed drive towards self-sufficiency in the technology sector, particularly in the AI space. Recently, President Xi Jinping advocated for "safe, reliable, and controllable" AI and called for an acceleration in the development of AI regulations and laws.

China has already implemented an extensive AI legislative framework, including sector-specific AI rules for industries such as finance, healthcare and automotive, addressing growing risks and ethical concerns. In healthcare, AI-specific regulation remains sparse and low-level, whereas regulatory efforts in the automotive industry have concentrated on the application of AI technology in autonomous vehicles.

Various regulators have also provided guidance which impact financial services firms. China’s National Financial Regulatory Administration, for example, has released the Banking and Insurance Institutions Data Security Management Measures which require financial institutions to:

  • take effective measures to protect the legitimate rights and interests of individuals; and
  • ensure transparency, fairness and impartiality, such as by establishing an assessment mechanism for onboarding of algorithms and AI-related products,

when designing algorithms, carrying out automated decision-making activities, labelling data or training AI models.

The Interim Measures for the Management of Generative AI Services, first introduced in 2023, continue to shape AI governance, ensuring national security and public interest protections. From 1 September 2025, businesses will also be required to adhere to the new Labelling Rules, which impose specific labelling obligations on internet service providers and content distributors concerning AI-generated content. 

While China’s legislators had been expected to publish a comprehensive “AI Act” this year, which would adopt a risk-based approach to assessing AI-based technologies – similar to the EU AI Act – recent revisions to Beijing’s legislative agenda suggest a pivot to rules promoting the "healthy development of artificial intelligence" rather than more formalistic controls on its deployment.

Various other countries in the region have already implemented AI regulations. For example, South Korea has introduced Asia's first comprehensive AI regulation, the AI Basic Act, which is set to apply from January 2026, while Japan has passed a “fundamental law” establishing key guiding principles. Others, such as Hong Kong and Singapore, have instead been relying on sector-specific guidance, and it is still uncertain whether this guidance will coalesce into AI-specific legislation in the near future.

European Union

In the EU, businesses are preparing for the bloc’s flagship AI Act, which came into force in August 2024, with certain substantive provisions scheduled to become applicable over a three-year period. The AI Act establishes new obligations for both developers and deployers of AI systems, with significant requirements for "high-risk" AI models.

Companies used to the compliance challenge of the GDPR are aiming to get ahead of the regulatory curve. They are preparing their AI agreements with external AI service providers ahead of the end of the year and are already working on their template agreements. When it comes to AI, however, close collaboration between legal and tech teams is crucial to ensuring both understand how compliance will be achieved.

The European Commission has recently issued guidelines clarifying what falls within the scope of an “AI system” under the Act, following a 2024 consultation which sought feedback on the seven main elements of the definition and on examples of simple software systems that should be out of scope. It has also launched a consultation to better understand the practical impact of AI – and AI regulation – in financial services.

However, despite the Commission’s attempts to provide guidance on the various obligations companies must comply with, the AI Act has received mixed feedback. 

The growing tensions with the U.S. over tariffs, along with the potential risks that excessive regulatory burdens pose to the European tech sector, are increasing calls to postpone the implementation of the AI Act. While this might provide some breathing space for the EU’s tech sector, it is still too early to tell whether the Commission will listen to these calls and delay implementation. In the meantime, businesses must continue with their compliance timetables, ensuring they do not fall behind.

United Kingdom

The UK’s approach to AI regulation remains principles-based and sector-driven, whereas the EU AI Act is a risk-based, cross-sectoral law. UK sectoral regulators are still focused on assessing the impacts of AI in their sectors. This includes the Bank of England, which has considered the financial stability implications of AI, and the FCA, which has shared themes from its AI-related work and launched an AI testing service.

In March, the Artificial Intelligence (Regulation) Private Member’s Bill was re-introduced into the House of Lords after failing to become law under the previous government. If enacted, it would establish a new “AI Authority”, codify the UK’s five AI principles into binding duties, and require companies to appoint a dedicated “AI Officer” responsible for ensuring the safe and ethical use of AI. As a Private Member’s Bill, it will require government backing to progress and, for the time being, can be viewed as part of the broader policy discussions.

While no new AI-specific law has been proposed, the policy direction is clear: regulators and businesses must self-regulate by adhering to the agreed-upon principles and preparing for increased oversight in the future. CEOs and compliance teams should view the current framework as a transition: maintain a rigorous, documented approach to AI governance now, stay engaged with regulators, and ensure all AI-driven services meet existing UK standards.

Looking ahead

The pace of change in AI development and its accelerating adoption continue to challenge regulators, with new risks constantly emerging. As more AI-specific laws come online, we will continue to monitor and report on global developments and the evolving regulatory landscape.


Subscribe to our Tech Insights blog for insights, updates and news from our experts.