The EU has been the global leader in the regulation of AI, starting with its General Data Protection Regulation (GDPR). That regulation set a global gold-standard framework for the use of the personal data that is key to developing and running AI systems, and it already anticipated the automated decision-making those systems enable.
With its AI Act, the EU has now become the first jurisdiction to adopt a dedicated regulatory framework for AI, providing consistent rules across the bloc and supporting the development and adoption of AI throughout the EU economy.
As with the GDPR, we expect the influence of the EU’s AI Act to extend well beyond the European Union. In this post we explore its likely impact in the world's largest economies, as the US and China race for leadership in AI.
A framework for supporting innovation
The flagship EU AI Act has now been adopted, with implementation phased in over the next three years. Given its tiered, risk-based approach, the most burdensome obligations will apply only to the relatively narrow range of “high-risk” use cases, which include credit scoring and employee monitoring.
Principles v product safety
The GDPR is principles-based legislation, while the EU AI Act is closer to product safety regulation, blended with fundamental rights protections. The GDPR applies overarchingly to all processing of personal data, while the EU AI Act is more narrowly focused.
The AI Act focuses on products and applications (“use cases”) at the point they are placed on the market, and excludes pure research and development. This means that a company can freely develop AI systems, but must consider the requirements that will apply when it wants to place them on the market. If it wants to test a system in real-world conditions before then, it can benefit from the so-called regulatory sandbox, which can last up to 12 months and allows testing within certain boundaries.
Providing legal certainty in a push for innovation
The EU's institutions may give the impression of over-regulating, but their work goes beyond drafting regulations. It is important to understand that the aim of the EU institutions in regulating AI through the AI Act framework is to create an environment of trust, which is in turn designed to foster innovation.
The new regime should therefore not be viewed simply as a burden on companies, but as a source of greater legal certainty. The regulatory environment is also supplemented by a push for innovation and access to supercomputing, especially for start-ups and SMEs.
An exceptionally broad territorial scope
While in practice many AI systems will only be subject to light obligations, the territorial scope of this legislation is exceptionally broad, and could therefore capture many international organisations with no actual presence in the EU. It covers providers of AI systems that are put into service or placed on the market in the EU, as well as providers or deployers of AI systems where the output from the system is used in the EU.
EU v China
Asia AI rules in flux
In step with the global picture, AI rules in Asia are in flux: diverse local cultures and political landscapes have produced a patchwork of rules and standards across the region. While a light-touch, voluntary approach to AI regulation is dominant across Asia, we see markets like Japan moving towards mandatory rules. Mainland China is the outlier, with prescriptive rules that show hints of the EU AI Act’s influence.
China leading on regulation
Even before the EU’s AI Act grabbed the headlines late last year, China was among the first markets to establish an algorithm registry and to issue some of the world’s first mandatory rules on generative AI. Since August 2023, providers of public-facing generative AI services have been required to obtain a licence, under rules covering all facets of generative AI.
China has also introduced rules to regulate other AI-related services and products, such as algorithm recommendation and deep synthesis (or “deepfake”) services. The Chinese State Council is expected to submit a draft national AI law for review by the country’s legislature later this year.
Local companies are therefore already doing quite a lot, including security assessments and transparency work, and many Chinese companies are, to a certain extent, already ahead of the curve as they now look to comply with the EU AI Act.
Points of difference
The main principles of the EU and Chinese AI regulations are very similar: be transparent with customers, protect data, be accountable to stakeholders, provide instructions and guidance on the product, and so on. But China's rules so far target specific services, most notably generative AI (although Beijing has been mulling an overarching AI law), whereas the EU AI Act already applies to all AI.
The EU AI Act focuses on the rights of users, whereas the Chinese regulation is seen as more of a state- or government-led rulebook. The approval process for deploying large language models in China, for example, is clearly government-led, which is quite different from the self-assessment required by the EU.
In addition, Chinese regulations require companies and products to observe socialist values and to ensure that AI outputs are not perceived as harmful to political and social stability. For multinational corporations that have not grown up with these concepts, this can cause confusion among compliance officers.
Lobbying for harmony
We're now seeing some governments in the APAC region take large chunks from the EU's regulation of data and AI as they work on their own AI legislation. Businesses can certainly consider lobbying their local government stakeholders to achieve more harmony and consistency in cross-market rules.
EU v US
US gaining momentum
Although US businesses tend to appreciate the benefits of a light-touch, voluntary approach, there is an underlying move towards stricter regulatory oversight in the United States. Historically a follower in data privacy regulation, the US, spurred on by the Biden AI Executive Order and the AI Bill of Rights, is now aiming to lead global discussions on AI risk management and compliance.
Additionally, individual states and cities in the United States are incrementally rolling out new laws, such as AI hiring restrictions in New York City, Utah’s AI Policy Act and, most notably, comprehensive AI legislation in Colorado. With a classification system akin to that of the EU’s AI Act, Colorado’s AI Act seeks to protect against algorithmic discrimination and is expected to significantly affect businesses across sectors and industries.
As with data breach laws and, more recently, comprehensive state privacy laws, it seems possible that Colorado’s AI law will be followed by similar state laws in the coming years – unless and until comprehensive federal AI regulations are enacted.
Impact of EU AI Act in US
The effects of the EU AI Act will also be felt by US businesses. For example, a US bank might develop an AI-based credit scoring tool for the US market and then intend to roll it out in the EU, or use it in the US to evaluate the creditworthiness of natural persons in the EU.
Either scenario could bring the US bank within the scope of the EU AI Act, requiring a strong risk and quality management framework, the use of only high-quality data, thorough documentation of the system and its functioning, human oversight with intervention capabilities, and high standards of accuracy and robustness, as well as post-market monitoring, incident notification and registration obligations.
Compliance
For international organisations navigating a rapidly evolving legal and regulatory landscape, managing the compliance challenge is key to maintaining trust with stakeholders of all kinds, including regulators, customers and employees.
Help is now at hand with our updated AI Toolkit, a quick-reference guide for in-house counsel on all things AI. This Toolkit starts with a technical primer and provides an overview of key AI compliance, contracting and contentious topics across the EU/UK, Asia and the US. We hope this handbook helps you engage with this exciting new technology ethically, safely and lawfully.