EU and US lawmakers are working together to create a voluntary code of conduct for artificial intelligence (AI). The aim of this code is to set forth standards for the use of AI technology, bridging the gap until formal laws like the EU AI Act are passed.

While the code will be voluntary and not legally binding, large AI companies are expected to adhere to it, as calls to regulate AI grow louder.

EU-US cooperation

In 2021, the EU and US established the EU-US Trade and Technology Council (TTC) to build trust and foster cooperation in tech governance and trade. The TTC serves as a forum for driving digital transformation and collaborating on new technologies.

AI technology, particularly generative AI such as ChatGPT, is developing rapidly. However, like any technology, AI can pose risks that demand a regulatory response.

Lawmakers across the globe are currently working on new rules to regulate AI and tackle these risks. As discussed in our previous posts, the EU is coming closer to passing rules that take a risk-based approach to AI regulation. Nevertheless, it will take time for such regulations to be adopted and come into effect. Hence the need for quick interim measures.

In this context, the TTC provides a platform for stakeholders, including companies and the wider private sector, to engage with lawmakers on ways to mitigate the risks associated with AI while fostering innovation.

AI Code of Conduct

During a recent TTC meeting, representatives from the European Commission and US Government met with industry stakeholders and civil society to discuss generative AI.

The European Commission’s Executive Vice President, Margrethe Vestager, emphasised the urgent need for a code of conduct to address the rapid developments in generative AI. The EU’s objective is to set forth safety provisions and standards that businesses can sign up to on a voluntary basis.

The US government has also expressed willingness to contribute to the drafting of the code, though it has not specified its preferred standards.

The idea is to invite governments in other regions to participate, including G7 countries, as well as other nations like Indonesia and India.

Stakeholders’ perspective

AI industry and civil society stakeholders were also heard during the TTC discussion.

One topic discussed was related to the challenges involved in ensuring the safety of AI systems before they are launched. Another point raised highlighted the potential benefits of AI in the healthcare sector, such as cancer detection or treatment.

The concept of external audits, particularly algorithmic audits, was proposed as a potential regulatory tool to understand, measure, and mitigate harms caused by AI systems.

AI Act

The EU is currently in the process of adopting the world's first comprehensive legal framework to regulate AI. The proposed AI Act imposes obligations on providers and users based on the level of risk posed by the AI system.

The proposed AI Act also highlights the importance of codes of conduct. For instance, providers of non-high-risk AI systems will be encouraged to develop codes of conduct to voluntarily comply with the requirements that are mandatory for high-risk AI systems.

Next steps

The collaborative effort between the EU and US to draft a voluntary AI code of conduct demonstrates their commitment to establishing a framework for responsible AI practices. The code is expected to be in place before the end of the year.

The drafting of the AI Code of Conduct will continue alongside the legislative process of the EU's AI Act.