The rapid and prolific adoption of generative artificial intelligence (AI) tools such as ChatGPT into mainstream corporate use is the hottest topic in tech. Just as in the U.S. and Europe, the APAC region has seen a number of exciting AI initiatives, such as the launch of Tongyi Qianwen, Alibaba Group’s answer to ChatGPT, in April 2023. Meanwhile the AI regulatory landscape in the APAC region remains largely uncharted, with a number of jurisdictions on the cusp of regulatory transformation.
Different jurisdictions will adopt different approaches as the AI regulatory framework matures and develops, creating a regulatory patchwork for multinational organisations to navigate. We identify some common themes across the region.
A trend towards regulation
Given the potential for AI to generate harmful outcomes not just for individuals but for society at large, governments and regulators are responding with greater scrutiny as they consider whether existing regulatory frameworks are fit for purpose. While many APAC jurisdictions already have general, principles-based frameworks that cover AI, the surge in generative AI has spurred many to consider AI-specific regulation and/or guidance.
A prescriptive approach
Some APAC jurisdictions are following the EU and its proposed AI Act in leaning towards a prescriptive, risk-based, AI-specific approach:
- Australia: AI is currently regulated only through a voluntary scheme, the AI Ethics Framework, and various technology-neutral regimes such as consumer protection, online safety and privacy laws. However, the federal government has announced a review of governance mechanisms for the safe and responsible use of AI and is seeking public feedback on the governance and regulatory framework for AI. Many see this as signalling an intention to develop AI-specific legislation to address gaps in the existing laws.
- Mainland China: The Cyberspace Administration of China (CAC) has unveiled proposals for rules and restrictive measures on companies developing generative AI products like ChatGPT. These measures will cover AI algorithms as well as the ‘models and rules’ used to generate content, and are expected to be finalised before the end of 2023.
- Taiwan: A draft Basic Law for Developments of Artificial Intelligence has been proposed by a private research foundation. It sets out fundamental principles for AI development and for the Taiwanese government's promotion of AI. The government is expected to examine this draft and produce its own draft AI legislation for Taiwan in due course.
Guiding, not legislating
Meanwhile, some countries favour a lighter-touch, voluntary approach to AI-specific governance:
- Singapore: The general regulatory approach to date has focused on fostering AI innovation through the responsible use of AI in the finance sector. The government has issued AI-friendly best practice principles and follow-up guidance for financial institutions, with no penalty or liability for failure to adhere to these guidelines (although regulated firms remain subject to broader risk management requirements when deploying AI). Last year, it launched AI Verify, a testing framework and toolkit aimed at helping firms evaluate AI systems against international AI ethics principles. In its latest move, the government intends to issue a set of advisory guidelines on the use of personal data in AI systems, applying across all sectors, which are also expected to be voluntary.
- Hong Kong: Similarly, there are no statutory laws on the use of AI, although existing regulatory regimes may apply. As in Singapore, AI-specific guidance to date has focused on the finance sector. The Hong Kong Monetary Authority has issued guidance on financial consumer protection in the use of big data analytics and AI, together with high-level principles on AI. The Securities and Futures Commission has not issued AI-specific guidance; however, CEO Julia Leung recently set out the SFC's expectations for licensed firms using AI, including thorough testing and risk assessment of AI use, monitoring of the data used, and ensuring clients are treated fairly, with a reminder that where conduct breaches relate to AI, the SFC will hold the firm responsible rather than the AI it used. The Privacy Commissioner for Personal Data has also recently spoken on the risks of using generative AI, particularly where sensitive personal data is used to train AI systems, and has urged the adoption of more formal legislation.
A middle ground
Other jurisdictions are layering regulation on guidance:
- South Korea: A bill on the Promotion of AI Industry and Framework for Establishing Trustworthy AI has been passed, and could be enacted and take effect by the end of the year at the earliest. The bill provides a remarkably open pathway for the government to develop AI-specific policies on an ongoing basis, with a new AI development plan to be formulated every three years and an AI committee to consider AI-specific policies. It is also expected to introduce a statutory requirement for South Korean companies to establish ethical guidelines for AI. In the meantime, the Personal Information Protection Commission has established a research group to review current legislation governing the protection of biometric information.
- Japan: The approach is two-pronged: non-binding guidelines to support AI innovation, combined with mandatory, sector-specific restrictions on large platforms to safeguard the use of AI. A new AI strategy council has been convened to discuss ways to promote and regulate AI. Japan's Personal Information Protection Commission has also just issued a warning to OpenAI, developer of ChatGPT, in relation to its collection of personal data without user consent, noting that it would take further action if it identifies new concerns in the future.
Overarching AI frameworks impacting APAC
Meanwhile, at the international level, the Organisation for Economic Co-operation and Development (OECD) has announced plans to publish an update to its AI guidelines, with a framework to be formed as early as June this year.
It is anticipated that the updates will address issues arising from the emergence of generative AI, such as transparency and explainability, one of the OECD AI Principles that has often been cast aside in “black-box” AI systems. Forty-two countries, including the G20 nations, have committed to the OECD Principles. While the amendments are still at a conceptual stage, this update could foreseeably have a far-reaching impact.
The way forward
Businesses operating in the region should monitor the latest AI regulatory developments relevant to their use and potential deployment of AI technologies to ensure compliance with these evolving frameworks, including by adapting and updating internal processes, policies, and AI governance and compliance programmes.
We will continue to monitor developments and provide further updates, and can assist organisations managing the compliance challenge.