ChatGPT has quickly become 2023’s buzzword. Excitement fills social media platforms and private sector boardrooms as everyone tries to find, or exploit, the value of the emerging generative artificial intelligence industry.
Many Chinese companies, such as Alibaba, JD.com, NetEase and ByteDance (parent company of TikTok), have been actively developing services to imitate human thinking. Baidu launched China’s first domestic generative AI chatbot, the ERNIE Bot (文心一言), in March, followed by Alibaba’s launch of Tongyi Qianwen (通义千问) just this week. The AI Lab of ByteDance is developing its own large language model, while ChatJD from JD.com is on its way. Sparked by OpenAI’s launch of ChatGPT, a new wave of generative artificial intelligence is coming to the Middle Kingdom like elsewhere – fast!
However, authorities around the world have started to apply tighter regulation to this technology to minimise potential (sometimes unknown) risks.
The Cyberspace Administration of China (CAC) this week joined the trend by releasing the Management Measures for Generative Artificial Intelligence Services (Exposure Draft). While only a few pages long, these Draft Measures juxtapose cultivation and regulation of generative AI within the framework set up by China’s national trifecta of data laws: the Cybersecurity Law (CSL), Data Security Law (DSL) and Personal Information Protection Law (PIPL).
Below we seek to identify some of the Draft Measures’ standout points:
- Governance principles – AI technologists may be wary of the subjectivity required to respect social morals, good customs and private interests as well as law, with parts of the Draft Measures tracking principles from the DSL. The Draft Measures also emphasise that content must avoid discrimination, respect IP and remain truthful. The principles espoused by China’s unique algorithm governance rules – avoiding discrimination in algorithm training and avoiding unfair competition – are echoed in the Draft Measures too, adding to the growing momentum in China’s commitments to ESG. Admirable, but arguably a combination of obligations that requires the infant technology to run before it can walk?
- Nationalist values – As well as requiring generated content to “reflect core socialist values”, the Draft Measures mirror the Provisions on the Ecological Governance of Network Information Content to prohibit content on “subversion of state power” or containing “terrorist or extremist propaganda”, “ethnic hatred” or “other content that may disrupt economic and social order”. China is not the only superpower where national security and political positions are currently intertwined with business. If enacted, the Draft Measures will exacerbate the challenge for multinationals seeking to navigate differing national perspectives.
- Gatekeeper liability – Generative AI enterprises must assume responsibility for content creators which use their products and services. The Draft Measures go further to oblige these enterprises to guide users on good content creation practices. This assumption of liability is similar to long-running discussions in the US and EU on social media platforms’ responsibilities for user content – a regulatory stick which, if waved too vigorously, may stifle technological development.
- Security assessment – Generative AI products and services must undergo a security assessment by the CAC, as is the case for online social platforms. Pending details on how the ambit of the assessment may vary for these AI tools, development plans will need to factor in the increased time and costs of this process.
- Transparency burden – The Draft Measures mandate that providers of generative AI which uses manual labelling in its development adopt clear labelling rules, train labelling personnel, and conduct sample verification. In addition, providers must disclose details on the source, size, type, quality, etc., of pre-training and optimisation training data, underlying algorithms and technical systems. Content must also be labelled in accordance with China’s deep fake rules to avoid confusion and misunderstanding by the public. Transparency will be welcomed by users, but developers will be concerned by the compliance burden and the potential risk that disclosure requirements pose to their trade secrets.
- Real-name system – Consistent with other rules in China’s internet sector, real name registration is a must for generative AI users. Real identities will be linked to mobile numbers to allow Internet service providers to verify identities, in the perennial challenge of balancing policing of the Internet with individuals’ privacy.
- Social responsibilities – Businesses must take appropriate measures to prevent users from overindulging in generated content. An anti-addiction regime hit the online game sector in 2021; however, it is not yet clear whether generative AI providers will similarly need to embed controls into their products to restrict use by vulnerable groups.
- User profiling & data sharing – User profiling and data sharing between the AI platform and its affiliated entities are banned. While positive from a privacy perspective, this may increase the development costs of generative AI in China, as well as reduce the opportunities for businesses to monetise datasets on user behaviour.
- Complaint channels – Providers of generative AI are required to establish a mechanism to handle user complaints. Similar mechanisms exist under the PIPL and PRC Civil Code in respect of personal data handling. While more sophisticated platforms will have hotlines and customer service policies in place, start-ups may struggle to comply.
- Continuity requirements – The Draft Measures require “continuous” services to be maintained. This requirement is similar to those traditionally applicable to the telecom industry, where services must always be “usable”. One interpretation is that the CAC wants to treat generative AI services as public infrastructure that must be available to ensure China’s digital ascension. Good from a user perspective, but tough for businesses that may be stop-start in the early days of development.
- Model rectification – Where generated content is found to violate the Draft Measures (if enacted), providers must fix the problem within three months. As a positive for the AI industry, this grace period signals willingness on the part of the CAC to allow businesses to continue development, provided that filtering safeguards and other remediation actions are taken in parallel.
- Sanctions – Where a violation of the Draft Measures (if enacted) also contravenes the CSL, DSL, PIPL or certain other administrative or criminal laws, it will be punished under those laws. Any violation without a clearly prescribed penalty will attract liability at the discretion of the cyberspace and other authorities, although fines must fall within a range of RMB 10,000 to 100,000 (approx. US$1,450 to US$14,500).
In 2017, China announced its ambitious plan to become a global leader in the field of AI by 2030. The explosive growth of ChatGPT and its peers in the past few months has thrown down the gauntlet to China’s tech champions. However, the Draft Measures demonstrate that shaping the Chinese AI industry will be a joint endeavour of the private sector and the guiding hand of the State. Will these aspirations be achieved? Ask your favourite AI bot!
Please reach out to us if you have any questions. Stay tuned for further updates as we see them.