
Hong Kong Privacy Commissioner’s new Model AI Framework – What does it mean for businesses adopting AI solutions in Hong Kong?

A new Model AI Framework in Hong Kong SAR addresses data privacy and governance compliance issues arising from the procurement and implementation of AI solutions from third-party vendors, as well as the use and handling of personal data when customising or operating AI systems. It is intended to help organisations deploy AI responsibly, ethically and in line with the requirements of Hong Kong's data protection law, the Personal Data (Privacy) Ordinance (PDPO). 

This follows a regulatory trend in other jurisdictions towards giving more practical guidance and recommendations on how organisations should use AI solutions, and on how existing data privacy frameworks apply to this newly adopted technology. 

Guidance when developing and procuring AI

The Office of the Privacy Commissioner for Personal Data, Hong Kong (PCPD) recently published the "Artificial Intelligence: Model Personal Data Protection Framework" (the Model AI Framework), a comprehensive set of best practice guidance and recommendations for organisations procuring and using AI solutions (including generative AI and predictive AI) to assist them in complying with the data privacy requirements of the PDPO.

This Model AI Framework builds on a similar framework outlined in the PCPD’s 2021 “Guidance on the Ethical Development and Use of Artificial Intelligence” (2021 AI Guidance), which targets organisations developing in-house AI models. 

As with the 2021 AI Guidance, the Model AI Framework steers organisations towards putting in place an AI governance framework which adheres to the three data stewardship values (being respectful, being beneficial and being fair) and seven ethical principles (accountability, human oversight, transparency and interpretability, data privacy, fairness, beneficial AI, and reliability, robustness and security) advocated in the 2021 AI Guidance. 

Organisations which both develop AI models in-house and procure AI models from third-party vendors will need to take note of the requirements in both the 2021 AI Guidance and the Model AI Framework. 

Key takeaways: measures to adopt

The Model AI Framework recommends that organisations adopt measures in the four domains below to build a comprehensive AI governance framework which identifies and mitigates risks in the process of procuring, implementing and using AI solutions. These are:

01 AI Strategy and Governance

  • Putting in place an AI strategy which sets out principles for the ethical procurement, implementation and use of AI, and establishing internal policies, procedures and infrastructure for organisations to move towards this goal. The AI strategy should be regularly communicated to stakeholders and be reviewed and adjusted based on stakeholder feedback. 
     
  • Considering governance issues in the procurement of AI solutions, including whether potential AI suppliers adhere to international technical and governance standards, any data processor agreements to be signed, and the policy on handling output generated by the AI system (e.g. labelling or watermarking AI-generated content and filtering out AI-generated content that may pose ethical concerns). 
     
  • Setting up an AI governance structure with sufficient resources, expertise and authority to steer the implementation of the AI strategy and the procurement, implementation and use of AI systems, and providing training to relevant personnel to ensure that they have necessary awareness and knowledge when using AI systems. 

02 Risk Assessment and Human Oversight

  • Establishing a risk management system to identify, analyse, evaluate and mitigate risks throughout the life cycle of AI system deployment, including adopting appropriate levels of human oversight (e.g. human-out-of-the-loop, human-in-command, human-in-the-loop) based on the risks assessed. Risk assessments should be conducted by a cross-functional team during the procurement process or when an AI system is significantly updated. 
     
  • Assessing data privacy risks to individuals when conducting risk assessments, including assessing how personal data is collected, used, retained and secured across the whole data life cycle as it is used in AI systems. Risk assessments should also take into account broader ethical considerations, including the potential impact (whether benefits or harms) of the AI solutions on individuals, the organisation and wider society. 

03 Customisation of AI Models and Implementation and Management of AI Systems

  • Practising strong data governance and complying with the PDPO in preparing datasets for customising and using AI solutions, taking into account the legality and quality of training data used. 
     
  • Conducting rigorous testing and validation of AI models before deployment to ensure that the models are reliable, robust and fair and work as intended. 
     
  • When implementing AI models with open-source elements, following industry best practices in maintaining code and managing risks. Implementing security measures to ensure that AI solutions are secure and robust, such as implementing internal staff guidelines on acceptable inputs to be fed into AI systems and establishing mechanisms to enable the traceability and auditability of AI systems’ outputs. 
     
  • Monitoring and evaluating (or requiring AI suppliers to evaluate) AI systems continuously in light of the shifting risk landscape. Considering establishing an AI incident response plan which provides for processes such as monitoring, reporting, containing, investigating and recovering from AI incidents. 

04 Stakeholder Communication and Engagement

  • Maintaining regular and effective communication with all stakeholders, particularly internal staff, AI vendors, individual customers, and regulators, to enhance transparency and build trust regarding AI usage.
     
  • Handling data access and correction requests, and providing channels for individuals to give feedback, seek explanations and/or request human intervention. 

Model AI Framework - A new chapter for Hong Kong?

For organisations with operations in or targeting Hong Kong SAR which are building an AI governance framework, or reviewing and updating their existing compliance frameworks to incorporate AI issues, the Model AI Framework provides comprehensive guidance and recommendations reflecting the HK Privacy Commissioner's expectations for ethical AI deployment, and a useful reference for organisations looking to refresh their AI compliance efforts. 

Certain components of the Model AI Framework go beyond the existing data privacy legal requirements under the PDPO (e.g. the AI incident response plan). It therefore remains to be seen how, and to what extent, the HK Privacy Commissioner will monitor or be able to enforce compliance with the Model AI Framework, and whether an organisation's non-compliance with the guidelines and recommendations when using personal data in its AI solutions will give rise to a presumption against that organisation in any data privacy investigation or compliance check initiated by the HK Privacy Commissioner. 

However, given the HK Privacy Commissioner's announcement that it will step up compliance checks on AI adoption (after completing a round of compliance checks on 28 organisations in February 2024), the release of the Model AI Framework signals rising expectations from HK data privacy regulators that organisations establish AI governance frameworks and guardrails to ensure their implementation of AI, and use of personal data with AI, is conducted responsibly and ethically. 

The global context

The release of this Model AI Framework comes at a time when regulators in jurisdictions such as Japan, Korea, Australia and Thailand are moving towards incremental AI-specific legislation, and follows the Singapore Infocomm Media Development Authority's introduction of the Model AI Governance Framework for Generative AI in early June. 

It remains to be seen whether this signals a potential move towards incremental AI-specific legislation in future for Hong Kong SAR, a trend increasingly seen in other APAC jurisdictions.

As regulators (not only data protection regulators but others) ramp up efforts to regulate or provide further guidance on adopting and using AI, organisations with operations in multiple jurisdictions are advised to monitor these regulatory updates closely. They should put in place a centralised yet flexible AI governance framework which keeps track of these local and regional requirements, and update their local regulatory requirements matrix accordingly as part of the regulatory mapping process in implementing their AI governance and compliance framework. 

Tags

ai, data and cyber