
APAC Privacy Regulators Approve APPA Anonymisation Guide: Key Takeaways and How It Differs from the European Approach

On 31 July 2025, the Hong Kong Privacy Commissioner (“PCPD”) together with seven privacy authorities from Australia (Victoria), Canada (Federal and British Columbia), Japan, Korea, New Zealand and Singapore (together, the “Regulators”), approved the new “Guide to Getting Started with Anonymisation” (the “Guide”) published by the Asia Pacific Privacy Authorities (the “APPA”). The growing demand for data-driven innovation and the rapid rise of artificial intelligence have prompted organisations across Asia-Pacific to revisit their privacy practices, particularly concerning anonymisation. Here, we distil the Guide, frame it in the context of Hong Kong’s and Singapore’s regulatory landscape and compare it to the European approach.

1. Understanding Anonymisation: Key Concepts and Risk-Based Approaches

The Guide states that anonymisation is the technical process of transforming personal data so that individuals can no longer be identified, either alone or in combination with other data. Crucially, anonymisation is not a static end-state but an ongoing process involving risk assessment, the application of technical and organisational safeguards, and continuous governance.

The Guide underscores the importance of best practices and standards and references the ISO/IEC 27559 framework. This standard frames anonymisation as a risk-based process rather than simply a matter of removing obvious identifiers. It requires a contextual assessment, evaluating who has access to data and for what purpose, alongside technical measures and robust management of re-identification risks.

2. A principles-based, five-step approach

The Guide recommends a five-step anonymisation process:

  1. Step 1: Know your data: Identify direct identifiers (e.g., names, ID numbers) and indirect identifiers (e.g., birth date, postcode), along with any ‘target attributes’ that provide utility but may carry sensitivity (e.g., health status, purchase history).
  2. Step 2: Remove direct identifiers: Strip out direct identifiers and if necessary, pseudonymise records using robust, non-reversible techniques.
  3. Step 3: Apply anonymisation techniques: Modify indirect identifiers using appropriate technical controls such as generalisation, suppression, masking, data swapping, or adding statistical ‘noise’. 
  4. Step 4: Assess re-identification risk: Evaluate how likely it is that individuals could be re-identified, using techniques such as ‘k-anonymity’. ‘k-anonymity’ measures the re-identification risk of a dataset: the k-value is the size of the smallest group of records in the dataset that share identical values across the indirect identifiers. A higher k-value means a lower risk of re-identification. For example, the case study in the Guide describes a gym grouping records to ensure a minimum group size (k-value), reducing the risk that any individual could be identified. The Guide notes that the Singapore Personal Data Protection Commission, for instance, recommends a minimum k-anonymity value of five (with other relevant safeguards in place) for sharing data with external parties. The Guide also highlights the importance of using the ‘motivated intruder’ test (originating from the UK Information Commissioner’s Office) to assess the risk of re-identification.
  5. Step 5: Manage re-identification risks: Use technical, contractual, and governance safeguards to manage re-identification risk. Examples of such measures include strong access controls, contractual obligations imposed on counterparties, and deletion of data when no longer needed.
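To make Step 4 concrete, the k-value of a dataset can be computed by grouping records on their indirect identifiers and taking the smallest group size. The sketch below is illustrative only and assumes a simple dictionary-based dataset with generalised fields (the field names and values are hypothetical, loosely modelled on the Guide's gym case study):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return k: the size of the smallest group of records sharing
    identical values for every quasi-identifier (indirect identifier)."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Direct identifiers already removed; ages generalised to bands and
# postcodes truncated to an area code (Step 3 techniques).
records = [
    {"age_band": "30-39", "postcode_area": "HK-C", "visits": 12},
    {"age_band": "30-39", "postcode_area": "HK-C", "visits": 4},
    {"age_band": "40-49", "postcode_area": "HK-E", "visits": 9},
    {"age_band": "40-49", "postcode_area": "HK-E", "visits": 7},
]

print(k_anonymity(records, ["age_band", "postcode_area"]))  # prints 2
```

Here every combination of age band and postcode area is shared by at least two records, so k = 2; under the PDPC's suggested minimum of five, further generalisation or suppression would be needed before external sharing.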

3. What is the difference between anonymisation and pseudonymisation?

A frequent source of confusion is the distinction between anonymisation and pseudonymisation. The Guide does not address this distinction directly. However, the Guide notes that first-generation privacy laws made a simple distinction between personal data and anonymised data, whereas more modern privacy laws add a category of “pseudonymised” or “de-identified” data. Neither Hong Kong’s Personal Data (Privacy) Ordinance nor Singapore’s Personal Data Protection Act contains the concept of “pseudonymised” data, but the differences are still worth outlining.

  • Anonymisation is the process of altering personal data so that no individual can be identified, either directly or indirectly. In other words, if the risk of re-identification is sufficiently low, personal data that is genuinely anonymised falls outside privacy laws.
  • Pseudonymisation, in contrast, replaces identifying information (such as names or ID numbers) with fictitious values which do not allow the individual to be identified (pseudonyms). The additional information needed to link the pseudonyms back to the identifying information is then kept separately. Pseudonymised data is still regarded as personal data, because of the ongoing risk of re-identification. For example, if a hospital with a medical research dataset replaces the names of individuals with random codes, but keeps a separate “key” file which allows the hospital to link the code to the underlying individual, this is classified as pseudonymised data.
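The hospital example above can be sketched in code. This is a minimal, hypothetical illustration (the function and field names are our own, not from the Guide): direct identifiers are replaced with random codes, while the mapping back to the original names, the “key” file, is kept as a separate structure that would in practice be stored under stricter access controls:

```python
import secrets

def pseudonymise(records, id_field):
    """Replace the direct identifier in each record with a random
    pseudonym; return the pseudonymised records and a separate
    'key file' mapping pseudonyms back to the original identifiers."""
    key_file = {}
    pseudonymised = []
    for r in records:
        pseudonym = secrets.token_hex(8)   # random, non-guessable code
        key_file[pseudonym] = r[id_field]  # held separately, access-controlled
        out = dict(r)
        out[id_field] = pseudonym
        pseudonymised.append(out)
    return pseudonymised, key_file

patients = [{"name": "Chan Tai Man", "diagnosis": "asthma"}]
data, key = pseudonymise(patients, "name")
# `data` alone contains no names, but because the hospital retains
# `key`, the dataset can be re-linked and so remains personal data.
```

The design point is the separation: re-identification risk turns on whether the holder of the pseudonymised dataset can also reach the key file, which is precisely the fault line between the APPA and EDPB approaches discussed below.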

The Singapore Personal Data Protection Commission (the “PDPC”) has issued guidelines setting out when data can be considered anonymised – one of the criteria is the removal of direct identifiers. In the guidelines, the PDPC stated that the use of pseudonyms to replace direct identifiers would be considered as ‘removal’ of a direct identifier. However, in order for such data to be anonymised, the re-identification risk has to be managed through removal of indirect identifiers, additional safeguards on identity mapping tables, and periodic reviews.

4. A more business-friendly approach compared to the EDPB when assessing the risks of re-identification

It is worth noting that the Guide takes a more business-friendly approach than the European Data Protection Board (the “EDPB”) when assessing the risks of re-identification, based on the EDPB’s approach set out in its draft Guidelines 01/2025 on Pseudonymisation.

This is because the Guide references the UK’s “motivated intruder” test to assess the risks of re-identification. This is a subjective test which assesses the risk from the point of view of a “motivated intruder”. For example, if data controller A discloses pseudonymised data to data controller B, data controller B may be able to treat the dataset as anonymised under the subjective “motivated intruder” test, provided B has no realistic means of re-identifying individuals.

By contrast, the EDPB’s draft Guidelines 01/2025 on Pseudonymisation take the view that pseudonymised data is still personal data from the point of view of data controller B, as long as the pseudonymised data and the additional information could be combined – regardless of the fact that it is data controller A who holds the additional information.[1] This is an objective test rather than a subjective one, and it is more onerous on data controllers who want to argue that data has been anonymised and therefore falls outside the privacy regime.

Although the EDPB guidelines are not final, the difference in these two approaches highlights that the Regulators are taking a more pragmatic approach than the EDPB on the assessment of the risks of re-identification. 

5. Final thoughts

The Regulators’ endorsement of the Guide marks a step towards regional convergence on practical and flexible data governance. For organisations operating or partnering in the jurisdictions mentioned above, the message is clear: true anonymisation must rest on careful, risk-based evaluation, ongoing re-identification testing, and a blend of technical, contractual and governance safeguards. The Guide’s focus on real-world workflows and risk management, rather than rigid zero-risk standards, makes it notably more business-friendly than the draft European guidance.

As AI-driven analytics and data sharing expand, staying aligned with evolving standards and regional best practices will be essential. While Hong Kong and Singapore do not separately recognise pseudonymised data as a legal category, understanding the distinction and choosing the right protective techniques remains crucial for legal compliance and commercial trust.

[1] See paragraph 22 of the EDPB’s draft Guidelines 01/2025 on Pseudonymisation.
