This month, we bring you the first issue of The Five Things to Know, a horizon-scanning piece summarizing the quarter and previewing what you need to know in the coming months.
In this issue, we see regulators worldwide grappling with the deployment of AI: South Korea, Vietnam, Taiwan, and Singapore are taking steps to balance innovation and investment with regulation. We also note developments in online safety laws for minors and the concern over energy use in data centres, and finally ask: which Asian country now has the strictest privacy regime?
New AI laws in Korea, Vietnam and Taiwan in the first quarter of 2026
What the headlines mean
Asia is slowly moving from voluntary AI guidance to AI-specific statutes. Korea and Vietnam are mandating obligations and risk-level designations similar to those in the EU AI Act (albeit generally less onerous), whilst Taiwan is now setting out its governmental plan for how it intends to regulate AI in the future.
Note that the majority of Asia (including Singapore and Hong Kong SAR) still relies on voluntary guidelines rather than AI-specific laws. This is evidence that governments and legislators are still trying to carefully balance innovation and investment against regulation, with most jurisdictions favouring the former; but see the update immediately below regarding the new Singapore IMDA Agentic AI Framework.
Further details of the developments in each jurisdiction are set out below:
Korea - AI Basic Act (in force from 22 January 2026)
High-impact AI is defined as AI that may significantly impact human life, physical safety, or fundamental rights, with a focus on 11 key sectors: energy supply, drinking water, healthcare, medical devices, nuclear power safety, transport, biometric analysis, criminal investigations, employment, loans, and student evaluation.
Transparency / labelling requirements: providers of products/services using generative AI and/or high-impact AI must inform users that AI is being used, and label AI-generated outputs where they could be confused for non-AI content.
Risk assessments and other obligations: High-impact AI providers must establish comprehensive risk management frameworks, identify risks throughout the AI lifecycle, and maintain documents regarding safety and reliability of the AI system.
Human oversight: Operators of high-impact AI must set up systems for human supervision and intervention to monitor high-impact systems.
Penalties: Violations can lead to fines of up to KRW 30 million (approximately USD 20,000).
Vietnam – Law on Artificial Intelligence (in force from 1 March 2026, with a one-year grace period)
Risk classification and management: The AI Law adopts a risk-based approach to classifying AI systems, categorising them into three levels of risk (i.e., high, medium and low) based on the level of impact on human rights, safety and security.
High-risk AI systems face a lifecycle-wide compliance framework: Providers of high-risk AI systems must complete a mandatory conformity assessment before deployment, notify the Ministry of Science and Technology before deployment, and comply with continuous obligations covering transparency, risk management, data governance, human oversight requirements, and incident reporting/escalation.
Medium-risk systems are governed primarily through transparency and accountability obligations rather than pre-deployment conformity assessment. Providers and deployers must comply with transparency obligations and must be able to explain the system's purpose, inputs, and risk controls to regulators upon request.
Taiwan – AI Basic Act (in force from 14 January 2026)
Seven fundamental principles guiding AI governance: The Act articulates seven core principles that will underpin Taiwan's AI ecosystem, including sustainable development and well-being, transparency and explainability, and accountability.
The Act primarily sets out obligations on the government regarding future AI regulation: e.g. mandating the Ministry of Digital Affairs (MODA) to develop an internationally aligned risk classification framework.
Limited operational obligations on the private sector so far: The only mandatory private-sector obligation is that when an AI product or service is deemed "high-risk" by the government, the provider must display appropriate warnings to inform users of potential risks. More obligations are expected as and when MODA develops its risk classification framework.
Singapore launches new Model AI Governance Framework for Agentic AI
What the headlines mean
Regulators worldwide are beginning to grapple with a fundamental shift in AI governance: the challenge of overseeing agentic AI systems that autonomously plan, execute multi-step tasks, and interact with tools and external systems with limited human intervention.
While existing regimes and standards (e.g. the EU AI Act requirements on transparency/human oversight and baseline governance frameworks like NIST AI RMF and ISO/IEC 42001) will increasingly be applied to agentic deployments, Singapore’s IMDA has moved earlier than most with a dedicated, enterprise-focused framework specifically for agentic AI.
This positions Singapore as an early reference point for agentic AI governance expectations in the APAC region and beyond.
Further details of the new Model AI Governance Framework for Agentic AI
The IMDA launched the Model AI Governance Framework for Agentic AI on 22 January 2026. The Singapore Government/IMDA positions this as the world’s first ever framework for enterprises on how to deploy agentic AI responsibly.
The framework sets out key considerations across four dimensions, including:
- Make humans meaningfully accountable:
- Allocate responsibility clearly across internal stakeholders and external parties (e.g., model provider / agent vendor / internal teams).
- Build meaningful human oversight into workflows: define “significant checkpoints” where human approval is required (especially for high-stakes / irreversible actions) and audit oversight effectiveness over time.
- Implement technical controls and processes:
- Conduct pre-deployment baseline testing and post-deployment testing, and implement continuous monitoring/logging of agent actions, with automated alerting of anomalous behaviour (and corresponding failsafes) to contain unexpected or unauthorised agent actions.
- Enable end-user responsibility:
- Provide transparency to users on when/how agents are used and what they can do, e.g. the agent’s capabilities, access boundaries, and who to contact in case of issues or errors.
Japan's energy efficiency law expands to data centres
What the headlines mean
Japan’s expansion of its energy efficiency law to data centres is part of a broader global shift toward treating data centres as vital, energy-intensive infrastructure with far-reaching ESG implications. Regulators and governments are increasingly hard-wiring energy efficiency and sustainability metrics directly into both the allocation of data centre capacity and the laws that govern their operation. This shift underscores a collective commitment to ensuring that the rapid growth of digital infrastructure aligns with broader environmental and sustainability objectives.
Singapore is a good illustration: under the Data Centre - Call for Application issued on 1 December 2025, successful applicants must meet “best-in-class” sustainability requirements, including a facility power usage effectiveness (PUE) of 1.25, Green Mark for Data Centres 2024 Platinum certification, and at least 50% of power coming from green sources. Furthermore, the upcoming Singapore Digital Infrastructure Act is also expected to mandate that all new and existing data centres meet a specified PUE threshold.
The EU is also expected to present a legislative package in the second quarter of 2026 which will include minimum energy performance standards for data centres starting from 2030 for new facilities, as well as energy rating schemes for data centres covering energy efficiency, water efficiency, renewable energy use, waste heat reuse and flexibility.
Further details of Japan's energy efficiency law
Japan’s energy efficiency legislation (the Act on the Rationalization of Energy Use) is being expanded to include minimum performance standards and information reporting for data centres, with implementation expected in April 2026.
In summary, the proposals include:
- Minimum efficiency standards for new data centres: setting an energy efficiency standard of PUE 1.4 by 2030, and PUE 1.3 for all new data centres built from 2029 onwards, with the authorities able to require an energy rationalisation plan if the standard is not met.
- Enhanced reporting and disclosure requirements: requiring large data centre operators to submit additional medium-to-long term plans and periodic performance reports, with elements of public disclosure and potential “naming” of non-disclosing operators.
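For readers unfamiliar with the metric, PUE (power usage effectiveness) is the standard industry ratio of a facility's total energy consumption to the energy consumed by its IT equipment alone, so a PUE of 1.0 would mean zero overhead from cooling, lighting and power distribution. A minimal illustrative sketch of how the thresholds above would apply (the function names and figures are ours, not from the legislation):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh


def meets_standard(pue_value: float, threshold: float) -> bool:
    """Lower PUE is better: a facility complies when at or below the threshold."""
    return pue_value <= threshold


# Hypothetical facility drawing 13,000 MWh in total against 10,000 MWh of IT load:
example_pue = pue(13_000, 10_000)  # 1.3
print(meets_standard(example_pue, 1.4))   # within Japan's proposed 1.4 standard
print(meets_standard(example_pue, 1.25))  # outside Singapore's 1.25 requirement
```

The same ratio underlies the Singapore and EU measures discussed above; only the threshold and enforcement mechanism differ by jurisdiction.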
UAE enacts landmark Child Digital Safety Law
What the headlines mean
The UAE’s Child Digital Safety Law is part of the global move away from the traditional “notice-and-takedown” model toward a “systems-and-processes” model to protect children in digital environments, with a particular emphasis on age assurance, privacy-by-default settings and stronger governance for child users.
This UAE Law takes inspiration from the UK Online Safety Act (where child protection duties include requirements around “highly effective” age assurance for certain content) and the EU Digital Services Act (which includes restrictions on targeted advertising to minors).
In Singapore, the Online Safety (Relief and Accountability) Act illustrates a complementary direction: moving beyond a takedown system into a regulator-enabled victim redress model via an Online Safety Commission expected to be operational later in 2026, with powers to issue rapid directions to platforms and other actors.
Therefore, across jurisdictions we are seeing both (i) systemic safety duties/processes (UK/EU/UAE) and (ii) fast relief mechanisms for victims (Singapore).
Further details of UAE's Child Digital Safety Law:
The UAE has enacted Federal Decree-Law No. (26) of 2025 Regarding Child Digital Safety, establishing a comprehensive framework to protect children online. The law entered into force on 1 January 2026 and includes a one-year grace period before enforcement starts.
Key points:
- Who is in scope?: digital platforms and ISPs operating in the UAE or directed to UAE users. This will include social media services, gaming companies, search engines, streaming services, and e-commerce platforms.
- Core obligations include requirements around:
- Age verification: age verification mechanisms which should be proportionate to the platform’s risk classification and the potential effect of the platform’s content on children.
- Content restrictions: This includes:
- having privacy-by-default settings for children’s accounts;
- age restriction tools with an appropriate verification process to bypass such restrictions;
- blocking/filtering mechanisms to restrict excessive interaction and participation by children on the platform; and
- robust parental control tools such as allowing parents to set clear usage-time limits and mandatory breaks on usage.
- Privacy obligations:
- Platforms are prohibited from processing personal data of children under 13 unless strict conditions are met, including obtaining verified parental consent, providing an easy method for consent to be withdrawn, and offering clear privacy notices.
- Platforms are also restricted from using children’s data for targeted advertising and commercial profiling.
- Enforcement mechanics: administrative penalties will be set out in implementing regulations, with powers expected to include measures such as fines/suspension/blocking-style tools.
Korea: Privacy reforms introduce even higher fines and further governance requirements
What the headlines mean
The combination of (i) higher statutory fining ceilings, (ii) increased governance requirements, and (iii) continuing public enforcement against prominent companies underscores Korea’s position as having the strictest privacy regime in the APAC region.
Further details of Korea's privacy reforms:
In February 2026, Korea passed amendments that enable administrative fines of up to 10% of total revenue in certain serious or repeated breach scenarios, an increase from the existing framework, which tops out at 3% in many cases. The amendments are expected to take effect in August 2026.
Key points:
- 10% of total revenue fine ceiling for repeated or large-scale violations (affecting over 10 million people), or for breaches resulting from failure to comply with regulatory orders. However, these fines may be reduced if the organisation can demonstrate substantial investment in data protection measures, as evidenced by a large budget dedicated to data protection, personnel, facilities and equipment.
- Governance changes:
- The CEO will be explicitly designated as the individual ultimately accountable for personal data protection.
- The Chief Privacy Officer (“CPO”) will be tasked with ensuring adequate resources for data protection and must provide regular updates to both the CEO and the board.
- For organisations above a specified threshold, board approval will be necessary for appointing or removing the CPO, and the privacy regulator (PIPC) must be notified of these changes.
- Recent enforcement:
- On 12 February 2026, the PIPC fined the Korean entities of Louis Vuitton, Christian Dior and Tiffany a combined KRW 36 billion (approximately USD 25 million) for customer data breaches.
- The breaches were linked to SaaS / cloud-based customer-management access, with attackers obtaining access via malware/credential compromise and voice-phishing of staff.
