In 2026, organisations will navigate an increasingly complex and volatile threat landscape. Geopolitical instability, expanding digital ecosystems, and widespread adoption of AI have intensified the risks associated with data and cybersecurity.
Generative and agentic AI represent a new threat paradigm, with state-sponsored attacks increasing in speed, scale and sophistication. These threats target critical infrastructure and multinational corporations, with automated hacking tools significantly amplifying malicious activity.
This convergence demands a fundamental shift in risk management, and data, cyber, and AI risks must be managed holistically. Organisations integrating these disciplines into unified governance frameworks will be better positioned to build operational resilience, meet evolving global regulatory obligations, and preserve stakeholder trust.
The interconnected nature of data, cyber and AI legal risks
Large multinational corporations face a complex web of legal risks across three interconnected domains: data protection, cybersecurity, and AI. While each domain presents distinct regulatory and operational challenges, they cannot be addressed in isolation. A data or cyber breach can compromise AI models, while poorly governed AI amplifies cyber threats through vectors such as indirect prompt injection attacks and the exposure of sensitive data. Understanding how these risks interact is essential to building effective governance frameworks.
Privacy and data protection risks are exacerbated by a fragmented privacy landscape. For example, the General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and China's Personal Information Protection Law (PIPL) impose varying – and sometimes inconsistent or conflicting – obligations on data handling, cross-border transfers, and individual data rights. The increasing use of generative AI systems, which often rely on vast datasets of personal information, raises acute concerns around data protection, algorithmic bias, and transparency. Left unmanaged, these risks can lead to discrimination claims, lawsuits, reputational damage, and regulatory enforcement.
Cybersecurity threats and new cybersecurity laws compound these risks. As AI expands the attack surface and introduces new vulnerabilities, multinationals must contend with state-sponsored attacks, ransomware, and AI-powered attacks. At the same time, they must comply with a growing and ever-evolving body of cyber laws worldwide. These include the recently updated Singapore Cybersecurity Act 2018 (with some amendments still pending) and the proposed Singapore Digital Infrastructure Act (expected to be tabled in 2026), Australia's new Cyber Security Act 2024, and the EU's NIS2 Directive and Digital Operational Resilience Act (DORA).
AI-specific legal liability can arise from different causes of action and legal doctrines. The main ones include: allegations that AI-generated outputs are erroneous or discriminate against individuals; regulatory non-compliance with AI-specific laws such as the EU AI Act; contractual disputes with third-party AI vendors over responsibility and liability allocation; and intellectual property disputes over the lawfulness of using another company's data to train an AI model.
Diverging regulatory approaches
Governments worldwide are introducing legal and regulatory frameworks that underscore the need for integrated compliance strategies due to the interconnected nature of data, cyber and AI legal risks. The EU is leading the pack, but the regulatory approaches diverge significantly across jurisdictions and regions.
The EU's integrated regulatory architecture
The EU has already developed a comprehensive, layered approach. The GDPR, AI Act, Cyber Resilience Act, NIS2, and DORA create overlapping obligations requiring coordinated compliance:
- The EU GDPR requires organisations to process personal data lawfully and transparently, implement security measures, respect data subject rights, maintain processing records, conduct impact assessments for high-risk processing, appoint data protection officers where necessary, and report breaches within 72 hours.
- The AI Act establishes a risk-based framework, prohibiting certain AI practices and imposing stringent requirements on high-risk AI systems. These include conformity assessments, risk management systems, data governance, transparency, human oversight, and cybersecurity measures.
- The Cyber Resilience Act introduces mandatory cybersecurity requirements for manufacturers, importers and distributors of products with digital elements, requiring them to ensure cybersecurity throughout the product lifecycle through secure design and development, vulnerability management, security updates, incident reporting, conformity assessments, and CE marking before placing products on the EU market.
- NIS2 expands cybersecurity obligations across essential and important entities in 18 sectors, requiring comprehensive risk management, incident reporting within 24 hours, supply chain security measures, and business continuity planning. Critically, NIS2 imposes personal liability on management bodies for cybersecurity failures.
- DORA enforces digital operational resilience requirements for financial entities and their third-party providers, including ICT risk management frameworks, incident reporting, resilience testing, and third-party oversight.
Together, these frameworks demand integrated compliance strategies. Legal teams must align privacy and data-sharing controls (GDPR), AI governance (AI Act), cybersecurity measures (NIS2), and operational resilience (DORA) across the organisation.
The UK's innovation-led approach
In contrast to the EU, the UK currently has no AI-specific legislation, and the UK government has consistently maintained that it will take a pro-innovation, business-friendly approach to regulating AI. The government released its first AI White Paper in March 2023, focusing on principles such as safety and security, transparency and explainability, and accountability and governance. It is now reported that a promised bill focused on ‘frontier AI’ will not be proposed until the second half of 2026 at the earliest, as the government prioritises alignment with U.S. policy and broader tech legislation.
In the meantime, privacy laws are being updated incrementally, with the Data (Use and Access) Act 2025 (DUA Act) implementing modest reforms to the UK's data protection laws. As for the UK's cybersecurity laws, a spate of notable cyberattacks on UK companies in 2025 has increased pressure on the government to introduce and pass the Cyber Security and Resilience Bill as soon as possible in 2026.
The U.S.'s fragmented landscape
The United States pursues a fragmented approach, with state-level laws creating an array of differing privacy, cybersecurity, and data breach reporting obligations. Federal cybersecurity requirements, meanwhile, are being strengthened, particularly for critical infrastructure sectors. The SEC's cybersecurity disclosure rules, for example, require public companies to disclose material cybersecurity incidents within four business days and to report annually on their cybersecurity risk management strategies. AI regulation remains nascent and fragmented, with an emerging patchwork of state and local laws that vary widely in scope and enforcement, creating compliance challenges for multinationals.
Asia-Pacific divergence
In a similar vein to the U.S., regulatory models across Asia-Pacific vary significantly. China has a comprehensive framework of laws, including the PIPL, the Cybersecurity Law, and AI-specific regulations. Singapore has its own equivalents (the Personal Data Protection Act 2012 and the Cybersecurity Act 2018, with the Digital Infrastructure Act expected to be tabled in 2026), albeit with a more business-friendly approach to AI under its Model AI Governance Framework. Hong Kong's Protection of Critical Infrastructures (Computer Systems) Ordinance, meanwhile, comes into effect on 1 January 2026.
The case for holistic governance
Siloed corporate governance models no longer reflect the operational reality of modern digital systems. Consider a multinational financial institution deploying AI for credit decisioning in the EU. This single use case triggers obligations under GDPR (automated decision-making and transparency), the EU AI Act (high-risk system classification requiring conformity assessment and human oversight), DORA (ICT risk management and third-party oversight), and NIS2 (incident reporting). Each regulatory framework has different supervisory authorities, documentation requirements, and enforcement mechanisms (although, in recognition that simplification is needed, there are now proposals under the Digital Omnibus package to move to a “report once, share many” approach).
Traditional siloed approaches – where privacy teams handle data privacy laws, IT security manages cyber risks, and legal teams review AI contracts – create dangerous gaps. AI models trained on compromised or biased data can produce discriminatory outputs, undermine decision-making and increase regulatory exposure. A data or cyber breach can compromise an organisation's AI models, while poorly governed AI amplifies data and cyber threats, through the likes of indirect prompt injection attacks, if security considerations are not embedded from the design stage.
Leading organisations are therefore adopting cross-disciplinary governance frameworks that integrate legal, cybersecurity, audit, risk management and privacy/data protection expertise. This enables proactive compliance, best-practice AI deployment, coordinated incident response, and resilience against regulatory scrutiny.
Looking ahead
The convergence of data, cyber, and AI governance will become a defining feature of threat and risk management. Organisations that embrace integrated governance will be better positioned to navigate regulatory complexity, mitigate emerging threats, and unlock the full potential of AI-driven digital transformation.
Legal teams have a critical role to play by advising on integrated governance frameworks, supporting cross-border compliance, and embedding legal expertise into technology development. This is not merely about managing risk – it is about enabling responsible innovation that creates competitive advantage whilst maintaining stakeholder trust and regulatory compliance in an increasingly complex global landscape.
