3 minute read

Swipe 'right' on online content regulation

Over the past few years, the global tide has turned in favour of online content regulation, in part because of the growing negative social and political consequences of our increasingly digitised lives. Governments have been grappling with how to balance fundamental rights, freedom of information and the practical limitations of moderating online content, while laying out clear and defensible guidelines to 'govern' an ever-evolving digital landscape.

Final and near-final forms of legislation have emerged in the West, such as the Online Safety Bill (UK), the LCEN and Avia Act (France), NetzDG (Germany) and the Digital Services Act (EU), with more likely to follow.

Going forward, tech companies will face additional compliance responsibilities to monitor content and assess harm in the digital space, alongside increased scrutiny of organisations’ ESG credentials.

Asia in the race to regulate 

In Asia, we see regulators giving online harms the equivalent of social media 'likes', reprioritising their legislative agendas to target this emerging phenomenon.

Themes are similar across markets:

  • Targeting facilitators (e.g. online service providers) has become a common approach to allocating the burden of online regulation. Online service providers are more easily policed, and they have a strong incentive to action reforms that appease regulators so that the vast revenue streams generated by their services can continue.
  • Tightening the existing patchwork of legislation targeting perpetrators (e.g. criminal offences under communications acts, public order acts and the like), including introducing rules that are more fit for purpose in an online environment.
  • Devoting resources to developing and deploying technologies and systems that improve the media literacy of the public and target users more accurately.

One example is Australia's Online Safety Act 2021, which commenced in January 2022, with a particular focus on cyber-abuse. Singapore’s Ministry of Communications and Information (MCI) launched a public consultation in July this year to progress two proposed codes of practice: a code of practice for online safety and a content code for social media services, both of which will impose requirements on social media services to better protect the online safety of Singapore’s internet users. Also in July, India announced potential new rules on how digital and social media platforms should operate in the country, currently home to 1.4 billion people. In Indonesia, the world's biggest tech groups have proactively registered for a national licensing system in the past month, under which they might have to censor content and hand over users’ data to law enforcement agencies.

Three questions you should have

A great degree of uncertainty lies ahead as tech and other companies seek to reconcile the sprawling mix of online content legislation around the globe. Here are some of the tricky issues we expect to work through with our clients:

  1. Which rules apply to me? Some of these proposals or legislation directly target tech giants operating on a global scale and, as such, have extraterritorial application in broad terms. We have seen how the GDPR’s extraterritorial effect has increased the compliance burden on organisations across the globe, and we expect the reach of the various content rules to spill over to the operations of tech companies of varying sizes, particularly those scaling rapidly.
  2. How do these new rules sit alongside my existing obligations? There is a lack of clarity as to how these new duties should be read in conjunction with existing ones in the current legislative patchwork (e.g. criminal offences under communications acts, public order acts and the like). Whilst governments have been keen to stress that such protections will remain in place, there is no doubt that the new statutory "duty of care" proposed by the UK's Online Safety Bill, for example, will raise questions about how far it is to be read, and how the obligations of intermediaries and perpetrators stack up against one another.
  3. How will these new rules be interpreted? Across the board, we expect secondary legislation to be issued as governments provide further guidance on the specific types of content that will fall within the scope of their respective regimes. However, how different legislators define the various harms may be driven by disparate nationalistic biases – concepts of 'fake news', 'terrorism', 'national security' and 'political advertising' may be read and implemented differently across jurisdictions, meaning that multinationals need to build flexibility into their compliance programmes.

This is an area of law that we only see growing in complexity over the coming years. Our cross-practice Asia teams are here to advise tech and other companies navigating the increasing challenges from ESG-focussed and other stakeholders.

Nearly half of over 1,000 Singaporeans polled have personally experienced online harms

