
UK Government announces final proposals for regulating online harms

For several years, concern has grown about the harm caused by online content: from material promoting terrorism to disinformation, from online scams to cyber-bullying. Yet, although governments, society and tech companies have all recognised the need for regulation, there has been much debate about what a holistic regulatory regime should look like. Now, two sets of ground-breaking proposals are set to land on the same day.

Hours before the European Union was set to publish details of its Digital Services Act, the UK Government released its long-awaited response to the 2019 Online Harms consultation. In its proposals for a new regulatory regime, the Government set out how it plans to make the UK “the safest place in the world to go online”. We’ve briefly outlined the key aspects of the proposals below.

Who will have to comply with the regulatory regime?

Bar a few exemptions, the regime will apply to all companies that:

  • host user-generated content which can be accessed by users in the UK; and/or
  • facilitate public or private online interaction between users, one or more of whom is in the UK.

This means that the scope is broad: social media platforms, consumer cloud storage sites, video sharing platforms, online forums, instant messaging services, peer-to-peer services, online games which enable interaction between users, and online marketplaces will all need to comply with the regime.

Search engines will also be in scope, despite not directly hosting user-generated content or facilitating interaction between users.

What will they have to do to comply?

All in-scope companies will be under a duty of care to take action to prevent user-generated content or activity on their services from causing “significant physical or psychological harm to individuals”.

To satisfy the duty of care, companies will need to understand the risk of harm to individuals on their services and put in place appropriate systems and processes to reduce the risk of those harms occurring.

Although all in-scope companies will be subject to the duty of care, the specific expectations placed on a company depend on: (i) the type of harm; and (ii) the type of company in question.

All in-scope companies will be under a duty to protect:

  • all users against illegal content and activity; and
  • children from harmful content.

Only those considered to be “high-risk and high-reach” services will be under a duty to take steps in relation to content and activity that is legal but harmful to adults, including some types of misinformation and disinformation.

In addition to the duty of care, in-scope companies will be subject to a number of further duties, including providing mechanisms for users to report harmful content or activity.

As we discussed recently, reports suggested that the Government was considering introducing an additional duty into the regime: a duty of impartiality. This duty did not make it into the final proposals.

Companies have a duty to protect users against harm, but what does ‘harm’ mean?

The Government has defined harmful content as content that “gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”.

The legislation will not set out an exhaustive list of harms; instead, secondary legislation will specify a ‘limited number’ of priority categories of harms, including:

  • priority categories of criminal offences, such as child sexual exploitation and abuse, terrorism, hate crime, and sale of illegal drugs and weapons;
  • priority categories of harmful content and activity affecting children, such as pornography and violent content; and
  • priority categories of harmful content and activity that is legal when accessed by adults, but which may be harmful to them, such as abuse and content about eating disorders, self-harm and suicide. Some types of disinformation and misinformation are also likely to be included.

Some types of harms will be excluded on the basis that they are being effectively addressed under other regimes, including harms resulting from breaches of intellectual property rights, data protection legislation, and consumer protection law. Harms resulting from fraud and cyber security breaches or hacking will also be excluded.

What happens if a company does not comply?

The Government has confirmed that Ofcom, the existing communications regulator and regulator of the new video-sharing platform (VSP) regime, will be the regulator of the online harms regime.

If a company fails to comply with its regulatory obligations, including by breaching its duty of care, Ofcom will have the power to take enforcement action. If Ofcom finds a company in breach, it will be able to:

  • impose a financial penalty of up to £18m or 10% of global annual turnover, whichever is higher;
  • require third parties to withdraw access to key services, making it less commercially viable for the company to operate within the UK; and
  • require internet infrastructure service providers to take steps to block a company’s services from being accessible in the UK.
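To put the financial penalty in perspective: for a company with global annual turnover of £500m, 10% of turnover (£50m) exceeds the £18m floor, so the maximum fine would be £50m; for companies with turnover below £180m, the £18m figure is the higher of the two and therefore applies.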

There has been much debate about the merits of introducing senior management liability for a company’s breach of the duty of care. The Government continues to reserve its right to introduce criminal sanctions for senior managers, although the scope of any criminal liability is much narrower than previously indicated. If senior management liability is introduced, it will be limited to situations where a senior manager has failed to respond ‘fully, accurately and in a timely manner’ to information requests from Ofcom, rather than liability for a breach of the duty of care itself.

So what’s next?

As mentioned above, online platforms won’t only be unwrapping the UK’s package of proposals today but also the EU’s planned Digital Services Act, due to be released this afternoon. The Act looks set to tackle broadly the same problems as the UK’s online harms proposals, but using different legislative tools and concepts.

Today’s announcements go some way to giving platforms clarity on their future obligations, but the devil will be in the legislative detail. Over the coming weeks, platforms will want to identify the key areas of divergence between the EU and the UK’s regimes and consider how they will comply with both once the regimes come into force.

“The government’s response to online harms is a key part of our plans to usher in a new age of accountability for tech companies, which is commensurate with the role they play in our daily lives. Our ambition is to build public trust in the technologies that so many of us rely on. Ultimately, we must be able to look parents in the eye and assure them we are doing everything we can to protect their children from harm.” – UK Government announcement

