
The UK’s Online Safety Bill: 5 things that may have flown under the radar

It is now over eight weeks since the UK Government released the revised draft of the Online Safety Bill, the landmark legislation that aims to make the UK the safest place in the world to go online when it comes into force (likely in 2023). See our summary of the top 5 things you need to know about the Bill – and how it compares to the previous iteration.

To date, much of the media coverage and public debate has focused on the scope of the duties of care and, in particular, the obligations relating to legal but harmful (or "lawful but awful") content – a topic we explored in our comparative analysis report last year. In this Tech Insight, however, we dive a little deeper into the 225-page draft Bill and flag five things that may have flown under the radar…

1. User-to-user platforms' terms of service are about to get (even) longer

One striking feature of the Bill is just how much user-to-user services will be required to explain in their terms of service (ToS). ToS are essentially a contract between the platform and its users and, therefore, in the simplest sense, are meant to set out the promises that each party makes to the other. Yet many of the items that platforms will be required to include in their ToS don’t fit neatly into this definition at all.

ToS will now need to explain how users will be protected from illegal content and how platforms will use "proactive technology"* to achieve this, including what kind of technology it is, how it will be used and how it works.

Depending on the nature of the service, the ToS may also be required to specify much more, including the platform’s policies and processes concerning the ability of children to access certain types of content, freedom of expression, journalistic content etc. (all while ensuring that these provisions are clear, accessible and consistently applied…).

In short, explanations of the outcomes of risk assessments are not exactly contractual obligations, and yet they will have to find their unnatural home in the platforms’ ToS if the Bill becomes law in its current form.

2. …and platforms will have to tell users about their ability to claim for a breach

Another item that platforms will be required to include in their ToS is a provision informing users of "their right to bring a claim for breach of contract if content they generate, upload or share is taken down, or access to it is restricted, in breach of the terms of service". This appears to be the Government's alternative to the Parliamentary Joint Committee's call for the Bill to include a private right of action that users could enforce for breaches of the duties of care (see paragraph 460 of the Joint Committee's report).

At first glance, the provision in the draft Bill makes no difference to users’ rights: parties have always had the right to pursue action for breach of contract, whether or not there is a provision that tells them as much. Yet by forcing platforms to signpost this so clearly – and perhaps creating the expectation that a user will receive some kind of remedy for a breach – this could lead to more litigation in the future.

In a later Tech Insight, we will explore the possible practical implications of this, and the challenges of claiming for a breach of contract due to content being taken down or demoted.

3. The threshold for challenging Ofcom's decisions will be higher than that of similar regulators

In the draft Bill, Ofcom is granted swingeing powers to take a wide range of enforcement action. Its regulatory toolkit will include the ability to fine companies up to 10% of their global turnover and to require platforms to take particular steps or use particular technologies.

Though a platform can appeal Ofcom's decisions to the Upper Tribunal, the Tribunal must apply judicial review principles to decide the appeal. We will explore this topic in a future post with Jonathan Jones KCB QC but, put simply, it means that platforms will have to meet a very high bar to have Ofcom’s decisions overturned. In essence, they would need to show that Ofcom acted outside its powers, acted in a procedurally unfair way, acted so unreasonably that no reasonable public authority could have acted in that way, or departed from the legitimate expectations it had created through its words or conduct.

This is out of step with comparable regulators in the UK. Where a party is sanctioned by the Financial Conduct Authority, the Prudential Regulation Authority, the Information Commissioner's Office or the Competition and Markets Authority (for decisions made under the Competition Act 1998), it has the right to a full re-hearing of its case on appeal: the court hears all the evidence afresh and makes its own determination of what, if any, regulatory sanction is justified.

Though this may sound like a relatively niche legal point, it has major implications. It means that, in an area which is both novel and in which entities face difficult decisions balancing freedom of speech, data privacy and online safety, Ofcom will be granted a margin of discretion greater than many of its regulatory equivalents.

For example, Ofcom could choose to impose the maximum financial penalty on a platform – which could amount to several billion pounds – and, even if the court felt that the penalty was harsh, it could not overturn it unless Ofcom had acted in a procedurally unsound way or outside the bounds of what a reasonable authority would do. More fundamentally, the prospect that decisions can be tested and overturned on appeal is an important check and balance on a public authority's exercise of its powers; setting the bar too high makes that check far weaker.


4. The Bill does not account for societal harms

Much of the discussion around the Bill has focused on whether the framework it imposes can keep pace as technology and the harms evolve. Whether the Bill can live up to the challenge of the metaverse is a topic we'll explore in a future Tech Insight. But one area the Bill already (consciously) omits is the risk of societal harms.

As the Joint Committee neatly put it in its report, the harms resulting from online activity are not limited to individuals. Harmful effects may be felt by groups (sometimes referred to as "collective harms") or by wider society ("societal harms"). The Joint Committee gives the example of persistent racism or misogyny online, where the cumulative effect is to make groups of people feel less safe or less able to express themselves (a collective harm).

Another example came from BT Group, which told the Joint Committee about the impact on its staff and contractors of the 5G conspiracy theories that led to arson attacks on 5G infrastructure. Despite this real-world harm likely being linked to online content, a post claiming a link between 5G and COVID-19 may not meet the definition of an individual harm.

The Government chose to omit societal harms from the Bill due to concerns about the workability of any provisions and their vulnerability to legal challenge. Yet, Europe's equivalent of the Bill – the Digital Services Act – is set to include provisions that expressly refer to systemic risks to both individuals and societies. This divergence between the two laws is just one theme that we will be exploring in a future post comparing the two regimes, once the final text of the DSA is available.

5. No statutory explanation of how regulatory overlaps will be resolved

Finally, a theme we've written about extensively before is the competing regulatory objectives that tech companies must try to reconcile (read more: Clash of the regulators: is a coherent approach to tech regulation just a pipe dream?). For example, introducing a feature that may help mitigate online safety risks could be intrusive from a privacy perspective: how do you balance the two regulatory objectives?

The Joint Committee called on the Government to set out in the Bill a framework for how the various regulators would work together, including when and how they would share powers, take joint action and conduct joint investigations. This is not in the current Bill: instead, it appears that the Digital Regulation Cooperation Forum (DRCF) will continue on its non-statutory footing (read more: A step towards the pipe dream: UK regulators promise closer cooperation on tech).

Encouragingly, the DRCF has recognised the need to provide clarity on how the various regulatory regimes intersect: one of its three overarching goals is to promote coherence between them. To this end, its plan of work for 2022-23 includes prioritising:

  • synchronising privacy and online safety efforts when it comes to the protection of children (a topic we’ll return to in a future Tech Insight);
  • mapping the interactions between the various regulatory regimes;
  • publishing a joint statement on how the regulators plan to work together to address the interaction between the online safety and privacy regimes;
  • developing a clear articulation of the relationship between competition and online safety policy; and
  • building on the engagement between Ofcom and the FCA on online fraud and scams (read more: You have been warned: the FCA’s messages to banks and social media about online scams).

Though this development should be welcomed by industry, it is still a less coherent solution than the statutory underpinning suggested by the Joint Committee. Given the sheer breadth of technology services affected by the various regulatory regimes, the DRCF will have its work cut out to provide guidance that can be applied with sufficient precision by the multitude of different types of services that will need to implement the Online Safety Bill while, at the same time, complying with other laws and regulations.


*Including technology that uses algorithms, keywords, image matching etc. for content moderation, user profiling or behaviour identification.


