Following a summer in which its fate appeared to hang in the balance, the UK’s long-awaited flagship online safety legislation is now back on the table. Last week, the UK Government revealed further changes to the proposed Online Safety Bill, which is expected to pass into law in the first half of 2023. We haven’t yet seen the revised Bill, but here’s what you need to know based on the Government’s 28 November press release.

Requirement to regulate “legal but harmful” content dropped in favour of a new “triple shield”

The most controversial (and, it’s fair to say, least well-understood) aspect of the Bill has always been the provisions concerning content that is not unlawful but is harmful to adults. 

Under the previous version of the Bill, the duties about legal but harmful content would only apply to the biggest platforms and, in essence, were transparency measures: platforms would have been required to specify in their terms of service how they would deal with certain types of content that is harmful to adults and then apply those terms consistently. Contrary to some commentary on the Bill, platforms would not have been required to block or remove this content. Indeed, they could have chosen to permit (and even recommend) this content if they so wished.

The amendments announced last week essentially remove the requirement on the largest platforms to deal with specific types of content that is harmful to adults in their terms of service. 

Instead, the position has become: (1) if your terms of service prohibit or restrict certain types of legal content, you must enforce those restrictions consistently; and (2) if your terms do not prohibit or restrict that type of legal content, you cannot remove it.

The Government has said that this will remove any motivation for platforms to take down legitimate posts to avoid sanctions. 

The "triple shield” explained

In place of the legal but harmful provisions, the Government press release introduces a new concept for the Bill: a “triple shield” of protection for users online.

In reality, this appears to be largely a rebranding of measures that were already in the Bill. The shield comprises three elements - the requirements for platforms to: (1) “remove illegal content” (we assume this still means having proportionate systems and processes to tackle illegal content); (2) take down material in breach of their own terms of service; and (3) provide adults with greater choice over the content they see and engage with.

1. Requirement to provide tools to tailor the online experience

As noted above, the third element of the “triple shield” requires the largest and most popular platforms to provide adults with tools to tailor their online experience:

  • Not to see certain types of content: The Bill will require large platforms to provide tools for users to control their exposure to certain types of content, including legal content relating to the glorification of suicide, self-harm or eating disorders, or certain content that is abusive or incites hatred. This could include enhanced content moderation, warning messages or sensitivity screens.

  • Block anonymous trolls: The Bill will also seek to give users the ability to block anonymous trolls by requiring platforms to provide tools that let users control whether they can be contacted by unverified users.

  • Reporting mechanisms: The Bill will also seek to require platforms to develop “better” reporting mechanisms for certain types of content (illegal content, content which is harmful to children or content in breach of terms of service). Further, the Bill seeks to ensure that user reports are processed and resolved more quickly. 

2. Further duties specific to children 

In this refresh, the Government also heavily emphasises its commitment to protecting children with the following measures:

  • Publish risk assessments: Previously, the Bill required platforms to carry out risk assessments regarding the dangers their services pose to children; the latest iteration now requires platforms to publish these risk assessments.

  • Specify age verification measures in terms of service: The Government has explained that platforms’ responsibilities to provide age-appropriate protections will be enhanced. Where platforms specify a minimum age for users, they must explain in their terms of service how they will enforce this.

3. Changes to the planned new criminal offences

Though the Bill is primarily focused on duties that apply to user-to-user services and search engines, it will also introduce new criminal offences for individuals. The Government’s announcement signals a change of approach on this front too.

  • New offence of “assisting or encouraging self-harm online”: The Bill will create a new offence of assisting or encouraging self-harm online. Following significant campaigning, the Government will now address this type of content by making it illegal, rather than expecting platforms to tackle it under the (now removed) legal but harmful requirements.

  • No more “harmful communications” offence: The revised Bill removes the “harmful communications” offence (a proposed offence relating to communications made online with the intention to cause at least “serious distress”). The Government cites a desire to ensure the Bill’s measures are proportionate and do not unduly criminalise content that some may find offensive.

Where to from here?

Exactly how the new Bill is formulated is yet to be seen. The Bill returns to the House of Commons today and could be enacted into law as early as Spring 2023.