In three recent announcements, the UK Government has given its first indication of how it will respond to the Joint Committee’s scrutiny of the draft Online Safety Bill (see our ‘At a glance’ summary for an overview of the original draft). These latest updates widen the scope of the illegal content subject to the Bill’s most stringent compliance measures, create three new criminal offences recommended by the Law Commission, and potentially add age verification requirements for sites that publish pornography.

For platforms: A long list of priority illegal content 

The current draft of the Bill lists terrorism and child sexual exploitation and abuse (CSEA) as ‘priority illegal content’, and gives the Secretary of State the power to add other offences to the list via secondary legislation. The Government, potentially in response to the Joint Committee’s recommendation, has published a list of 11 further categories of offences which will be deemed priority illegal content when the Bill is brought before Parliament. It says it has done this so that “Ofcom can take faster enforcement action against tech firms”, as secondary legislation will not be needed to add these offences to the category.

  • What do platforms have to do with this priority illegal content? It remains to be seen exactly how these categories of offences will be incorporated into the Bill, but under the current draft, priority illegal content is covered by clause 9(3). This means platforms must use proportionate systems and processes to minimise the presence of priority illegal content, the length of time it remains live on the platform, and its dissemination. The Government describes this as a proactive duty and suggests platforms can meet it through content moderation, banning illegal search terms and ‘spotting’ suspicious users. It is not clear whether the Government will update other sections of the Bill to cover this expanded list of priority illegal content (for example, as drafted, technology warning notices can only be issued in relation to terrorist and CSEA content), but this would seem to be the logical consequence.

  • What are these new categories of priority illegal content? These range from fraud, financial crime and money laundering to threats of violence and encouraging suicide. The criminal law defining some of these offences can be complex, and if platforms get this assessment wrong, they risk over-blocking lawful content and thereby infringing users’ freedom of expression.

It will also be interesting to see how Ofcom works with other regulators where these offences overlap with their regulatory remit (for example, the FCA on money laundering). As we flagged back in December, the FCA and the CMA both called for fraud to be designated as priority illegal content; now the regulators must work out how this will operate in practice. This is another opportunity for the Digital Regulation Cooperation Forum (DRCF) to facilitate close collaboration between regulators (read more on the DRCF).

For individuals: Three new criminal offences 

The UK Government has also published its interim response to the Law Commission’s report on Modernising Communications Offences (read more on the report). It confirms that the Online Safety Bill will introduce three new offences:

  1. A threatening communications offence - where communications convey a threat of serious harm (including serious financial harm)

  2. A harm-based communications offence - to capture communications sent to cause harm (including psychological harm, amounting to at least serious distress) without reasonable excuse

  3. A false communications offence - where the sender knowingly sends false information with the intention of causing non-trivial emotional, psychological or physical harm

This is a significant step – the Bill was previously focused solely on creating a regulatory regime for user-to-user platforms and search engines, but will now bolt on three new criminal offences for individuals too. 

The Government also says it is considering the Law Commission’s recommendations for specific offences of cyberflashing, encouraging self-harm and epilepsy trolling. Watch this space for the Government’s full response to see whether further offences will be introduced into this Bill or through other legislation.

Age verification: A long-awaited addition?

Finally, on Safer Internet Day (8 February 2022), the Government announced a new legal duty requiring all sites that publish pornography to put robust checks in place to ensure that users are at least 18 years old. It suggests that this could include age verification mechanisms, such as credit card checks or having third-party services verify age against government data (such as passport details). Measures like this have been trailed since the Digital Economy Act, but the Government is now set to expand the scope of the Online Safety Bill with a new standalone provision capturing commercial providers of pornography, not just providers that host user-generated content.

The precise scope of this duty remains to be seen (we will not have sight of the drafting until the revised Bill is published), but it is sure to reignite the debate on how these measures can be used effectively while complying with privacy rights and the UK’s data protection regime. With this latest announcement, Ofcom will need to add the fourth and final DRCF member, the ICO, to its list of collaborators.

The draft Online Safety Bill was already broad in scope and ambitious in its aims. These three announcements expand that scope further still: the Bill will now include provisions aimed not just at user-to-user platforms and search engines, but also criminal offences for individuals and duties for commercial pornography providers. With the Bill due to enter Parliament in the near future, it remains to be seen whether its scope will grow again.