On 17 March 2022, the UK Government published its highly anticipated re-draft of the Online Safety Bill. Standing at 225 pages (with over 100 pages of explanatory notes), it is safe to say that this is an incredibly complex piece of legislation which poses a huge compliance challenge for online platforms that fall within its scope.
A landmark piece of legislation
Since the original draft was published on 13 May 2021 (see our overview: ‘At a glance’ summary), debate has raged about the scope and reach of this landmark piece of legislation. A Joint Parliamentary Committee provided recommendations on how the Bill could be amended, building on contributions from a wide range of industry bodies, individuals and regulators (see our post: The UK's Online Safety Bill: What do other regulators want?).
Though the Government has acted on many of these recommendations – as well as introducing a handful of other recently-announced expansions in the reach of the Bill (see our post: 'The UK’s Online Safety Bill: An ever-expanding scope?') – other changes are brand new.
Five key areas of change
We will be publishing an updated and more in-depth analysis of the Online Safety Bill in due course. For now, the following are the key changes you need to know about:
1. Changes to senior manager criminal liability
- The time frame in which executives of services can become criminally liable has been radically reduced. Under the original draft, the power to bring criminal sanctions against executives was deferred until two years after Royal Assent. In the revised draft, they could now face prosecution or jail time within just two months of the Bill becoming law.
- New criminal offences have been included for executives who fail to cooperate with Ofcom, including: (i) the suppression, destruction or alteration of information, (ii) the failure to comply with an audit or interview notice and (iii) the falsifying of information as part of an audit response or interview. Importantly though, these remain 'information offences' – they do not impose liability on executives for their platform's failure to comply with its duties of care (see our opinion piece on this topic: Senior managers can’t be the fall guys in the fight against online harm - Financial News).
2. New duties of care
- The Bill proposes a new duty which requires services to take action to minimise the likelihood of fraudulent adverts being published on their services, something the FCA has pushed hard for (see our post: The UK's Online Safety Bill: What do other regulators want?). This takes the scope of the duties beyond their original application solely to user-to-user or search engine services and into the realm of paid-for content.
- Platforms which publish or host pornographic content (i.e. non-user generated content) will also need to ensure that children do not “normally encounter” such content on their service – with the Bill noting that this could, for example, be achieved through age verification. This means all providers of online pornography will be within the scope of the Bill.
- Duties concerning anonymous abuse have been introduced. In a move to “empower” users with greater control online, ‘Category 1’ services must offer users a way to verify their identities and control who they can interact with – this includes providing the option not to interact with unverified users. For some platforms, this could radically alter the way they design and deliver their services and fundamentally change the experience of users who 'opt out' of seeing content from unverified users.
3. Changes to the classifications of types of harm
- Parliament will now define what is meant by ‘legal but harmful’ content. Instead of allowing services to decide what it means, the categories of ‘legal but harmful’ content will be set out in secondary legislation. However, services will still be required to judge for themselves whether or not content accessed by children is ‘legal but harmful’ (in addition to the designated content set out in secondary legislation).
- Additional priority offences have been included on the face of the Bill. Eleven further categories of offences – in addition to terrorism, child sexual abuse and exploitation content – have been named as ‘priority illegal content’ in the Bill. These include assisting suicide, people smuggling, online drug and weapons dealing, fraud, and inciting or controlling prostitution for gain.
4. New communication offences for individuals
- The original draft of the Bill purely focused on obligations for online platforms. Now, the Government also proposes to use the Bill to introduce three new offences for individuals. These implement the recommendations of the Law Commission to introduce new offences covering harmful, false and threatening communications. These offences capture: (i) communications sent to cause harm without reasonable excuse, (ii) communications that include information that the defendant knows is false and which is intended to cause non-trivial psychological or physical harm without reasonable excuse and (iii) communications sent or posted which convey a threat of serious harm (see our post on the Law Commission’s recommendations: Good things come in threes? The latest online harms updates in the UK).
- A ‘cyber flashing’ offence has also been introduced to complement the harms-based offence, which was another recommendation from the Law Commission.
5. Greater power given to Ofcom
- Ofcom has gained greater powers through new provisions under which it can recommend the technology and tools for content moderation, user profiling and behaviour identification that providers should adopt, where “proportionate”, to comply with their duties under the Bill.