Measures are needed to protect users from illegal content (e.g. terrorist content and child sexual abuse material) and, in some circumstances, legal but harmful content (e.g. disinformation and cyberbullying) that they may encounter online. However, when regulating in this space, governments and regulators face difficult questions about the impact on freedom of speech and the risk of positioning social media companies as arbiters of fundamental rights. There are also practical difficulties: any measures must work effectively at scale.

The UK's draft Online Safety Bill is unlikely to take effect until 2022 at the earliest, but a regime is already in place that requires UK-based video sharing platforms (VSPs) to take appropriate measures to protect users from harmful content (under Part 4B of the Communications Act 2003).

The telecoms regulator Ofcom has recently published guidance intended to help VSP providers understand their regulatory obligations under the new regime. Although the guidance is non-binding, Ofcom states that in certain circumstances VSPs are unlikely to be able to protect users effectively (and therefore comply with the regime) without following the approach Ofcom has set out.

To comply with the new regime, VSPs are likely to need to put new systems and processes in place, and with the Online Safety Bill (and equivalent overseas regimes) coming down the track, they will need to keep one eye on this wider legislation when designing those processes.

Read more in our recent DigiLinks post.