The UK Government’s online harms regime looks set to be one of, if not the, most ambitious and wide-ranging regulatory regimes for combatting harmful online content anywhere in the world. Even on the scope of the initial white paper, the regime was at risk of being drawn too broadly. Yet, in recent months, the shopping list of harms in scope has threatened to expand further still. In the past couple of months alone, the FCA said that it wanted online scams added to the list, and one Member of Parliament asked for the online sale of faulty electrical goods to be covered. Yesterday, the FT reported that the Government was considering an expansion of scope of an altogether different magnitude: the addition of a “duty of impartiality” to prevent political bias.

The debate seems to have been imported from America, where concerns have been raised that the most prominent social media platforms have deprioritised or even “silenced” voices expressing conservative views. After Twitter flagged several of Donald Trump’s tweets as factually unsubstantiated in May 2020, the President issued an Executive Order proposing that online platforms should lose their protection from liability for the content they host if they are not politically neutral.

Concerns about political bias have not featured in any of the UK Government’s papers on the online harms regime to date. Likewise, in Ofcom’s research on online harms conducted earlier this year, political bias did not even feature in the long list of “online harms” they used to gauge public concern. Yet, apparently, it could make an eleventh-hour appearance in the final proposals.

But how would a “duty of impartiality” fit within the outline of the online harms regime that’s been shared to date? That depends on what the Government expects platforms to do to comply.

In its lightest incarnation, the duty may not add a great deal. In February’s update, the Government made it clear that, to comply with the statutory duty of care, online platforms would have to take different action depending on whether content was illegal (like content promoting terrorism) or “legal but harmful” (like self-harm imagery). For “legal but harmful” content, the proposal was that firms would need to decide what types of content were acceptable on their platform, set that out in clear and accessible terms and conditions and then enforce those terms effectively, consistently and transparently.

Most of the major social media platforms already have policies setting out their commitment to freedom of expression, so if this is the extent of the expectation, the “duty of impartiality” is unlikely to affect the global platforms much (though it may affect smaller platforms that only welcome certain political viewpoints).

What would be more challenging is if the duty of impartiality were meant to alter how each user experiences an online platform. Every person sees a unique version of Facebook, Twitter, Instagram or YouTube. The content promoted to them is not the result of a human editorial decision but a combination of the user’s own “curation” (e.g. choosing who to follow or what groups to join) and the output of algorithms that sift, select and promote content.

The Government would not be suggesting that online platforms should stop hosting “partial” content altogether, so what might it expect platforms to do to comply?

If the expectation is merely that platforms will not design their algorithms to intentionally favour certain political views, then that is unlikely to be controversial. Likewise, if any interventions by platforms – for instance, to label content that contains disinformation or is factually inaccurate – are impartial and objectively justified, then those, too, are unlikely to prove controversial.

The question is whether the duty would require more than this: for instance, adjusting the design of platforms to balance the content people see. Though there may be good arguments for puncturing the “filter bubbles” many users operate in, doing so in an automated way would be hugely challenging. Can algorithms really be expected to identify content that has a political slant (itself a very difficult task), understand its political leanings (avoiding, for instance, mistaking satire for genuine content), find countervailing political content and then ensure users end up with a duly impartial experience?

As is often the case with regulation, should the duty to be impartial become part of the final package, the devil will be in the detail. Which platforms will be expected to comply: just the mainstream social media platforms or all platforms in scope? Will it have any application to private communication channels? Will platforms be permitted to be explicitly biased towards one political viewpoint, provided that is clear in their terms and conditions? How should the duty affect the user experience? And what outcomes are platforms expected to achieve and how will success be measured? If the duty to be impartial does make it in, it could end up as the most ambitious element of an already-ambitious regime.