The way social media platforms tackle disinformation has been under intense scrutiny during the COVID-19 pandemic. Last week, the European Commission released its assessment of how the self-regulatory system to tackle disinformation, a Code of Practice agreed to by Facebook, Google, Twitter, Mozilla, Microsoft, TikTok and various advertising bodies, performed in its first 12 months of operation.
The report shows that the biggest platforms have taken many positive and unprecedented steps to reduce the volume and impact of disinformation circulating online, from removing hundreds of thousands of misleading adverts to prioritising information from trustworthy sources.
However, the overriding message of the report is that self-regulation can only take you so far. The report highlights the shortcomings of any voluntary regime, including the “regulatory asymmetry” between those who choose to comply and those who don't, the absence of any independent oversight to monitor compliance, and the lack of any effective remedies where there are breaches.
With the EU’s Digital Services Act on the horizon, as well as new regimes in other Member States, this looks to be another step along the road from self-regulation to binding legislation on online harms.