The Covid-19 pandemic has been accompanied by a wave of false or misleading information about the origin of the virus, its effects and supposed remedies. From conspiracy theories about the virus being linked to 5G infrastructure to the selling of bogus "miracle cures", there are many examples of demonstrably false information circulating online, much of it disseminated through social media.
This "infodemic" has presented an extraordinary test for social media platforms. In the face of a global health emergency, companies have rightly adopted a tough stance on "disinformation": false information that is shared with the intention to deceive, cause public harm or make economic gain (as opposed to "misinformation", which is false information shared in good faith).
Social media platforms have removed or demoted content that has been fact-checked as false or misleading, limited adverts that promote false products or services, and promoted accurate information about Covid-19 from public health organisations.
However, policymakers globally do not believe this has been enough to limit the influence of bad actors who have sought to spread mistruths during the crisis. The EU has today called out the role of foreign state actors in spreading disinformation, as well as those who seek to make financial gain.
In today's announcement, the EU has asked social media platforms for greater transparency about the steps they are taking, including detailed monthly reports on their actions to tackle disinformation about Covid-19. It has also encouraged online platforms that are not yet signatories to the voluntary Code of Practice on Disinformation to sign up.
Today's requests are for platforms to take these steps voluntarily, but the EU is expected to unveil broader regulation of the sector later this year. In the meantime, the debate over how far platforms should go to police content on their sites will only intensify.