As the novel coronavirus began to make its way around the world earlier this year, an epidemic of online misinformation spread alongside it. Companies stepped up their moderation efforts — sometimes with unintended results.
Public health officials warned that rumors and conspiracy theories, including false claims that 5G technology was causing the virus or that bleach could cure it, would cost lives. Social media companies came under mounting pressure to fight the problem, which has long plagued their platforms. A Facebook boycott organized in recent weeks by hundreds of major advertisers over misinformation and hate speech on the site, amid widespread demonstrations against racism and police brutality, compounded that pressure. Next week, Facebook CEO Mark Zuckerberg, along with the heads of other tech giants, is set to testify before the House antitrust subcommittee, as part of an “ongoing investigation of competition in the digital marketplace.”
As the pandemic took hold, platforms began to implement new measures: working with governments and the World Health Organization to promote accurate information; introducing misinformation warning systems; and removing more content than ever, relying increasingly on algorithms as companies were forced to send their human moderators home.