
Emergency Measures: Free Speech and Online Content Moderation During Coronavirus

By Jeremy Dang

In the chaos and confusion of the coronavirus pandemic, few have stopped to notice or second-guess the unprecedented role that online platforms have assumed in the past few months. Among the few who have, fewer still have looked beyond the pandemic and questioned what is truly at stake. In a matter of years, social media has become an indispensable part of how we relate to each other and the world around us, and the power of online communication is now undeniable. Today, a tweet or a Facebook post can quickly reach vast personal, social, and even political networks, and online communication has facilitated important national and global movements. But during a global pandemic, when staying informed can mean the difference between life and death, the power of online speech takes on a new urgency, and the dangers of pernicious speech can quickly become lethal. In a pandemic, misinformation is not abstractly disruptive or potentially harmful; it is immediately dangerous in obvious, devastating ways.

With this in mind, online platforms have been widely (and rightfully) praised for the unprecedented steps they are taking to stem the steady flow of misinformation regarding COVID-19. At the same time, however, these novel measures also raise important questions about the future of online content moderation. Stifling misinformation during a global pandemic has obvious, undeniable public health benefits, but it is natural to wonder whether these new measures are a temporary response to a unique moment of global urgency or a first step in a new age of unprecedented content moderation. If they are a sign of things to come, it will become vital not to forget the widespread procedural demands that platforms faced before the pandemic, even as we continue to praise some of the substantive decisions the platforms are making to address the current global health emergency.

An “Infodemic” of Misinformation

As health authorities raced to keep up with new developments in the early months of the coronavirus pandemic, misinformation and conspiracy theories were already proliferating across the Internet at an alarming, almost impressive pace. On social media, unproven home remedies were widely shared and quickly reached scared and misinformed people, often with fatal consequences. When political leaders and outspoken celebrities lent credence to many of these purported cures, people ended up poisoning themselves with substances that global health authorities had already found to be ineffective against the virus.

The impulse to believe what we read online is of course understandable, especially in the uncertainty and chaos that has characterized the past several months. But the World Health Organization was quick to recognize the unique threat that misinformation poses in a global pandemic, particularly in a digital age when a misinformed opinion posing as fact can easily reach scared, vulnerable people around the world. As early as February, the WHO urged social media platforms to exercise their broad powers over their networks to stifle the spread of false information, noting that misinformation was “spreading faster than the virus.” The WHO described the phenomenon as an “infodemic,” where “an overabundance of information – some accurate and some not…makes it hard for people to find trustworthy sources and reliable guidance when they need it.” Within only a few months of the first documented case of coronavirus, media outlets were already reporting countless cases of home remedies gone disastrously wrong.

Major Platforms Respond: Content Moderation During a Pandemic

Major online platforms immediately responded to the WHO’s call for action by committing to unprecedented measures to moderate the spread of health misinformation. Twitter, for example, announced that it was “broadening [its] definition of harm to address content that goes against guidance from authoritative sources of global and local public health information.” Meanwhile, Facebook and Google have each taken down thousands of posts and videos that could mislead users into harming themselves. While platforms have always regulated some of the content that their users post, Harvard Law School lecturer evelyn douek pointed out in a recent interview with me that these new measures are unprecedented in important ways. For one thing, while platforms routinely moderate objectionable posts like hate speech and nudity, they have generally refused to eliminate false information purely because it is false, instead maintaining that a free marketplace of ideas allows the best ideas to rise to the top. But with coronavirus misinformation, online platforms have dropped any pretense that they are not acting as “arbiters of truth,” removing content solely on the basis of its falsity. Moreover, the sheer scale of content moderation is staggering and unprecedented – in the month of March alone, Facebook “displayed fact-checking warnings on 40 million posts related to the pandemic and took down hundreds of thousands of posts.” During the pandemic, more than ever before, these decisions are also being driven by artificial intelligence (AI), a trend that platforms have acknowledged will lead to more errors.

A New Era?

That these measures have served vital public health benefits is undeniable. As evelyn douek puts it, “if ever an emergency justified a clampdown on misinformation and other extraordinary measures, the coronavirus pandemic is surely it.” But such unprecedented levels of content moderation also raise serious questions about the future of online speech. After all, health misinformation is far from the only “fake news” that proliferates on social media, and coronavirus is not the only context in which platforms have been pushed to be more aggressive in flagging and removing posts. If heavy-handed moderation is becoming the new norm, it is important not to forget the widespread procedural concerns levied against platforms before the pandemic, even as we continue to acknowledge the public service that pandemic-era content moderation provides. As douek explained in our discussion,

“The fear for me is not necessarily about whether [online platforms] will be more hands on or not … We’ve abandoned the early-days-of-the-internet ideal that platforms shouldn’t be hands on. The fear for me is that they’re doing all this without the requisite transparency or appeals mechanisms or accountability that we would ordinarily want for such heavy-handed content moderation.”

Praising the substantive decisions that platforms have made in the past few months should not undercut the serious procedural criticisms that were widespread before the pandemic. In the context of health misinformation, where the stakes are immediate and misinformation is ostensibly easier to define and identify, douek acknowledges that the heavy-handed approach might make sense, particularly in the midst of a global health emergency. But if aggressive moderation expands to encompass misinformation or disinformation in political contexts, for example, online platforms should not be allowed to make their decisions in the dark. Given the power and importance of online speech as a means of conducting public discourse, we must continue to hold online platforms to higher procedural standards. As she went on to explain,

“It varies platform by platform, but generally the trend has been towards more [transparency and due process]…That has been the push from civil society and academia towards these platforms…But the concern that I had was that we were kind of abandoning that approach and the call for those kinds of principles in a moment of emergency. And that makes sense, emergency powers should be invoked in emergencies. But also we need to think about what happens when the emergency subsides.”

Indeed, even in a pandemic where the benefits of moderation are obvious and undeniable, platforms’ lack of transparency should be disconcerting. For one thing, douek notes that “we actually have no idea how effective platforms have been at policing COVID mis- and disinformation.” We know only what platforms have told us, which is that they are taking down posts at an unprecedented pace and scale, but “we just have no transparency into how much stuff they’re not taking down, so those really large numbers…are almost certainly not all of it.” Moreover, “we don’t know how much of the stuff they took down wasn’t misinformation or disinformation but was some other type of content. All that we know for sure is that those figures are both over- and underinclusive, and we just don’t know the extent.” In the past few years, critics, legislators, and platforms themselves have reached a consensus that some form of regulation is desirable. But before any sort of regulatory or legislative discussion can even be had, we have to know what goes on behind the scenes – we have to know how content moderation decisions are actually made, which is impossible without a stronger commitment to transparency. As douek put it,

“Every substantive rule that you make in this area is going to have tradeoffs, and so we need to have a conversation about what kinds of things we want to prefer, and we can’t even do that at the moment because we just don’t have the information about what those tradeoffs are. So step one for me is more transparency so that we can actually have the substantive conversation.”

Demands for greater transparency and more stringent due process guarantees are nothing new, and they have not gone unaddressed. In the past five or ten years, evelyn douek noted, online platforms have come a long way in developing more robust appeals mechanisms and releasing more detailed transparency reports. But she also acknowledges that serious obstacles remain to achieving the level of transparency that would be necessary to effectively and critically evaluate how successful platforms have actually been in moderating controversial content. First, transparency reporting and robust appeals mechanisms are expensive – online platforms have already spent a considerable amount of money hiring more content moderators and preparing more transparency reports, and it’s still “nowhere near enough.” Second, because transparency reporting is not mandatory, “the platforms that choose to release more information also open themselves up to more criticism,” creating a perverse incentive structure that rewards the least transparent platforms. Changing this incentive structure requires legislation that creates mandatory procedural requirements, which has not materialized in the United States despite a widespread consensus that some kind of regulation is necessary. Indeed, douek concluded her conversation with me by acknowledging that “regulation of social media platforms is going to be very difficult. Government regulation of speech is very difficult, particularly in America, obviously, with the First Amendment being so robust. I think that while there is consensus that something needs to be done…there’s still not really consensus on what exactly should be done.”

This lack of a concrete direction is, in some sense, understandable. The landscape of online content moderation is constantly evolving as new technology emerges (the shift to AI is just one example of how improving technology can radically change the way moderation happens), and different jurisdictions will have different concerns about content moderation. In the end, “[these] questions are going to have to be answered differently for different platforms, for different legal systems, and at different times. So one of the things we have to be conscious of is a sort of humility… we don’t know a lot about what’s going on, and these things develop and change very quickly.”

If we forget the widespread calls for transparency and due process that animated the back-and-forth between critics and platforms before the pandemic, however, difficult questions will become impossible ones, and heavy-handed content moderation will become disconcerting and unpredictable. There are undoubtedly contexts (like the current pandemic) that call for extraordinary measures, and we are perhaps right to praise the aggressive decision-making of platforms during an emergency that calls for decisive action. But without the transparency to understand how and why content moderation decisions are made (especially when they are driven largely by AI), we are doing little more than “suggesting that current practices are a good model for future models without actually knowing how effective those practices are.” While we view social media as arenas of speech and debate, the companies behind the platforms are still private enterprises whose revenues are driven by advertisements. Their priorities are not defined by their purported commitments to free speech ideals, but by their profit margins. Indeed, the decisions that online platforms have made about what to censor have at times directly contradicted their purported ideals of free expression. Online platforms are rapidly replacing public squares and other traditional speech fora as arenas of public debate. As they take on more proactive roles in policing public discourse, then, some measure of accountability is necessary to ensure that our ideals of free expression do not become relics of the past.

Thank you to evelyn douek, who is an S.J.D. Candidate and Lecturer at Harvard Law School, for her time and thoughtfulness in speaking with me on this evolving topic.

Jeremy Dang graduated from Harvard Law School in May 2021. 

This post was originally published on the COVID-19 and the Law blog.

