Free Speech versus Public Health: The Role of Social Media (Part One)

by Claudia E. Haupt

Social media is the new public health battleground. Three current examples highlight the building clash between free speech and public health playing out in connection with social media: the rapid spread of health mis- and disinformation on social media; the extent to which public health officials can influence social media’s role in spreading potentially harmful health advice to the public; and, finally, the harmful effect of social media itself and what the government may do about it. A look across these three scenarios vividly illustrates the theoretical and doctrinal weakness of current First Amendment jurisprudence, its ill fit with online speech, and its potentially detrimental effects on public health.

I.  Health Misinformation and Disinformation on Social Media

Social media plays a central role in the spread of health misinformation (false or inaccurate information) and disinformation (misinformation that is intended to mislead others). On social media, bad health advice can be disseminated by individuals with no particular health expertise and licensed health professionals alike. And oftentimes, it's hard to tell the difference. First Amendment doctrine establishes a strict division between speech within a professional relationship and speech outside of it. I have previously critically assessed the "Dr. Oz Paradox": licensed healthcare professionals are permitted, as a matter of free speech, to give potentially harmful health advice to the general public, whereas giving the same advice to a patient within the doctor-patient relationship would, if it resulted in harm, subject them to malpractice liability without running afoul of the First Amendment.

The premise is that individuals within the professional relationship are protected by a variety of legal guardrails (e.g., professional licensing, informed consent, malpractice liability, and fiduciary duties) while these guardrails run counter to the presumed equality of speakers outside of this relationship. Outside of the professional relationship, everyone enjoys the same free speech protection. In public discourse, in short, everyone is on their own. This is defensible as a matter of free speech theory where, in the interest of democratic self-governance, all individuals are treated as fully and equally competent to engage in public discourse. (However, this principle of equality in public discourse problematically assumes that everyone also has equal access to good medical advice and individuals don’t have to depend on publicly available health information.) But the protection of bad health advice by licensed professionals—including medical misinformation distributed on social media—may be a different matter. Such “pseudoprofessional advice,” I have argued, may be subject to professional discipline consistent with the First Amendment.

Yet, vast amounts of bad health advice remain widely available on social media. The platforms themselves, to the extent they host third-party content, are insulated from liability via Section 230 of the Communications Decency Act. But social media platforms may, and do, engage in content moderation. The platforms may enforce their own terms of service and community standards, which may prohibit posting and sharing medical mis- and disinformation. As private entities, social media companies are not subject to the First Amendment's constraints on content and viewpoint neutrality that apply to state actors. In the United States, free speech is first and foremost a negative right of individuals against the government, as reflected in the state action doctrine. Accordingly, social media companies may engage in content- and viewpoint-based moderation practices, including the demotion or removal of posts and suspension of users. This remains true after the Supreme Court's June 2024 decision in Moody v. NetChoice, which addressed content moderation on social media platforms.

Consequently, social media platforms may keep out bad health advice if they so choose, but the decision is solely up to these platforms. This directly leads to the next site of conflict: to what extent may government actors, such as public health officials, communicate with the platforms to have certain potentially harmful content removed?

II.  Public Health Communication and Social Media

In response to social media’s role in spreading health misinformation in the context of the COVID-19 pandemic, the Biden administration took steps to communicate with the platforms regarding public health via White House officials, the Surgeon General, and the Centers for Disease Control and Prevention. Allegations of jawboning—using government pressure to unduly influence the communicative output of platforms—quickly followed, and two states (Missouri and Louisiana) along with five individual platform users filed suit in federal court seeking an injunction against communications between the government and the platforms. The plaintiffs prevailed in the lower courts, but the U.S. Supreme Court reversed.

The Supreme Court’s decision in Murthy v. Missouri, announced in June 2024, ultimately held that the plaintiffs did not establish Article III standing. In so doing, the majority, via Justice Amy Coney Barrett, noted that platforms had long “targeted speech they judge to be false or misleading.” Likewise, well before the government started communicating with the platforms, they had begun tightening their content moderation practices in response to health misinformation.

In a dissent authored by Justice Samuel Alito and joined by Justices Clarence Thomas and Neil Gorsuch, however, the jawboning argument was more successful. In the dissenters' view, the platforms' content moderation choices were based on "what the District Court termed 'a far-reaching and widespread censorship campaign' conducted by high-ranking federal officials against Americans who expressed certain disfavored views about COVID-19 on social media." Thus, while the majority opinion leaves open the precise scope of public health officials' permissible engagement with social media platforms, the dissent seems wary of the extent of involvement displayed in this case. To further elucidate the First Amendment limits of such engagement, we must turn to another recent decision.

Another free speech case decided this term, National Rifle Association v. Vullo, provides some guidance regarding potential First Amendment constraints on government involvement. Writing for a unanimous court, and relying on the Supreme Court’s 1963 decision in Bantam Books, Inc. v. Sullivan, Justice Sonia Sotomayor concluded that coercion is constitutionally impermissible: “Government officials cannot attempt to coerce private parties in order to punish or suppress views that the government disfavors.” The Vullo case involved the superintendent of the New York Department of Financial Services. In this role, she “allegedly pressured regulated entities to help her stifle the NRA’s pro-gun advocacy by threatening enforcement actions against those entities that refused to disassociate from the NRA.” This was in violation of the First Amendment. “Ultimately,” Justice Sotomayor concluded, “the critical takeaway is that the First Amendment prohibits government officials from wielding their power selectively to punish or suppress speech, directly or . . . through private intermediaries.”

Read narrowly, the impermissible coercion according to Vullo must come from a government official with direct enforcement authority, and be articulated directly. Justice Alito’s Murthy dissent concedes as much, noting that in Vullo, “the alleged conduct was blunt,” focusing on the identity of the government speaker as “head of the state commission with regulatory authority” over the intermediaries who were told “directly and in no uncertain terms” that their regulatory infractions would go unpunished in exchange for terminating their relationship with the NRA. By contrast, the communication between the public health officials and the platforms in Murthy, according to the dissent, “was delivered piecemeal by various officials over a period of time in the form of aggressive questions, complaints, insistent requests, demands, and thinly veiled threats of potentially fatal reprisals.” The reprisals identified in the dissent were threats to remove the platforms’ statutory liability protection of Section 230 and to take antitrust enforcement action against the platforms.

In Murthy, however, both the government's communication and the threatened government action are quite tenuous compared to Vullo. Taken together, these two Supreme Court decisions thus suggest that a continued exchange of views between public health officials and private social media companies remains permissible as long as the government does not coerce the platforms to engage in specific content moderation, with the outer bounds set by Vullo.

Claudia E. Haupt is a Professor of Law and Political Science at Northeastern University.

Part Two is posted here.
