
The Fine Line between Persuasion and Coercion

How government can—and cannot—influence social-media platforms' content-moderation policies on controversial issues.

In 2020 and 2021, at the height of the COVID-19 pandemic, social media platforms were awash in dangerous health misinformation. These posts included false claims about the dangers of vaccines, false claims about the health benefits of alternative treatments, and much more. This was a problem for public health—and it was also a content-moderation problem for the platforms. Federal officials in the White House and at the Centers for Disease Control frequently contacted the platforms to point out posts that flew in the face of science. The platforms used this information to decide which posts to remove.

This kind of content moderation raises a sharp legal question. Many of these posts, even the ones that are blatantly false, are protected speech under the First Amendment. The government generally cannot compel platforms to remove legal content. But platforms can decide on their own to remove health misinformation and other content, and the government is mostly free to persuade platforms to do so.

In Murthy v. Missouri,[a] decided in June 2024, the U.S. Supreme Court wrestled with the line between coercion and persuasion. It held that users suing the government must show there is a “concrete link” between government pressure and the removal of their specific posts. As long as platforms “exercise their independent judgment” over content moderation, there is no First Amendment violation.

In this column, I will describe the history of the Murthy case and explain how it leaves platforms free to set their own content-moderation policies on controversial issues. This is the third column in a series about recent changes to online speech law. Future columns will deal with the TikTok ban and platform liability for algorithmic recommendations, both of which are currently being litigated.

Background

Almost everyone has strong opinions about social media, including governmental officials. For years, they have spoken out publicly about what they see as dangerous posts, such as viral challenges, terrorist propaganda, and scientifically dubious health claims. Many officials, at every level of government, have used the “bully pulpit” of their public prominence to denounce this material and ask platforms to work harder to block it. Some officials have gone further, warning that unless platforms clean up their act, they will pass laws to force them to do so.

In particular, starting with the Obama administration, federal officials have been in regular contact with the major platforms, including Facebook, Twitter, and YouTube, to discuss their concerns about specific types of content. The Cybersecurity and Infrastructure Security Agency, for example, forwarded to platforms information about networks of accounts that appeared to be controlled by foreign intelligence services. These networks typically violate platform policies against what Facebook calls “coordinated inauthentic behavior,” so the messages frequently led the platforms to suspend these accounts.

As another example, the Centers for Disease Control would host meetings for platform representatives to lay out its best understanding of the science behind the COVID-19 pandemic. The platforms used this information to remove posts containing false and dangerous information about vaccine side effects and about ineffective alternative “cures.”

Unsurprisingly, many users vehemently disagreed with the platforms’ content-moderation decisions. For example, some doctors believed that the U.S. policy response to the pandemic was far too aggressive and that quarantine orders disrupted people’s lives for very little health benefit. From their perspective, governmental officials and platform executives had conspired to silence dissenting viewpoints.

The Litigation

A group of five individual users, joined by the states of Missouri and Louisiana, sued a long list of Biden administration officials, including President Biden himself and Surgeon General Vivek Murthy (whose name became the case caption). They argued that the platforms had removed their posts at the officials’ request, in violation of the First Amendment.

It is important to understand why this was a lawsuit against governmental officials, rather than against the platforms. Federal law gives platforms strong rights to engage in content moderation as they see fit. In its recent decision in Moody v. NetChoice,[b] the Supreme Court held that platforms have their own First Amendment rights to decide which content they will and will not carry. In the last few years, some states and plaintiffs have made a series of increasingly creative assaults on these doctrines, but for now, it is clear that if Facebook decides on its own to remove my posts about my favorite music, it has every right to do so.

Thus, the Murthy plaintiffs instead sued an array of government officials, arguing that they had illegally pressured the platforms. The First Amendment does not protect government officials who compel private actors to engage in censorship. If a police officer is upset at a journalist’s exposé of police corruption, and orders a bookstore to take the journalist’s book off the shelf, that is a First Amendment violation. The journalist can sue the police officer, even if it was the bookstore clerk who physically removed the book.

On the symbolic date of July 4, 2023, the trial court ruled for the states and users. In its view, the First Amendment is violated whenever the government either “coerces” or “substantially encourages” a platform to remove user-posted speech. It issued a sweeping injunction prohibiting Biden, Murthy, and dozens of individuals at numerous agencies from coercing or encouraging the platforms to moderate the plaintiffs’ posts.

The defendants immediately appealed, arguing both that the decision was wrong and that the injunction was so broad and vague as to leave them with no useful guidance as to what they could and could not say. The Fifth Circuit federal appeals court substantially affirmed the trial court’s ruling that there was a First Amendment violation, but it narrowed the injunction somewhat, removing some of the agencies that had never directly communicated with the platforms. The defendants asked the Supreme Court to hear the case, and it did.

Understanding Standing

Justice Amy Coney Barrett[c] wrote the majority opinion dismissing the plaintiffs’ claims. Notably, she did so on a procedural ground—the doctrine of “standing”—rather than reaching the First Amendment analysis itself. Still, the opinion says a great deal about how the government can and cannot influence platforms’ content-moderation decisions.

Standing is a judicial doctrine that prevents people from bringing lawsuits unless they have a personal stake in the outcome of a case. If you hit me, I have standing to sue you for battery. But if you hit my neighbor, only he has standing to sue. As a bystander, I have nothing to gain or lose from the lawsuit. I was not the one injured, and I will not receive any damages if you are found liable.

Lawyers would say that standing is a “procedural” rule, not a “substantive” one. It affects how the litigation process proceeds, rather than deciding who is entitled to what under the law. But standing is sometimes said to be “entwined” with the substance of a case, because often the only way to know whether a plaintiff has standing is to look closely at the gist of their claims.

In Murthy, the plaintiffs had all clearly been harmed by the platforms’ content moderation: their posts had been removed or their accounts suspended. That was more than enough to have standing to sue the platforms—but it did not by itself give them standing to sue the government officials.

The missing link, Justice Barrett’s opinion held, was that they could not show that their injuries were “fairly traceable” to the government’s action. If you hit my neighbor, he can’t sue me; yes, he has been injured, but not as a result of anything I did. If he wants to get around this limit by arguing that it’s my fault you hit him, he will need to show that you hit him because of something I did.

And that was where the plaintiffs’ proof problems became insurmountable. In every specific instance that the Supreme Court examined, Biden administration officials had complained about some classes of content, and some content within those classes had been removed. But the plaintiffs could not show that their particular posts came down because of those complaints rather than under the platforms’ own policies. At most, the evidence showed that the platforms “exercise[d] their independent judgment” to remove the posts, which they had every right to do.

In short, Murthy clarified the line between the government persuading platforms to act (generally legal) and compelling them to (generally illegal). A user who objects to government pressure must be able to show that their specific posts were removed (or will be removed in the future) as a consequence of that pressure. If the platform would have removed the content anyway, or voluntarily chose to remove the content after having it pointed out, there is no standing. This is a First Amendment rule in all but name.

A Role for Government in Content Moderation

Murthy was not unanimous. Justice Samuel Alito wrote a dissenting opinion for a three-justice minority. It took a very different view of the facts than Justice Barrett’s opinion for the six-justice majority did. He was far more willing to see governmental outreach to platforms as coming with an implicit threat: take down these posts or we will take revenge on you using our other powers. He would have upheld the lower courts’ injunctions barring a wide range of government contacts with platforms.

In Justice Alito’s view, content moderation on major platforms is currently a dystopia, one in which powerful government officials use back-channel threats to suppress dissenting viewpoints and entrench their own hold on power. And Justice Alito is clearly right that it is easy to imagine cases in which informal “requests” to platforms are in fact demands in all but name.

But there is also something dystopian about the world that Alito’s rule would create. The foremost experts in governmental service—including public-health officials and counter-intelligence analysts—would be legally prohibited from sharing what they know about lies and confusion circulating on social media. Elected officials, who were voted into office because of their views, could not even talk about what they would like to see happen on the Internet, lest platforms construe their remarks as threats. Alito’s dissent, like the lower courts’ injunctions, would have created an upside-down First Amendment rule in which private citizens can use the courts to suppress government speech they disagree with.

Justice Barrett’s opinion offers a persuasive response to the dissent’s concerns. On the one hand, by using a standing analysis focused on the platforms’ independent judgment, the opinion preserves the platforms’ rights to perform content moderation and government officials’ ability to speak on important matters of public policy. But on the other hand, the opinion leaves open the possibility that other plaintiffs, who have stronger and clearer evidence of improper pressure, could come to court to protect their rights to speak online.

There are good reasons to be worried about social-media platforms’ power over online speech. There are even stronger reasons to be worried about governmental power regarding online speech. Murthy encourages platforms and government to be in dialogue with each other. But it also signals that the courts can step in if this dialogue crosses the line into coercion.

    a. 603 U.S. 43 (2024).
    b. No. 22-277 (U.S. July 1, 2024).
    c. Justice Barrett also wrote the opinion in Lindke v. Freed, 601 U.S. 187 (2024), a case about government officials’ use of social media decided by the Supreme Court in March 2024. I discussed Lindke in my September 2024 Communications column.
