
Public Business and Private Accounts

Government officials cannot always block critics on social media.


Almost everyone who uses social media agrees that sooner or later you need to use the block button. Maybe it is a spammer who wants you to buy an obscure cryptocurrency, maybe it is an obsessive sports fan who directs obscene tirades at fans of rival teams, or maybe it is a stalker who keeps showing up in your mentions. People have all kinds of good reasons for blocking other users.

But in at least some situations, the Supreme Court held this spring in a case called Lindke v. Freed,a it is illegal to block other users. If you are a government official using social media as part of your job duties, the users you block may have a First Amendment right against being blocked. Indeed, even if you also use your account to post pictures of your cat and birthday wishes for your friends, your official position may put the block button off-limits.

In this column, I will give some context on Lindke and its implications for government and citizen speech on the Internet. This is the second column in a series on how U.S. law on online speech is changing rapidly. Future columns will deal with platform liability, app bans, and other high-profile controversies.

Legal Background

Identifying the First Amendment speech interests of users blocked by government officials is a little tricky. The obvious but not-quite-right answer is the constitutional right “to petition the Government for a redress of grievances.” But this principle is limited. The right to speak to the government does not oblige it to listen at all times and in all ways. You cannot barge into your Senator’s office and demand a meeting on the spot. (For this reason, some courts have held that public officials are free to use the mute feature, so they do not see messages from particular users.)

Instead, the First Amendment problem with blocking is that it prevents the blocked users from speaking to other users. A blocked user on X cannot respond to a tweet, which means that the thread of replies to a politician’s tweets could become a supporters-only zone. The same is true for a user blocked from commenting on a government agency’s Facebook posts.

The crucial issue in most social-media blocking cases has been whether the blocker is acting in a “public” or “private” capacity. The First Amendment says that “Congress shall make no law … abridging the freedom of speech.” So while being blocked might abridge your speech, it only becomes a constitutional issue if the blocking happens because the government ordered it—in constitutional terms, if the blocking is “state action.”

Usually it is obvious whether there is state action or not. When a state government enacts a law requiring social-media companies to block minors (as I discussed in my May 2024 Communications column), that is clearly state action, for which the government must provide a sufficient legal justification. But when you unfriend a nosy neighbor on Facebook and keep them from seeing your posts, that is not state action. Private citizens can block to their heart’s content, for any reason, or even for no reason at all.

The first and most famous blocking case illustrates why state action has been such a recurring challenge. Ex-President Trump started using Twitter in 2009 when he was just a private real-estate developer and reality-TV host, and continued to use it heavily after he was elected in 2016. He was almost as prolific a blocker as he was a tweeter.

Two federal courts held that he was acting as President Trump, rather than as Citizen Trump, when he used Twitter.b For one thing, he used government resources on it; he was assisted in tweeting by Dan Scavino, the White House Director of Social Media. (Trump’s famously ungrammatical and seemingly off-the-cuff Twitter style was in fact a carefully crafted media presence.) For another, he used @realDonaldTrump for official business: announcing nominations and firings, declaring policy moves, and making what even his own press office characterized as official statements. But because the case was still pending at the Supreme Court when he lost the 2020 election and ceased to be a government official, it became moot and is no longer a precedent.c

The Lindke v. Freed Case

The Trump case opened the floodgates for other suits against politicians over their social-media policies. Officials at all levels of government, from school board members to Representative Alexandria Ocasio-Cortez, were sued by constituents they had blocked. The plaintiffs won most, but not all, of these cases. The courts frequently had difficulty explaining whether an official’s social-media posts were public or private, and in April 2023, the Supreme Court agreed to hear two cases to clarify the issue.

In Lindke v. Freed, James Freed created a private Facebook page in 2008 when he was a college student, then later made it public. In 2014, he was appointed city manager of Port Huron, Michigan, and started mixing posts about his official duties (such as whether city residents could raise chickens) and personal posts (such as family photos). He blocked Kevin Lindke over criticism of Port Huron’s pandemic response.

Another case, O’Connor-Ratcliff v. Garnier, involved two school-board members who created Facebook pages for their campaigns, and then used the pages to post news about the board’s activities and solicit feedback from constituents.d They blocked parents of children in the district for posting repetitive comments, including “nearly identical comments on 42 separate posts on O’Connor-Ratcliff’s Facebook page.”

Although the facts in the two cases were quite similar, the appellate courts took very different approaches to them. The Sixth Circuit held that Freed’s Facebook account was private, because he did not use government resources to run the account and was not required to post as part of his official duties. But the Ninth Circuit held that O’Connor-Ratcliff and Zane were acting as public officials because there was a “close nexus between the Trustees’ use of their social media pages and their official positions.”

The Supreme Court used Lindke to announce a new two-part test. (As it often does when it hears two related cases, it remanded O’Connor-Ratcliff for a fresh look in light of the Lindke decision.) Justice Barrett’s crisp opinion focused on the authority governments confer on officials to act on their behalf. According to her opinion, an official’s speech is state action when they “(1) possessed actual authority to speak on the State’s behalf, and (2) purported to exercise that authority when [they] spoke.”

On the first prong, not everything an official says is within the scope of their job duties. Barrett explained, “imagine that Freed posted a list of local restaurants with health-code violations,” even though his responsibilities did not include public health. This would be private speech, because only the Health Department has responsibility for enforcing the city’s laws on safe handling of food.

On the second prong, even officials can speak unofficially. If a school-board president had described a new policy to friends at a backyard barbecue, that would be private speech, even though the exact same statements might be state action if he gave them while standing behind a podium at a press conference.

Importantly, the opinion explained, the right unit of analysis for social-media speech is the post, not the account. Freed’s account, for example, was not all public or all private; it mixed posts of both types. And even individual posts can require subtle analysis. A mayor who announces a temporary suspension of alternate-side parking might be engaged in state action, but if the mayor merely reshares an announcement posted elsewhere, it might be private.

Unanswered Questions

Lindke v. Freed is short, engaging, and persuasive. It is hard to argue with its examples. And it has some notable virtues.

For one thing, the granular per-post analysis is a significant improvement over the all-or-nothing approach of classifying an entire account as public or private. Freed’s account is a good example, since it long predated his appointment, and he continued to post Bible quotes and dog pictures afterwards. The distinction is a heartening sign that the Supreme Court can sometimes rise to the occasion when technical knowledge is required.

The opinion is also honest about its consequences. Because blocking is an account-level action, a blocked user cannot comment on any of the blocker’s posts, whether they are classified as private or public. So if Freed blocks Lindke because he objects to Lindke’s comments on his dog photos, Freed is also blocking Lindke from commenting on city announcements.

“A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability,” the opinion explains. This may seem like a harsh outcome, but the rule creates good incentives. It encourages politicians and public servants to clearly separate their personal, campaign, and official social-media presences. That is good for the public’s speech, and good for government ethics.

At the same time, Lindke has some surprising, and perhaps unintended, implications. One is that it may inhibit government agencies from engaging with the public on social media. Deleting comments and blocking users are essential tools of content moderation, without which the quality of discussion rapidly declines. Anyone who has a social-media presence knows that spammers, trolls, and abusers breed on unmoderated comments sections like fruit flies on a rotting banana.

Technically, Lindke is silent on whether government officials may sometimes be justified in blocking other users. It answers only the threshold question of when a person is acting in their official capacity, not the substantive question of what actions officials can take.

But other cases paint a discouraging picture. The city of Sammamish, WA, for example, had a social-media policy of deleting off-topic comments from its Facebook posts. A federal court held that this was a violation of the First Amendment, because it was a content-based restriction on speech.e Under this rule, Facebook users have a right to post restaurant reviews, childish insults, and memes about YouTube stars on any governmental post they want.

This may be good for individual users, but it is bad for public engagement. If a governmental body cannot keep its social-media comments from descending into chaos and abuse, it may choose to disable comments entirely. To be sure, some of the best governmental social-media usage is broadcast-only: the Consumer Product Safety Commission posts surreal memes, and New Jersey disparages other states. But still, something has been lost when people who want to seriously discuss policy cannot do so in the same place where their governmental officials are actually talking about it.

Lindke’s focus on individual posts, however, may offer a productive way forward. The Sammamish case, and others like it, have turned on courts’ findings that governments create “designated public forums” when they post on social media. A designated public forum is a space opened up by government for public discussion and debate. Governments are not required to create designated public forums, but once they do, they must genuinely allow speech on any subject from any point of view.

Lindke, however, suggests that the relevant public forum might not be the account but each individual post. In that context, there is a stronger argument that each post is a “limited public forum” devoted to discussion of one specific subject: this post is about garbage collection, that one is about City Hall’s business hours, and so on. Government can generally enforce restrictions on speech to keep a limited public forum dedicated to its particular subject. If so, then off-topic comments really can be off-limits, and governments may have more leeway to engage in content moderation against spam and abuse.

The judicial system is sometimes accused of being out of touch with technological changes. But Lindke is an example of a court engaging productively with new communication technologies. The Supreme Court’s opinion is a modest, incremental step: resolving the case before it, while providing some helpful guidance for future ones.

a. Lindke v. Freed, 601 U.S. 187 (2024).
b. Knight First Amendment Institute v. Trump, 302 F. Supp. 3d 541 (S.D.N.Y. 2018), aff’d, 928 F.3d 226 (2d Cir. 2019).
c. Biden v. Knight First Amendment Institute, 141 S.Ct. 1220 (2021).
d. O’Connor-Ratcliff v. Garnier, 601 U.S. 205 (2024).
e. Kimsey v. City of Sammamish, 574 F. Supp. 3d 911 (W.D. Wash. 2021).
