Opinion
Technology Strategy and Management

Section 230 and a Tragedy of the Commons

The dilemma of social media platforms.

At the center of the debate over regulating social media and the Internet is Section 230 of the U.S. Communications Decency Act of 1996. This law grants online platforms immunity from civil liability for third-party content.22 It has fueled the growth of digital businesses since the mid-1990s and remains invaluable to the operations of social media platforms.6 However, Section 230 also makes it difficult to hold these companies accountable for misinformation or disinformation they pass on as digital intermediaries. Contrary to some interpretations, Section 230 has never prevented platforms from restricting content they deemed harmful and in violation of their terms of service. For example, several months before suspending the accounts of former President Donald Trump, Twitter and Facebook started to tag some of his posts as untrue or unreliable, and Google's YouTube began to edit some of his videos. Nevertheless, online platforms have been reluctant to edit too much content, and most posts continue to spread without curation. Nor has the problem of false and dangerous content subsided since the presidential election: Social media is now the major source of anti-vaccine diatribes and other misleading health information.21

Given the law, social media platforms face a specific dilemma: If they edit too much content, they become more akin to publishers than to neutral platforms, and that may invite strong legal challenges to their Section 230 protections. If they restrict too much content or ban too many users, they diminish network effects and the associated revenue streams. Fake news and conspiracy theories often go viral and have been better for business than real news, generating billions of dollars in advertising revenue.1


The reluctance to edit content also has created what economists and others describe as a "moral hazard." This phrase refers to a situation in which an individual or organization can take risky actions because they do not have to bear the full consequences of those risks, such as when they are well insured or government oversight is weak.2 In this case, social media platforms can pass on highly profitable falsehoods with relatively minor adverse consequences, given the protections of Section 230 and (so far) manageable financial penalties for violating digital privacy rules or even antitrust regulations.14

Yet moral hazard may not be a strong enough term to describe what could happen. As my coauthors and I have written elsewhere,5,7 another motivation for platform businesses to self-regulate more aggressively is the potential for a "tragedy of the commons." This phrase refers to a situation in which individuals or organizations narrowly pursue their own self-interest, as with moral hazard, but in the process deplete an essential common resource that enabled their prosperity to begin with.11 Think of the Easter Islander who cut down the last tree of a once-bountiful forest to make a fire, and then left everyone with an island that had no more trees. With online platforms, we can view the essential common resource as user trust in a relatively open Internet that has become a global foundation for digital commerce and information exchange. User trust, especially in dominant social platforms such as Facebook and Twitter, as well as in online marketplaces like Amazon and their product reviews, has been declining for years.8,15 Facebook, YouTube, and TikTok now claim to be more transparent about how their algorithms relay information and detect false accounts aimed at manipulating readers for political and other purposes. However, most of these efforts seem to have been superficial or temporary.4

Of course, it is not unusual for politicians to manipulate the media for their own ends. In this regard, Donald Trump has been compared to former Senator Joseph McCarthy of Wisconsin (1908–1957).19 To get out his message about the threat of a Communist conspiracy at all levels of government and society, McCarthy had to rely on newspapers and magazines, radio and TV interviews controlled by a few established companies, and public Senate hearings.12 By contrast, Trump relayed his message mainly through television (the Fox network) and social media platforms. With the latter medium, network effects can supercharge the flow of information, as well as nonsense and dangerous falsehoods, at nearly 123,000 miles per second, roughly the speed at which data travels through optical fiber. With modern technology, Trump was able to communicate directly and at will with 88 million Twitter followers and another 60 million subscribers and readers on other social media sites.20 Clearly, Trump exploited social media with a mastery that would have awed Joseph McCarthy, who never made it beyond the Senate.

After the insurrection attempt and Trump's ongoing allegations of election fraud, Twitter and Facebook, followed by Google's YouTube and Snapchat, all suspended his accounts. The platforms claimed Trump had violated terms of service that prohibit content encouraging violence or criminal acts. Amazon also stopped hosting the right-wing Parler social media app to which some Trump followers had gravitated as an alternative. However, these measures came after the Capitol violence and multiple deaths, and they attracted criticism from both liberals and conservatives.16

Both Republicans and Democrats have advocated repealing Section 230, though for opposite reasons. While president, Trump argued the online platforms were already editing content from him and other right-wing sources and so should no longer be protected from lawsuits charging them with discrimination.9 As a presidential candidate, Joseph Biden argued for the repeal of Section 230 because he felt online platforms should be held responsible for disseminating false or misleading content, a form of accountability that Section 230 currently precludes.13 So what can we do about the current dilemma facing social media?

First, the U.S. Congress, perhaps drawing on recommendations from the U.S. Justice Department, must amend Section 230 to reduce the blanket protections offered to online platforms. Digital businesses still need some Section 230 protections to facilitate the open exchange of information, goods, and services. Yet we would all benefit from a revision of Section 230 that allows the public to hold online platforms accountable, at least for advertisements and profits tied to the willful dissemination of false and dangerous information. We need government guardrails not only to protect the public but also to protect online platforms from their worst tendencies, namely the temptation to tolerate harmful or destructive content that generates billions of dollars in sales and profits. Even Mark Zuckerberg has acknowledged the problem of regulating digital content is too big for Facebook and other social media platforms to solve by themselves.3

Second, the social media platforms must become more systematic and transparent in how they detect and curb false information that might endanger the public and individual lives.7 Suspending accounts that openly promote violence or dangerous misinformation is one measure they have already taken, but platform companies must act faster and more frequently. Facebook's recent use of an external oversight board, which upheld the ban on Trump's account, albeit temporarily, is a step in the right direction, but the board was slow to engage and we do not know how often Facebook will call on its services.10 In addition, to sort through the vast amount of everyday Web traffic, social media platforms have employed thousands of human editors working alongside computers running artificial intelligence and machine-learning algorithms. Going forward, these companies will probably have to invest much more in both human editors and AI technology.

Third, we must acknowledge that technology, government regulation, and even more effective self-regulation by online platforms cannot by themselves fix the deep intellectual problems and political polarization, fueled in part by social media, that now plague American society. For example, just after January 6, 2021, The Washington Post interviewed a man whose wife had been one of the police officers defending the Capitol building. He admitted his mother had been one of the rioters: "My mother has always been a conservative evangelical with extreme religious beliefs. My childhood was shaped by her profound distrust in science, public education, and vaccines. And yet my mom's political activities basically tracked the mainstream of the Republican Party."17 In another case, The New York Times described a former Navy SEAL trained in counterintelligence who had come to the Capitol protests but remained outside. We are told this registered Republican completely "bought into the fabricated theory that the election was rigged by a shadowy cabal of liberal power brokers who had pushed the nation to the precipice of civil war. No one could persuade him otherwise."18

We need to better understand why so many Americans are so susceptible to misinformation from social media and other sources. I once thought better education in science as well as history and ethics would help people think more clearly. Yet many well-educated Americans, including scores of politicians in our federal, state, and local governments, still claim the presidential election was rigged or oppose vaccines and masks. Education alone, apparently, is not the answer.

And so we risk a potential tragedy of the commons, in which trust in online platforms declines to the point that very few people believe what they read on the Internet. Companies will always pursue their own self-interests, but we must raise the consequences of spreading misinformation or disinformation that can result in death, destruction, and broad damage to society. We need both citizens and government actors to persuade platform company executives and boards of directors to recognize the danger of destroying trust in a technology that has provided them—and most of us—with so many benefits.

    1. Aral, S. The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health—And How We Must Adapt. Currency, New York, 2020.

    2. Arrow, K. Uncertainty and the welfare economics of medical care. The American Economic Review 53, 5 (1963), 941–973.

    3. BBC News. Mark Zuckerberg asks governments to help control internet content. (Mar. 30, 2019); https://bbc.in/3CE7sIQ

    4. Boyd, A. TikTok, YouTube, and Facebook want to appear trustworthy. Don't be fooled. The New York Times (Aug. 8, 2021).

    5. Cusumano, M., Gawer, A., and Yoffie, D. Can self-regulation save digital platforms? Industrial & Corporate Change, Special Issue on Regulating Platforms & Ecosystems (2021).

    6. Cusumano, M., Gawer, A., and Yoffie, D. The Business of Platforms: Strategy in the Age of Digital Competition, Innovation, and Power. Harper Business, New York, 2019.

    7. Cusumano, M., Gawer, A., and Yoffie, D. Social media companies should self-regulate. Now. Harvard Business Review (Jan. 15, 2021).

    8. eMarketer. Facebook ranks last in digital trust among users. eMarketer.com (Sept. 24, 2020).

    9. Edelman, G. On Section 230, it's Trump vs. Trump. Wired (Dec. 3, 2020).

    10. Feiner, L., and Rodriguez, S. Facebook upholds Trump ban but will reassess decision over coming months. CNBC Tech Drivers (May 5, 2021); https://cnb.cx/3AuGP71

    11. Hardin, G. The tragedy of the commons. Science 162 (1968), 1243–1248.

    12. Hofstadter, R. Anti-Intellectualism in American Life. Vintage, New York, 1963.

    13. Kelly, M. Joe Biden wants to revoke Section 230. The Verge (Jan. 17, 2020).

    14. Kira, B., Sinha, V., and Srinivasan, S. Regulating digital ecosystems: Bridging the gap between competition policy and data protection. Industrial & Corporate Change, Special Issue on Regulating Platforms & Ecosystems (2021).

    15. Khan, L. Amazon's antitrust paradox. The Yale Law Journal 126, 3 (2017), 710–805.

    16. Kirkpatrick, D., McIntire, M., and Triebert, C. Before the Capitol riot, calls for cash and talk of revolution. The New York Times (Jan. 16, 2021).

    17. Marshall, D. My wife guarded the Capitol. My mom joined the horde surrounding it. The Washington Post (Jan. 23, 2021).

    18. Philipps, D. From Navy SEAL to part of the angry mob outside the capitol. The New York Times (Jan. 26, 2021).

    19. Remnick, D. What Donald Trump shares with Joseph McCarthy. The New Yorker (May 17, 2020).

    20. Riley, K., and Stamm, S. How Twitter, Facebook shrank President Trump's social reach. The Wall Street Journal (Jan. 15, 2021).

    21. Shephard, K. Miami private school says teachers who get coronavirus vaccine aren't welcome, citing debunked information. The Washington Post (Apr. 27, 2021).

    22. The Wall Street Journal. Section 230: The Law at the Center of the Big Tech Debate. (Nov. 18, 2020); https://bit.ly/2VzSNxD

    I thank Alexander Eodice, Annabelle Gawer, Gary Gensler, Mel Horwitch, John King, Nancy Nichols, Gilly Parker, Xiaohua Yang, and David Yoffie for their comments on prior drafts of this column.
