Opinion
Computing Applications Viewpoint

A Brave New World of Mediated Online Discourse

Is artificial intelligence up to the task of managing online discourse in social networks?

In a recent Communications column,12 Moshe Vardi asked how to regulate speech on social media platforms. Vardi reminds us that Facebook's 50,000 employees must rely on algorithms for content moderation, given the platform's massive number of users and contributions. While the challenges are evident, algorithms alone may not be the solution. We are facing a large-scale, collective problem that requires not just technical solutionism but in-depth consideration of some of society's most basic values. One such value, freedom of expression, requires thoughtful discussion about how to guarantee it in the digital world—including where its limits should lie, who defines those limits, and how to enforce them. This includes questions of technical approaches to content moderation, definitions of harmful content, and, more fundamentally, what we should expect from online public discourse.

Online social media platforms have become essential components of our societies as shapers of opinions. Both small regional forums and large social networks provide key infrastructure for public discourse in the digital public sphere.8 Some, such as Facebook, have sizeable power, reaching more than two billion people and up to 90% of some countries' entire populations.a These networks can exercise broad discretion in managing the discourse, for example by selecting or prioritizing comments. Although some countries have enacted regulation, it is mostly limited to incentivizing platforms to remove illegal content and, more recently, to removing content in response to complaints.b Often, these laws do not directly prescribe what content to remove or how removal should be done, but instead encourage platforms by way of potential penalties for hosting illegal content.c Most regulation is negative; that is, there is little regulation that grants a right to be published on social media. Beyond such rules, Facebook, for example, applies a single set of community guidelines to the entire planet.

Given the sheer numbers, shaping online discourse requires IT assistance and a whole range of algorithms for different purposes.9 Algorithms can be used to remove unwanted content or to prioritize the messages shown to users. They can help assess the trustworthiness or credibility of a user based on their historical online behavior. They can even estimate the likelihood that a post will generate a serious discussion or an emotional response. Both small regional and large international networks make use of such algorithms. In fact, for smaller media it is vital to be able to manage content without large numbers of staff.
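
For illustration only, here is a minimal sketch of what such bookkeeping could look like; the fields, weights, and function names are invented for this example and do not describe any platform's actual system.

    from dataclasses import dataclass

    @dataclass
    class UserHistory:
        # Invented summary of a user's past behavior on a hypothetical platform.
        posts: int
        removed_posts: int       # posts previously taken down by moderators
        upheld_complaints: int   # complaints against the user that were upheld

    def credibility_score(h: UserHistory) -> float:
        """Toy credibility estimate in [0, 1]: the share of a user's posts that
        survived moderation, discounted for upheld complaints."""
        if h.posts == 0:
            return 0.5  # no history: neutral prior
        survived = (h.posts - h.removed_posts) / h.posts
        return max(0.0, min(1.0, survived - 0.05 * h.upheld_complaints))

    def prioritize(comments: list[tuple[str, UserHistory]]) -> list[str]:
        """Rank comments by their author's credibility score -- one simplistic
        way a platform might decide which messages users see first."""
        ranked = sorted(comments, key=lambda c: credibility_score(c[1]), reverse=True)
        return [text for text, _ in ranked]

Even such a toy example shows how much judgment is baked into a few arbitrary constants and thresholds, which matters for the questions raised below.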

An important emerging question then is, to what extent and with what objective should algorithms supervise the digital public sphere? One multi-pronged approach would be to use algorithms to keep discussions factual and friendly, to remove illegal and harmful content, to support those who might otherwise remain unheard, and to keep everybody interested. For some, algorithmically moderated discourse promises a brave new world of dialogue and quality discussion. In such a world, everybody strives to behave with decency, argues based solely on facts, expresses opinions carefully and with consideration, and never uses any abusive or hateful language. Unfortunately, such a vision of algorithmically policed online discourse is unwise, unrealistic, and unfair. Let me explain why:

This vision is unrealistic because the AI used today is often simple and much of it is, in fact, quite bad. Take, for instance, the famous example of a white-noise video that triggered five false copyright claims when uploaded to YouTube.1,9 Although Google can build some remarkable language-processing tools given its harvesting of online texts in various languages, many of the algorithms used for content moderation are simple pattern matchers that operate only on individual sentences. Many use keywords to detect hate speech, or remove protected intellectual property using large databases and file-hash comparison.9 Today there is very little in terms of context recognition, tracing arguments over longer stretches of dialogue, or efforts to disambiguate subtle meanings of language in cases such as humor or, worse, irony—the latter being notoriously difficult to identify (for example, Wallace13). Commercial tools reportedly deliver only approximately 70% to 80% accuracy, which means that one in five comments, or more, will be misclassified.5 Moreover, people are clever when it comes to outsmarting AI-based content moderation: they invent new spellings such as "k!ll," use graphic text patterns, and so forth. While certain filters have reached surprising levels of detection for some of these constructs, this seems only to fuel the creativity of online trolls.
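
To make the simplicity concrete, the following hypothetical sketch illustrates the two techniques just mentioned, keyword matching and file-hash comparison; the keyword list, fingerprint database, and function names are invented for illustration and do not reflect any real platform's pipeline.

    import hashlib

    # Illustrative blocklist and fingerprint database -- not any platform's real data.
    HATE_KEYWORDS = {"kill", "exterminate"}
    PROTECTED_HASHES = {hashlib.sha1(b"protected song bytes").hexdigest()}

    def flag_hate_speech(comment: str) -> bool:
        """Keyword matching on isolated tokens: no context, no irony detection,
        and trivial obfuscations such as 'k!ll' are missed."""
        tokens = (t.strip(".,!?") for t in comment.lower().split())
        return any(t in HATE_KEYWORDS for t in tokens)

    def flag_copyrighted_upload(data: bytes) -> bool:
        """Exact file-hash comparison against a database of protected works.
        Re-encoding or trimming the file changes the hash and defeats the check."""
        return hashlib.sha1(data).hexdigest() in PROTECTED_HASHES

    print(flag_hate_speech("I will kill you"))               # True: the literal keyword is caught
    print(flag_hate_speech("I will k!ll you"))               # False: the obfuscation slips through
    print(flag_hate_speech("Kill the lights, please"))       # True: a harmless phrase is flagged anyway
    print(flag_copyrighted_upload(b"protected song bytes"))  # True: exact match
    print(flag_copyrighted_upload(b"protected song bytes.")) # False: one extra byte evades the filter

Nothing in such a filter understands who is threatening whom, whether a remark is a joke, or whether a matched file is fair use; those questions are reduced to string and byte comparisons.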

The use of AI algorithms in content moderation can also be unfair. Language technology is strongly language-dependent, and statistical algorithms only work well for the most common languages. This problem was a likely root cause behind the Rohingya scandal—a tragic episode in Myanmar's history partially fueled by online hate speech.10,11 While it might be acceptable to prioritize filters for English in a predominantly English-speaking country, poorly performing algorithms for less common languages can exacerbate existing discrimination and unfair content moderation, regardless of whether that means more or less content being removed.

Although AI and natural language processing have made great progress, it is debatable whether any of the applied techniques truly grasp the meaning of language. Understanding language is a very difficult problem, even for humans. Arguments about what we mean (more often about what we meant) may lead to domestic disputes and courtroom proceedings. Human language reflects all the complexities of life and is a universal tool with which we describe plans, woo partners, order food, end relationships, pray, play, sing, hope, and so forth, to use some of Wittgenstein's famous examples. Fully understanding all these different language practices is probably AI-hard, meaning that we would need human-like AI to solve it. Consequently, algorithms can get it wrong in both directions: they miss content some actors would like to see removed, and they also remove harmless content. The latter is evidenced by increasing numbers of complaints from users who want their removed content reinstated.4 The pandemic worsened the situation, as reduced staff were forced to deal with increasing content; some online media failed to manage the resulting number of complaints about wrongly policed content.

Given these challenges, it is important to acknowledge the shortcomings of AI, to inform policymakers accordingly, to ensure access to data about current practice, and to provide proper appeal mechanisms and procedures. As computer scientists, we have a responsibility to inform the public about the potential technological drawbacks as much as about the apparent successes of algorithms. Despite the shortcomings, some policymakers may expect platforms to go further than removing only illegal content. The recent U.K. online safety bill was strongly criticized as a "recipe for censorship"7 because it would require platforms to take down harmful (but legal) content. While policymakers do not usually prescribe the use of algorithmic tools, they are obviously aware of them; there is a danger, however, that they overestimate algorithmic efficacy and precision. In a recent discussion with a Member of the European Parliament, I pointed to the shortcomings of AI for online content moderation. The politician's reply was: "We know that. But we have to do something." This sounds desperate, but it may be true. The question, then, is whether algorithms are the only answer and to what extent we should really seek to automatically remove harmful content.

Letting AI delete harmful content may also be unwise. One concern about "harmful" content is: who defines it? We walk on thin ice if we ask platforms to algorithmically remove what their owners or AI experts consider harmful. Already today, we ridicule Facebook when it removes images of nude statues standing in our cities' public spaces. Another, more principled concern is whether we should even aim for the algorithmic policing of content so that online discourse loses its sharp edges. The German-Korean philosopher Byung-Chul Han suggests the digital world tends toward complete smoothness because it aims to offer no resistance to our intentions.3 We certainly have not reached such a state in online discourse, but resistance and some roughness might be necessary ingredients of a good and productive discourse. There is little need for a right to free speech if speech is never harmful. The protester who exposes a corrupt politician and the whistle-blower who uncovers an anti-democratic conspiracy require freedom of speech. The same applies to the civil-rights activist fighting for liberty under a dictatorship. Such speech may be harmful to those targeted—especially to those in power. Free speech as a human right was invented to enable free democracies and democratic discourse. This may include "information that offends, shocks or disturbs." Opinions may need to "run counter to those defended by the official authorities or a significant part of public opinion."d In addition, discourse may need to shake society's foundations more fundamentally.

Mind you, freedom of speech is not limitless. Many constitutions include rules that create space for lawmakers to regulate this freedom and rightfully prohibit, for example, terrorist propaganda. Such well-defined limits are clearly required. But it is an entirely different game to remove content with the justification that it may be "harmful" when it remains unspecified and unclear who considers it as such. Today, we may often not even know what is removed and why.4,14 I am worried that we are delegating responsibilities to AI that, at the current state of the technology, should not be delegated, because doing so may lead to treating people unfairly, can build unrealistic expectations, and may shift the scale in favor of the suppression of opinion. More importantly, it may give potentially undemocratic power to those who do the deleting.

Alternatives to algorithms have been discussed. One idea is to do away with anonymity. At least in democratic societies with proper protection of human rights, this can have positive effects, but it does not solve the problem completely. Education, for example in media literacy, is often proposed, but some research shows little-to-no impact, or even harmful overconfidence in assessing content.2 Some online media use ranking systems for trusted contributors, but these risk being attacked as social scoring. And then there are audits and governance boards, a current trend in social media that requires democratic oversight and may help improve the situation. Finally, there is intense discussion about platform regulation, including rules for online content.6 There is still ample room for improving these suggestions, for further research, and for new approaches. We need more bright minds working on such technological choices, something we try to encourage with our initiative on digital humanism.e But first, before embracing a "brave new world" of AI-moderated content, we must better understand what we expect from online discourse. Discourse serves many different purposes and may thus require significantly more differentiated approaches to algorithmic moderation than those used today.

    1. Baraniuk, C. White noise video on YouTube hit by five copyright claims. BBC News (Jan. 5, 2018); https://bbc.in/3eiGQlL

    2. Bulger, M. and Davison, P. The promises, challenges, and futures of media literacy. Data & Society Research Institute. (2018); https://bit.ly/3q6xu1J

    3. Han, B.-C. Im Schwarm (in English: In the Swarm: Digital Prospects). MIT Press, Cambridge, MA, 2017.

    4. Cowls, J. et al. Freedom of Expression in the Digital Public Sphere [Policy Brief]. Research Sprint on AI and Content Moderation (2020); https://bit.ly/3IZ5INn

    5. Duarte, N., Llanso, E., and Loup, A. Mixed Messages? The Limits of Automated Social Media Content Analysis. Center for Democracy and Technology, 2017; https://bit.ly/3mdXFCB

    6. Evens, T. and Donders, K. Regulating digital platform power. Journal of Digital Media and Policy 1, 3 (2020), 235–239.

    7. Hern, A. Online safety bill 'a recipe for censorship', say campaigners. The Guardian (May 12, 2021); https://bit.ly/32cOnzJ

    8. Mazzoleni, G. et al., Eds. The digital public sphere. In The International Encyclopedia of Political Communication. Wiley Blackwell, 2015, 322.

    9. Sartor, G. The impact of algorithms for online content filtering or moderation. Policy Department for Citizens' Rights and Constitutional Affairs, DG Internal Policies, 2020; https://bit.ly/32gvYle

    10. Solon, O. 'Facebook's failure in Myanmar is the work of a blundering toddler.' The Guardian (Aug. 16, 2018); https://bit.ly/3ISXrup

    11. Stecklow, S. Why Facebook is losing the war on hate speech in Myanmar. Reuters Special Report (2018); https://reut.rs/3GSaElo

    12. Vardi, M.Y. What should be done about social media? Commun. ACM 63, 11 (Nov. 2020), 5; DOI: 10.1145/3424762.

    13. Wallace, B.C. Computational irony: A survey and new perspectives. Artif. Intell. Rev. 43 (2015), 467–483; https://doi.org/10.1007/s10462-012-9392-5

    14. York, J.C. Silicon Values. Verso, London, U.K., 2021.

    a. See https://bit.ly/3m9HsOP

    b. For example, the proposed regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online or Article 17 of the Copyright in the Digital Single Market Directive 2019.

    c. For example, the German Network Enforcement Act includes penalties of up to €50 million.

    d. Guide on Article 10 of the European Convention on Human Rights. European Court of Human Rights; https://bit.ly/33u1RaG

    e. See https://bit.ly/3e3Sel6
