Demanding the Truth

Social media outlets inadvertently aid fake news by not closing down the accounts that generate it, and by checking the veracity of items only after they are posted.
The best social media outlets can do is to detect fake news (or other offensive content) quickly after it is posted; they cannot prevent individuals from posting fake news.

The U.S. Senate's Select Committee on Intelligence in October released to the public a report titled Russian Active Measures Campaign and Interference in the 2016 U.S. Election, which concluded that the Internet Research Agency (IRA), a Russian group engaged in online influence operations on behalf of Russian business and political interests, had "used social media to sow disinformation and discord among the American electorate."

According to the report, the IRA used bots to upload posts to Facebook (>61,500), Instagram (>116,000), and Twitter (>10.4 million), as well as to Reddit, YouTube, and other social media platforms. The Russian bots were controlled by English-speaking human operatives called "trolls," who reportedly were paid $700 each to "deepen distrust in our political leaders; exploit and widen divisions within American society; undermine confidence in the integrity of our elections; and, ultimately, weaken America's democratic institutions and damage our nation's standing in the world," according to the report.

The report concluded that the social media outlets inadvertently aided the more than 10.5 million fake news reports posted by the bots by not closing down IRA accounts until the posts had gone viral.

Recently, the University of Utah's S.J. Quinney College of Law hosted the Lee E. Teitelbaum Utah Law Review Symposium on News, Disinformation, and Social Media Responsibility. The event aimed to address the "twin problems arising from the changing media landscape," as its description noted: "In recent years, major social media platforms have become the primary mechanism for the distribution of news: delivering, but not producing, journalism on matters of public concern, while attracting much of the advertising revenue that once supported traditional news organizations. These same platforms have also become the primary target for those who would spread disinformation, which is regularly consumed and shared by users as if it were news. The ways these companies' policies and practices foster or impede both news and disinformation have significant consequences for the American citizen and for democracy as a whole."

The four-hour Symposium focused on media law, press freedom, and corporate social responsibility, featuring speakers who addressed corporate obligations, the role of social media companies, and the business operating decisions that have failed to reshape civic cyberspace in the age of fake news.

The Symposium's slate of presenters, which included social media executives, journalists, and media-law experts, agreed that Facebook, Instagram, Twitter and the other social media companies will never actually be journalistic news outlets. Genuine news is authored by journalists, but social media reduces journalists to the same status as any other user, since those platforms do not publish actual news stories, but merely personal opinions.

"Social media is mainly for young people who don't want to have anything to do with journalism," said Dahlia Lithwick, senior editor and legal correspondent for Slate. "They only want to read things they already believe."

The social media platforms were conceived and designed for non-journalists to share information among themselves, not for critical news analysis. It is only due to their overwhelming popularity that social media has become a substitute for "news outlets," albeit without journalistic principles, according to University of Miami law professor Lili Levi. "Legacy media used journalists to cultivate a target audience for advertisers, then paid the journalists with advertising revenue," said Levi. Today, however, "Social media ads target audiences by tracking the sites they visit, where they click; consequently, they don't need journalists at all."

Further, in the legal documents accompanying their initial public offerings (IPOs), claimed Levi, the social media companies describe their platforms as being for ordinary people communicating among themselves, with no mention of journalism at all. Thus Facebook, Instagram, Twitter, and the like have a fiduciary duty to their shareholders to maintain an instant posting capability for registered users, rather than acting like news outlets that require news to be edited journalistically.

"People share information without critically analyzing it," said Andy Pergam, director of Governance and Strategic Initiatives at Facebook. "We now use third-party fact checkers after a post is up, but human fact checkers do not scale, and they take too long to verify posts."

The tangible result of this dilemma—items purporting to be news posted without journalism—is that ordinary users can now post fake news with no legal repercussions, according to Pergam. Fact checkers can take down fake news after it is posted, or even close down repeat offenders' accounts, but not before posts go viral.

In contrast, legacy news organizations sift journalists' stories through a hierarchy of editors—content editors, fact checkers, copy editors, and the like—who prevent (most) fake news from ever being published. Yet journalism is effectively absent from social media, and cannot be reinstated without violating social media's duty to shareholders; that is, to maintain the right of registered users to instantly post anything, according to Pergam.

As a result, the best that social media outlets can do today, according to Pergam, is to detect fake news (or other offensive content) quickly after it is posted. The worst part is that human fact checkers take so long to detect fake news that often it has already gone viral by the time they recognize it. Removing it then is relatively easy but almost useless, since the content will continue to be reposted on special-interest sites. Pergam admits that detecting and removing fake news before it goes viral is the responsibility of social media outlets, adding that it cannot be achieved by normal journalistic methods, since journalistic editing must be done before posting and that violates social media's charter as public companies.

"Journalists used to be the critical watchdogs for democracy, but now their First Amendment protection is being extended to social media instead of certified journalists," said Sonja West, a lawyer who is Otis Brumby Distinguished Professor in First Amendment Law at the University of Georgia School of Law.

Consequently, because of the sheer volume of posts (millions per second), the problem of fake news cannot be solved by human fact checkers, according to Pergam. The only scalable solution, he says, is artificial intelligence (AI). However, so far AIs cannot reliably detect fake news, so the current stop-gap measure is for AI to quickly flag suspect posts for human fact checkers.

Facebook, Instagram, and other social media outlets are developing such AIs now. Unfortunately, according to Pergam, even if an AI detects suspect content immediately after it is posted, it takes human fact checkers at least an hour, and sometimes much longer, to verify that a news story is fake; even an hour is often too late to keep fake news from going viral.
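The platforms' internal flagging systems are not public, but the flag-then-verify workflow Pergam describes can be sketched abstractly. In the following purely illustrative Python sketch, all names are invented, and a trivial keyword heuristic stands in for the real AI model; the key point it shows is that the automated step only prioritizes posts for human review and never removes content on its own.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Post:
    post_id: int
    text: str

@dataclass
class ReviewQueue:
    """Holds posts the automated classifier flagged for human fact checkers."""
    flagged: List[Post] = field(default_factory=list)

def flag_suspect_posts(posts: List[Post],
                       score_fn: Callable[[str], float],
                       threshold: float = 0.5) -> ReviewQueue:
    """Route any post whose suspicion score meets the threshold to humans.

    The AI never takes down content itself; it only triages the human
    fact checkers' workload, mirroring the flag-then-verify stop-gap
    described above.
    """
    queue = ReviewQueue()
    for post in posts:
        if score_fn(post.text) >= threshold:
            queue.flagged.append(post)
    return queue

# Stand-in "model": a naive keyword heuristic, used only for illustration.
def toy_score(text: str) -> float:
    suspicious = {"miracle", "hoax", "shocking"}
    hits = sum(1 for w in text.lower().split() if w.strip(".,!?") in suspicious)
    return min(1.0, hits / 2)

posts = [Post(1, "City council meets Tuesday."),
         Post(2, "SHOCKING miracle cure the media hides!")]
queue = flag_suspect_posts(posts, toy_score)
print([p.post_id for p in queue.flagged])  # → [2]
```

A real deployment would replace `toy_score` with a trained classifier, but the structural bottleneck Pergam identifies is unchanged: everything in the queue still waits on a human verdict.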

Therefore, during the 2020 election season, Pergam says, social media outlets will only be able to identify the majority of fake news posts after they have gone viral.

Social media fake news disseminators will also not be held to normal journalistic standards for the foreseeable future, according to David Green, Civil Liberties Director at the Electronic Frontier Foundation.

"Journalists are held to a higher standard of what is news, but social media doesn't hold disseminators of fake news accountable to anything but violations of their service agreements," said Green. "Under U.S. law, legacy publications and social media outlets are protected equally, even though there are no journalistic requirements on social media…what social media needs is to learn how to judge news journalistically, how to instill the desire in its users to be good journalists, but the First Amendment's right to freedom of expression makes this impossible to enforce."

The social media outlets themselves are looking for regulatory bodies to set up post-publication rules that avoid holding them accountable for their users' lack of pre-publication journalistic safeguards. For instance, Facebook and Instagram are setting up their own Oversight Boards since regulatory bodies are not stepping forward, according to Pergam, who says those social media platforms are making their Oversight Boards appear "independent" by not paying them. Facebook is also inviting other social media outlets to volunteer to adhere to its Oversight Board's principles. The Oversight Board will review articles and posts on social media deemed fake news after the stories have gone viral and been taken down. The social media outlets will follow these "independent" regulatory bodies' judgments, and so will not be blamed for the stories going viral, says Green.

Another reason fake news is inevitable during the 2020 election year is that the rules permitting social media to remove fake news posts do not even apply to politicians. For instance, Twitter has already stated its policy—which is typical of all social media outlets—that "there is a clear public interest value to keeping Tweets from world leaders online," even when those Tweets violate its service agreement. "We may place this content behind a notice that provides context about the violation, but which allows people to click through should they wish to see the content."

Crowdsourced solutions have already been perfected, but will not be implemented by social media platforms because of their responsibility to shareholders to instantly post anything, according to Hannah Bloch-Wehba, a professor of law at Drexel University's Kline School of Law.

"The social media business model is instant publication, to which they have a fiduciary duty to shareholders to continue, but that is also the cause of the whole fake news problem," said Bloch-Wehba. "The cream [real news] no longer comes to the top on social media, but that is not a good-enough reason to suppress the right of their users to free speech."

An example of crowdsourcing genuine news journalistically is Wikinews, according to the Wikimedia Foundation. It solves the fake news problem with the same journalistic methods already proven to work at the crowdsourced encyclopedia Wikipedia.

Each story published on Wikinews is the result of a collaboration between a writer and independent editors. Unlike Facebook, Instagram, Twitter, and other social media, Wikinews follows the time-tested rules of journalism, just in a crowdsourced manner. For instance, stories are not instantly posted, but are rigorously developed from an initial submission: crowdsourced editors fact check the submission and provide feedback to the writer, and the news item is not published until the author has revised the article to meet standard journalistic criteria, such as two independent sources of confirmation (with publicly accessible links). Each story must be neutral; any opinions must come from cited sources. And each story must be about a newsworthy event or phenomenon that is specific, relevant, and fresh, among other traditional journalistic criteria.
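Wikinews's actual review tooling is its own; as a hypothetical sketch of the pre-publication gate just described (all names and thresholds here are invented for illustration), the contrast with instant posting can be expressed as a simple publish check that refuses any draft lacking two independent sources and an independent review:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Draft:
    title: str
    sources: List[str] = field(default_factory=list)      # publicly accessible links
    reviewed_by: List[str] = field(default_factory=list)  # independent editors
    published: bool = False

def try_publish(draft: Draft, min_sources: int = 2, min_reviews: int = 1) -> bool:
    """Publish only if the journalistic gates pass: at least two distinct
    sources of confirmation and at least one independent editorial review.
    (Illustrative thresholds; not Wikinews's actual policy engine.)"""
    independent_sources = len(set(draft.sources))
    if independent_sources >= min_sources and len(draft.reviewed_by) >= min_reviews:
        draft.published = True
    return draft.published

draft = Draft("Hypothetical story",
              sources=["https://example.org/a", "https://example.net/b"],
              reviewed_by=["independent-editor"])
print(try_publish(draft))  # → True
print(try_publish(Draft("Unsourced rumor", sources=["https://example.org/a"])))  # → False
```

The design point is that publication is the last step of the pipeline rather than the first, which is exactly what the instant-posting business model of the large platforms rules out.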

Unfortunately, social media—having been designed for citizen-to-citizen information exchange—will never even try to meet these journalistic criteria, for fear of being sued by their shareholders. That's why current social media outlets will never substitute for journalism.

While new social media outlets could offer IPOs with journalistic charters, current stockholders in existing social media platforms have no monetary interest in banishing fake news, since it generates billions of dollars in ad revenue.

For the foreseeable future, the best that can be hoped for, says Pergam, is that social media outlets will invent AIs that more efficiently flag suspicious posts, which can then be fact checked by humans and more quickly removed from the social media platforms, albeit after they go viral.

R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.
