
Trust and Distrust in Online Fact-Checking Services

Even when checked by fact checkers, facts are often still open to preexisting bias and doubt.
  1. Introduction
  2. Key Insights
  3. Analyzing Social Media Conversations
  4. Results
  5. Discussion
  6. Conclusion
  7. Acknowledgments
  8. References
  9. Authors

While the internet has the potential to give people ready access to relevant and factual information, social media sites like Facebook and Twitter have made filtering and assessing online content increasingly difficult due to its rapid flow and enormous volume. In fact, 49% of social media users in the U.S. in 2012 received false breaking news through social media.8 Likewise, a 2015 report by Silverman11 suggested that false rumors and misinformation spread farther and faster than ever before due to social media. Political analysts continue to discuss misinformation and fake news in social media and their effect on the 2016 U.S. presidential election.

Key Insights

  • Though fact-checking services play an important role in countering online disinformation, little is known about whether users actually trust or distrust them.
  • The data we collected from social media discussions—on Facebook, Twitter, blogs, forums, and discussion threads in online newspapers—reflect users’ opinions about fact-checking services.
  • To strengthen trust, fact-checking services should strive to increase transparency in their processes, as well as in their organization and funding sources.

Such misinformation challenges the credibility of the internet as a venue for authentic public information and debate. In response, over the past five years, a proliferation of outlets has provided fact checking and debunking of online content. Fact-checking services, say Kriplean et al.,6 provide “… evaluation of verifiable claims made in public statements through investigation of primary and secondary sources.” An international census from 2017 counted 114 active fact-checking services, a 19% increase over the previous year.12 Reflecting this trend, Google News in 2016 let news providers tag news articles or their content with fact-checking information “… to help readers find fact checking in large news stories.”3 Any organization can use the fact-checking tag if it is non-partisan, transparent, and targets a range of claims within an area of interest rather than a single person or entity.

However, research into fact checking has paid scant attention to the general public’s view of such services, focusing instead on how people’s beliefs and attitudes change in response to facts that contradict their preexisting opinions. This research suggests fact checking in general may be unsuccessful at reducing misperceptions, especially among the people most prone to believe them.9 People often ignore facts that contradict their current beliefs,2,13 particularly in politics and controversial social issues.9 Consequently, the more political or controversial issues a fact-checking service covers, the more it needs to build a reputation for usefulness and trustworthiness.

Research suggests the trustworthiness of fact-checking services depends on their origin and ownership, which may in turn affect integrity perceptions10 and the transparency of their fact-checking process.4 Despite these observations, we are unaware of any other research that has examined users’ perceptions of these services. Addressing the gap in current knowledge, we investigated the research question: How do social media users perceive the trustworthiness and usefulness of fact-checking services?

Fact-checking services differ in terms of their organizational aim and funding,10 as well as their areas of concern,11 which in turn may affect their trustworthiness. As outlined in Figure 1, the universe of fact-checking services can be divided into three general categories based on their area(s) of concern: political and public statements in general, corresponding to the fact checking of politicians, as discussed by Nyhan and Reifler;9 online rumors and hoaxes, reflecting the need for debunking services, as discussed by Silverman;11 and specific topics or controversies, that is, particular conflicts or narrowly scoped issues or events (such as the ongoing Ukraine conflict).

Figure 1. Categorization of fact-checking services based on areas of concern.

We have focused on three services—Snopes, FactCheck.org, and StopFake—all included in the Duke Reporters’ Lab’s online overview of fact checkers (http://reporterslab.org/fact-checking/). They represent the three categories in Figure 1—online rumors, politics, and a particular topic—and differ in organization and funding. As a measure of their popularity, as of June 20, 2017, Snopes had 561,650 likes on Facebook, FactCheck.org 806,814, and StopFake 52,537.

We study Snopes because of its aim to debunk online rumors, fitting the first category in Figure 1. This aim is shared by other such services, including HoaxBusters and the Swedish service Viralgranskaren. Snopes is managed by a small volunteer organization that emerged from a single-person initiative and is funded through advertising revenue.

We study FactCheck.org because it monitors the factual accuracy of what is said by major political figures. Other such services in the second category in Figure 1 include PolitiFact (U.S.) and Full Fact (U.K.). FactCheck.org is a project of the Annenberg Public Policy Center of the Annenberg School for Communication at the University of Pennsylvania, Philadelphia, PA. It is supported by university funding and individual donors and has been a source of inspiration for other fact-checking projects.


We study StopFake because it addresses one highly specific topic—the ongoing Ukraine conflict. It thus resembles other highly focused fact-checking initiatives (such as #Refugeecheck, which fact checks reports on the refugee crisis in Europe). StopFake is an initiative of the Kyiv Mohyla Journalism School in Kiev, Ukraine, and is thus a European-based service. Snopes and FactCheck.org are U.S. based, as are more than a third of the fact-checking services identified by Duke Reporters’ Lab.12

All three provide fact checking through their own websites, as well as through Facebook and Twitter. Figure 2 is an example of a Twitter post with content checked by Snopes.

Figure 2. Example of Snopes debunking a social media rumor on Twitter (March 6, 2016); https://twitter.com/snopes/status/706545708233396225

Analyzing Social Media Conversations

To explore how social media users perceive the trustworthiness and usefulness of these services, we applied a research approach designed to take advantage of unstructured social media conversations (see Figure 3).

Figure 3. Outline of our research approach; posts collected October 2014 to March 2015.

While investigations of trust and usefulness often rely on structured data from questionnaire-based surveys, social media conversations represent a highly relevant data source for our purpose, as they arguably reflect the raw, authentic perceptions of social media users. Xu et al.16 claim it is beneficial to listen to, analyze, and understand citizens’ opinions through social media to improve societal decision-making processes and solutions. They wrote, for example, “Social media analytics has been applied to explain, detect, and predict disease outbreaks, election results, macroeconomic processes (such as crime detection), (…) and financial markets (such as stock price).”16 Social media conversations take place in the everyday context of users likely to engage with fact-checking services, so this approach may provide a less biased view of people’s perceptions than, say, a questionnaire-based approach. Gathering data from users in their specific social media context does not, however, make our data representative: it lacks information about user demographics, limiting our ability to generalize to the entire user population. Despite this drawback, our data offers new insight into how social media users view the usefulness and trustworthiness of various categories of fact-checking services.

For data collection, we used Meltwater Buzz, an established social media monitoring service, to crawl data from social media conversations in blogs, discussion forums, online newspaper discussion threads, Twitter, and Facebook. Meltwater Buzz crawls all blogs (such as https://wordpress.com/), discussion forums (such as https://offtopic.com/), and online newspapers (such as https://www.washingtonpost.com/) requested by Meltwater customers, thus representing a large, though convenience-based, sample. It collects different amounts of data from each platform; for example, it crawls all posts on Twitter but only Facebook pages with 3,500 likes or more or groups with more than 500 members. This limitation in the Facebook data partly explains why the overall number of posts we collected—1,741—was not larger.

To collect social media users’ opinions about Snopes and FactCheck.org, we applied the search term “[service name] is,” as in “Snopes is,” “FactCheck.org is,” and “FactCheck is.” We intended it to reflect how people begin a sentence when stating an opinion. StopFake is a relatively lesser-known service, so we selected a broader search string—”StopFake”—to collect enough relevant opinions. The searches returned a data corpus of 1,741 posts over six months—October 2014 to March 2015—as in Figure 3. By “posts,” we mean written contributions by individual users. To create a dataset suited for analysis, we removed all duplicates, as well as a small number of non-relevant posts lacking personal opinions about fact checkers. This filtering resulted in a dataset of 595 posts.
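
To make this collection-and-filtering step concrete, the following minimal Python sketch reduces an exported post corpus to unique posts matching the search phrases. It is an illustration only: the CSV file name and column names are assumptions, not part of Meltwater Buzz, and judging whether a post actually expresses a personal opinion remained a manual step in our analysis.

    # Minimal sketch of the corpus-filtering step (assumed CSV export with
    # 'platform' and 'text' columns; not the actual tooling used in the study).
    import csv

    SEARCH_PHRASES = {
        "Snopes": ["snopes is"],
        "FactCheck.org": ["factcheck.org is", "factcheck is"],
        "StopFake": ["stopfake"],  # broader term for the lesser-known service
    }

    def load_posts(path):
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    def filter_corpus(posts):
        """Keep posts matching a search phrase and drop exact duplicates."""
        seen, kept = set(), []
        for post in posts:
            text = post["text"].strip().lower()
            if text in seen:  # drop duplicate posts
                continue
            if any(phrase in text
                   for phrases in SEARCH_PHRASES.values()
                   for phrase in phrases):
                seen.add(text)
                kept.append(post)
        return kept

    if __name__ == "__main__":
        corpus = load_posts("meltwater_export.csv")  # hypothetical export file
        dataset = filter_corpus(corpus)
        print(f"{len(corpus)} collected posts -> {len(dataset)} kept for coding")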

We then performed content analysis, coding all posts to identify patterns in the data1 and reveal the perceptions users express in social media about the three fact-checking services we investigated. We analyzed perceptions of usefulness through a usefulness construct similar to the one used by Tsakonas and Papatheodorou.14 “Usefulness” concerns the extent to which the service is perceived as beneficial when doing a specific fact-checking task, often illustrated by positive recommendations and characterizations (such as the service is “good” or “great”). Following Mayer et al.’s theoretical framework,7 we categorized trustworthiness according to the perceived ability, benevolence, and integrity of the services. “Ability” concerns the extent to which a service is perceived as having the needed skills and expertise, as well as being reputable and well regarded. “Benevolence” refers to the extent to which a service is perceived as intending to do good, beyond what would be expected from an egocentric motive. “Integrity” targets the extent to which a service is generally viewed as adhering to an acceptable set of principles, in particular being independent, unbiased, and fair.

Since we found posts typically reflect rather polarized perceptions of the studied services, we also grouped the codes manually according to sentiment, positive or negative. Some posts described the services in a plain, objective manner; we coded these as positive (see Table 1) because they refer to the service as a source for fact checking, and users are likely to reference fact-checking sites because they see them as useful.

Table 1. Coding scheme we used to analyze the data.
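
As a minimal sketch of how such a coding scheme can be tallied once codes have been assigned manually, the following Python snippet counts posts by service, theme, and sentiment, in the spirit of the summaries in Tables 2–4. The example records are invented for illustration and are not data from the study.

    # Tally manually assigned codes by service, theme, and sentiment.
    from collections import Counter

    THEMES = ("usefulness", "ability", "benevolence", "integrity")
    SENTIMENTS = ("positive", "negative")

    def tally(coded_posts):
        """coded_posts: iterable of (service, theme, sentiment) tuples.
        A post coded with several themes contributes one tuple per theme."""
        counts = Counter()
        for service, theme, sentiment in coded_posts:
            assert theme in THEMES and sentiment in SENTIMENTS
            counts[(service, theme, sentiment)] += 1
        return counts

    # Example usage with made-up codes:
    example = [
        ("Snopes", "usefulness", "positive"),
        ("Snopes", "integrity", "negative"),
        ("StopFake", "usefulness", "positive"),
    ]
    for (service, theme, sentiment), n in sorted(tally(example).items()):
        print(f"{service:13s} {theme:12s} {sentiment:9s} {n}")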

For reliability, both researchers in the study did the coding. One coded all the posts, and the second then went through all the assigned codes, a process repeated twice. Finally, both researchers went through all comments for which an alternative code had been suggested and decided on the final coding; this review suggested an alternative coding for 153 posts (26%).
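
The reported share of re-coded posts follows directly from these figures; a one-line check, using only the numbers given above:

    # 153 of the 595 analyzed posts received an alternative coding after review.
    print(f"{153 / 595:.0%}")  # -> 26%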

A post could include more than one of the analytical themes; 30% of the posts were coded as addressing two or more themes.

Results

Despite the potential benefits of fact-checking services, Figure 4 shows that the majority of posts on the two U.S.-based services expressed negative sentiment, with Snopes at 68% and FactCheck.org at 58%. Most posts on the Ukraine-based StopFake (78%) reflected positive sentiment.

Figure 4. Positive and negative posts related to trustworthiness and usefulness per fact-checking service (in %); “other” refers to posts not relevant for the research categories (N = 595 posts).

The stated reasons for negative sentiment typically concerned one or more of the trustworthiness themes rather than usefulness. For example, for Snopes and FactCheck.org, the negative posts often expressed concern over a lack of integrity due to perceived bias toward the political left. Negative sentiment pertaining to the ability and benevolence of the services was also common. The few critical comments on usefulness were typically aimed at discrediting a service by, say, characterizing it as “satirical” or as “a joke.”

Positive posts were more often related to usefulness. For example, the stated reasons for positive sentiment toward StopFake typically concerned the service’s usefulness in countering pro-Russian propaganda and trolling and in the information war associated with the ongoing Ukraine conflict.

In line with a general notion of an increasing need to interpret and act on information and misinformation in social media,6,11 some users included in the study discussed fact-checking sites as important elements of an information war.

Snopes. The examples in Table 2 reflect how negative sentiment in the posts we analyzed on Snopes was rooted in issues pertaining to trustworthiness. Integrity issues typically involved a perceived “left-leaning” political bias in the people behind the service. Pertaining to benevolence, users in the study said Snopes is part of a larger left-leaning or “liberal” conspiracy, often claimed to be funded by George Soros, whereas comments on ability typically targeted a lack of expertise in the people running the service. Some negative comments on trustworthiness may be seen as a rhetorical means of discrediting a service. Posts expressing positive sentiment mainly argued for the usefulness of the service, claiming, say, that Snopes is a useful resource for checking the veracity of internet rumors.

Table 2. Snopes and themes we analyzed (n = 385).

FactCheck.org. The patterns in the posts we analyzed for FactCheck.org resemble those for Snopes. As in Table 3, the most frequently mentioned trustworthiness concerns related to service integrity; as for Snopes, users said the service is politically biased toward the left. Posts concerning benevolence and ability were also relatively frequent, reflecting user concern regarding the service as a contributor to propaganda or doubts about its fact-checking practices.

Table 3. FactCheck.org and themes we analyzed (n=80).

StopFake. As in Table 4, the results for StopFake show more posts expressing positive sentiment than we found for Snopes and FactCheck.org. In particular, the posts included in the study pointed out that StopFake helps debunk rumors seen as Russian propaganda in the Ukraine conflict.

Table 4. StopFake and themes we analyzed (n=130); note * also coded as integrity/positive.

Nevertheless, the general pattern in the reasons users gave for positive and negative sentiment toward Snopes and FactCheck.org also held for StopFake. The positive posts were typically motivated by usefulness, whereas the negative posts reflected the sentiment that StopFake is politically biased (“integrity”), a “fraud,” a “hoax,” or part of the machinery of Ukrainian propaganda (“benevolence”).

Discussion

We found users with positive perceptions typically extolled the usefulness of fact-checking services, whereas users with negative opinions cited concerns over trustworthiness. This pattern emerged across all three services. In the following sections, we discuss how these findings provide new insight into trustworthiness as a key challenge when countering online rumors and misinformation2,9 and why ill-founded beliefs may have such online reach, even when they are corrected by prominent fact checkers, including Snopes, FactCheck.org, and StopFake.

Usefulness. Users in our sample with a positive view of the services mainly pointed to their usefulness. While caution is needed when comparing the various services, topic-specific StopFake is perceived as more useful than Snopes and FactCheck.org. One reason might be that a service targeting a specific topic faces less criticism because it attracts a particular audience that seeks facts supporting its own view; StopFake, for example, targets anti-Russian, pro-Ukrainian readers. Another, more general, reason might be that positive perceptions are motivated by user needs arising from a perceived high load of misinformation, as in the case of the Ukraine conflict, where media reports and social media are seen as overflowing with propaganda. Others highlighted the ease with which information may be filtered or separated from misinformation through sites like Snopes and FactCheck.org, as one user expressed:

“As you pointed out, it doesn’t take that much effort to see if something on the Internet is legit, and Snopes is a great place to start. So why not take that few seconds of extra effort to do that, rather than creating and sharing misleading items.”

This finding suggests there is increasing demand for fact-checking services,6 while at the same time a substantial proportion of social media users who would benefit from such services do not use them sufficiently. The services should thus be even more active on social media sites like Facebook and Twitter, as well as in online discussion forums, where greater access to fact checking is needed.

Trustworthiness. Negative perceptions and opinions about fact-checking services seem to be motivated by basic distrust rather than rational argument. For some users in our sample, lack of trust extends beyond a particular service to encompass the entire social and political system. Users with negative perceptions thus seem trapped in a perpetual state of informational disbelief.

While one’s initial response to statements reflecting a state of informational disbelief may be to dismiss them as the uninformed paranoia of a minority of the public, the statements should instead be viewed as a source of user insight. The reason the services are often unsuccessful in reducing ill-founded perceptions9 and people tend to disregard fact checking that goes against their preexisting beliefs2,13 may be a lack of basic trust rather than a lack of fact-based arguments provided by the services.

We found such distrust is often highly emotional. In line with Silverman,11 fact-checking sites must recognize how debunking and fact checking evoke emotion in their users. Hence, they may benefit from rethinking the way they design and present themselves to strengthen trust among users in a general state of informational disbelief. Moreover, online fact-checking sites should compensate for the lack of physical evidence online by being, say, demonstrably independent, impartial, and able to clearly distinguish fact from opinion. Rogerson10 wrote that fact-checking sites exhibit varying levels of rigor and effectiveness. The fact-checking process, and even what counts as a “fact,” may in some cases involve subjective interpretation, especially when actors with partisan ties aim to provide the service. For example, in the 2016 U.S. presidential campaign, the organization “Donald J. Trump for President” invited Trump’s supporters to join a fact-check initiative, similar to the category “topics or controversies,” urging them to “fact check” the presidential debates on social media. However, the initiative was criticized as mainly promoting Trump’s views and candidacy.5

Users of fact-checking sites ask: Who actually does the fact checking and how do they do it? What organizations are behind the process? And how does the nature of the organization influence the results of the fact checking? Fact-checking sites must thus explicate the nuanced, detailed process leading to the presented result while keeping it simple enough to be understandable and useful.11


Need for transparency. While fact-checker trustworthiness is critical, fact checkers represent but one set of voices in the information landscape and cannot be expected to be benevolent and unbiased just because they check facts. Rather, they must strive for transparency in their working process, as well as in their origins, organization, and funding sources.

To increase transparency in its processes, a service might take a more horizontal, collaborative approach than is typical of the current generation of services. Following Hermida’s recommendation4 to social media journalists, a fact checker could be set up as a platform for collaborative verification and genuine fact checking, relying less on centralized expertise. Forming an interactive relationship with users might also help build trust.6,7

Conclusion

We identified a lack of perceived trustworthiness and a state of informational disbelief as potential obstacles to fact-checking services reaching social media users most critical to such services. Table 5 summarizes our overall findings and discussions, outlining related key challenges and our recommendations for how to address them.

Table 5. Challenges and our related recommendations for fact-checking services.

Given the exploratory nature of this study, we cannot conclude our findings are valid for all services. In addition, more research is needed before definite claims can be made about systematic differences among the various fact checkers based on their “areas of concern.” Nevertheless, the consistent pattern in opinions we found across three prominent services suggests challenges and recommendations that can provide useful guidance for future development in this important area.

Acknowledgments

This work was supported by the European Commission co-funded FP7 project REVEAL (Project No. FP7-610928, http://www.revealproject.eu/) but does not necessarily represent the views of the European Commission. We also thank Marika Lüders of the University of Oslo and the anonymous reviewers for their insightful comments.

References

    1. Ezzy, D. Qualitative Analysis. Routledge, London, U.K., 2013.

    2. Friesen, J.P., Campbell, T.H., and Kay, A.C. The psychological advantage of unfalsifiability: The appeal of untestable religious and political ideologies. Journal of Personality and Social Psychology 108, 3 (Nov. 2014), 515–529.

    3. Gingras, R. Labeling fact-check articles in Google News. Journalism & News (Oct. 13, 2016); https://blog.google/topics/journalism-news/labeling-fact-check-articles-google-news/

    4. Hermida, A. Tweets and truth: Journalism as a discipline of collaborative verification. Journalism Practice 6, 5–6 (Mar. 2012), 659–668.

    5. Jamieson, A. 'Big League Truth Team' pushes Trump's talking points on social media. The Guardian (Oct. 10, 2016); https://www.theguardian.com/us-news/2016/oct/10/donald-trump-big-league-truth-team-social-media-debate

    6. Kriplean, T., Bonnar, C., Borning, A., Kinney, B., and Gill, B. Integrating on-demand fact-checking with public dialogue. In Proceedings of the 17th ACM Conference on Computer-Supported Cooperative Work & Social Computing (Baltimore, MD, Feb. 15–19). ACM Press, New York, 2014, 1188–1199.

    7. Mayer, R.C., Davis, J.H., and Schoorman, F.D. An integrative model of organizational trust. Academy of Management Review 20, 3 (1995), 709–734.

    8. Morejon, R. How social media is replacing traditional journalism as a news source. Social Media Today Report (June 28, 2012); http://www.socialmediatoday.com/content/how-social-media-replacing-traditional-journalism-news-source-infographic

    9. Nyhan, B. and Reifler, J. When corrections fail: The persistence of political misperceptions. Political Behavior 32, 2 (June 2010), 303–330.

    10. Rogerson, K.S. Fact checking the fact checkers: Verification Web sites, partisanship and sourcing. In Proceedings of the American Political Science Association (Chicago, IL, Aug. 29-Sept. 1). American Political Science Association, Washington, D.C., 2013.

    11. Silverman, C. Lies, Damn Lies, and Viral Content, How News Websites Spread (and Debunk) Online Rumors, Unverified Claims, and Misinformation. Tow Center for Digital Journalism, Columbia Journalism School, New York, 2015; http://towcenter.org/wp-content/uploads/2015/02/LiesDamnLies_Silverman_TowCenter.pdf

    12. Stencel, M. International fact checking gains ground, Duke census finds. Duke Reporters' Lab, Duke University, Durham, NC, Feb. 28, 2017; https://reporterslab.org/international-fact-checking-gains-ground/

    13. Stroud, N.J. Media use and political predispositions: Revisiting the concept of selective exposure. Political Behavior 30, 3 (Sept. 2008), 341–366.

    14. Tsakonas, G. and Papatheodorou, C. Exploring usefulness and usability in the evaluation of open-access digital libraries. Information Processing & Management 44, 3 (May 2008), 1234–1250.

    15. Van Mol, C. Improving web survey efficiency: The impact of an extra reminder and reminder content on Web survey response. International Journal of Social Research Methodology 20, 4 (May 2017), 317–327.

    16. Xu, C., Yu, Y., and Hoi, C.K. Hidden in-game intelligence in NBA players' tweets. Commun. ACM 58, 11 (Nov. 2015), 80–89.
