
Fact-Finding Mission

Artificial intelligence provides automatic fact-checking and fake news detection, but with limits.
[Image: A dial gauge that measures from 'true' to 'false'.]

Seeking to call into question the mental acuity of his opponent, Donald Trump looked across the presidential debate stage at Joseph Biden and said, “So you said you went to Delaware State, but you forgot the name of your college. You didn’t go to Delaware State.”

Biden chuckled, but viewers may have been left wondering: did the former vice president misstate where he went to school? Those who viewed the debate live on an app from the London-based company Logically were quickly served an answer: the president’s assertion was false. A brief write-up posted on the company’s website the next morning provided links to other fact-checks from National Public Radio and the Delaware News Journal on the same claim, which explain that Biden actually said his first Senate campaign received a boost from students at the school.

Logically is one of a number of efforts, both commercial and academic, to apply techniques of artificial intelligence (AI), including machine learning and natural language processing (NLP), to identify false or misleading information. Some focus their efforts on automating fact-checking to verify the claims in news stories or political speeches, while others try to root out fake news deployed on social media and websites to deliberately mislead people.

While 2020, with its U.S. presidential election and a global pandemic, provided plenty of fodder for fake news, the problem is not new. A 2018 study from the Massachusetts Institute of Technology’s Media Lab found false news stories on Twitter were 70% more likely to be retweeted than true ones, and that true stories take about six times as long to reach 1,500 people as false ones. In April 2020, Facebook—combining AI with the work of more than 60 fact-checking organizations in more than 50 languages—placed warning labels on 50 million pieces of content related to COVID-19.

Logically relies heavily on a team of human fact-checkers, who examine perhaps 300 claims each day, says Anil Bandhakavi, head of data science at the company. Those people seek out sources that allow them to label an assertion as true, false, or partially true, and add those assessments to a database. The software examines text or speech to automatically extract claims, and groups similar claims into clusters. Once the humans have ruled on the veracity of one of those claims, that ruling is propagated to the rest of the claims in the cluster, thus quickly expanding the universe of examined claims. “In that way, we are constantly growing our database of facts by this semi-automated process,” Bandhakavi says. Humans carry about 60% of the Logically workload, but Bandhakavi hopes that will shift more to computers over time.
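
Logically has not published its pipeline, but the clustering-and-propagation step Bandhakavi describes can be sketched roughly as below. The vectorizer, similarity threshold, and example claims are illustrative assumptions, not the company's implementation.

```python
# Rough sketch of claim clustering with verdict propagation (illustrative only;
# Logically has not published its method). Similar claims are grouped together,
# and a human verdict on one claim is copied to the rest of its cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claims = [
    "Biden said he attended Delaware State University.",   # 0: checked by a human
    "Joe Biden claimed he went to Delaware State.",         # 1: similar wording
    "Vaccines cause autism in children.",                   # 2: unrelated claim
]
human_verdicts = {0: "false"}  # claim index -> label assigned by a fact-checker

# Embed the claims; a production system would likely use a stronger sentence encoder.
vectors = TfidfVectorizer().fit_transform(claims)
similarity = cosine_similarity(vectors)

SAME_CLAIM_THRESHOLD = 0.3  # assumed cutoff for "these are the same claim"
propagated = dict(human_verdicts)
for src, verdict in human_verdicts.items():
    for other in range(len(claims)):
        if other != src and similarity[src, other] >= SAME_CLAIM_THRESHOLD:
            propagated[other] = verdict  # the human ruling is inherited

print(propagated)  # {0: 'false', 1: 'false'} with these toy inputs
```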

The company also uses fairly common techniques to determine where the content comes from and how it propagates through the network, tracing it to its source domain and determining which other domains that source links to and which link to it. If a domain is the source of a lot of stories that have been deemed to be untrustworthy, or it passes a lot of content among other less-credible sites, then new content from that same source will be considered questionable, too. Content from a respected news source will score better for credibility.
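
The article does not spell out Logically's scoring rules, but the general idea of propagating suspicion along the link graph can be illustrated as below; the domains, edges, and scoring function are invented for the example.

```python
# Illustrative domain-level credibility scoring from a who-links-to-whom graph.
# The domains, links, and scoring rule are invented; they only show the general
# idea that a source passing content among low-credibility sites looks suspect.
known_untrustworthy = {"hoax-news.example", "clickbait.example"}

# Directed edges: source domain -> domains it links to.
outlinks = {
    "newsite.example": ["hoax-news.example", "clickbait.example", "reuters.com"],
    "reuters.com": ["apnews.com"],
}

def suspicion_score(domain: str) -> float:
    """Fraction of a domain's outbound links that point at known bad actors."""
    links = outlinks.get(domain, [])
    if not links:
        return 0.0
    bad = sum(1 for target in links if target in known_untrustworthy)
    return bad / len(links)

for domain in outlinks:
    print(domain, round(suspicion_score(domain), 2))
# newsite.example scores 0.67, so new content from it would be treated as questionable
```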

At the same time, Logically’s software is also learning on its own to tell truth from fiction, using NLP to develop statistical descriptions of factual and non-factual statements and how they differ from one another. Bandhakavi says the software can examine the style of language used in conveying falsehoods, and distinguish it from the language used for conveying facts.

Such style-based examinations also can help an AI algorithm distinguish between content written by a human and that produced by a machine. Computer scientists worry about so-called ‘neural fake news’, which uses language models developed by neural networks to produce convincing stories, mimicking the style of particular news outlets and adding bylines that make those outlets look like the source of the stories.
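
Style-based detection of either kind typically reduces to supervised text classification. The toy example below trains a bag-of-words model to separate two writing styles; the training sentences and model choice are placeholders rather than anything Logically or the Grover team actually use.

```python
# Toy stylometric classifier: learn surface cues that separate one class of text
# from another (factual vs. non-factual style, or human vs. machine text).
# The tiny training set and the model are illustrative stand-ins only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The unemployment rate fell to 6.9% in October, the bureau reported.",
    "GDP grew 2.1% in the third quarter, according to official figures.",
    "SHOCKING: doctors don't want you to know this one weird trick!",
    "You won't BELIEVE what this politician is secretly hiding from you!",
]
labels = ["factual-style", "factual-style", "other-style", "other-style"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Officials said inflation rose 0.4% last month."]))
```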


Learning by Doing

Researchers at the University of Washington’s Paul G. Allen School of Computer Science and Engineering developed an algorithm called Grover to both generate and detect neural fake news. Grover uses a generative adversarial network. One part, the adversary, is trained on a collection of real news stories and learns to generate fake stories from a prompt, such as the headline “Research Shows Vaccines Cause Autism.” A second system, the verifier, is given an unlimited set of real news stories, plus fake stories from the adversary, and has to determine which are false. Based on the verifier’s results, the adversary tries again, and through repeated iterations both get better at their tasks.
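
That adversarial loop can be sketched schematically. The stubs below stand in for large neural models and only mirror the structure of the generate-and-detect cycle described above, not Grover's actual training procedure.

```python
# Schematic of the generate-and-detect cycle described for Grover. The generator
# and verifier here are trivial stubs standing in for neural language models;
# only the loop structure reflects the idea of both sides improving in turns.
import random

def generate_fake_story(prompt: str) -> str:
    # Stub: a real system would sample a full article from a neural language model.
    return f"{prompt} -- machine-written article body (variant {random.randint(0, 9)})"

class Verifier:
    """Stub discriminator that remembers which stories it has seen labeled as fake."""
    def __init__(self):
        self.seen_fake = set()

    def train(self, real_stories, fake_stories):
        self.seen_fake.update(fake_stories)

    def predict_is_fake(self, story: str) -> bool:
        # Perfect on stories it trained on, a coin flip on anything new.
        return story in self.seen_fake or random.random() < 0.5

real_news = ["City council approves new transit budget after public hearing."]
prompts = ["Research Shows Vaccines Cause Autism"]
verifier = Verifier()

for round_number in range(3):
    fakes = [generate_fake_story(p) for p in prompts]
    verifier.train(real_news, fakes)  # the verifier improves on the current fakes
    fooled = [f for f in fakes if not verifier.predict_is_fake(f)]
    # In the real setup, the generator would now be updated to produce stories
    # that are harder to detect, based on which ones fooled the verifier.
    print(f"round {round_number}: {len(fooled)} of {len(fakes)} fakes slipped past")
```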

With moderate training, Grover learned to distinguish neural fake news from human-written news with 71% accuracy. It did even better at detecting the fake news it generated itself, with an accuracy rate of 92%.

Grover is built on the same concept as other language modeling algorithms, such as Google’s Bidirectional Encoder Representations from Transformers (BERT) or OpenAI’s Generative Pre-trained Transformer (GPT), which produce text that appears to have been written by humans. When OpenAI produced its second iteration, GPT-2, it initially opted not to release the full model, saying the potential to create fake news was too dangerous. The company has since developed GPT-3; while it has not released that model fully, it has provided access to an application programming interface.
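
Models of this family are now broadly available. For instance, the snippet below uses the open-source Hugging Face transformers library (an assumption on my part, not something the article names) to continue a headline-style prompt with machine-written text.

```python
# Headline-conditioned text generation with a publicly released language model.
# Assumes the Hugging Face `transformers` library (not mentioned in the article);
# requires `pip install transformers torch` and downloads GPT-2 weights on first use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Research Shows Vaccines Cause Autism. Scientists at"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```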

Just trying to keep such tools out of the hands of bad actors is not enough, says Franziska Roesner, a specialist on computational threat modeling at the University of Washington who took part in the Grover research. The team made its work available to help others understand how advances in language modeling algorithms can produce fake news, and how such stories might be detected. “If one way of generating fake news is to do it automatically, then we have to assume that our adversaries are going to be doing that and they’re going to be training stronger models,” she says. “Security through obscurity is not ultimately effective.”


Speeding Verification

At the Duke University Reporters’ Lab, researchers string together a number of techniques to provide real-time fact checking of events such as presidential debates. They feed debate audio to Google’s speech-to-text tool, which uses machine learning to automatically transcribe the speech. They then hand the text off to ClaimBuster, an NLP system developed at the University of Texas at Arlington that examines each sentence and scores it according to the likelihood it contains an assertion of fact that can be checked. Duke then searches those checkable sentences against a database of fact checks done by humans to see if they match a previously checked claim. They send those results to human editors who, if they think the ruling looks reasonable, quickly post it to debate viewers’ screens.
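
The stages of that pipeline (transcribe, score check-worthiness, match against prior fact checks, queue for an editor) can be sketched as below. Every function is a placeholder stub standing in for the real components the lab uses; ClaimBuster, for instance, is reduced to a keyword rule purely for illustration.

```python
# Simplified sketch of the real-time fact-checking pipeline described at Duke:
# transcribe speech, score each sentence for check-worthiness, match checkable
# sentences against a database of prior fact checks, then queue for an editor.
# All functions below are placeholder stubs, not the lab's actual components.
from typing import Optional

def transcribe(audio_chunk: bytes) -> str:
    # Stand-in for a speech-to-text service such as the one the lab uses.
    return "You said you went to Delaware State but you did not go there."

def checkworthiness(sentence: str) -> float:
    # Stand-in for a ClaimBuster-style score in [0, 1]; higher = more checkable.
    return 0.9 if "Delaware State" in sentence else 0.1

def match_prior_check(sentence: str) -> Optional[dict]:
    # Stand-in for searching a database of human-written fact checks.
    database = [{"claim": "Biden said he went to Delaware State", "ruling": "false"}]
    return database[0] if "Delaware State" in sentence else None

def send_to_editor(sentence: str, match: dict):
    print(f"Candidate: {sentence!r} -> prior ruling: {match['ruling']}")

def process(audio_chunk: bytes, threshold: float = 0.5):
    for sentence in transcribe(audio_chunk).split(". "):
        if checkworthiness(sentence) >= threshold:
            match = match_prior_check(sentence)
            if match:
                send_to_editor(sentence, match)  # a human reviews before it is shown

process(b"")
```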

It takes the system only about half a second to a full second from the time the audio comes in to passing its ruling to the editor for review, says Christopher Guess, lead technologist at the Reporters’ Lab, making real-time fact checking a viable option. The slowest parts of the process, though, are the initial human fact check and the editorial review, and eliminating them will not happen anytime soon, he says. The tendency of politicians to be deliberately vague, or to couch claims in terms favorable to them, makes it too difficult; human fact checkers often have to follow up with a politician to clarify just what claim he was trying to make. Automated fact checking of entirely new assertions, “with purely novel fact checking, isn’t even on our radar,” Guess says, “because how do you even have a computer determine what that person was saying?”


Even claim matching can be a challenge for computers. “Just seeing if somebody else said the same thing seems like a simple task,” Guess says. “But the fact is that in the vagaries of the English language, there’s a lot of different ways to say the same thing. And a lot of modern natural language processing is not good at determining the differences.”
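
One common way to tackle claim matching (not necessarily what Duke uses) is to compare sentence embeddings rather than surface words, so that paraphrases land close together. The sketch below assumes the sentence-transformers library and one of its standard models.

```python
# One common approach to claim matching: embed both sentences and compare the
# vectors, so differently worded versions of the same assertion score as similar.
# Assumes `pip install sentence-transformers`; the model name is one common choice.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
new_claim = "The former vice president never attended Delaware State."
checked_claim = "Biden said he went to Delaware State, but he did not study there."

embeddings = model.encode([new_claim, checked_claim])
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"similarity = {similarity:.2f}")  # above some tuned threshold -> same claim
```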

At INRIA, France’s National Institute for Research in Digital Science and Technology, researcher Ioana Manolescu looks at fact checking as a data management problem for journalists. She is leading a team, with other research groups and journalists from the daily newspaper Le Monde, to develop ContentCheck, which uses NLP, automated reasoning, and data mining to provide fact checks of news articles. The system checks articles against data repositories such as France’s National Institute of Statistics and Economic Studies, and also helps journalists develop stories based on such data.
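
The published ContentCheck work describes extracting statistical mentions from claims and comparing them against reference datasets such as INSEE's. The sketch below shows only the general shape of that comparison; the indicator, figures, and tolerance are invented for illustration.

```python
# Minimal illustration of checking a numeric claim against a reference dataset,
# in the spirit of ContentCheck; the indicator, values, and tolerance are invented.
reference_data = {
    ("unemployment_rate", "France", 2020): 8.0,  # placeholder value, percent
}

def check_statistical_claim(indicator, place, year, claimed_value, tolerance=0.05):
    official = reference_data.get((indicator, place, year))
    if official is None:
        return "no reference data"
    relative_error = abs(claimed_value - official) / official
    return "consistent" if relative_error <= tolerance else "inconsistent"

# A journalist checks the (hypothetical) claim "unemployment in France hit 12% in 2020".
print(check_statistical_claim("unemployment_rate", "France", 2020, 12.0))
```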

Manolescu’s goal is not so much to root out fake news as to help journalists find and make sense of data so they can use their storytelling skills to enlighten readers with valid information. “I am not as thrilled about fact checking as I used to be, because fact checking would assume that people yield to reason,” she says, and in many cases they do not. “A lot of the efficiency of fake news has to do with its emotional load, and a lot of the willingness to believe a crazy theory is more related to emotion than to thinking or reason.”

The best way for computer scientists to combat misinformation, she argues, is to find ways to provide more valid information. “Journalists do not have enough tools to process data at the speed and the efficiency that would serve society well,” she says. “So right now, that’s what I believe would be most useful.”

Further Reading

Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., and Choi, Y.
Defending Against Neural Fake News, Proceedings of the Neural Information Processing Systems Conference, 32, (2019). https://papers.nips.cc/paper/9106-defending-against-neural-fake-news

Duc Cao, T., Manolescu, I., and Tannier, X.
Extracting statistical mentions from textual claims to provide trusted content, 24th International Conference on Applications of Natural Language to Information Systems, (2019). https://link.springer.com/chapter/10.1007/978-3-030-23281-8_36

Schuster, T., Schuster, R., Shah, D.J., and Barzilay, R.
The Limitations of Stylometry for Detecting Machine-Generated Fake News, Computational Linguistics 46(2):499–510 (2020). https://bit.ly/2FRxgIT

Hassan, N., Arslan, F., Li, C., and Tremayne, M.
Toward Automated Fact-Checking: Detecting Check-worthy Factual Claims by ClaimBuster, KDD ’17: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2017). https://dl.acm.org/doi/10.1145/3097983.3098131

Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking (video), https://vimeo.com/238236521
