The Fake News Wars Have Only Begun

One thing that almost everyone can agree on is that fake news has emerged as a huge problem in the digital age. Social media and the Internet have led to the proliferation of stories that simply are not true, ranging from absurd articles about an exorcism saving a house from destruction during a hurricane to blatantly false stories chronicling an investigation of the Clinton Foundation for running a pedophile sex ring.

"Some of it is done maliciously in order to manipulate thinking, often for political reasons, and some of it done for fun, sort of like old-fashioned juicy gossip. But the common thread is that these articles, images, videos and memes are objectively false. They can be debunked by reputable, well-known and independent fact-checking organizations such as Snopes or PolitiFact," says Soroush Vosoughi, a postdoctoral associate in the Massachusetts Institute of Technology (MIT) Media Lab.

Of course, fake news (some prefer the term "false news") is nothing new. Attempts to manipulate thinking through propaganda and other disinformation techniques are as old as humanity. However, the methods by which fake news is generated, shared, and consumed are advancing rapidly. Bots, artificial intelligence (AI), and next-generation videos that allow "face swapping" (essentially turning the person into a digital puppet) are ratcheting up the stakes on a massive scale.

Beyond the Headlines

All false news rests on a straightforward concept: each item must contain enough truth to seem plausible, even though it is not. A 2016 Buzzfeed News poll found that fake news headlines fool American adults about 75% of the time.

Yet the technology used to spread falsehoods has, to date, been comparatively primitive. "You essentially have humans at troll farms generating stories and then pushing them into channels such as Reddit, Twitter, and Facebook," explains Sean Gourley, CEO and founder of Primer, a machine intelligence firm that manages and automates the analysis of large datasets.

However, the era of Fake News 2.0 is approaching. One of its most disturbing aspects is the ability to produce realistic-seeming videos of people saying and doing things they never said or did. These so-called "deepfake" videos can depict politicians making statements they never made, insert celebrities into pornographic videos, and show people committing crimes they did not commit. One of the best-known examples of a deepfake is a video created by comedian Jordan Peele, who used face-swapping technology to transform former U.S. president Barack Obama into a digital puppet.

At an MIT EmTech conference in November 2017, Ian Goodfellow, a staff research scientist at Google Brain, noted that AI technology such as generative adversarial networks (GANs), the deep learning framework he helped develop, could create fake images quickly and learn to make them more believable. "It's been a little bit of a fluke, historically, that we're able to rely on videos as evidence that something really happened," he stated at the conference.
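
To make the adversarial idea concrete, the sketch below (in Python, using PyTorch) shows the core GAN training loop: a generator learns to produce samples that a discriminator cannot distinguish from real ones. The tiny fully connected networks and the random stand-in "real" data are illustrative assumptions, not a production image model.

    import torch
    import torch.nn as nn

    DIM, NOISE = 784, 64  # flattened 28x28 "image" and latent vector sizes

    generator = nn.Sequential(
        nn.Linear(NOISE, 256), nn.ReLU(), nn.Linear(256, DIM), nn.Tanh())
    discriminator = nn.Sequential(
        nn.Linear(DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

    bce = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.randn(32, DIM)               # stand-in for real images
        fake = generator(torch.randn(32, NOISE))  # generated samples

        # The discriminator learns to label real samples 1 and fakes 0.
        d_opt.zero_grad()
        d_loss = (bce(discriminator(real), torch.ones(32, 1)) +
                  bce(discriminator(fake.detach()), torch.zeros(32, 1)))
        d_loss.backward()
        d_opt.step()

        # The generator learns to make the discriminator call its output real.
        g_opt.zero_grad()
        g_loss = bce(discriminator(fake), torch.ones(32, 1))
        g_loss.backward()
        g_opt.step()

Each round of this loop makes the fakes slightly harder to detect, which is exactly the dynamic Goodfellow described.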

Vosoughi says AI may ultimately spur a battle of algorithms to deploy and thwart false news on social media and elsewhere.

Getting Real

There is no simple solution for combating fake news.

Vosoughi believes one approach is to have trusted authorities analyze stories and provide "information quality metrics" that rate a story as being of high, medium, or low quality.

Another possibility, particularly for videos, is to embed a digital watermark, blockchain ledger, or other type of authentication device to prove the images are authentic. This could be particularly valuable for political leaders and as evidence in court.
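
The article does not name a specific scheme, but one concrete possibility is a digital signature over a cryptographic hash of the video file, sketched below in Python with the third-party cryptography package. The key handling and file paths here are illustrative assumptions; a deployed system might anchor the public key or the signed hash in a registry such as a blockchain.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # held by the camera or creator
    public_key = private_key.public_key()       # published for verifiers

    def sign_video(path: str) -> bytes:
        digest = hashlib.sha256(open(path, "rb").read()).digest()
        return private_key.sign(digest)

    def verify_video(path: str, signature: bytes) -> bool:
        digest = hashlib.sha256(open(path, "rb").read()).digest()
        try:
            public_key.verify(signature, digest)  # raises if the file was altered
            return True
        except InvalidSignature:
            return False

Any edit to the footage changes its hash, so a manipulated version derived from signed original material would fail verification.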

Still another idea is to use AI to spot unusual signatures and patterns that point to suspicious accounts and groups.
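
As one hedged illustration of that idea, the Python sketch below uses scikit-learn's IsolationForest to flag accounts whose posting behavior is statistically unusual. The three behavioral features and the synthetic data are invented for the example; a real detector would draw on far richer signals.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Per-account features: posts/day, fraction reshared, mean seconds between posts.
    typical = rng.normal(loc=[10, 0.3, 600], scale=[3, 0.1, 120], size=(500, 3))
    bot_like = rng.normal(loc=[200, 0.95, 5], scale=[20, 0.02, 1], size=(10, 3))
    accounts = np.vstack([typical, bot_like])

    model = IsolationForest(contamination=0.02, random_state=0).fit(accounts)
    flags = model.predict(accounts)  # -1 marks statistical outliers
    print(f"{(flags == -1).sum()} accounts flagged as suspicious")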

Gourley believes consumers need better tools to combat fake news. "A person using online platforms that they do not pay for becomes a commodity whose data is bought, sold, and traded by algorithms. These algorithms are not working for them; they have little or no control over what they see."

Yet another approach is to "inoculate" the public, an idea borrowed from the biological world. This concept has been promoted by Sander van der Linden, an assistant professor of social psychology and director of the Social Decision-Making Lab at Cambridge University. He explains, "By exposing people to a weakened dose of misinformation, you trigger mental antibodies that make a person more resistant to fake news." Such a "digital vaccine" would need to precede a full dose of misinformation.

Van der Linden's research into such "prebunking" shows the process delivers positive outcomes. "By warning people that they would be subjected to a clear political agenda, they are more resistant to easily succumbing to that agenda," he says.

Avoiding a dystopian future in which fact and fiction carry equal weight is critical. Concludes Vosoughi: "False news will never go away. We probably don't want to give governments and others broad powers to determine what people should see and hear. We have to find ways to help people identify and recognize false and manipulative information. We have to build tools to better support the dissemination of factual information."

Samuel Greengard is an author and journalist based in West Linn, OR, USA.
