
The Campaign Against Deepfakes


Politics has never had a great reputation. Candidates fib, lie, and toss insults at each other with near-impunity. Opponents bend, distort, and misrepresent each other’s words. Yet in the age of artificial intelligence (AI), the stakes are escalating, and the blast zone is expanding. Deepfakes and other forms of manipulation are proliferating in the political arena.

“If the past few years are any indication, we will continue to see bad actors weaponize deepfakes for…disinformation campaigns that are designed to sow civil unrest and interfere with elections,” says Hany Farid, a professor of electrical engineering and computer sciences at the University of California, Berkeley, and a leading expert on digital forensics.

The fingerprints are everywhere. In January 2024, ahead of the New Hampshire primary, a convincing AI-generated robocall mimicking the voice of Joe Biden told people not to vote. In September 2023, a deepfake video surfaced depicting Florida Governor Ron DeSantis dropping out of the presidential race while he was still actively campaigning. A 2019 manipulated video made House Speaker Nancy Pelosi appear intoxicated. All were widely shared on social media sites.

Stamping out the problem is no simple task, however. Although detection tools exist—and technologies like watermarking, fingerprinting, and blockchain could improve authentication methods for media—preventing the spread of deepfakes is remarkably challenging. “There’s no single way to address this issue,” says Maneesh Agrawala, a professor of computer science and director of the Brown Institute for Media Innovation at Stanford University. “It’s a social and political problem as much as it is a technology problem.”

A Vote for Reality

While there are legitimate uses for deepfakes, particularly in the entertainment industry, digital deceit is on the rise. Generative AI tools have become cheap and easy to use, with some software available for free. As a result, the technology is now used for a variety of unsavory purposes, including sexually explicit fake images, sophisticated phishing schemes, and attacks on businesses such as attempts to manipulate stock prices.

Meanwhile, as the 2024 presidential election unfolds, video and audio that bend or break reality are piling up. Sources might include individuals, as well as political action committees (PACs) and foreign agents looking to sway public opinion, exacerbate social divisions, or erode trust in key institutions, including democracy itself. The content can take direct aim at a candidate or stoke the fires of a political issue, such as immigrant border crossings or political marches. As Farid puts it: “If anything can be fake, then nothing has to be real.”

Combating deepfakes resembles a game of whack-a-mole. For example, ElevenLabs can clone a candidate’s voice for $5 or less, while DeepSwap can generate fake videos for $10 per month. Adding to the problem: in the U.S., there are no laws addressing the veracity of political ads, and no penalties for those who create fake content.

Those who debunk deepfakes say they are unable to keep up with the growing torrent of fake content. Farid, Agrawala, and others have developed software tools that automate parts of the detection process. Digital forensic algorithms identify metadata inconsistencies and spot distorted pixels and other telltale signs of AI manipulation, such as unnatural shapes, shadows, lines, and light patterns.
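The production tools behind these detectors are proprietary and far more elaborate, but the metadata side of the job can be illustrated with a minimal Python sketch. Everything here is an assumption made for illustration: the metadata_flags helper, the SUSPECT_SOFTWARE list, and the sample filename are hypothetical, and real forensic systems weigh many more signals than EXIF tags.

```python
# Minimal sketch of a metadata-consistency check (illustrative only).
# Requires Pillow; the SUSPECT_SOFTWARE list and filename are hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

SUSPECT_SOFTWARE = ("stable diffusion", "midjourney", "dall-e", "firefly")

def metadata_flags(path: str) -> list[str]:
    """Return human-readable warnings about an image's metadata."""
    flags = []
    exif = Image.open(path).getexif()
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    if not tags:
        flags.append("no EXIF metadata at all (common for generated or re-encoded images)")

    software = str(tags.get("Software", "")).lower()
    if any(name in software for name in SUSPECT_SOFTWARE):
        flags.append(f"Software tag names a known image generator: {software!r}")

    if "Make" not in tags and "Model" not in tags:
        flags.append("no camera make or model recorded")

    return flags

if __name__ == "__main__":
    for warning in metadata_flags("campaign_photo.jpg"):
        print("WARNING:", warning)
```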

“Current generative models for creating deepfakes often struggle to produce consistently high-quality images,” says Siwei Lyu, a professor in the Department of Computer Science and Engineering at the University at Buffalo (UB) and director of the UB Media Forensic Lab.

Taming the Risks

Identifying deepfakes is crucial, yet the endgame is to prevent them from going viral and spreading online. One method for authenticating content is digital watermarking, which embeds unique identifiers into a photo, video, or audio file. The embedded code is invisible to the naked eye, but software can read it to determine whether a file is genuine or concocted. Digital fingerprinting, by contrast, extracts unique characteristics from a file, making it possible to match circulating copies against verified originals and flag signs of manipulation.
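Fingerprinting schemes differ from vendor to vendor, but the underlying idea, deriving a compact signature from content and comparing it against a registry of verified originals, can be sketched with a simple average hash. The function names, file paths, and distance threshold below are illustrative assumptions, not any production system’s design.

```python
# Sketch of content fingerprinting via a simple perceptual (average) hash.
# Real systems use far more robust signatures; names and paths are illustrative.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint: grayscale, shrink, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Compare a circulating frame against a verified original (paths hypothetical).
original = average_hash("verified_original_frame.png")
suspect = average_hash("circulating_frame.png")
if hamming_distance(original, suspect) > 10:   # threshold is an assumption
    print("Fingerprints diverge; the circulating frame may have been altered.")
else:
    print("Fingerprints match closely.")
```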

However, neither method is completely effective. For example, “Watermarking is fragile and there are ways to remove it,” Lyu says. Fingerprinting also can succumb to some forms of manipulation, and it can deteriorate because of compression, transmission errors, and file format conversion. False positives can be a problem, as can data privacy concerns, particularly for activists and whistleblowers. Another tool, controlled capture technology, extracts key characteristics of audio and video and embeds them into a blockchain, but it only works if the stored signatures remain on a secure cloud-based server, Lyu explains.
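Lyu’s description of controlled capture can be made concrete with a toy version of the idea: hash the media the moment it is captured, append the hash to a tamper-evident log, and check later copies against that record. The in-memory ledger below merely stands in for a blockchain or secure cloud server, and the class and file names are hypothetical.

```python
# Toy illustration of controlled capture: hash media at capture time, record the
# hash in an append-only, hash-chained log, and verify copies against it later.
# The in-memory "ledger" stands in for a blockchain or secure server (assumption).
import hashlib

def file_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

class CaptureLedger:
    def __init__(self):
        self.entries = []  # each entry: (media_digest, chained_digest)

    def record(self, path: str) -> None:
        media_digest = file_digest(path)
        prev = self.entries[-1][1] if self.entries else ""
        chained = hashlib.sha256((prev + media_digest).encode()).hexdigest()
        self.entries.append((media_digest, chained))

    def verify(self, path: str) -> bool:
        """True only if this exact file was recorded at capture time."""
        return any(digest == file_digest(path) for digest, _ in self.entries)

ledger = CaptureLedger()
ledger.record("interview_original.mp4")      # at capture time (path hypothetical)
print(ledger.verify("interview_copy.mp4"))   # later: False if even one byte changed
```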

Further complicating things, most media outlets and social media sites have refused to identify or block AI-generated content. In February 2024, Meta announced it would begin labeling AI-generated content created with its Meta AI feature, while working to develop an industry standard. Unfortunately, this will not affect content created outside Facebook, Instagram, and Threads. It also does nothing to slow the flow of deepfakes on other social media sites, such as X (formerly Twitter).

As a result, machine learning and AI continue to assume more prominent roles in the deepfake wars. For instance, researchers at Drexel University, led by associate professor Matthew Stamm, are working with the U.S. Defense Advanced Research Projects Agency (DARPA) to teach systems to better recognize fake content and make informed qualitative judgments. One technique taps specialized algorithms to break an image into tiny parts and compare their fingerprints. Another looks for inconsistent emotions in a person’s face.
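The Drexel algorithms are considerably more sophisticated, but the general strategy of dividing an image into blocks and hunting for regions whose statistics do not match the rest can be sketched as follows. The high-frequency-variance statistic, the outlier threshold, and the file name are simplifying assumptions made for illustration.

```python
# Sketch: split an image into blocks and flag blocks whose local noise statistics
# deviate sharply from the rest, a rough stand-in for forensic "fingerprint" checks.
# The statistic (high-frequency variance) and the threshold are illustrative assumptions.
import numpy as np
from PIL import Image

def block_inconsistencies(path: str, block: int = 32, z_thresh: float = 3.0):
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    # Crude high-pass residual: subtracting shifted pixels removes most scene content.
    residual = gray - np.roll(gray, 1, axis=0)
    h, w = residual.shape
    scores, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            scores.append(residual[y:y + block, x:x + block].var())
            coords.append((y, x))
    scores = np.array(scores)
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    # Blocks whose noise level is an outlier may have a different processing history.
    return [coords[i] for i in np.flatnonzero(np.abs(z) > z_thresh)]

print(block_inconsistencies("rally_photo.png"))   # path hypothetical
```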

“By teaching computers to recognize fakes, we may be able to stop many of them before they spread around the Internet,” Stamm notes.

Getting to a Better Image

Unfortunately, technology alone cannot solve the problem, Agrawala says. “No software tool will be 100% effective. There is no foolproof way to identify every deepfake,” he says.

What’s more, media outlets and social media sites have not shown any interest in adopting preventative measures, and there has been no momentum to update defamation and slander laws to reflect today’s digital realities. For now, Section 230 of the Communications Decency Act of 1996 insulates computer services from liability for third-party content.

As the 2024 U.S. presidential election nears, a broader focus on deepfakes will be required. “It’s important to combat deepfakes holistically,” Agrawala says. “It’s vital to use technology to verify that content is real and detect violations whenever possible. But it’s also important to educate people and explore ways to hold content producers and distributors responsible while balancing free speech.”

To be sure, deepfakes are now a regular stop on the campaign trail, and blaming the technology alone detracts from the underlying problem and how to fix it, Agrawala contends. “You could argue that the word processor has created more fake content than any tool in the history of the world, but nobody is looking to ban it. We need to focus on ways to build guardrails around content.”

Samuel Greengard is an author and journalist based in West Linn, OR, USA.
