
Keeping AI Out of Elections

How can the interference of artificial intelligence be limited or eliminated in the 2024 U.S. elections?

[Illustration: cyborg president]

The ease with which generative AI can create disinformation has raised serious concerns about its potential impact on the November 5 U.S. presidential election. But what exactly are the dangers, and what can we do to mitigate them? These questions were central to the session “AI and the 2024 U.S. elections” at the recent American Association for the Advancement of Science (AAAS) Annual Meeting in Denver.

Jennifer Golbeck, a computer science professor at the University of Maryland, has spent years studying how information spreads on social media, focusing in particular on far-right groups, radicalization, and online extremism. One lesson of the 2020 presidential election, Golbeck said, is that social media platforms simply did not enforce their own policies: “We saw a wide range of content that violated these platforms’ terms of service, from harassment and calls for violence to spreading misinformation. But platforms consistently made exceptions to their own rules to allow this. They said, ‘if it’s in the public interest for someone to say this, we’ll leave it there.’”

That enforcing a platform’s own rules can have a significant effect was demonstrated during and after the attack on the Capitol Building in Washington, D.C., on January 6, 2021, Golbeck said. “We saw a mass de-platforming of Trump and his supporters on January 6th and 7th. That led to a 70% drop in misinformation on the social media platforms. Finally, the platforms were enforcing their own rules.”

What’s new in 2024? Golbeck said generative AI tools will continue to permit good and bad things at scale, and there are no rules at present to prevent that. “Recently, Facebook allowed a manipulated video of Joe Biden putting an ‘I Voted’ sticker on the chest of his granddaughter to stay on its platform. They absolutely need to do something about manipulated, fake, and AI-generated content. We need platforms to have rules, because a platform with no rules, like 4chan, is a terrible place. And we can see with Twitter or X what happens when you stop enforcing the rules you have.”

Are there any restrictions against AI-driven political campaigns within current campaign finance laws and regulations? Jessica Selinkoff, an attorney at the Federal Election Commission (FEC) in Washington D.C., explained that “Basically, the FEC is an anti-corruption agency. It’s about who is paying to get communications in the hands of the public to influence their votes. The FEC makes sure that the public can follow, via our website, who pays what in the election campaign, with the idea of fighting corruption via transparency.”

Selinkoff said the FEC has already been asked to amend its regulations to address the use of AI in election campaigns via a regulation addressing “Fraudulent Misrepresentation of Campaign Authority.” That regulation, she said, “addresses situations in which a person is speaking, writing, or otherwise acting for or on behalf of a candidate or a political party.” Presently, the Commission is reviewing thousands of suggestions for improvement.

“However, the regulation is limited; there are only two circumstances in which it applies: if a person raising funds is pretending to be a candidate but actually is slurping all the money for him or herself, or if one candidate purports to be another. The FEC has asked Congress to amend this law so that it gets a broader reach, but Congress hasn’t done so yet.”

Vivek Krishnamurthy, an associate professor at the University of Colorado Law School, said around 80 elections are taking place worldwide this year. He explained how AI had already affected the election held in Pakistan earlier this year: “There we saw that the opposition candidate and former prime minister Imran Khan, who was sent to jail, was claiming victory in the elections in an AI-generated video.”

The technology also can be used in a more innocent way to influence politicians, Krishnamurthy said. “The relatives of victims of mass shootings recently created AI-generated videos of their deceased loved ones in an effort to get Congress to do something about gun violence. We can debate whether that is fair or not, but it shows that this AI technology has many uses, which makes regulation not easy.”

That relevant regulation can be implemented, however, was demonstrated by the European Union, which recently approved the AI Act. Krishnamurthy explained, “The AI Act has a lot of good ideas, but it took the EU four years to develop. We don’t have enough time before November for such an effort.”

Krishnamurthy said while there has been some legislative activity to combat negative applications of AI-generated content in the U.S., “I’m not sure how much that is going to move the needle, because we have a vast array of actors who have different incentives in interfering with the election. Making a law in Texas will not do much about the incentives of Russia or China.”

What, then, can be done in the short period between now and the elections in November? Krishnamurthy cited the example of some 20 tech companies, Meta and OpenAI among them, which in February announced their own initiative to combat AI interference in elections. “It’s great that the tech industry is doing this,” he said.

“Another thing that can be done is supporting local journalism so that they can create truthful content. But ultimately, we need a society-wide conversation about how to respond to this technological change with a much broader array of responses.”

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.
