
Trust in Jeopardy: The Potential Impact of Deepfakes and AI on US Elections

By T. Jayani, JadeTimes News

 
Image Source: Brandon Bell

On January 21, Patricia Gingrich was preparing to have dinner when her landline phone rang. The New Hampshire voter answered and heard a voice advising her not to vote in the upcoming presidential primary.


“As I listened, I thought, gosh, that sounds like Joe Biden,” Gingrich recounted to Al Jazeera. “But the fact that he was saying to save your vote, don’t use it in this next election... I knew Joe Biden would never say that.” The voice may have mimicked the US president, but it wasn’t him. It was a deepfake, created using artificial intelligence (AI).


Experts warn that deepfakes (AI-generated audio, video, or images intended to mislead) pose a significant risk to US voters ahead of the November general election. They can inject false content into the race and erode public trust.


Gingrich did not fall for the Biden deepfake, but she worries it may have suppressed voter turnout. The message reached nearly 5,000 New Hampshire voters just days before the state’s primary.


“This could be bad for people who aren’t well informed about what’s happening with the Democrats,” said Gingrich, chair of the Barrington Democratic Committee in Burlington, New Hampshire. “If they really thought Joe Biden was telling them not to vote, they might not show up.”


The Biden call was not the only deepfake this election cycle. Before ending his presidential bid, Florida Governor Ron DeSantis’s campaign shared a video with AI-generated images of Donald Trump hugging immunologist Anthony Fauci, two figures who publicly clashed during the COVID-19 pandemic. In September, another robocall with an AI-generated voice imitating Senator Lindsey Graham went out to 300 voters expected to participate in South Carolina’s Republican primary.


The practice of altering or faking content for political gain has existed since the early days of US politics. Even George Washington faced “spurious letters” appearing to show him questioning US independence. However, AI tools now make it possible to convincingly mimic people quickly and cheaply, increasing the risk of disinformation.


A study published earlier this year by George Washington University researchers predicted that daily “AI attacks” would escalate by mid-2024, posing a threat to the November general election. Lead author Neil Johnson told Al Jazeera that the highest risk comes from convincing deepfakes, not the obviously fake robocalls.


“It’s going to be nuanced images, altered images, not entirely fake information because fake information attracts the attention of disinformation checkers,” Johnson said.


The study found that online communities are interconnected in a way that allows bad actors to spread manipulated media directly into the mainstream. Communities in swing states and parenting groups on platforms like Facebook could be especially vulnerable.


“The role of parenting communities is going to be a big one,” Johnson said, citing the rapid spread of vaccine misinformation during the pandemic as an example. “We’re likely to face a wave of disinformation: lots of content that stretches the truth.”


Voters themselves are not the only targets of deepfakes. Larry Norden, senior director of the Elections and Government Program at the Brennan Center for Justice, has been working with election officials to help them spot fake content. He teaches poll workers to verify the messages they receive to protect themselves from AI-generated instructions that could disrupt voting.


Norden emphasized that misleading content can be created without AI. “The thing about AI is that it just makes it easier to do at scale,” he said.


Last year, Norden created a deepfake video of himself for a presentation on AI risks. The video wasn’t perfect, but AI tools are rapidly improving. “Since we recorded that, the technology has gotten more sophisticated, and it’s more and more difficult to tell,” he said.


As deepfakes become more common, public awareness and skepticism will increase, which could erode public trust and make it easier for political figures to dismiss legitimate footage as fake. Legal scholars call this the “liar’s dividend.”


Norden pointed to the Access Hollywood audio from the 2016 election as an example. If similar audio leaked today, it would be easier for a candidate to call it fake. “One of the problems we have right now in the US is a lack of trust, and this may only make things worse,” he added.


While deepfakes are a growing concern, relatively few federal laws restrict their use in US elections. The Federal Election Commission (FEC) has yet to regulate deepfakes, and bills in Congress remain stalled. Individual states are taking action, with 20 state laws enacted to regulate deepfakes in elections and several more bills awaiting a governor’s signature.


Norden was not surprised to see states act before Congress. “States are supposed to be the laboratories of democracy, so it’s proving true again. The states are acting first. We all know it’s really hard to get anything passed in Congress,” he said.


Voters and political organizations are also taking action. After receiving the fake Biden call, Gingrich joined a lawsuit led by the League of Women Voters seeking accountability for the deception. The source of the call was Steve Kramer, a political consultant who claimed his intention was to highlight the need to regulate AI in politics.


Kramer came forward after NBC News revealed he had commissioned a magician to create the deepfake of Biden’s voice using publicly available software. The deepfake took less than 20 minutes to create and cost only $1, but Kramer claimed it generated “$5 million worth of exposure” for his efforts.


Kramer’s case shows that existing laws can be used to curtail deepfakes. The Federal Communications Commission (FCC) ruled that voice-mimicking software falls under the 1991 Telephone Consumer Protection Act, making it illegal in most circumstances. The FCC proposed a $6 million penalty against Kramer for the illegal robocall. The New Hampshire Department of Justice charged Kramer with felony voter suppression and impersonating a candidate, which could result in up to seven years in prison. Kramer has pleaded not guilty.


Norden noted that none of the laws Kramer is accused of breaking are specifically tailored to deepfakes. “The criminal charges against him have nothing to do with AI,” he said. “Those laws exist independently of the technology used.”


However, those laws are harder to apply to unidentifiable bad actors or those located outside the US. “Intelligence agencies are already seeing China and Russia experimenting with these tools, and they expect them to be used,” Norden said. “In that sense, you’re not going to legislate your way out of this problem.”


Both Norden and Johnson believe voters need to inform themselves about deepfakes and learn how to find accurate information. Gingrich agrees, emphasizing the importance of voter awareness. Her message to voters? “Make sure you know you can vote.”

