The 'Deepfake Election': How AI and Fake Content Could Influence 2024 US Contests

By Chase Smith

As the 2024 U.S. presidential election approaches, concerns about the influence of artificial intelligence (AI) and deepfake technology on the electoral process are reaching new heights.

Deepfakes are highly convincing and deceptive digital media, typically videos, audio recordings, or images, generated using artificial intelligence, often for misleading or fraudulent purposes.

Experts in the field of AI and deepfake detection provided The Epoch Times with valuable insights into the challenges posed by AI-generated misinformation and disinformation and the steps being taken to combat them.

Rijul Gupta, co-founder and CEO of DeepMedia. (DeepMedia)

Rijul Gupta, co-founder and CEO of DeepMedia, highlighted the alarming rise of deepfake technology and its implications for the 2024 election.

“2024 is going to be the deepfake election in ways that previous elections weren’t,” he said. “It is only becoming prevalent in 2024, and that’s because of access to this technology. Basic versions of this technology existed in 2020, but the quality wasn’t good enough to fool people. People could still tell it was fake.”

Mr. Gupta said the quality improved in 2022 and continues to advance monthly. Content is now good enough that the average person could see or hear it on TikTok or YouTube and be convinced that it's real.

“They’re not looking for a fake,” he said. “They’re not trying to determine that it’s fake. It sounds real to them, so they just assume it’s real. But the last thing that was needed for this to really become a massive problem was access. So the first hurdle was quality, and that was just accomplished last year. The second hurdle was access.”

(Left) A woman views a manipulated video of Presidents Donald Trump and Barack Obama that illustrates how deepfake technology can deceive viewers, in Washington on Jan. 24, 2019. (Right) Paul Scharre of the Center for a New American Security views a manipulated video by BuzzFeed with filmmaker Jordan Peele (on screen, R) using readily available software and applications to change what was said by President Barack Obama (on screen, L), in Washington on Jan. 25, 2019. (ROB LEVER/AFP via Getty Images)

Now, Mr. Gupta said, tools are available online to the masses, and usually for free.

“Most of the time, you don’t even need a credit card, and you can create deepfakes of anyone in just five seconds,” he said. “That is what’s new. That’s what has just happened in the past few months—and that’s why 2024 is going to be the deepfake election.”

It’s an issue that “keeps me up at night,” Mr. Gupta said.

“This is one of the major reasons why I founded DeepMedia, to be the leader in pushing against misinformation and disinformation,” he said. “Around six years ago, I saw this writing on the wall. I saw a future where people would not be able to tell what’s real and what’s fake.”

Detecting Deepfakes

To address the growing threat of deepfakes, Mr. Gupta’s business partner, Emma Brown, co-founder and chief operating officer of DeepMedia, emphasized the importance of accurate and scalable detection methods.

DeepMedia has developed advanced deepfake detection capabilities that the company says have been validated by the U.S. government and by companies worldwide.

Ms. Brown said that DeepMedia boasts the most accurate face and voice deepfake detection capabilities globally, ensuring reliable identification of AI-generated content.

Scalability is also important to the company, she said, noting that DeepMedia can process a vast number of videos daily, allowing rapid identification of deepfakes within large datasets.

Finally, she said DeepMedia’s platform is robust, offering advanced features such as precise content categorization that let users extract specific information from AI-generated content.

“We’re actively involved in securing our elections ahead of AI threats,” Ms. Brown said.

Four Twitter accounts apparently generated by artificial intelligence software, displayed on a laptop in Helsinki on June 12, 2023. The fake profiles, posing as American environmentalists, posted tweets in support of the United Arab Emirates, its handling of the COP28 climate summit, and the role of its COP28 chief, oil executive Sultan Al Jaber, in promoting climate action. (Olivier Morin/AFP)

Empowering the Public

Recognizing the need for public awareness and involvement in combating AI-generated misinformation, DeepMedia recently made its deepfake detection capabilities available for free on X, formerly known as Twitter.

Users can tag @DeepMedia and add the hashtag #DeepID to any media content, and DeepMedia’s Twitter bot will promptly run the content through its deepfake detection system and provide results in the thread, Mr. Gupta said.

“We hope that this will empower the average user and citizen journalists with the tools they need to protect themselves against AI misinformation,” he said.

The move represents a significant step toward democratizing the fight against AI-driven disinformation and misinformation.

The 2024 U.S. presidential election will serve as a critical test of the ability to combat the influence of deepfakes on public perception and electoral outcomes. Mr. Gupta stressed that the challenge extends beyond technical solutions and urged people to remain vigilant and informed.

“The fusion of AI and elections demands our collective attention and action to safeguard the integrity of our democratic processes,” Mr. Gupta said.

Voters cast their ballots at a polling location inside the Museum of Contemporary Art in Arlington, Va., on Nov. 8, 2022. (Nathan Howard/Getty Images)

When asked whether AI or deepfakes have already appeared in the 2024 election cycle, Ms. Brown said there have been only a few cases, none of which gained much traction.

One fake video purportedly showed Hillary Clinton endorsing presidential candidate Gov. Ron DeSantis.

“It was completely fake, and our detectors were able to detect it with a high degree of accuracy,” Ms. Brown said. “But instances like that could be detrimental to DeSantis’s campaign.

“So we’ll be seeing more and more of it, especially as coverage in the election season really picks up.”

Trusting Your Sources

Jim Kaskade, CEO of Conversica. (Jim Kaskade)

AI company Conversica has been operating in the space since 2007. Its CEO, Jim Kaskade, offered a different perspective on the role of AI in the 2024 election.

“We are driven by the positive purpose and the do-good aspect of applied AI,” he said.

In his view, AI technologies, when employed thoughtfully, can contribute to the betterment of the electoral process.

AI could be a force for good in elections, Mr. Kaskade said. AI-driven tools can enhance communication, provide valuable insights into voter preferences, and streamline campaign efforts. It’s essential to strike a balance between leveraging AI for positive purposes and guarding against its misuse, he said.

He emphasized the need for ethical use of AI-generated content in political discourse and for vigilance in verifying the authenticity of information in electoral contexts, underscoring the importance of trusting one’s sources in the age of AI.

“I think it’s just fundamental,” Mr. Kaskade said. “You have to research your source.”

He acknowledged that while technology hasn’t yet fully caught up in differentiating real from fake content, people can still exercise caution by assessing the credibility of the sources providing the information.

The Issue in 2024

The challenge posed by deepfakes in the 2024 election is multifaceted. With the rapid advancement of AI technology, malicious actors can easily create highly convincing fake content—including videos, audio recordings, and images—to manipulate public opinion and sow discord.

This threatens to erode trust in political candidates and institutions.

Combating it will require a collective effort from governments, tech companies, and society as a whole to raise awareness, promote digital literacy, and enact regulations against the malicious use of AI, the experts said.

Mr. Kaskade urged candidates running for office to propose policies and regulations on AI that would ensure its use is beneficial to society rather than harmful and divisive.

It will also be up to voters to stay informed about deepfakes and the advancement of AI, and to protect themselves by thinking analytically and critically evaluating the information they encounter online.
