On the eve of the presidential election, top state officials from both parties are sending warnings about the dangers of artificial intelligence-generated deepfakes that are designed to confuse voters or sow mistrust.
“As we approach Election Day, it is imperative Mississippians arm themselves with knowledge of the growing threat of AI deepfakes,” Mississippi Secretary of State Michael Watson (R) wrote in a recent op-ed.
“While technology has made it easier than ever to access election information, the rise of AI has also made it easier for misinformation on these topics to spread,” Michigan Attorney General Dana Nessel (D) said as she announced a new online guide to deepfakes.
Other states where governors, attorneys general or secretaries of state sought this month to educate voters include Arizona, Arkansas, Illinois, Kentucky, New Mexico, New York, Vermont and Washington.
The bipartisan alarm signals “a clear concern from those overseeing the elections,” said Bianca Recto, communications director at Accountable Tech, a watchdog nonprofit that is tracking the state-level, pre-election warnings.
The 2024 election has been described as the first AI election, as generative AI tools have become readily accessible only in the past couple of years. The technology makes it easy for users to create synthetic audio, video and images that look and sound real.
There were several examples of AI-manipulated election content in the U.S. this year, most related to the presidential race. In January, some New Hampshire primary voters received an AI-generated robocall that sounded like President Biden urging them not to vote.
Other examples compiled by Accountable Tech include Elon Musk reposting to X an AI-doctored version of a Vice President Kamala Harris campaign video, and an ad from U.S. Sen. Mike Braun (R-Ind.) that initially failed to include a required disclosure that some of its images were AI-generated. The disclosure was later added.
Indiana is among more than a dozen states that enacted laws this year regulating the use of AI in campaigns, bringing to 20 the number of states with such laws.
California lawmakers went a step further, passing a law known as the Defending Democracy from Deepfake Deception Act that takes effect next year. Among other provisions, it requires large online platforms to block deceptive campaign-related content from 120 days before an election through 60 days after.
A federal judge this month enjoined a different California law that sought to bar individuals or campaign committees from releasing malicious, deceptive election content on either side of an election.
The raft of new laws highlights state lawmakers’ growing concern that bad actors will exploit AI tools to undermine trust in elections or harm candidates. A new poll from Americans for Responsible Innovation backs that up, finding that 55% of voters say they have definitely or probably encountered AI-generated fake information during this presidential cycle.
“The good news is that a lot of voters are aware of AI misinformation going around this election,” Brad Carson, the nonprofit’s president, said in a statement. “The bad news is, it’s everywhere, and Election Day won’t be the end of it.”
OpenAI, the company that created ChatGPT, said in a report this month that it had “disrupted” more than 20 attempts since January to use its tools for nefarious purposes, including efforts to generate social media content about elections in the U.S. and elsewhere. But the report said those efforts had not drawn “viral engagement” or “sustained audiences.”
Accountable Tech said in a memo this month that it had tracked a dozen election-related deepfakes that went viral this year, reaching more than 140 million people. The memo blamed new AI technologies that have not been adequately safety-tested and social media platforms with inadequate safeguards.
A coalition of groups including Accountable Tech sent a letter in August to the CEOs of Meta, Snap, TikTok, YouTube and X urging them to more aggressively police deepfakes on their platforms. The letter called on the companies to act before the 2024 election to implement detection systems and require that AI-generated content of a political nature be labeled.
“You must take urgent steps to mitigate against the harm to democracy that deepfakes have the power to yield,” the letter said.
Social media companies say they are taking steps to root out election misinformation and deepfakes on their platforms, including requiring labels on AI content. Major social media platforms and AI companies signed on to an “AI Elections Accord” in February that laid out goals for detecting, preventing and responding to deceptive AI election content.
Meta, for instance, now requires labeling of election-related ads that have been created or altered by AI. In a February blog post, the company also said it automatically labels content created by Meta AI and is working with industry partners to develop “common technical standards” to identify AI content across the board. But as the election approaches, the French newspaper Le Monde reported finding examples of synthetic political content appearing on Meta platforms without labels.
Accountable Tech says the warnings to voters from state officials reflect a rush to market of untested AI tools and a lack of robust safety standards online.
“Election officials are still having to take matters into their own hands to counter … the absence of safeguards on these platforms,” Recto said. “They are clearly pointing out that the threat is very real.”