Disruption

Deepfake legislation gains momentum as lawmakers seek to curb AI misuse

States are passing bills to curb manipulated election-related and intimate images.
Images created by Eliot Higgins with the use of artificial intelligence show a fictitious skirmish between Donald Trump and New York City police officers posted on Higgins’ Twitter account, as photographed on an iPhone in Arlington, Va., Thursday, March 23, 2023. The highly detailed, sensational images, which are not real, were produced using a sophisticated and widely accessible image generator. (AP Photo/J. David Ake)

Lawmakers are clamping down on harmful uses of generative artificial intelligence, as concerns about election-related and intimate deepfakes fuel another round of legislation.

Election deepfake bills recently enacted in Kentucky and South Dakota bring the number of states with such laws to 23, according to tracking by Public Citizen. More could be on the way: North Dakota legislators this week gave a bill final approval, and bills are advancing in Maryland, Montana and Vermont.

New Jersey Gov. Phil Murphy (D) signed an intimate image deepfake law last week, making it the 33rd state to do so. Bills in Kansas and North Dakota are nearing final passage. Similar legislation is under consideration in multiple other states including Connecticut, Montana, Oklahoma and Tennessee.

“We’re seeing strong bipartisan support for both intimate deepfake and election deepfake legislation,” said Ilana Beller, an organizing manager at Public Citizen.

Public Citizen has drafted its own model deepfake legislation and is supporting state-level efforts to address the issue.

States began passing deepfake laws in 2019, but momentum ramped up following ChatGPT’s debut in November 2022, which signaled the mainstream arrival of generative AI tools that let people easily create synthetic images, video and audio.

GenAI tools have quickly been exploited to create false election-related content and fake nudes, sparking alarm about election interference and sexual victimization. The issue of intimate deepfakes was amplified in January 2024 when fake images of pop icon Taylor Swift circulated online. Studies have shown that the victims of pornographic AI-generated content, celebrities included, are almost always women.

In Tennessee, a former television news meteorologist testified last month in favor of a bill to create civil and criminal penalties for the distribution of an unauthorized intimate deepfake. Bree Smith told a panel of lawmakers that online scammers had created imposter Facebook accounts and used semi-nude deepfakes depicting her to try to get money from her fans.

“I felt humiliated and scared,” Smith testified. “I didn’t know what to do or how to fight it, and I didn’t know how to protect the viewers and the people that trusted me online from being subject to this kind of extortion.”

Connecticut Sen. Heather Somers (R) is advocating for similar legislation after also being victimized. She recently told the Connecticut Mirror that someone circulated “very suggestive” AI-generated images of her.

“It’s putting me in clothing that I would never wear, and in a pose that I would never pose,” she told the news site.

Murphy signed New Jersey’s law alongside a high school student who became a teen activist after being victimized.

“We stand with the victims of deepfake imagery and will continue to prioritize the safety and well-being of all New Jerseyans,” Murphy said.

The intimate deepfake laws generally make it a crime to distribute images and videos without permission, and give victims civil recourse.

Separately, lawmakers in California, Minnesota and Texas have introduced bills this year to regulate so-called nudification apps that allow users to create intimate deepfakes.

A bipartisan bill in Congress dubbed the TAKE IT DOWN Act would make it a crime to publish nonconsensual intimate deepfakes and require that online platforms remove such content within 48 hours. The U.S. Senate passed the bill unanimously in February. 

The Electronic Frontier Foundation has warned that the bill “threatens free expression, user privacy, and due process, without addressing the problem it claims to solve.”

Most of the laws governing election deepfakes require disclosure that the content is synthetic, although some of the laws establish outright bans leading up to an election.

Beller of Public Citizen said the laws send a signal about GenAI’s unacceptable uses, establish societal norms, and provide victims a legal avenue of recourse. She said 2025 is notable because bills addressing both types of deepfakes have now been introduced in nearly every state.

“Legislators are aware of the issues, they know they want to do something about them,” Beller said. “So that feels exciting that there’s a lot of momentum here.”