California is taking a get-tough approach to artificial intelligence-generated content designed to confuse or mislead voters, with just weeks until the general election.
Gov. Gavin Newsom (D) on Tuesday signed three deepfake bills authored by Assembly Democrats that build upon a 2019 law, including one that will ban certain AI content close to an election.
In signing the bills, Newsom sought to make an example of billionaire Elon Musk, owner of X, who amplified fake content in July when he reposted a video of Vice President Kamala Harris that had been doctored using AI.
“You can no longer knowingly distribute an ad or other election communications that contain materially deceptive content — including deepfakes,” Newsom posted on X after signing the bills. His post included a screenshot of a picture of Musk and a headline about his repost of the Harris video.
The new law Newsom was referring to, authored by Assemblymember Gail Pellerin (D), bars individuals and others from “knowingly distributing … materially deceptive content … with malice.”
The ban, which takes effect immediately, applies 120 days before a California election and extends 60 days past an election for content aimed at undermining faith in candidates, elections officials or voting equipment. There is an exception for satirical content or parodies, so long as they come with a disclosure.
Individuals who receive malicious AI content or those directly affected by it could seek a court order to block the distribution of the deepfake and pursue damages.
“With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally-altered content that can interfere with the election,” Pellerin, who chairs the Assembly’s elections committee, said in a statement.
California is among more than a dozen states to pass deepfake election laws this year, according to tracking by Public Citizen. The new laws are an indication of growing concern among state lawmakers about the potential for AI to interfere with campaigns and voting.
California’s law goes further than most by banning certain AI-generated content in the days before and after an election. Other states have instead opted to require disclosures on AI content so as not to run afoul of the First Amendment.
Backers of the California law said they sought to narrowly craft the deepfake restrictions to target harmful content and not impede First Amendment rights. Opponents countered that restricting speech that is “reasonably likely” to cause harm will interfere with free speech.
A second new law, authored by Assemblymember Wendy Carrillo (D), will require disclaimers on political ads that are generated in whole or in part using AI, though it does not take effect immediately.
The third law, by Assemblymember Marc Berman (D), requires large online platforms to block deceptive deepfakes 120 days before an election and 60 days after, and to label other synthetic content.
The tech industry opposed the legislation on the grounds that companies are not in a position to decide what content should be blocked or labeled.