Disruption

AI safety nonprofit demos dangers for lawmakers

‘It was more than educational — it was a wake-up call.’

Two Bay Area technologists are using personalized simulations to educate state lawmakers and government officials about artificial intelligence’s real-world dangers.  

Lucas Hansen and Siddharth Hiregowdara, founders of the nonprofit Civic AI Security Program, or CivAI, have developed an interactive presentation that shows how generative AI can be exploited to commit fraud and spread disinformation.

“We’re quite concerned about negative outcomes,” Hansen said.

The California Initiative for Technology & Democracy and Common Cause California sponsored an event in Sacramento last week, where CivAI illustrated for legislative staffers what AI is capable of in the wrong hands.

The presentation featured simulated phishing emails, compromising deepfakes and even news articles that cast subjects in an unflattering or scandalous light. The simulations are created using publicly available AI tools and information about individuals found on the web, such as their LinkedIn profiles.

Assemblymember Josh Lowenthal (D), a tech entrepreneur who serves on the Privacy and Consumer Protection Committee, agreed to have his likeness used for the demonstration.

“CivAI’s presentation was more than educational — it was a wake-up call for policymakers and showed that regulation is falling far behind the rapid pace of AI development,” Lowenthal told Pluribus News.

Lowenthal said the CivAI demonstration created “strikingly accurate” clones of his voice and image while also showing ways that AI can be used to undermine trust in elections and democracy. One demo showed how AI could be used to generate fake news stories and tweets about polling stations closing on Election Day due to threats — a form of voter disenfranchisement.

AI is already being used to confuse voters, discredit candidates and stage fake kidnappings. An AI-generated online death hoax last year targeted a Los Angeles Times reporter. This week, former Argentine President Mauricio Macri condemned as “an attempt at electoral fraud” a deepfake video depicting him weighing in on local elections in Buenos Aires.

Hansen and Hiregowdara are software engineers. They do not pitch lawmakers on specific public policy solutions. Instead, they say they want to provide a “foundational education” that gives policymakers insight into AI’s capabilities.

“I hope that whatever they do [policy-wise] is informed by the actual present state of the technology rather than some outdated impression of it,” Hiregowdara said.

State lawmakers have taken note of deepfakes’ dangers. Montana this month became the 25th state to enact an election-related deepfake law, according to tracking by Public Citizen. More than half of states have also enacted laws protecting people from AI-generated intimate deepfakes. President Trump on Monday signed the Take It Down Act, which targets nonconsensual images created by AI.

CivAI’s effort to highlight AI’s dangers comes amid a growing backlash to regulating the technology. 

The Trump administration, congressional Republicans, the venture tech community and some governors have warned that regulations and a patchwork of state laws could stifle innovation and give China a competitive advantage. The massive tax bill being considered in the U.S. House includes a 10-year moratorium on state AI regulations, prompting bipartisan opposition from state officials.

States have taken the lead on AI regulation in the absence of federal action.

California Gov. Gavin Newsom (D) signed at least 17 bills last year concerning the deployment and regulation of generative AI. He also vetoed a high-profile safety bill focused on frontier AI models.


Dozens more AI bills were introduced this year in Sacramento, including high-profile measures to safeguard kids and regulate high-risk systems. Those measures are still under consideration.

ChatGPT’s debut in late 2022 galvanized state policymakers and was the impetus for Hansen and Hiregowdara to launch CivAI. The pair met working at Qualia, a real estate software company Hansen co-founded.

“We had always thought that AI would be incredibly consequential for the world,” Hansen said. “But when ChatGPT came about it was a wake-up call that, ‘OK, we need to work on this because it’s going to manifest in the next couple of years.’” 

They have presented to: National Institute of Standards and Technology staff at the U.S. Department of Commerce; a multistate working group of state lawmakers studying AI regulation; Washington, D.C.-area law enforcement officials; and civil society groups including the American Association of Retired Persons.

They also testified before the Texas House Select Committee on Artificial Intelligence. And their demo was featured in February as part of the Artificial Intelligence Action Summit in Paris.

CivAI shared its training tool with Michael Moore, chief information security officer for the Arizona secretary of state’s office. Moore told Pluribus News he provided the demonstration to election officials in the state’s 15 counties, plus three or four other secretary of state offices.

“We get a lot more people saying, ‘Oh wow, I really get it now,’” Moore said. “If I make a deepfake of you, that changes you.”

As AI evolves, CivAI is evolving its training tool. A newer demonstration looks at how AI could increase the threat of biological attacks, a growing subject of concern as AI models become more powerful.  

Lowenthal said the speed of AI’s evolution highlights “the urgent need for action to address its potential harms.”