
5 ways states want to regulate AI in ’25

The approaches include addressing transparency, safety and liability.

State lawmakers are gearing up for another frenzied year of trying to place limits on artificial intelligence.

MultiState, a government affairs firm, is tracking more than 300 AI-related bills across 36 states. Many are modeled on laws passed last year, but there are also some fresh ideas.  

“The rapid pace of AI legislative developments in 2024 has carried over into 2025, with even greater momentum — bringing not only new proposals but also new approaches to regulating AI,” said Tatiana Rice, director of U.S. AI legislation at the Future of Privacy Forum, which brought together a bipartisan working group of legislators interested in AI.

That means an evolving definition of a comprehensive bill. Texas is poised to up the ante on Colorado’s 2024 AI Act by tackling a “broader range of AI technologies, uses, and issues,” Rice said.

Here are five approaches to AI regulation that legislators are pursuing this year:

Comprehensive regulation: Texas Rep. Giovanni Capriglione’s (R) 44-page measure has a little bit of everything, with the goal of advancing the responsible use of AI, ensuring transparency, and protecting individuals from potential harms, while also encouraging AI investment in Texas.

It would require developers, distributors and deployers of high-risk AI systems to take “reasonable care” to protect consumers from algorithmic discrimination; prohibit AI systems from using “subliminal” or “deceptive” techniques to subvert “informed decision-making” on the part of individuals; prohibit “social scoring,” which assigns individuals and groups a value as compared to others; require that consumers be told if they are interacting with a high-risk AI system and give them the right to appeal negative decisions.

The bill would also update Texas data privacy law to allow consumers to prevent their data from being sold to train AI systems, among other protections, and ban the use of generative AI to create child pornography.  

To encourage AI research and development, the bill would create a “sandbox program” to allow developers to test their models in a contained environment with fewer regulatory constraints.

High-risk decision-making: Colorado’s first-in-the-nation law, enacted last year, seeks to protect consumers from algorithmic discrimination when interacting with high-risk AI systems. It places obligations on both developers and deployers of AI systems that make consequential decisions about people’s lives in areas such as housing, employment and health care.

This year, algorithmic discrimination bills have been introduced in Hawaii, Massachusetts, New Mexico, New York and Virginia, with more states expected to follow, including California.

Connecticut Sen. James Maroney (D), who introduced a bill last year that inspired Colorado’s, is back with a revised measure that Rice said falls somewhere between a comprehensive AI bill and a high-risk decision-making bill.

Duty of care/liability: There is an emerging school of thought that, rather than regulate high-risk AI on the front end, a more effective approach would be to hold AI companies accountable if something goes haywire.

Casey Mock of the Center for Humane Technology, a nonprofit best known for fighting social media harms, laid out the concept in detail in a September podcast interview: Treat AI like any other consumer product and make the developers and deployers liable for harms under existing product liability laws. Mock’s team has put out a framework for this approach, and the idea seems to be getting some traction.

The Seattle-based Transparency Coalition, another nonprofit working on AI safety, has embraced duty of care as the most effective way to place guardrails on the industry. It’s working with state lawmakers to develop legislation.

“This product framework today is why we don’t worry about the brakes falling out of our cars, or that aspirin off the shelf won’t poison us,” Transparency Coalition Chairman Rob Eleveld said. “With 120 years of legal precedent behind it, the same legal construct absolutely applies to GenAI products.”

Eleveld said his coalition is hoping to support AI product liability bills in several states this year.

New York and Vermont are two early states to watch. Vermont Rep. Monique Priestley (D) is expected to reintroduce an AI oversight and liability bill for “dangerous artificial intelligence systems” that she first introduced last year. New York Assemblymember Alex Bores (D) said he is working on a strict liability bill focused on frontier models, the most powerful next-generation AI systems, which could cause major damage if misused.

“It borrows from how most states regulate explosives … basically saying that we will trust experts in the technology, but in exchange they agree to take on all of the risk,” Bores told Pluribus News.

An AI liability bill could also appear in California, where the California Initiative for Technology & Democracy, which sponsored AI deepfake bills last year, said “action is needed to hold AI companies accountable when their products cause damage.”

AI safety: Child pornographers are using generative AI tools to create synthetic images. High school boys are using AI apps to create pornographic deepfakes of female classmates. Companion chatbots have been blamed for encouraging kids to harm themselves.

State lawmakers are moving swiftly to address these bad outcomes. To date, more than half of states have passed laws to ban unauthorized intimate deepfakes. At least a dozen states have criminalized AI-generated child sexual abuse material. And at least 19 states have passed laws regulating election-related deepfakes.

More state action on all these fronts is expected in 2025. New York Gov. Kathy Hochul (D) in her State of the State speech called for legislation to ban AI child pornography and to regulate companion chatbots.

Some also worry that large frontier models could someday unleash doomsday attacks that cripple critical infrastructure or enable biological or nuclear warfare.

California Sen. Scott Wiener’s (D) bill last year to regulate frontier models drew international attention and a wave of opposition from the tech industry, venture capital and even Democratic members of Congress.

Gov. Gavin Newsom (D) vetoed the splashy and controversial bill, which contemplated AI-spawned mass casualty events. Wiener has signaled that he plans to try again, filing an intent bill.

Bores in New York said he is also working on legislation to regulate frontier models. He said it would be different from what Wiener proposed in California last year but the “same idea.”

Transparency and disclosure: How do consumers know when they are interacting with AI? How can you tell whether an image, video or audio clip is AI-generated? What if AI models are being trained on your data?

Questions like these are giving rise to a growing number of AI transparency and privacy measures.

Bills have been introduced that would require consumers to be notified if they are interacting with an AI system. Public Citizen has drafted model legislation to require disclosure when a consumer is interacting with a chatbot instead of a human.

The Transparency Coalition is working to export two laws California enacted last year to at least four more states. One of those laws requires generative AI system developers to disclose information about the data used to train their model. The other mandates that developers provide consumers with a tool so that they can determine whether content was created or altered by AI.

Washington Rep. Clyde Shavers (D) has filed both a training data bill and the detection tool bill.

There are also legislative proposals to require labeling of AI-generated content to make clear it’s not real. Conversely, Bores is preparing legislation to require a universal watermark on authentic content.

“It is easier to prove what is real than to detect what is fake,” Bores told Pluribus News in October.