The 17 artificial intelligence bills California Gov. Gavin Newsom (D) signed last month capped a year of states racing past Congress to lay down some of the booming technology’s first guardrails.
The push in the states to protect against AI’s potential harms without hindering innovation is likely only to pick up in 2025.
“We’re just in the very beginning of AI legislation that’s going to shoot across the states,” said Craig Albright, senior vice president for U.S. government relations at the Software Alliance, a tech industry group.
California, Colorado, Illinois and Utah emerged this year as early leaders in adopting AI laws, while dozens of other states moved rapidly to address the imminent threat of election-related and intimate image deepfakes.
The early action on AI signals that state lawmakers do not want to be caught flat-footed as they were with consumer data privacy and social media. It also sends a message to industry that states are unwilling to wait for Congress to act.
“As a consumer advocate, it gave me a lot of hope to see how many ambitious lawmakers are working on AI policy,” said Grace Gedye, a Consumer Reports policy analyst who tracks AI legislation in the states.
Newsom last month vetoed one of the most closely watched AI regulation bills in the country, but he also declared the need for state regulations to govern generative AI, which is capable of creating new content. He is convening AI experts to develop “responsible guardrails” for generative AI and plans to work with lawmakers on the issue next session.
“We have a responsibility to protect Californians from potentially catastrophic risks of GenAI deployment,” Newsom said in a statement announcing his veto. “We will thoughtfully — and swiftly — work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good.”
The focus on AI regulation in the states comes as the technology, in the less than two years since ChatGPT’s debut, has moved from novelty to a fixture of people’s daily lives. Today, internet searches produce AI-generated summaries, AI offers to help write emails, and 43% of adults under 30 told Pew Research they have used ChatGPT. Additionally, AI is increasingly at work behind the scenes as people apply for jobs, search for a rental or otherwise make important decisions about their lives.
State policymakers have responded accordingly. More than 30 states this year adopted AI-related laws or resolutions. More than half of states are formally studying AI. And multiple governors have issued executive orders on the topic.
California, home to a majority of the world’s leading AI companies, received the bulk of the attention as lawmakers considered more than two dozen AI-related bills this year. That includes the one Newsom vetoed, which would have required safety testing and shut-off switches for the most powerful AI models.
A second closely watched California bill to address algorithmic discrimination by AI-backed systems was held back by its author after amendments watered it down.
The AI-related bills he signed included measures to address election deepfakes, bar AI-generated child abuse material and require disclosure to health care patients when AI is used to communicate with them. He also signed a pair of laws aimed at protecting Hollywood performers from having AI steal their likenesses, similar to measures passed in Illinois and Tennessee this year.
Taken together, Newsom’s office called the bills “the most comprehensive legislative package in the nation on this emerging industry.”
“It’s easy to say, ‘Oh man, here are a couple of examples of ambitious bills that didn’t pass this year,’ as some sort of failure of states to act on AI,” Gedye said. “But they passed a lot of bills.”
There is already an effort underway to export two of California’s new laws to other states in 2025. One requires that generative AI developers publicly disclose information about the data used to train their models. The other mandates that developers make available a detection tool that allows users to determine if content was created or altered by AI.
The Seattle-based Transparency Coalition, a new AI safety nonprofit that supported both bills’ passage, says it is crafting model legislation based on each of California’s new laws and identifying potential bill authors in more than a half-dozen states, including Washington State, to sponsor the measures next year.
“We need to build trust in what’s going into these models and we need to build trust in what’s real and what’s not real,” said Rob Eleveld, Transparency Coalition’s co-founder. “These are reasonable guardrails.”
A recent report from the Future of Privacy Forum on the state of AI legislation concluded that state lawmakers are “primarily focused” on ensuring AI systems that make consequential decisions about people’s lives — such as their employment or access to health care — do not discriminate.
That is the goal of Colorado’s first-in-the-nation comprehensive AI regulation law, which requires developers and deployers of high-risk systems to take “reasonable care” to ensure systems do not discriminate. It is likely to become a model for as many as a dozen states next year, according to lawmakers who are participating in an informal, bipartisan multistate AI workgroup that has attracted more than 200 legislators, staff and other officials from 47 states.
Connecticut and Texas will be among the key states to watch next year. Connecticut Sen. James Maroney (D) is retooling a bill that failed this year that inspired Colorado’s law. In Texas, Rep. Giovanni Capriglione (R), who chairs a select committee on AI, is preparing legislation that could become a model for red states.
While many state legislators are taking a risk-based approach to AI regulation, New York Assemblymember Alex Bores (D), who has a degree in computer science, is approaching the issue from a product liability perspective. He is working on legislation for 2025 to hold AI companies strictly liable if their systems go awry.
“If you are developing a particularly risky technology … then you are going to be responsible for how that ends up used in the world, whether or not you are the one directly using it,” Bores told Pluribus News.
Industry groups generally prefer a federal solution for tech policy, rather than a state-by-state patchwork. But they appear resigned to the fact that states are going to act first and that opposing those efforts is futile.
“If legislators are saying they’re going to do this, we’re going to be there to help them get it right,” Albright said. “We’ve been encouraging policymakers to talk to each other and seek a common approach that’s workable.”