Disruption

AI bills poised to boom again in ’25

State lawmakers introduced 500 more AI-related bills this year than in 2023.
The ChatGPT app is displayed on an iPhone in New York, May 18, 2023. (AP Photo/Richard Drew, File)

Five hundred more artificial intelligence-related bills were introduced this year in state legislatures than in 2023, according to a software industry lobbying group’s new analysis.

BSA The Software Alliance, whose members include major players in the AI arena such as Microsoft, OpenAI and Workday, tracked 693 AI bills across 45 states, including 113 that were enacted. By comparison, 191 AI bills were introduced in states in 2023.

With the technology continuing to gain steam among consumers and legislators increasingly plugged in to the issue, the group says the trend is likely to carry over into 2025.

“What we’ve seen is a wave of AI legislation start to move its way through the states,” said Craig Albright, BSA’s senior vice president of U.S. government relations. “Our expectation would be that the wave would continue to build.”

Albright compared the AI legislation push to previous examples of states advancing tech regulation, such as data breach and data privacy laws.

The AI bills generally fit into four categories, according to the analysis: regulation of high-risk AI; protections against digital replicas; restrictions on deepfakes; and rules of the road for government deployment of AI.

Albright said BSA has focused most on legislation to address algorithmic discrimination by AI systems that make important decisions about people’s lives, such as whether someone is hired for a job or qualifies for a loan.

Colorado lawmakers this year passed a first-in-the-nation comprehensive AI discrimination law, and similar efforts are expected in a dozen or more states next year, according to members of a bipartisan legislative AI working group.

While BSA supports the emerging risk-based approach to AI regulation, there is a countervailing push to apply a product liability framework to AI systems. That approach is the focus of New York Assemblymember Alex Bores (D) as he prepares legislation for 2025, and the Seattle-based Transparency Coalition, an AI safety nonprofit, is advocating for it as well.

“We believe there should be a duty of care required in deploying AI systems,” said Rob Eleveld, the coalition’s co-founder. “The best way to introduce duty of care is a long-tested product liability framework.”

BSA pushed back on that idea, saying what should come first is a focus on high-risk uses and a requirement that companies mitigate those risks.

“Then governments can identify how existing law provides for liability, and what gaps should be addressed through additional action,” said Aaron Cooper, BSA’s senior vice president for global affairs.

The tension over how best to corral AI without stifling innovation highlights the fact that AI regulation is still in its infancy.

Asked to predict what 2025 will bring, Albright said it will be an all-hands-on-deck year.

“We are going to sleep over the holidays because we will get less of it when we hit January,” he said.