A push to pass sweeping artificial intelligence laws in several more states this year is facing headwinds, as the Trump administration prioritizes AI competition and some state policymakers worry it’s too soon to start regulating the technology.
At a global AI summit last month, Vice President J.D. Vance warned that “overregulation” could deter innovation. Silicon Valley venture capitalists and free market advocates have mounted a campaign against “a patchwork of state laws.”
The growing backlash has also ensnared a multi-state working group of lawmakers that has been meeting since 2023 to discuss ways states can regulate AI. Last week, the neutral facilitator of that group announced it was withdrawing in the face of “inaccurate claims” and “misperceptions” about its role.
Virginia is likely to provide the next test for the nascent effort to regulate AI. Gov. Glenn Youngkin (R) faces pressure to veto a Democrat-backed bill that would regulate high-risk AI systems used to make consequential decisions about people’s lives.
“Virginia should not be erecting new barriers as the Trump administration seeks to tear them down,” Steve DelBianco, president and CEO of industry group NetChoice, wrote in a veto request letter to Youngkin.
On the other side, consumer and labor advocates have criticized the Virginia legislation as riddled with “loopholes and exemptions.”
The Virginia bill is a narrower version of a first-in-the-nation Colorado law passed last year that seeks to protect consumers from algorithmic discrimination by automated decision-making systems. Gov. Jared Polis (D) signed that law “with reservations” and called on lawmakers to “significantly improve” it this year.
An interim task force’s attempts to come up with amendments to the Colorado law found some consensus, but also areas of “firm disagreement” that are now the subject of negotiations.
Tensions between Democratic lawmakers and a Democratic governor are also on display in Connecticut, where Sen. James Maroney (D) is trying for the second year in a row to pass comprehensive AI regulations. At a public hearing last week, a member of Gov. Ned Lamont’s (D) cabinet questioned whether the legislation is needed.
“I think we’re too early here,” Dan O’Keefe, commissioner of the Department of Economic and Community Development, told Maroney.
“A lot of people look back at social media and say we wish we had done something then,” Maroney responded.
Another critic said Maroney’s bill runs counter to “national AI policy goals.”
Maroney later described the hearing as “a little spicy.” A previous version of the bill died last year under threat of veto from Lamont.
Another bill whose fate is uncertain is the Texas Responsible AI Governance Act from Rep. Giovanni Capriglione (R), which has been viewed as a potential model for other red states, and potentially even for Congress.
Opponents have assailed the measure as a California or European Union-style bill that is at odds with Texas’s pro-business reputation. Capriglione has said his goal is to make the state “a hub for responsible technological advancement.”
Bills targeting high-risk AI systems and algorithmic discrimination have also been introduced this year in California, Hawaii, Illinois, Maryland, Massachusetts, Nebraska, New Mexico, New York, Oklahoma and Vermont.
Supporters of efforts to regulate AI at the state level downplay the mounting opposition and note that 2025 has brought more bills in more states. They also say they are more organized than last year.
“If there are stronger headwinds, I’d argue they are being offset by the stronger thrust being generated by labor, consumer and civil rights groups,” said Matthew Scherer, senior policy counsel at the Center for Democracy & Technology.
Scherer pointed to a coalition letter in support of Connecticut’s bill signed by nine groups, including the Connecticut AFL-CIO, Consumer Reports, the Electronic Privacy Information Center, the Hispanic Federation and the NAACP Connecticut State Conference.
The letter called the bill “a welcome step toward much-needed transparency and accountability” while also calling for changes to “strengthen the law and further protect Connecticut workers and consumers.”
Grace Gedye, a policy analyst at Consumer Reports who tracks AI legislation, said lawmakers in at least four states are considering AI regulations that are “considerably stronger” than what Colorado enacted last year.
“I consider that important progress in the right direction,” Gedye said.
Another group active in state-level AI regulation efforts, the Seattle-based Transparency Coalition, also remains bullish about the role of states, writing in a recent newsletter: “With Congress muted and corporate tech emboldened, state legislatures are now in charge of AI laws.”
The coalition said it’s working with legislators on 15 AI-related bills in 10 states. Those include an algorithmic discrimination bill from New Mexico Rep. Christine Chandler (D) that has been endorsed by Consumer Reports.
Chandler told Pluribus News that a federal solution would be best, but states can’t wait for Congress to act.
“If you allow algorithms to go unchecked … people will be harmed,” Chandler said, noting her state’s high percentage of Native American and Hispanic residents.
She called the criticism that state regulations will stifle innovation “a red herring … to scare legislators.”