Disruption

States maintain lead on AI regulation in Trump era

Donald Trump’s election may even accelerate efforts beyond D.C. to set up guardrails for the technology.
President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. (AP Photo/Evan Vucci, File)

The state-led efforts to establish guardrails on artificial intelligence are likely to continue unabated when President-elect Trump takes office next year. They could even accelerate if Trump follows through on a promise to repeal President Biden’s executive order on AI.

States have taken the early lead on AI regulation in the absence of congressional action. And a bipartisan working group of state lawmakers has been meeting in anticipation of another round of AI legislating in 2025.

Those efforts could gain fresh urgency when Trump returns to the White House.

“For consumers worried about the everyday harms of AI — transparency, bias, problems with accuracy and more — the election just made state legislators’ efforts that much more important,” Grace Gedye, a policy analyst at Consumer Reports, said in a post-election statement.

How Trump will approach AI regulation is unclear. While he is expected to scrap Biden’s 2023 executive order, he has also expressed concerns about AI’s potential to “go rogue.”

During Trump’s first term, his administration launched a series of AI initiatives focused mostly on securing U.S. leadership in the technology, along with some trust- and safety-related efforts: the director of the Office of Management and Budget issued guidance for regulating AI, and Trump signed an executive order titled “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.”

Craig Albright, senior vice president for U.S. government relations at BSA The Software Alliance, an industry trade group, said it is unclear how AI fits into Trump’s priorities for a second term. But with Republicans in total control of Washington, Albright predicted there will be resistance to AI regulation at the federal level.

“That will certainly create incentives and pressure on states to act. But having said that, the states aren’t holding back,” Albright said. Next year was already shaping up as a busy one for AI legislation in the states, he said.

In a post-election memo, Adam Kovacevich, a self-described pro-tech Democrat who is CEO of the tech industry group Chamber of Progress, predicted that progressive groups focused on tech regulation will shift their attention to blue states.

“This will be especially true of AI policy,” Kovacevich wrote, noting that California enacted 17 AI-related laws this year. “Left-leaning ‘AI safety’ groups have already declared their intent to push those bills in other blue states next year.”

Among the AI bills California Gov. Gavin Newsom (D) signed this year were transparency requirements regarding the data used to train AI systems and a requirement that developers provide a tool that allows users to see if content was created or altered by AI.

The Seattle-based Transparency Coalition wants to export those laws to a half dozen more states in 2025. The nonprofit is also advocating for a “duty of care” approach to AI regulation, similar to how other consumer products are regulated for safety. Transparency Coalition Chairman Rob Eleveld said Trump’s election does not affect his strategy and that the organization’s work in the states is with “a 100% bipartisan lens.”

“We have taken a state-focused approach not because of which administration is in power in Washington, D.C., but because Big Tech is at this point the biggest lobbyist in D.C. and has that town wired with massive lobbying budgets,” Eleveld said in an email. “We have found state level officials much more willing to move out in front in protecting their citizens.”

Since the debut of ChatGPT two years ago introduced generative AI to the public, governors and state lawmakers have moved swiftly to both embrace and regulate the fast-moving technology.

Numerous governors have issued AI executive orders and states have created task forces to study AI. Legislators have passed laws to address specific harms such as deepfakes and broader risks including algorithmic discrimination.

Colorado led the way this year by passing the first comprehensive AI legislation in the country focused on high-risk AI systems. Lawmakers in a dozen more states could introduce versions of the Colorado law in 2025. It was based on legislation first introduced in Connecticut by Sen. James Maroney (D), who is retooling his bill for next year.

Maroney, in a text message to Pluribus News, said Trump’s election introduces “a new layer of uncertainty in the role that the federal government will play” in AI governance.

“It was always iffy at best if there would be federal legislation,” which is why state lawmakers have taken the lead, Maroney said.

While Maroney’s bill has emerged as a model for blue states, Texas Rep. Giovanni Capriglione (R) is teeing up legislation for next year that could become a red state template for AI regulation.

Maroney and Capriglione both serve on the steering committee of the bipartisan Multistate AI Policymaker Working Group that has brought together more than 200 state lawmakers from 45 states to study AI policy.

Even as states move to regulate AI, they are also competing to draw AI investment. This week, an AI task force in New Jersey delivered a series of recommendations to Gov. Phil Murphy (D) focused on the ethical deployment of AI as well as the economic opportunities it presents for the state.

“As AI technologies continue to expand and advance at an unprecedented pace, New Jersey remains at the forefront, building up the Garden State as a hub for innovation,” Murphy said in a statement announcing the report.