Disruption

Absent federal action, states consider AI laws

States are assuming Congress won’t act on AI legislation.
The ChatGPT app is displayed on an iPhone in New York, May 18, 2023. (AP Photo/Richard Drew, File)

State lawmakers are considering new ways to regulate artificial intelligence on the assumption that Congress will drag its feet and out of concern that it would be dangerous to delay until the technology is more established.

The burgeoning focus on AI in the states builds upon ongoing efforts to pass comprehensive data privacy laws and signals a new normal in which state legislators are stepping into regulatory voids left by the federal government.  

“I have zero faith in the federal government actually doing anything [on AI] before the legislature in Texas meets in 2025 or in 2027,” said Texas state Rep. Giovanni Capriglione (R) at a Pluribus News Spotlight event on artificial intelligence on Wednesday.

“We have a Speaker of the House, so we’re way ahead of where the federal government is,” Capriglione added, referring to this week’s ouster of Speaker of the House Kevin McCarthy (R-Calif.).

Capriglione, the sponsor of this year’s Texas Data Privacy and Security Act, is among more than 90 state lawmakers from more than half the states who are meeting virtually this fall to learn more about AI. Connecticut state Sen. James Maroney (D), who shepherded his state’s data privacy law to passage last year, is leading the multistate AI working group.

“It’s something that we’ve learned from data privacy — that we do need to get started earlier — and I think it’s setting out some clear guardrails,” Maroney told the Pluribus audience. “It is incumbent upon the states to lead in the absence of federal legislation.”

State legislators say they do not want to hamper innovation, but also want to take steps to protect the public from the potential harms of AI, including deepfakes, algorithmic bias and trademark infringements.

“We’ve got to work quickly because technology moves at the speed of light and everything else moves at the speed of molasses in comparison,” said Virginia Del. Michelle Maldonado (D), an attorney who has worked in the tech industry and is also a member of the AI working group.

AI regulation in the states could take several different forms, according to a recent primer from the National Conference of State Legislatures. One option for lawmakers is to update existing regulations to incorporate AI. Another is to regulate AI as it applies to a specific sector, such as healthcare. A third is to write rules that cut across several sectors or are sector agnostic. Lawmakers must also decide whether to focus on government or commercial applications of AI, or both.

This year, Maroney shepherded legislation that sets new limits on state government use of AI and creates an AI task force. For next year, he’s teeing up legislation that he says would create “broad guardrails” for AI focused on accountability and transparency. But Maroney acknowledged that regulating AI will be a multi-year effort.

“This isn’t going to be a one and done type of legislation,” Maroney said.

Capriglione, who sponsored legislation this year to create a state AI advisory council, said he will likely prepare an AI legislative package for 2025 when the Texas legislature next meets in regular session. In the meantime, he said, there might be opportunities to work with state agencies to implement standards for AI.

“We waited on the privacy stuff … and in the process of waiting, hundreds of thousands of data points on every single citizen were collected,” Capriglione said. “It is time for us to start looking at this and planning ahead.”

Capriglione is also participating in the multistate AI working group.

Maldonado, who led the formation of a technology and innovation legislative caucus, said she sees opportunities for Virginia lawmakers next year to update the state’s 2021 comprehensive data privacy law and a 2022 law on the use of facial recognition by law enforcement. 

Maldonado also wants to identify what she called “high-risk” and “unacceptable risk” categories, such as health care and criminal justice, and prioritize AI regulations in those areas to prevent “disparate impact, unintended consequences and potential discrimination of groups.”

“Because those are places where the data and decision-making have such direct impact on the daily lives of people,” Maldonado said.

In Colorado, state Sen. Robert Rodriguez (D), the recently elected Senate Majority Leader, said he is especially concerned about the use of AI in employment decisions. But he cautioned that tech regulation is a heavy lift and admitted to some residual fatigue from having carried the Colorado Privacy Act to passage in 2021.

“I think throwing too many things at the kitchen sink is probably not the best idea,” Rodriguez said. “To try to do it all at once would be a humongous lift.”

Besides regulation, the panel of state lawmakers expressed an interest in developing a future AI workforce that can help the United States compete with other countries, like China, that are also racing to develop AI applications.

“If nothing else, that’s one thing we should all be able to agree on in all the states is how do we improve our workforce and get ready for this,” Capriglione said.

The lawmakers featured at the Pluribus News Spotlight emphasized the need for AI legislation to be flexible and said there are still many unknowns about AI and its potential to complete tasks that have historically been the domain of human minds.

While AI has been in development for decades, it burst onto the scene nearly a year ago with the debut of ChatGPT. That sent state and federal policymakers scrambling to figure out the implications of the technology and start looking at regulations. Several governors have recently issued executive orders on AI.

Already, tensions between industry and consumer advocates are emerging. The global software industry has encouraged policymakers to adopt a “bias risk assessment” approach to AI regulation while some consumer advocates have called for a “zero trust” approach. 

Rodriguez had a message for those on both sides of the emerging debate over AI regulation.  

“Government’s a slow-moving ship,” he said at the Pluribus event. “We’ll make some attempts. Don’t freak out, because at the end of the day we can always fix it.”