High-risk and generative artificial intelligence systems would be subject to broad regulation under legislation introduced this month in Vermont and Virginia that could establish templates for other states.
The bills would require AI developers to ensure their systems do not discriminate, as well as mandate that they conduct impact assessments to identify potential risks of harm. Deployers of high-risk AI systems would also have a duty to protect users from discrimination.
The legislative push comes amid intense interest among state legislators in understanding AI and beginning to establish guardrails for it, as the technology rapidly evolves and moves further into the mainstream.
“This is, so far, the first … comprehensive AI legislation [of 2024],” said Tatiana Rice, senior counsel at the Future of Privacy Forum.
By moving early, Vermont and Virginia could play a key role in establishing a foundational framework for broad AI regulation in the states. That is what happened with comprehensive data privacy, where legislation first introduced in Washington State became a model for other states.
“The first movers are going to be highly influential for other state legislatures for purposes of consistency and interoperability,” Rice said.
Rice compared the significance of the Vermont and Virginia bills to legislation introduced in California last year to regulate AI-backed automated decision-making tools. That measure did not pass, but Assemblymember Rebecca Bauer-Kahan (D) plans to reintroduce it this year. Similar legislation has been introduced this year in Washington State.
The proposed Vermont and Virginia laws would regulate high-risk AI systems that make “consequential decisions” related to topics such as employment, housing, health care, insurance, access to credit or criminal justice.
Notably, the bills also target newly emerging generative AI systems, such as ChatGPT, that can produce synthetic audio, video, text and graphics.
Virginia Del. Michelle Maldonado (D), a former tech lawyer, said her Artificial Intelligence Developer Act is designed “to be flexible enough to accommodate the evolution of the technology.”
“This is a framework that lets us say: ‘Whatever we do in this space, we do want to make sure we are balancing a business’s ability around innovation and competitiveness balanced with the protections of our citizens,’” Maldonado told Pluribus News.
Vermont Rep. Monique Priestley (D), who has a background in data management, has filed two AI regulation bills for the 2024 legislative session.
One largely mirrors the Virginia measure. The other is a broader consumer protection bill that would impose a duty of care standard on developers of “inherently dangerous artificial intelligence systems” to ensure that their products do not harm consumers.
Priestley plans to prioritize passage of the second bill and said Vermont is uniquely positioned to lead the nation on AI regulation.
“Why Vermont? I think in a lot of ways we’re small enough that we are nimble,” Priestley said. “We can be quick.”
Priestley drafted the second bill in conjunction with the Center for Humane Technology, a nonprofit focused on countering the negative effects of new technologies.
“The thinking behind this bill [has] been informed from a lot of what we’ve seen with social media,” said Camille Carlton, senior policy manager at CHT.
Carlton said the bill represents a risk-based approach to regulating AI and is organized around an AI Risk Management Framework issued last year by the National Institute of Standards and Technology.
“We want to make sure that we’re pushing responsibility onto the largest developers of high-risk products while protecting small businesses, innovators and consumers,” she said.
Carlton said CHT sought out lawmakers in Vermont to sponsor comprehensive AI regulation legislation because the state has a reputation as a leader in consumer rights laws. Vermont was also one of the first states to create an AI task force, in 2018. That was followed by 2022 legislation that created a Division of Artificial Intelligence within the state’s Agency of Digital Services, along with an AI advisory council.
“We’re a cool incubator,” Priestley said, pointing to Burlington as a top tech hub.
The tech industry is watching closely as AI bills emerge in statehouses.
BSA, a trade group that represents enterprise software developers, said it has identified more than 140 AI-related bills so far this year. It released an AI regulatory framework in 2021 that urged the adoption of laws that delineate between developers and deployers and require impact assessments for high-risk AI systems.
“We see aspects of these approaches in the Virginia and Vermont bills, which is a positive step, but we believe these bills can be further improved to ensure they work in practice,” Matt Lenz, senior director for state advocacy at BSA, said in a statement.
Comprehensive AI regulation bills are likely to appear in more states in the coming weeks.
Connecticut Sen. James Maroney (D) told Pluribus News he plans to formally introduce legislation next month, “because we won’t see full adoption of AI until people feel safer.”
Maroney convened a bipartisan working group of state legislators from around the country in the fall to learn about AI and come up with regulatory frameworks. Maroney also shared drafts of his legislation with Priestley and Maldonado as they were writing their bills.
“It’s important that we all work together because this isn’t a one-state issue,” Maroney said.