A bipartisan working group of state legislators studying how to regulate artificial intelligence has swelled to more than 200 members from at least 45 states.
The Future of Privacy Forum, a Washington, D.C.-based think tank, formally announced the working group’s existence on Monday and launched a dedicated web page to share updates.
Connecticut Sen. James Maroney (D) first told Pluribus News of his plans to convene the group in May 2023. An initial cohort of more than 60 lawmakers from more than half the states began meeting that September.
Since then, the working group has more than doubled in size as state lawmakers’ interest in AI and how to establish guardrails without stifling innovation continues to grow.
“People are excited to work together,” Maroney said Monday. “When you work together and you include more voices, you come up with a better product.”
Maroney chairs the working group’s bipartisan steering committee, which includes lawmakers from Alaska, Colorado, Florida, Maryland, Minnesota, New York, Texas and Virginia.
The Future of Privacy Forum, which gets financial support from corporations and foundations, is serving as the group’s convener.
“As a forum that brings together industry, academics, consumer advocates, and civil society to discuss emerging technologies and privacy protections, FPF is uniquely positioned to support the group’s mission of promoting the safe and equitable use of AI,” Tatiana Rice, the Future of Privacy Forum’s deputy director for U.S. legislation, said in a statement.
The group’s meetings toggle between briefings from AI experts and closed sessions with lawmakers “to have candid discussions about issues, approaches, and challenges,” according to the Future of Privacy Forum.
Besides state lawmakers, the working group is also open to legislative staff and other state-level public officials. It meets bimonthly.
The rapid growth of the working group’s membership reflects the urgency state lawmakers feel to act to regulate AI in the absence of comprehensive federal laws.
This year, California, Colorado, Illinois and Utah were among the early adopters of AI regulation, while dozens more states moved to regulate AI-generated deepfakes.
Next year is expected to bring an even more intensive round of AI regulation in the states, including bills modeled on Colorado’s first-in-the-nation law to regulate high-risk AI tools used to make consequential decisions about people’s lives.
Of particular concern, Maroney said, is the potential for large language models to perpetuate discrimination. He cited a recent Lehigh University study that found Black mortgage applicants were more likely to be turned down for a home loan than white applicants despite having identical financial profiles.
“We’re seeing examples of age discrimination and racial discrimination and other forms of discrimination being picked up [by these models], and so we want to make sure that we aren’t going to be perpetuating those biases,” Maroney said.
Maroney attempted to pass a comprehensive AI regulation bill this year. It failed but became the model for Colorado’s new law. Maroney plans to introduce a revised version of his bill for the 2025 session and expects lawmakers in several other states to introduce similar legislation.
Maroney previously told Pluribus News that the AI working group grew out of earlier efforts by states to pass comprehensive data privacy laws. He said lawmakers have heard “loud and clear” from the tech industry that it doesn’t like a patchwork of state laws.
By collaborating early on, he said, states can coalesce around concepts and develop common definitions while still allowing lawmakers to tailor legislation for their individual states.
Separately, the National Conference of State Legislatures has a task force on AI, cybersecurity and privacy, which last year produced a primer on state regulation of AI. Several members of that task force also participate in the working group.