Disruption

Conn. Senate Dems make AI bill a 2025 priority

Sen. James Maroney’s comprehensive legislation died this year after a veto threat.
Connecticut Sen. James Maroney (D) explains a far-reaching bill that attempts to regulate artificial intelligence during a debate in the state Senate in Hartford, Conn. on Wednesday, April 24, 2024. (AP Photo/Susan Haigh)

Connecticut Senate Democrats announced Friday that a comprehensive artificial intelligence regulation bill authored by Sen. James Maroney (D) will be a caucus priority in 2025, after a similar bill died this year under a veto threat.

Separately, Maroney said in an interview Friday that a bipartisan multistate working group will unveil a model AI regulation bill early next month. That will be followed by a Jan. 13 live-streamed public forum to receive feedback on both that bill and a proposal from Texas Rep. Giovanni Capriglione (R) that could become a model for red states.

These latest developments add to the growing momentum for AI legislation in the states next year. Earlier this month, more than 60 lawmakers from 32 states published an op-ed pledging to “craft meaningful state-level AI laws.”

Colorado passed a first-in-the-nation comprehensive AI regulation law this year. The updated Connecticut bill will focus on transparency and accountability, worker training, and penalties for unauthorized intimate images, according to the Senate Democrats’ announcement. 

“It is without a question we need to be next in passing legislation that will work to fight digital discrimination,” Maroney said in a statement. “As AI continues to evolve, it’s crucial that we implement thoughtful regulations to ensure its development aligns with ethical standards, safeguards privacy, and minimizes potential harm.”

Maroney’s bill would require developers and deployers of high-risk AI systems to take “reasonable care” to protect consumers from algorithmic discrimination in areas such as employment, housing, lending and health care. It would place a series of obligations on developers and deployers to disclose information about their technologies, conduct impact assessments, and implement risk management policies.

Deployers of high-risk AI systems would have to notify consumers when AI is being used to make decisions about them, provide an explanation of any adverse decision, and give them an opportunity to appeal. Synthetic digital content created by AI would need to be labeled. And the bill would update the revenge porn statute to include nonconsensual nude images made using generative AI tools. 

The bill would also create retraining opportunities to prepare Connecticut residents to work in the AI field. “We want to do massive outreach to upskill,” Maroney said.

The proposed law’s requirements would take effect on Oct. 1, 2026, and be enforced by the state attorney general, according to Maroney.

The office of Gov. Ned Lamont (D), whose veto threat torpedoed Maroney’s AI bill this year, did not have an immediate comment on the new legislation. Maroney said he has spoken with members of the governor’s staff about his bill for next year. He said he’s “hopeful that we’ll arrive at a consensus about how best to protect Connecticut residents while also incentivizing innovation.”

Maroney said most of the updates to his bill are intended to clarify its scope. One key change is an exemption from some of the requirements for deployers who use off-the-shelf AI systems.

A draft of the bill has not yet been made public.

Maroney, who chairs the General Law Committee, is a national leader on AI regulation. He is vice chair of the National Conference of State Legislatures’ task force on AI and was a key figure in launching the multistate working group, where he serves on the legislative steering committee. Colorado’s 2024 AI law is based on Maroney’s bill.

Comprehensive AI regulation bills are expected in at least a dozen states in 2025. Maryland Sen. Katie Fry Hester (D) told Pluribus News she is preparing five AI-related bills for next year, including a comprehensive measure.

Other bills will address election-related and intimate deepfakes. Hester also wants to pass a first-in-the-nation law expanding Maryland’s forgery statute to cover the unauthorized distribution of a deepfake depicting another person. She cited an AI-generated audio recording from earlier this year that purported to capture a Maryland school principal making racist comments.

“This goes to the basic values statement — it’s not acceptable … to put it out there and purport that it’s actually happening,” Hester said.

Her fifth bill will address AI’s use in schools. 

State-level efforts to regulate AI have brought both cooperation and pushback from industry. At a recent panel discussion at the Council of State Governments conference in New Orleans, Matt Perault of the Silicon Valley venture capital firm Andreessen Horowitz warned that regulation could stifle AI startups.

Hester said lawmakers are seeking to balance AI regulation with encouraging innovation.

“It’s really important that we have the best AI in the world, in the United States, for national security,” Hester said. “Let’s also protect our people so it doesn’t run over us.”