Colo. leader rolls out bill to revise year-old AI law

Sen. Robert Rodriguez said it was informed by feedback from AI developers and deployers, public interest groups, and public officials.
Colorado Senate Majority Leader Robert Rodriguez (D). (Courtesy of Colorado Senate Democrats)

Colorado Senate Majority Leader Robert Rodriguez (D) is proposing sweeping revisions to his state’s first-in-the-nation comprehensive artificial intelligence regulation law that was passed last year.

Rodriguez introduced the much-anticipated bill late Monday, aiming to address concerns from Gov. Jared Polis (D) and industry leaders that the law could stifle innovation. The proposed changes, if adopted, would significantly narrow the existing law’s scope.

“We’ve taken a lot of feedback on this bill from the local venture capitalists, from the small mom and pop developers in the state of Colorado and tried to focus on what’s good for Colorado,” Rodriguez said at a Tuesday morning weekly meeting with reporters.

The Colorado AI Act, as last year’s law is known, aims to ensure that AI systems do not discriminate against people in high-risk areas like employment, housing and health care. Similar bills were introduced this year in more than a dozen states.

In signing the bill last May, Polis said he was doing so “with reservations” and urged lawmakers to “significantly improve” the law before key provisions took effect in 2026.

Over the legislative interim, Rodriguez chaired an AI task force that issued a report in February but failed to come to agreement on key points of contention. Rodriguez then convened a smaller team of negotiators drawn from industry and civil society to hammer out a compromise.

In a memo, Rodriguez said the new 36-page bill was informed by feedback from AI developers and deployers, public interest groups, and public officials. It dramatically scales back several of the provisions in the existing law while increasing the number of smaller businesses that are exempted.

One of the most significant concessions to the tech industry narrows the requirement that deployers of high-risk AI systems maintain a risk management plan, conduct annual impact assessments of the tool, and give consumers the right to appeal consequential decisions made about them. Under the new legislation, those mandates would apply only to systems that make consequential decisions without meaningful human review.

Another key change raises the employee threshold below which AI deployers are exempt from the law, from fewer than 50 full-time employees to fewer than 500. The threshold would ratchet down to 250 employees in 2028 and 100 in 2029. The new bill also includes a four-year “on-ramp” for smaller AI developers.

Other proposed changes would create additional protection for small businesses that use AI tools for hiring and reduce the opportunity for consumers to appeal adverse decisions. Several other provisions aim to shield AI companies from liability if they are proactive about addressing issues or make their source code publicly available.

For consumers, the legislation includes language to ensure that they receive meaningful information when AI is used to make consequential decisions about them. It would also require companies to specifically analyze whether a high-risk AI system is likely to run afoul of state and federal consumer protection, labor and privacy laws.

The effective date of the obligations on companies and the law’s enforcement would be delayed to 2027 to give more time to develop rules for the law’s implementation.

Despite efforts to “thread a needle” and address every issue, Rodriguez acknowledged that some in the business community feel the changes do not go far enough.

Meghan Pensyl, Business Software Alliance’s director of policy, said BSA appreciates Rodriguez’s efforts to refine the law but suggested more could be done.

“The amendments preserve key obligations that help address risks from AI and build trust in technology, while also making some of its provisions more workable in practice,” Pensyl said in a statement. “As Colorado lawmakers consider these amendments, they should consider further changes to ensure that obligations in the law reflect companies’ role in the AI supply chain.”

Consumer, labor and privacy advocates offered mixed reviews.

“The changes in the bill are tilted heavily towards placating tech companies and other industry groups,” said Matthew Scherer, senior policy counsel at the Center for Democracy & Technology. “But it could be worse, and I get that it’s hard for a state to withstand an onslaught from venture capital and Big Tech like Colorado has faced.”

Grace Gedye, a policy analyst at Consumer Reports, said the new bill represented “tough compromises for both sides,” but said consumers would benefit from a more detailed notice requirement about when AI is used to make high-stakes decisions about them.

Kara Williams, a law fellow at the Electronic Privacy Information Center, said: “It is unfortunate that these amendments weaken the law in key areas, such as by limiting the right to appeal and expanding exemptions, but ultimately, Coloradans are still better protected from AI harms with this law than without it.”

Time is short to pass the revisions; the Colorado legislature is scheduled to adjourn on May 7.

This story has been updated.