The chair of the California Assembly’s privacy committee will try for a third time to regulate AI-powered automated decision-making systems with the goal of ensuring they do not discriminate.
Assemblymember Rebecca Bauer-Kahan (D) announced her latest bill Thursday, flanked by supporters from the tech accountability and labor sectors.
“As our world moves to more consequential decisions in our life being made by AI tools, by algorithmic decision-makers instead of humans, it seems common sense to me that we move to a world where we ensure that those tools don’t discriminate,” Bauer-Kahan said.
A draft of the revised bill was not immediately available.
Bauer-Kahan has tried to pass automated decision-making legislation since 2023. Her bill last year earned early support from Microsoft and Workday but ultimately lost their backing as the measure was amended. Bauer-Kahan later pulled the bill after the Senate pared it back to cover only employment-related discrimination.
Bauer-Kahan said that when she first introduced the legislation, she hoped it would be a partnership between those who develop AI tools and those who are subject to them, and that the bill would pass easily. That didn’t happen.
“The difference today is that we’re doing this in partnership with civil society,” Bauer-Kahan said before introducing representatives of the Service Employees International Union and the TechEquity Collaborative.
“It is critically important that the people who experience these decisions every day are at the table, that they are being heard,” Bauer-Kahan said.
Bauer-Kahan has said legislation is needed even though California’s privacy agency is working to finalize rules governing automated decision-making tools. She told Pluribus News last year that she did not think the rules were robust enough.
The California bill is part of a larger effort across multiple states to pass laws governing the development and deployment of AI systems used to make consequential decisions about people’s access to housing, education, health care and other critical services.
Bauer-Kahan said that when she first introduced her bill, it was a first-in-the-nation proposal. Since then, Colorado has enacted the nation’s first AI anti-discrimination law governing the development and deployment of high-risk automated decision-making systems. Similar bills have been introduced in at least eight other states this year as momentum builds to establish guardrails around the fast-deploying technology.
Sponsors of the bills worry that the algorithms behind those systems could discriminate and, therefore, should be tested before they are deployed. Most of the measures also include transparency requirements so individuals know when AI is working in the background, and allow for appeals of adverse decisions.
Bauer-Kahan said most Fortune 500 companies now use AI to screen applicants for jobs. Automated tools, which she called “legacy AI,” are also used in sectors such as housing, health care and lending.
“These algorithms, left unchecked, can produce biased results, making everyday people vulnerable to automated discrimination without them even knowing about it,” said Samantha Gordon, TechEquity’s chief program officer.
The state-level push to regulate AI developers and deployers has brought key industry players including Amazon, Google, Microsoft and Workday to the negotiating table. But an opposition campaign led by venture capital firm Andreessen Horowitz and free-market advocates, who warn that regulating AI developers will stifle innovation, has been gaining steam recently.
Bauer-Kahan said she still hopes to work with the tech industry on the details of her bill. She added that President Trump’s recent repeal of a Biden-era executive order on AI puts more of an onus on state policymakers to regulate the industry.
“We want our tech partners to be our partners in a future free of bias, but what we believe is that we need some guardrails to ensure that happens,” she said.