California Assemblymember Rebecca Bauer-Kahan (D) is preparing to again introduce legislation to regulate the use of automated artificial intelligence systems in areas such as housing, employment and lending to ensure those systems don’t discriminate.
The Privacy and Consumer Protection Committee chair’s first attempt was in 2023. Bauer-Kahan reworked the bill and reintroduced it this year with support from Microsoft and Workday, but she ultimately pulled it after it was amended in the Senate to only cover employment-related discrimination.
Bauer-Kahan spoke with Pluribus News about her revised bill for next year, the fight against algorithmic discrimination, her philosophy on AI regulation, and the role she sees California playing in the absence of congressional action. The interview below was edited for length and clarity.
Pluribus News: It seemed like your automated decision-making bill had some momentum this year. What happened in the end, and what are you planning for 2025?
Rebecca Bauer-Kahan: It’s critically important that we get it right. There were some definitions that we were still trying to nail down at the 11th hour despite two years of work. Part of the difficulty had been that folks didn’t want to engage with us in a constructive way until they saw the momentum.
Once we started to get that engagement, we really wanted to get it right and felt like it made sense to give it another year. We know the rest of the country [watches California], and the last thing we wanted to do was do a messy bill that needed a lot of fixing. That’s not how I do policy. We will be back with a cleaned-up version that we are optimistic more people will be able to get behind.
PN: Can you give a sense of what will be different?
Bauer-Kahan: Clarity around developer-deployer responsibility. That’s something we really continue to home in on: ensuring that the right party is responsible for the testing, such that we have capable entities doing the testing so it’s meaningful.
PN: You had Microsoft and Workday support at the beginning of last year. They backed off after the amendments. But some civil society groups were pleased with the changes you made. Where do things stand in trying to build a coalition of support?
Bauer-Kahan: If everybody’s a little happy and a little sad, I’ve done a good job of finding the middle. The changes I made were consistent with what I had said my intent was from the beginning. The fact that I lost business entities as a result of those changes was actually fascinating to me.
I continue to believe that if I’m some apartment owner and I’m using [an AI tenant screening] product off the shelf, I’m not capable of testing this tool. If I use it the way it was intended, I should be able to rely on the testing of the developers to ensure that I’m not using a biased tool.
PN: Are you still working with Workday and with Microsoft?
Bauer-Kahan: We continue to be open to everybody’s feedback, but I’m not ever going to make policy that I don’t think is most protective of the people I represent just to keep somebody on board.
PN: The director of Utah’s new state office of AI told me recently that in many cases existing anti-discrimination laws, such as those barring employment discrimination, are adequate for addressing potential algorithmic discrimination by AI systems, and that AI is not going into a completely unregulated realm.
Bauer-Kahan: He’s not wrong. If you discriminate in housing or lending, all of that is illegal. [But] when I looked about a year ago, courts were fairly divided on what that means for the developer and the deployer. So, if I use this tool and it discriminates in any of these settings, is the developer on the hook? Or is it still just the hiring manager that’s using that tool?
Do we want to set the standard for that? Do we want to decide what the role of the developer and the deployer is in a world where there used to be one actor and now there’s two?
I do think that the developer of the software has some responsibility to ensure there’s no bias. That’s the clarity we’re bringing to the law that I think is really important. It’s important not just because you should be preventing discrimination as much as possible, but because in the world today you have to discriminate against a lot of people usually before you’re found responsible and there’s a change. And that’s not the world I want to live in.
If we can live in a world where I can test a tool in advance and ensure that everyone has an equal shot at the opportunity that’s being provided, that’s a much more exciting world.
Private companies are already doing this in many cases. That’s why you saw Workday and Microsoft supporting this — because they’re doing this impact assessment work. Bringing everyone up to that standard to protect people and to ensure more diverse workforces or more diverse housing opportunities or lending opportunities is the world I want for California.
PN: Several California AI bills were enacted this year. Others such as the high-profile AI safety bill, SB 1047, were vetoed. What’s your broader sense of where California is in the iterative process of trying to get some guardrails around this emerging technology?
Bauer-Kahan: We’ve really started to lay the groundwork for what I think is good AI policy.
One of the things that nobody talks about — but I think is incredibly important and non-controversial — was defining AI in statute. Having sat on the privacy committee for six years and watched us not be able to move any AI legislation because nobody could agree on what AI was, putting a consistent definition into law really allows us to move forward in a consistent way.
We saw some of what I would call the low-hanging fruit passed — the election integrity issue, some of the sexually explicit materials updates — measures that are critically important to protecting society.
The question becomes how do we expand out from there in the ways that people are being harmed — bias being one of them, but there are definitely other areas.
We really started to lay the groundwork to treat AI as we treat other systems, which I think is really what needs to happen.
PN: The expectation is President-elect Trump will repeal President Biden’s AI executive order. How will that affect AI policymaking in states such as California?
Bauer-Kahan: There was a lot that was being done under the executive order, especially at the Department of Commerce as it related to transparency. But given the fact that it was an executive order, it only had so much reach, so I don’t know that it’s going to be game-changing. It remains the case that the states were, and will be, responsible for protecting society from AI as it expands.