Disruption

Q&A: The Virginia lawmaker behind AI anti-discrimination legislation

‘We are focusing first on developers.’
Virginia Del. Michelle Lopes Maldonado (D). Photo credit: Virginia General Assembly

For the second year in a row, Virginia Del. Michelle Lopes Maldonado (D) has introduced a sweeping artificial intelligence bill aimed at ensuring that AI models do not discriminate when used to help make consequential decisions about people’s lives in areas such as housing, healthcare and employment. The measure is part of a package of AI-related bills that Maldonado and her colleagues hope to advance this year.

Maldonado, an attorney and consultant who has worked in the tech industry, was one of the first lawmakers to file this type of comprehensive legislation last year; it has since become a model for other states.

In recent months, Maldonado says she has worked with industry, civil society and labor groups to refine her bill with the goal of getting it across the finish line during Virginia’s lightning-fast 45-day session. The legislation has been endorsed in concept by the legislative Joint Commission on Technology and Science.

Maldonado, founding chair of the Virginia Technology & Innovation Caucus, spoke with Pluribus News about the bill, the balance she is trying to strike and what she thinks its chances are this year. The interview below has been edited for length and clarity.

Pluribus News: The name of this bill is the “High-Risk Artificial Intelligence Developer and Deployer Act.” What is the focus?

Michelle Lopes Maldonado: We are focusing first on developers. The first step, the first layer is making sure developers do this in a responsible way.

In tech, it’s standard practice to do quality assurance on your technology to make sure that it is running properly, that it is producing the appropriate output. We’re asking them to create a process to assess whether the algorithms they’re using are marginalizing people or creating some kind of disparate impact. That’s critically important.

The next level is when they give [their product] to a deployer. If [the deployer makes] any substantial modifications [to the AI model], they take on the responsibilities of a developer by also conducting an impact assessment to make sure that any of those modifications did not change what was happening.

PN: What are you trying to guard against?

MM: We’ve already seen the harms take place. We have seen women or people of color stripped out of the [algorithmic] output that is served up to the human resources folks to determine which candidates [for a job] are viable. We have seen it being used in housing lotteries where there are unintended consequences. So, we know the harms already exist.

What we’re trying to do is make sure that when you create these [AI screening systems] that you are running a diagnostic, if you will, to make sure you can look at the output and say, “Oh my gosh, there are no x or y in this output.” It doesn’t mean that that’s wrong, it means that you need to go and figure out why to see if it makes sense. 

PN: Some say the cat is out of the bag, that this technology is already deployed and the focus instead should be on holding companies accountable if their products discriminate under existing anti-discrimination laws. Why isn’t that sufficient?

MM: We have a bad habit of thinking about things in binary terms. I don’t subscribe to the argument that it’s either-or. I think it’s both-and.

PN: Your point is that this technology, and its potential to do harm, is such that regulating it on the front end rather than just being prepared to penalize on the backend makes sense?

MM: We have to be very responsible with this technology and here’s the reason why we know we have to take action on the front end: We have seen the consequences of not taking action on social media. We cannot afford to do that [again]. We know what it looks like when we try to wait on the back end to course correct. We should be working in parallel as the technology is developed and evolves.

PN: Why not take more of a product liability approach, which is being advocated by groups like the Center for Humane Technology?

MM: The technology is too new to take a liability approach. Making sure that we are doing human-centric, trustworthy and responsible AI up front before you release it is the right thing to do for humanity.

PN: Virginia has a very short legislative session. What are the chances of getting this bill across the finish line and do you think Gov. Glenn Youngkin (R) would sign it?

MM: I’ve been in conversation with fellow legislators to help them understand the bill, so I think we’ll be okay in that sense. I can’t speak to what the governor will or won’t do. I did hear [that] he had heard some concerns from industry, but those are the folks I’m [already] working with, and we are working out those issues. 

I am hopeful that he will see that it is important for Virginia to continue to be a leader [in tech policy]. We were the second state to pass a data privacy law, and we should be leading on a number of these issues. It is my hope that he will see that.

PN: A criticism of Virginia’s privacy law was that it was too industry-friendly. Does your bill run the risk of that sort of criticism too?

MM: Nobody is 100% happy with this bill. Because we don’t have a private right of action [allowing individuals to sue to enforce the law], people will always say that’s too industry friendly. But I have worked really hard to incorporate other items [important to civil society and consumer advocates].

PN: What do you say to critics who worry this type of regulation could stifle innovation and harm Virginia’s AI competitiveness? 

MM: This isn’t telling people they can or cannot use artificial intelligence. All it is saying is you need to check before you make it public to make sure that it’s operating properly.

We have a history in this country — whether it’s financial institutions and redlining or our housing industry with people having access to rental properties — of [discrimination] happening because people don’t check on the front end to make sure that our process is equitable and fair.

This is not a mechanism to stifle innovation because we’re not saying you can’t do something. We’re saying here’s a piece to plug into your quality assurance process. That should not ever be grounds for stifling innovation or competition. 

PN: You serve on the lawmaker steering committee of the Multistate AI Policymaker Working Group, a bipartisan forum for lawmakers to learn about and work on AI policy. What do you expect will come out of the working group this year?

MM: We recognize that every state operates differently, yet we have very similar concerns. I believe that in 2025 you will see a number of states bringing forward similar [algorithmic discrimination] legislation. 

The next step is we’re looking at other issues where we might want to create model legislation. I don’t think that’s far off. 

PN: You’re doing this in large part because Congress isn’t?

MM: That’s exactly why we’re doing it. We wish we didn’t have to because this is a national issue that our federal government should be taking the lead on, but they are not. 

PN: Anything more that you want people to know or understand about this effort?

MM: There’s a lot of discomfort around whether we regulate something like this. That is not a reason to not at least provide some baseline guardrails that help protect us as technology evolves over time. The guardrails have to be breathable and flexible, but not so [flexible] that they don’t hold folks accountable.