Utah legislators this year established a first-in-the-nation Office of Artificial Intelligence Policy with a mission to foster innovation, protect consumers, and observe and learn how the technology works.
The approach has inspired model legislation from the conservative American Legislative Exchange Council.
The office has two primary tools to achieve its goals: a “learning lab” approach that allows for intensive study of key issues, which can then lead to legislative recommendations; and regulatory mitigation, which allows the office to enter into temporary agreements with AI innovators to remove legal or regulatory barriers.
The office recently completed a study of the implications of using AI to assist with delivery of mental health services and is proposing draft legislation for the 2025 session.
The AI office’s director, Zach Boyd, a network and data science researcher in the mathematics department at Brigham Young University, recently spoke with Pluribus News about his role and how Utah is approaching AI innovation and regulation.
Pluribus News: We’re seeing a lot of efforts, especially in blue states, to regulate high-risk, AI-backed decision-making systems to get ahead of the potential for algorithmic discrimination in areas such as housing, health care and employment. How would you compare Utah’s approach and the establishment of this first-in-the-nation Office of AI Policy against efforts to pass comprehensive regulation laws in other states?
Zach Boyd: I would say that there are a few fundamental differences in outlook here. Some states are interested in turning AI into basically a regulated industry, which I think is probably not the right thing to do. We think that’s premature and kind of overreach at this stage.
We’re not looking to introduce comprehensive regulation; we’re looking to be agile. Sometimes we do want to stop certain business practices that seem harmful. We’re certainly wary of what happened with social media — where it got so big that now it’s really, really hard to govern. So, we want to be proactive and intervene at the right time.
But other times we view our role as getting rid of obstacles to deployers. I think that’s the most important difference between Utah’s approach and other states.
PN: Lawmakers in other states make the case that AI is already being used to screen résumés, set rent rates, perhaps also in health care settings. They say the horse is out of the barn and if they don’t move quickly, they’ll never catch up.
Boyd: I’m sympathetic to that, absolutely. I think in many cases existing laws are fairly adequate — there are laws against employment discrimination. So, it’s not going into a totally unregulated realm. The places they’re trying to regulate are already regulated.
I’m a scientist, and I very much sympathize with the concern of killing the baby in the cradle. [AI] is out there, but it’s also very immature compared to what it’s becoming. I just think the measured approach is to do what our lab is going to do, which is be agile but also take your time to really get the details right as we go.
PN: You have just completed your first intensive study of how AI could be used to improve mental health access and treatment. What did you learn?
Boyd: We came back with three problems that we saw developing in this industry.
One was a lack of trust around the use of data and algorithms, which was hindering legitimate businesses and harming consumers. Two was the lack of clarity around how licensed professionals could use this technology as part of their practice. And the third was the lack of clarity around direct-to-consumer products that were overlapping with licensed scope of practice of mental health therapists.
PN: What are you recommending the legislature do?
Boyd: For the lack of trust, we recommended some enhanced consumer protections. These are pretty basic: restrictions on the sale of data from our most intimate conversations; restrictions on targeting of advertisements and disclosure requirements around advertising; and then informed consent.
We don’t think new laws are needed in the licensed professional realm. We think that the Department of Occupational and Professional Licensure is very happy to work with us to develop a guidance document and formal rules around the deployment of AI in these professional contexts.
We [also] recommended that the state clarify the kinds of liabilities that these companies can be subject to, [and create] a regulatory safe harbor that encodes the laundry list of best practices that the smartest people we talked to said you would want to deploy these kinds of technologies responsibly. If you can document that you’ve done those things, you get a presumption of non-negligence.
PN: Why was mental health the first issue you tackled?
Boyd: We wanted something that was the right size to bite off before the legislative session. We’re also learning to be regulators, and so we wanted time to do something that was nice and narrow. We didn’t want to go too broad and govern hundreds of industries that we don’t really understand.
Also, it’s just an important issue. If you’re going to pick one high-risk application where companies are making incursions, mental health is one of them.
PN: Do you see this as a rolling process where each year you tackle a couple of areas of interest and then make recommendations to the legislature?
Boyd: We’re going to be moving faster than the scale of years. We’re considering a general safe harbor for AI developers that are meeting responsible standards; that might take a little more time. If we do a comprehensive privacy overhaul, that would be an incredibly complex task.
We’re looking to be as agile as possible, but I think a typical learning agenda might be on the scale of two to four months. Taking [our] time but also realizing if you’re taking a year and a half to develop your recommendations, the industry has already moved on anyway.
The governor has said that he’s open to tackling some of these issues in interim [legislative] sessions, which tend to happen a few times a year in Utah. We’re still figuring out how we’re going to navigate that.
PN: On the innovation side, every state is talking about encouraging, recruiting, promoting AI as an industry. How do you see this first-in-the-nation office helping to make Utah an attractive place to set up shop and create AI companies and technology?
Boyd: We want people in Utah to experience the benefits of AI first, while other governments are still trying to figure out their path. We can make it OK for you to practice your business in the state over the course of a few days, not over the course of a year of lobbying. I think that’s an attractive proposition.
PN: Have any other states inquired about this approach?
Boyd: Yes, this is getting a lot of attention. There’s a chance several more of these offices will pop up over the next year. I think, especially, the regulatory mitigation mechanism is inspiring to some legislators as a way to responsibly navigate [innovation].
It’s not just a pro-business stance. We have a mandate to design the regulatory mitigation in such a way that we’re going to learn from what these businesses are doing. One of the benefits is for the state to observe what they’re doing. Eventually the regulatory mitigation times out and we tell the legislature what we think a permanent solution would be. We get to make data-driven decisions based on actual outcomes in our state.
PN: Anything else you’d like to add?
Boyd: It’s been fantastic to work with a stakeholder-driven model. We are a small state agency, and I think the states are always going to have [the] problem of not having the expertise that exists at the federal level in terms of big-budget agencies.
But I think being first has been really inspiring to the [AI] community, and we’ve seen so many super smart people locally, nationally, internationally come out of the woodwork to contribute.
We’re way overpowered compared to most state agencies, just because of the people wanting to participate and contribute their expertise without a fee, which is amazing.