Disruption

Q&A: Nishant Shah, Maryland senior adviser for responsible AI

He discussed his role and the opportunities he sees for the state to use AI.
Nishant Shah, Maryland senior adviser for responsible AI, in an interview with Pluribus News.


Nishant Shah was appointed last year as Maryland’s first senior adviser for responsible artificial intelligence.

Gov. Wes Moore (D) issued an AI executive order last month that will guide Shah’s work, calling for the responsible and productive use of the rapidly emerging technology.

Shah recently spoke with Pluribus News about his role and the opportunities he sees for the state to use AI to improve service delivery and increase efficiency. This conversation was edited for length and clarity.

Pluribus News: Why don’t we start with a definition of responsible AI. What does that mean?

Nishant Shah: It’s a great question. It’s one of those questions where it’s sort of like the parable of the blind men and the elephant: what AI is, and what responsibility within AI is, depends on where you’re looking from. There are a lot of definitions; some are more philosophical, some are more technical. In terms of the state of Maryland and the way we’re looking at this, our goal really is to figure out a way to leverage AI that is safe and ethical and essentially adheres to a set of principles that we’ve adopted. We consider that approach responsible, and then productive, so that we get good at using AI over time and build our AI muscle.

PN: You previously worked at Meta and at Amazon. How is that work informing the work you are now doing for Gov. Moore in Maryland?

Shah: Quite a bit, in that the responsible AI team at Meta was charged with a similar set of things: You have a set of principles, and how do you ensure that responsibility, as defined by those principles, is baked in across the machine learning development lifecycle? That encompasses a lot of different things, from risk assessments before you deploy a technology to building out frameworks for questions like: Can you measure a harm if it’s coming, and can you mitigate it?

I was exposed to lots of different methodologies in the responsible AI space and got to connect with absolutely brilliant humans who are research scientists, lawyers, engineers and product managers across the entire AI spectrum. It was a fantastic grounding in how you take these really amorphous concepts, ground them into something real, and then actually build tools and mechanisms and processes that ensure you’re adhering to that set of principles.

PN: There’s a lot of conversation happening at the federal level and in the states about how to regulate AI and how to protect the citizenry from the potential downsides. Having worked in the private sector and now being responsible for thinking intentionally about how to deploy this for Maryland, how much should we trust the companies that are creating this technology?

Shah: There absolutely needs to be third-party oversight and government regulation. A lot of these companies would be among the first to tell you that. I think a lot of the disagreement comes down to exactly what gets regulated and how. From my perspective, there’s a lot of amazing guidance coming out from places like NIST (the National Institute of Standards and Technology) with its risk-management framework, the AI Bill of Rights, and the EU AI Act, which has a lot of super interesting reference documents.

Globally, there’s a lot of thought leadership, and natural experiments are happening in the wild as to what sorts of regulations work and what don’t. I think we’re really early in that process. But, yes, regulation is needed, and I think companies generally also want clarity about what the rules of the road are.

PN: States and state legislatures are already leaping into this. We’re seeing proposals around the country for comprehensive AI regulation, and certainly lots of bills to address deepfakes. Do you think the time is right for state legislatures to start building these guardrails, or is it too soon?

Shah: We’re working quite closely with the legislature as it figures out the next steps here: what should be regulated, what we should wait on, and how to ensure some of the best practices get a little more mature before codifying them into legislation. I won’t speak for our legislature, but I can say from my own personal perspective that data privacy is really important. Having bills that ensure it for consumers is one of those foundational things that isn’t necessarily entirely about AI but is the foundation on which AI can be built.

There are criminal-law sorts of approaches for things like deepfakes and revenge porn shared without consent, and those harms are having impacts on real people right now. I think legislators should act quickly on those, and I think they can start doing that now.

On internal use, within agencies and within government itself, there’s a lot of work happening to figure out what best practice is in different spaces. There’s a lot of guidance out there but much less on actually operationalizing it. I do believe there are some high-level things we can do on AI governance and internal use, and in Maryland we’re having those conversations.

PN: The governor recently issued an executive order on AI, calling for the responsible, ethical, beneficial and trustworthy deployment of AI in Maryland state government. Where do you begin with a mandate like that?

Shah: It’s a big one. The EO really sets out the task for us, and we’re now determining the best ways to execute on it. The key question is: What do these principles actually mean, fully realized and fleshed out? If we had confidence that fairness and equity issues were not a problem in our use of AI in the state, what set of things would we have done to yield that outcome?

That’s what we’re thinking through right now. The state already has lots of processes in place, and ideally we take these best practices and guidance documents and incorporate them into elements that already exist, so that we make this real much more quickly within the state.

PN: What do you think is the timeline for this comprehensive action plan and for operationalizing the state’s AI principles — is this a one-year, two-year, five-year process?

Shah: I think it’s a forever thing. This EO essentially builds momentum and gives us the set of things we need to do. Our next step is to say, ‘OK, for each principle, how long would it actually take to establish?’ You could imagine that for privacy, when you’re just starting, there’s a low level of maturity, and then medium and high levels that correlate to time and sophistication.

So, what does that roadmap and that strategy look like? And, in the meantime, folks want to adopt AI now, and so what are the low-risk places where we can be doing that without those more mature governance standards in place? Essentially, what are the stopgaps as we move forward with AI?

PN: Have you started inventorying state agencies to see how they’re already deploying AI?

Shah: That is part of one of the mandates in the executive order: to establish a canonical inventory. And then, what is the set of information we need to collect on each solution we adopt to ensure explainability, transparency and proper governance?
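
To make that concrete, here is a purely illustrative sketch in Python of the kind of record such an inventory might hold. Every field name and value below is a hypothetical assumption for the example, not the schema Maryland’s executive order prescribes:

```python
# Hypothetical sketch of a single AI-inventory record. The field names
# and values are illustrative assumptions, not the EO's actual schema.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    agency: str             # agency that operates the system
    name: str               # what the system is called
    purpose: str            # decision or service it supports
    vendor: str             # who built or supplies it
    rights_impacting: bool  # could it affect residents' rights or safety?
    human_in_loop: bool     # does a person review its outputs?

# Example entry (all details hypothetical).
record = AISystemRecord(
    agency="Department of Example Services",
    name="Benefits Triage Assistant",
    purpose="Route incoming applications to caseworkers",
    vendor="ExampleVendor Inc.",
    rights_impacting=True,
    human_in_loop=True,
)
print(record)
```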

PN: Have any use cases emerged yet?

Shah: The way we think about this is: What are proven mechanisms that aren’t safety- or rights-impacting and that would have high utility right now? One of these is translation. We have a diverse set of residents in Maryland [who] speak many different languages. Can we improve the accessibility of the services we provide in different languages using machine-translation tools with a high degree of accuracy?
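
As a hedged illustration of that use case (not the state’s actual tooling), translating a service notice with an open-source model might look like the sketch below, which assumes the Hugging Face transformers library and the Helsinki-NLP English-to-Spanish model:

```python
# Illustrative sketch only, not Maryland's actual tooling. Assumes the
# `transformers` library and the open-source Helsinki-NLP en->es model.
from transformers import pipeline

# Load a pretrained English-to-Spanish translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

notice = "Your benefits application has been received and is under review."

# The pipeline returns a list of dicts with a 'translation_text' key.
print(translator(notice)[0]["translation_text"])
```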

Accessibility is another: helping residents cut through bureaucracy. Chatbots are one approach to that. Can you leverage a chatbot on certain websites that have really complicated underlying data, where that data is perceived to be clean and accurate? If so, can we create some sort of natural language interface that makes it much easier for residents to get the information they need from state websites?
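
As a toy sketch of that idea (the keywords and answers are invented for the example, not real state content), the core of such an interface is matching a resident’s question against clean, structured data:

```python
# Toy sketch of a natural-language lookup over clean, structured data.
# The keywords and answers are hypothetical examples, not state content.
import re

FAQ = {
    frozenset({"renew", "license"}):
        "Driver's licenses can be renewed online through the MVA portal.",
    frozenset({"apply", "benefits"}):
        "Benefits applications are handled by the Department of Human Services.",
    frozenset({"register", "business"}):
        "New businesses register with the Department of Assessments and Taxation.",
}

def answer(question: str) -> str:
    """Return the FAQ entry whose keywords best overlap the question."""
    words = set(re.findall(r"[a-z']+", question.lower()))
    best = max(FAQ, key=lambda keys: len(keys & words))
    if not best & words:
        return "Sorry, I couldn't find that. Try the agency directory."
    return FAQ[best]

print(answer("How do I renew my driver's license?"))
```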

PN: Longer term, do you see an opportunity for AI to help with more consequential decisions like determining if somebody’s eligible for social services?

Shah: We have to be very careful there. It’s the sort of thing where only once we’re significantly more sophisticated in our governance approaches, and we know that decisions are dependable and valid, and we have mechanisms in place to make sure bias and discrimination aren’t part of the models or the data used to train them, can we start thinking about that.

In all of those higher-risk buckets, we need to have a human in the loop. I wouldn’t see it as a decision an AI makes that is then just implemented. It would be part of a process, something that makes that process much easier over time, but with a human still involved who makes the final decision.

PN: Would you also say the state needs to have regulations in place to ensure, on an ongoing basis, that the algorithms are not discriminatory?

Shah: That’s exactly what an AI governance framework is. It’s essentially a set of things you put in place, whether tools, contract language, processes or algorithms themselves, that detect bias and help us make a determination of whether there is bias in the system or not. Regulations can help point the way by saying, ‘OK, you’re required to do this.’ That’s one approach. There are lots of different ways to get these things stood up in the first place. I think regulations have a big part to play here.
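
As a minimal sketch of one such check (the decision records and the four-fifths threshold below are illustrative assumptions, not state policy), a governance framework might routinely compare a model’s approval rates across groups:

```python
# Minimal sketch of a disparate-impact check a governance framework
# might run. The decision records and the 0.8 threshold (the common
# "four-fifths" rule of thumb) are illustrative assumptions.
from collections import defaultdict

# Each record: (group label, model decision), 1 = approved. Hypothetical data.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for human review: possible disparate impact")
```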

PN: You have said that the next two years in this AI space will have implications for the next two decades. Why do you say that?

Shah: AI is one of these general-purpose technologies that can be used in a million different ways. We don’t yet know all the ways it will manifest and be used. The technology is changing quickly, so the work we’re doing right now is essentially the work that builds the foundational infrastructure and safety mechanisms and puts them in place in a way that’s actually executable for the state.

That is the key part: taking these high-level things and making a real impact on our services to residents. If we do see AI go in the direction of becoming a technology like electricity (and who knows whether that’s actually the case), then it’s really in the next few years that we will have set out a lot of those safeguards and the way we think about it in the first place.

We feel an incredible sense of responsibility and urgency to get this right.