NEW ORLEANS — State-led efforts to regulate artificial intelligence could stifle innovation and cripple startups, a representative of a leading Silicon Valley venture capital firm warned policymakers here Thursday at the Council of State Governments national conference.
Matt Perault, the new head of AI policy at Andreessen Horowitz, also known as a16z, cautioned against enacting risk-based regulations that target AI models and called instead for a cost-benefit analysis approach.
“Putting an enormous regulatory burden on the development of models, I think, just has the impact of slowing innovation and … also has the impact of slowing competition,” Perault said.
Perault was one of several featured experts at a series of panel discussions on AI policy at this week’s conference. The speakers expressed a range of views about state government’s role in establishing guardrails for the fast-moving technology, absent congressional action. Some urged caution while others encouraged lawmakers to step into the regulatory breach.
But it was Perault, who said he represents “little tech,” who offered the most strident case against comprehensive regulation. He said lawmakers should first determine how existing laws apply to AI and then seek to fill gaps. He pointed to model legislation from the conservative American Legislative Exchange Council as a potential roadmap for lawmakers.
Perault previously directed the Center on Technology Policy at the University of North Carolina at Chapel Hill, and before that served as head of global policy development at Facebook. Perault’s move to Andreessen Horowitz, a titan in the venture capital world, signals the firm’s growing focus on state AI policymaking and how those efforts could affect the early-stage companies it funds.
Earlier this year, Andreessen Horowitz was a key force of opposition to a high-profile AI safety bill in California that targeted the most powerful models of the future. Gov. Gavin Newsom (D) ultimately vetoed the legislation.
Large tech companies and industry trade groups have so far played the most prominent role in trying to influence the outcome of state-level AI legislation, including in some cases supporting broader regulatory frameworks. But the burgeoning effort to write rules of the road for AI is also increasingly activating early-stage companies and the venture capital firms that fund them.
The growing tension between “little tech” and Big Tech over AI regulations in the states is also on display in Colorado. Gov. Jared Polis (D) has tasked lawmakers there with rewriting his state’s first-in-the-nation comprehensive AI regulation bill.
Polis, a former entrepreneur who reluctantly signed the law in May, has expressed concern about its effects on AI innovation in his state, despite attempts by the authors to exempt smaller developers of AI models. At a task force meeting in October, one Colorado CEO said smaller companies are “existentially scared for their lives,” according to the Colorado Sun.
Perault said the risk-based assessment requirement in Colorado’s AI law, which other states are looking to replicate, “significantly inhibits” AI startups’ ability to “enter the market and compete with larger platforms.”
That warning was unpersuasive to Vermont Rep. Monique Priestley (D), who attended the panel discussion and plans to introduce a comprehensive AI regulation bill modeled on Colorado’s law in the 2025 legislative session.
“We’re always told from industry that they don’t need to be regulated, and we’re doing it wrong,” Priestley said. “They want us to see the demonstrated harms before we regulate. But we did that with social media, we did that with data privacy. We have to get in front of it.”
Priestley is a member of a bipartisan multistate working group on AI policy that shares her sense of urgency that lawmakers cannot afford to delay regulation.
The effort is expected to spawn several proposals in 2025 that borrow from Colorado’s law. Texas Rep. Giovanni Capriglione (R) has drafted a comprehensive bill that could serve as a model for red states.
At the conference in New Orleans on Thursday, Yale Law School professor Femi Cadmus told state lawmakers to adopt what she called an “adaptive regulatory framework” that can adjust as AI evolves.
Cadmus said lawmakers should prioritize transparency and accountability with AI systems while also creating regulatory sandboxes to allow AI developers to test their products in a less regulated environment.
“It is an adventure, and you as a regulator have to be comfortable with ambiguity,” she told the policymakers.