Disruption

Opposition to AI regulation is getting louder


A state-led effort to regulate artificial intelligence models faces escalating opposition from Silicon Valley and free market advocates who warn that restrictions will stifle innovation and competition.

It’s a backlash to lawmakers in both red and blue states who are considering a range of approaches to protect against potential AI harms. 

Legislators in at least eight states so far have introduced bills modeled on Colorado’s first-in-the-nation AI regulation law, passed last year, which focuses on preventing algorithmic discrimination by AI systems used to make consequential decisions about people’s lives. 

The tension over regulating AI developers and deployers intensified last week with news of a powerful new Chinese AI chatbot known as DeepSeek, which quickly became the No. 1 app downloaded on Apple’s App Store.

Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz, called it a “Sputnik moment,” a reference to the Soviet Union’s successful 1957 launch of an Earth-orbiting satellite that kicked off the “Space Race.”

Matt Perault, head of AI policy at Andreessen Horowitz, followed up this week with a blog post warning against “a patchwork of state laws that regulate AI development.”

“When we set out to win the space race, America’s space policy was not set by Texas or California,” Perault wrote. “Faced with a similar technological challenge today, Congress and the Administration should take the lead in setting national AI policy.”

Adam Thierer, a senior research fellow at the libertarian R Street Institute, made a similar argument about innovation in testimony last month opposing a high-risk AI regulatory bill from Virginia Del. Michelle Maldonado (D).

“There are better ways for Virginia to address concerns about AI systems that would not involve a heavy-handed, top-down, paperwork-intensive regulatory system for the most important technology of modern times,” Thierer said in prepared remarks.

The Virginia House narrowly passed Maldonado’s bill Tuesday.

Some of the strongest pushback has been in response to a bill from Texas Rep. Giovanni Capriglione (R) that is widely viewed as a test case for red state AI regulation.

The Texas Responsible AI Governance Act would require developers and deployers of high-risk AI systems to use “reasonable care to protect consumers from … algorithmic discrimination” in areas such as education, employment and housing. To spur innovation, the bill would create a regulatory “sandbox” to encourage testing of AI models.

In December, a coalition of more than 20 groups, including Americans for Prosperity, the Competitive Enterprise Institute and the Taxpayers Protection Alliance, sent Texas lawmakers a letter opposing his legislation.

The letter called the bill a “blue-state model that hampers innovation” and warned Capriglione not to follow in the path of California and Colorado “where interventionist policies limit growth and innovation.”

In a statement, Capriglione defended his approach to AI regulation as safeguarding constitutional rights in the digital age.

“From algorithmic biases to censorship concerns, there are valid reasons for Texans to demand accountability in the way AI impacts our economy and society,” Capriglione said.

Capriglione serves on the steering committee of a bipartisan Multistate AI Policymaker Working Group that has drawn more than 200 lawmakers and other participants from 45 states. 

Members say one purpose for working together is to develop common definitions to ensure that state AI laws are “interoperable,” a lesson learned from past fights over consumer data privacy laws.

In a December op-ed, more than 60 lawmakers from 32 states asserted their prerogative to regulate AI, especially in the absence of federal action. Connecticut Sen. James Maroney (D), who helped spearhead the op-ed and led the formation of the multistate working group, insists clear AI guardrails will help promote innovation.

“These bills are not punitive,” Maroney said. “It’s important to build these products right the first time, and in the long run that’s cheaper for companies.”

Maroney drafted a sweeping AI bill last year that inspired Colorado’s law. It wasn’t passed, but he is trying again this year with a revised version.

New York Assemblymember Alex Bores (D), who’s in the working group, also defended the right of states to regulate AI. He pointed to a newly released international report on AI safety that finds general-purpose AI systems “frequently display biases.”

“I agree that a patchwork of state action is less than ideal, and this should be done at the federal and international level, but we don’t currently have an administration that is interested in that,” Bores said.

Bores introduced an algorithmic discrimination bill this year and is also working on legislation related to AI safety, liability and authentication.

Industry opposition to states taking the lead on AI regulation is not universal. Major companies such as Google and Microsoft have signaled support for Colorado’s AI law, while also asking for changes. Both companies along with Amazon also recently provided formal feedback on the Connecticut and Texas bills.

“Amazon is supportive of targeted and reasonable regulation to ensure that organizations adopt governance-based safeguards where AI is being used in high-risk ways,” Nicole Foster, director of global AI policy for Amazon Web Services, said at one of the feedback sessions.

Perault at Andreessen Horowitz argues that AI startups, or “Little Tech,” lack the resources to handle complex legal frameworks.

Where Perault sees risk, Matt Scherer at the Center for Democracy and Technology sees opportunity. Scherer, who also testified at the recent feedback sessions, rejects the suggestion that multiple state laws are bad for business.

Instead, he wants to see states experiment with a variety of AI regulatory models to find out “what works best.”

“The patchwork argument is a way of saying, ‘We don’t want the laboratories of democracy to do their thing,’” Scherer said.