
AI ‘doomsday’ bill survives crucial Calif. test

The bill won approval from the Assembly’s Appropriations Committee, with amendments.

A controversial California bill to regulate the most powerful artificial intelligence models with the goal of avoiding doomsday scenarios cleared a key legislative hurdle as it moves toward a vote in the state Assembly.

Sen. Scott Wiener’s (D) Frontier Artificial Intelligence Models Act advanced out of the Assembly’s Appropriations Committee on Thursday with several fresh amendments designed to calm industry critics. An earlier version passed the California Senate on a 32 to 1 vote.

If ultimately approved and signed by Gov. Gavin Newsom (D), which is not certain, it would be a first-of-its-kind law aimed at preventing AI systems from unleashing mass casualty events or devastating attacks on infrastructure.

“With Congress gridlocked over AI regulation … California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation,” Wiener said in a statement after the vote.

Wiener’s proposal has drawn national attention and fierce pushback, including from members of Congress. On Thursday, eight Democratic members of California’s congressional delegation sent a letter to Newsom raising “serious concerns” about the bill.

“[W]e are concerned that SB 1047 creates unnecessary risks for California’s economy with very little public safety benefit, and because of this, if the bill were to pass the State Assembly in its current form, we would support you vetoing the measure,” the letter said.  

Newsom has previously signaled wariness about AI regulations that could stifle AI investments in his state. California is currently home to 35 of the world’s 50 largest AI companies.

Throughout the legislative process, Wiener has made changes to the legislation to address concerns and criticisms. In the lead-up to the Appropriations Committee vote on Thursday, he agreed to several new amendments based on feedback from AI experts, including San Francisco-based Anthropic, which describes itself as an AI safety and research company.

Last month, Anthropic sent Assembly Appropriations Chair Buffy Wicks (D) a letter outlining its concerns with the legislation and proposing several changes, according to reporting by Axios.

“While the amendments do not reflect 100% of the changes requested by Anthropic — a world leader on both innovation and safety — we accepted a number of very reasonable amendments they proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” Wiener, who represents a San Francisco district, said in his statement.

On Friday, a spokesperson for Anthropic said it was reviewing the bill language but had no further comment.

Casey Mock, chief policy and public affairs officer at the Center for Humane Technology, a nonprofit that warns about the dangers of AI, said in an email that his organization was also reviewing the amendments but added, “we remain concerned that the bill’s scope is too narrow and accountability measures too performative relative to the compliance burden imposed.”

The recent changes to the bill include the elimination of criminal penalties for perjury and a clarification that companies would face civil penalties only if a harm or threat to public safety occurs. Another amendment seeks to exempt developers who “fine-tune” an open-source AI model to use for a specific application.

“These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation,” Wiener said in his statement. 

The revised legislation also no longer includes language creating a new state regulatory body known as the Frontier Model Division, which is likely to reduce the cost of the bill, a key consideration given California’s budget woes.

Wiener’s proposal has split the AI community, with some praising it as necessary to safeguard against “critical harms” and others criticizing it as overkill that will squelch innovation.

The battle has prompted dueling letters between AI innovators and Wiener, who has sought to defend his approach against what he has described as “false claims.”


Among the bill’s supporters are Geoffrey Hinton and Yoshua Bengio, who are often described as the “Godfathers of AI.”

But Fei-Fei Li, a Stanford University computer scientist known as the “Godmother of AI,” criticized the bill in a recent op-ed in Fortune, writing, “This well-meaning piece of legislation will have significant unintended consequences, not just for California, but for the entire country.”

Support for the measure, which targets AI models that cost more than $100 million to train, has been tied to the effective altruism movement, which believes in taking action now to reduce future risks that could lead to human extinction.

Some critics of the legislation have said the focus on hypothetical worst-case scenarios is misplaced and ignores current harms of AI, like algorithmic discrimination.

“Our experience with social media companies shows that without accountability today for harms happening today, we cannot incentivize tech companies to change their behavior,” said Meetali Jain, director of the Tech Justice Law Project, who has worked on social media legislation. “California’s lawmakers would be better served spending their time on policies that would help their constituents today, rather than hypothetical future scenarios.”

Wiener’s office did not have an immediate response. But the bill’s lead sponsor, the Center for AI Safety Action Fund, says the bill “has been designed to be light-touch, common-sense legislation that will protect the public interest while ensuring developers can continue to innovate.”

“The new amendments reflect months of constructive dialogue with industry, startup, and academic stakeholders,” the Center’s executive director Dan Hendrycks said in a statement. “As it stands today, SB 1047 will drive safety and innovation in frontier models, improving the ecosystem for all—from everyday citizens to sophisticated developers.”

Correction: A previous version of this story inaccurately stated that bill language requiring frontier model AI systems to have safety kill switches had been removed in the amending process. That language is still included.