California Gov. Gavin Newsom (D) on Sunday vetoed one of the year’s most closely watched artificial intelligence regulation bills, saying he wants to take a different path to develop safeguards for generative AI.
Sen. Scott Wiener’s (D) bill would have imposed safety standards on the most powerful AI systems of the future to protect against mass-casualty incidents, including a requirement that there be a way to shut them down completely in the event something went wrong.
In a statement announcing the veto, Newsom called the bill “well-intentioned” but suggested it was not targeted enough.
“Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Rather than signing the bill, Newsom said, Stanford AI expert Fei-Fei Li and other leaders in the field have agreed to participate in an effort to develop “workable guardrails.”
“We have a responsibility to protect Californians from potentially catastrophic risks of GenAI deployment,” Newsom said in the statement. “We will thoughtfully — and swiftly — work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good.”
Newsom on Sunday also announced a new effort to explore ways to use generative AI in the workplace and promised to work with the legislature on the issue of AI safety in the next legislative session.
The veto, which was anticipated, followed a lobbying blitz by both opponents and proponents of Wiener’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
The first-in-the-nation bill sought to put guardrails on AI models that cost more than $100 million to train and meet certain computational thresholds. The goal was to prevent “the creation and the proliferation of weapons of mass destruction.”
It would have required pre- and post-deployment testing of large AI models; prohibited the deployment of models found to be unsafe; mandated that developers be able to fully shut down a system if something went wrong; and called for the creation of a public cloud computing cluster to foster research and innovation in AI “that is safe, ethical, equitable, and sustainable.”
Wiener, in a statement posted to X, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.”
“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way,” Wiener said.
Newsom previously signaled wariness about legislative efforts to regulate the burgeoning technology out of concern it would harm California’s front-runner status as home to 32 of the world’s 50 largest AI companies.
Wiener’s effort to pass the landmark bill split the AI community and drew national and even international attention. It also drew celebrity endorsements from Hollywood actors, including Mark Ruffalo, who posted a video to X urging Newsom to sign the bill.
“Gov. Newsom, please do the right thing, don’t bow to the billionaires and protect us from the worst harms of AI,” Ruffalo said in his video.
Ruffalo later signed a letter from more than 125 entertainment industry insiders calling on Newsom to approve the measure. SAG-AFTRA and the California Federation of Labor Unions also signed on in support.
Proponents touted polling that showed broad public support for the bill.
Chamber of Progress, a tech industry group that opposed the bill, created an AI-generated song that called on Newsom to veto the measure. The lyrics included: “Newsom hear the people’s cry, don’t let this bill just pass us by. Innovation’s edge at stake, California’s future we can’t forsake.”
Opponents included leading AI companies, Silicon Valley venture capital heavy hitters, and Democratic members of California’s congressional delegation.
In response to industry feedback, Wiener made several amendments to the bill, which he described as “light touch” regulation. But the changes were not enough to quell much of the opposition.
Among the bill’s supporters were a pair of researchers known as the “godfathers” of AI, employees of frontier AI labs, and tech accountability groups.
The Future of Life Institute, a nonprofit that warns “frontier AI is currently being developed in an unsafe and unaccountable manner,” ran print and digital ads urging Newsom to sign the bill. The group Accountable Tech said it submitted a national petition with more than 7,000 signatures to Newsom in support of the measure.
“The governor’s veto of Senate Bill 1047 is incredibly disappointing,” Anthony Aguirre, Future of Life’s executive director, said in a statement Sunday. “This veto leaves Californians vulnerable to the considerable risks caused by the rapid, unregulated development of advanced AI systems, including cyberattacks, autonomous crimes, and biological weapons.”
Others praised the veto.
“The California tech economy has always thrived on competition and openness,” said Todd O’Boyle, senior director of technology policy at Chamber of Progress. “The end of SB 1047 means that California can continue to lead the world on innovation.”
Newsom’s office said Sunday that the governor signed 17 bills dealing with the deployment and regulation of generative AI, which it called “the most comprehensive legislative package in the nation on this emerging industry.”
Among the new laws are regulations on election-related and pornographic deepfakes, as well as protections for performers against having their likeness or voice replicated by AI without consent.