Disruption

N.Y. lawmaker set to unveil AI safety bill

It borrows elements from controversial California legislation that was vetoed.
New York Assemblymember Alex Bores (D) (Courtesy of the Bores campaign website)

New York Assemblymember Alex Bores (D) this week plans to introduce a highly anticipated artificial intelligence safety bill that targets the most powerful AI systems of the future with the goal of preventing them from unleashing catastrophic harm.

The bill, known as the Responsible AI Safety and Education Act, borrows elements from controversial California legislation introduced last year.

“It’s a simplified and streamlined bill that is nonetheless focused on the same problem,” Bores said in an interview.

The bill targets so-called frontier models that cost more than $100 million to train and exceed a certain computational threshold. Bores, who has a master’s degree in computer science with a specialization in machine learning, said he wants to ensure that frontier models cannot be hijacked to produce biological, chemical or nuclear attacks, or to cause critical harm, defined as the serious injury of 100 or more people or $1 billion in damage.

California Sen. Scott Wiener (D) introduced a similar bill last year, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, a first-in-the-nation measure that drew opposition from Silicon Valley and was vetoed by Gov. Gavin Newsom (D).

Wiener announced Friday he’s introducing much narrower legislation that’s aimed at encouraging the development of large-scale AI systems while also empowering whistleblowers to “sound the alarm” if they see something that could go awry and cause serious harm.

“We are still early in the legislative process, and this bill may evolve as the process continues,” Wiener said in a statement. “California’s leadership on AI is more critical than ever as the new federal Administration proceeds with shredding the guardrails meant to keep Americans safe from the known and foreseeable risks that advanced AI systems present.”

Wiener said he’s monitoring an AI working group that Newsom established after vetoing his bill, as well as rapid advances in AI technology.

Bills aimed at frontier models have also been introduced this year in Illinois and Massachusetts.

The New York bill, a draft of which Pluribus News reviewed, would require frontier model developers to implement a written safety and security plan and share it with New York’s attorney general before deploying the technology.

Companies would also have to record and retain testing data and implement safeguards to prevent the “unreasonable risk of critical harm.” The proposed law would require annual safety and security protocol reviews and annual third-party compliance audits.

Developers would have to provide the attorney general with the total “compute cost” of their model and disclose any safety incident within 72 hours. Employees of AI firms would be protected from retaliation if they report to the attorney general concerns that the model poses “an unreasonable or substantial risk of critical harm.”

Violations could bring fines equivalent to 5% of the cost to train the model, rising to 15% for repeat violations.

In a memo outlining the legislation, Bores writes that while AI is “driving groundbreaking scientific advances,” it also poses “significant risks … including massive cyber attacks, or even human extinction.” He makes note of a 2023 open letter signed by more than 1,000 tech leaders calling for a pause on frontier model development. And he cites a 2023 survey that found one-third to one-half of the top AI researchers say there is a 10% chance an AI system could go rogue and imperil humanity.

“While 10% might seem modest under many circumstances, when extinction is at stake, it is an unacceptable level of risk,” Bores writes.

Bores writes that his bill is the “bare minimum that New Yorkers expect” and compares requiring an AI safety plan to requirements placed on daycare centers. He also said the bill does not address other potential harms from AI, including algorithmic discrimination, which he said should be the domain of separate legislation. “The bill just tries to reduce the chance that AI, intentionally or unintentionally, kills us all,” Bores writes.

Bores, who engaged major AI labs in the development of his bill, said in an interview that he sought to address criticism that Wiener’s proposal received last year, including that it would hamper AI innovation. He said he removed the most controversial provisions, exempted universities, and built in more protection for smaller companies with the goal of simplifying compliance and focusing on “where we think the real risk is.”

The bill also does not require, as the California measure did, that frontier models have kill switches so that the AI can be shut off in an emergency.

“I’m hoping to win over many of those [critics],” Bores said, adding that there is more urgency to act this year because AI is developing so quickly.

“We are in a different environment than we were when [Wiener’s] bill was introduced,” he said.

In addition to the AI safety bill, Bores is sponsoring a raft of other AI-related measures, including bills to protect consumers from algorithmic discrimination, to hold companies strictly liable for harms, and to require watermarking to help distinguish authentic content from synthetic material.