The California Assembly privacy committee chair has introduced first-in-the-nation comprehensive legislation to protect kids from potential harm caused by artificial intelligence.
Assemblymember Rebecca Bauer-Kahan’s (D) Leading Ethical AI Development for Kids Act would create an oversight board to regulate AI tools likely to be used by minors. It would also bar “manipulative chatbots” and require companies to report incidents, among other provisions.
“This legislation marks a crucial step in safeguarding the next generation from the potential harms of AI, while fostering its positive uses in education and elsewhere,” Bauer-Kahan said in a statement Thursday announcing the legislation.
The legislation is the latest state-level effort to establish guardrails for the emerging technology, after lawmakers say they moved too slowly to regulate social media. Concerns about how kids interact with AI have been amplified by lawsuits involving companion chatbots that are alleged to have encouraged self-harm. Those cases have inspired legislation in New York.
California’s LEAD Act, as it’s known, is broader and reminiscent of California’s earlier nation-leading Age-Appropriate Design Code. That law, which is the subject of an ongoing court challenge, requires that online services and products likely to be used by children be designed with their best interests in mind.
The new California legislation is sponsored by Common Sense Media, whose founder and CEO James Steyer called it “the most significant legislation ever” aimed at protecting kids from AI harms.
“We fully reject the notion that the race to lead on AI is a choice between being first or being safe,” Steyer said in a statement. “This novel and critical legislation sets a new standard for responsible AI development that puts children’s interests first and ensures that innovation advances in tandem with protection.”
Bill language was not immediately available. Common Sense Media shared a fact sheet highlighting its key elements.
Other provisions of the proposed law would require AI systems to undergo risk assessments, prohibit social scoring systems, and bar the use of minors’ personal information in AI training datasets without parental permission. Developers would also have to register their AI products with the new oversight board, which would be housed in California’s Government Operations Agency.
The legislation’s backers say their goal is to encourage “responsible innovation” while ensuring that AI tools protect youth safety and privacy.