The California Senate Judiciary Committee on Tuesday advanced a bill that would place guardrails on companion chatbots, the latest target in lawmakers’ efforts to address youth online safety.
Companion chatbot bills were also introduced this year in Minnesota, New York and North Carolina.
“Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of new products,” Sen. Steve Padilla (D), the bill’s author, said at a press conference before the hearing. “To allow vulnerable users to continue to access this technology without proper guardrails in place to ensure transparency, safety and accountability would be highly irresponsible.”
Padilla appeared at the press conference with the mother of a Florida teenager who died by suicide last year after reportedly becoming emotionally attached to a chatbot.
The targeted chatbots are artificial intelligence-powered characters that can serve as human-like proxies for friends, romantic partners or even counselors. Concern is growing about their addictive qualities and their effects on youth mental health.
Much of the controversy has been focused on San Francisco-based Character.AI, after a pair of headline-grabbing lawsuits alleged harm to teens, including the case involving the 14-year-old who died in Florida. U.S. Sens. Alex Padilla (D-Calif.) and Peter Welch (D-Vt.) sent letters last week to several chatbot companies including Character.AI to demand information about their safety protocols, according to CNN.com.
Character.AI, which describes itself as an “interactive entertainment platform,” says it has taken steps to improve safety for teen users and to give parents more control over their teens’ accounts. That includes serving minor users a different experience on the platform than adults via a separate model.
“As a company, we take the safety of our users very seriously,” a spokesperson said in an email. “Our goal is to provide a space that is engaging and safe. … We welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space.”
New York Assemblymember Clyde Vanel (D) and Sen. Kristen Gonzalez (D) announced the first companion chatbot bill of the year in January. The bill, which is likely to be amended in the coming weeks, would require parental consent for minors to access companion chatbots. Companies would also have to provide suicide hotline information and block a minor’s access to chatbots for three days if the user expressed thoughts of self-harm.
“We’re targeting some of the worst harms like self-harm, suicide, sexual content — definitely making sure there’s proper guardrails with respect to these chatbots,” Vanel told Pluribus News in January.
The California bill would prohibit companion chatbot platforms from deploying design features that encourage user engagement; require regular alerts reminding users that they are interacting with AI; and require companies to adopt protocols for responding when a user expresses thoughts of suicide or self-harm. Victims could bring lawsuits for alleged violations of the law.
The measure is backed by Common Sense Media, the California State Association of Psychiatrists, the California chapter of the American Academy of Pediatrics and the National AI Youth Council.
“One would think that we’d learned our lessons from the rise of social media, which has been causing harm to kids and teens for over a decade, and there will be blood on our hands if we continue to do nothing,” Mikey Hothi, Common Sense Media’s director of California kids and tech policy, said at the news conference in Sacramento.
The measure is opposed by TechNet, the Computer & Communications Industry Association and the California Chamber of Commerce, which describe the bill as “overbroad” and “vague.”
“With the current definitions, [the bill] imposes unnecessary and burdensome requirements on general purpose AI models,” the groups wrote in a letter of opposition this month. “Requiring these types of models to periodically remind a user that it is an AI and not human is unnecessary.”
The Electronic Frontier Foundation, a digital rights nonprofit, also opposes the bill on the grounds that it runs afoul of the First Amendment.
“I don’t see the First Amendment issue, personally,” Padilla said when asked about that concern at the news conference.
Legislation introduced by Minnesota Sen. Erin Maye Quade (D) and Rep. Kristi Pursell (D) would go further by barring minors from accessing chatbots “for recreational purposes.” Violations could cost companies up to $5 million.
Minnesota Attorney General Keith Ellison (D) recommended age limits for chatbots in his second annual report on emerging technology and its effects on kids, which was released in February. The report warned that chatbots “do not have appropriate safeguards for young people, who use them widely.”
“Given the epidemic of loneliness in society, care needs to be taken in introducing vulnerable youth and adults to products that may appear to fulfill an immediate social need, but where acute harms have already begun to surface,” the report said.
North Carolina Sen. Jim Burgin (R) filed an even more sweeping chatbot safety bill last month. It would impose on chatbot designers a “duty of loyalty” to serve the best interests of all users. Among other requirements, a chatbot platform would have to prioritize a user’s safety and well-being in an emergency and take steps to prevent users from becoming emotionally dependent on a chatbot.
“We hope for it to be a first step in ensuring AI companions are safe and aligned with humanity’s interests,” said Sam Hiner, executive director and co-founder of Young People’s Alliance, which supports the bill. “Without regulation like this, we’re preparing a generation of children to become isolated and addicted to these bots.”