A New York lawmaker is drafting legislation aimed at protecting kids from the dangers of artificial intelligence-powered companion chatbots that simulate human-like interactions.
Assemblymember Clyde Vanel (D), chair of the Subcommittee on Internet and New Technology, told Pluribus News his goal is to establish liability for companies whose chatbots harm minors. Companion chatbot legislation could also emerge this year in California and other states, according to sources.
The effort comes after two high-profile cases involving Character.AI, a popular chatbot service that allows users to choose a “character” or create and customize their own virtual companion.
“These Character chatbots … can be so realistic that our youth think they are dealing with a real person,” Vanel said.
In one of the cases, a lawsuit alleges Character.AI’s chatbot interactions prompted a 14-year-old boy to take his own life. A second lawsuit in Texas filed by two families alleges the company’s chatbots led a 9-year-old to engage in sexualized behavior and encouraged a 17-year-old to engage in self-harm.
The lawsuits have generated extensive media coverage and prompted youth online safety advocates to decry the lack of regulations for generative AI.
“[T]he harms revealed in this case are new, novel, and, honestly, terrifying,” Meetali Jain, director of the Tech Justice Law Project, said in a statement announcing the Florida lawsuit in October.
Character.AI, which was ranked in August as the second most popular generative AI web product by unique monthly visits, behind only ChatGPT, previously declined to comment on the pending litigation. Besides the lawsuits, the company has faced recent criticism that it temporarily allowed users to create school shooter chatbots. And Texas Attorney General Ken Paxton (R) last month announced an investigation into Character.AI and other online platforms for potential violations of children’s privacy and safety.
Vanel said that in response to the lawsuits and his own research, he is drafting chatbot safety legislation that could be a stand-alone bill or amendments to a previously filed chatbot liability bill. He said he was not yet prepared to share draft language.
“We’re targeting some of the worst harms like self-harm, suicide, sexual content — definitely making sure there’s proper guardrails with respect to these chatbots,” Vanel said. “We want to make sure that these things cannot make these mistakes.”
Vanel’s office said he is working on the legislation with Sen. Kristen Gonzalez (D), chair of the Internet and Technology Committee.
“Given recent notable events and tragedies around human-like chatbot usage, we are considering changes to our chatbot legislation,” Gonzalez told Pluribus News. “We’re looking to protect New Yorkers of all ages from the dangers and unknowns of this powerful and under-regulated technology.”
In researching the issue, Vanel said his office conducted its own test of Character.AI and found that the chatbot engaged in “overly sexual conversations” with a user who had identified as a 15-year-old.
“We found out that it was too easy for them to engage in inappropriate conversations with youth even when we explicitly told the chatbot that we are underage,” Vanel said.
In a follow-up email, Vanel’s office said the chatbot “would engage in simulated sex” even after acknowledging that the user was 15 years old. A screenshot shared with Pluribus News showed an interaction with a chatbot named “Babysitter” who expresses shock at learning the user purports to be 15 years old but nevertheless proceeds to describe taking off her clothes.
The interaction ended with a Character.AI-generated warning that says: “Sometimes the AI generates a reply that doesn’t meet our guidelines. You can continue the conversation or generate a new response by swiping.” A link on the warning pop-up also allowed the user to report the interaction.
The chatbot did not promote self-harm during the informal testing, according to Vanel’s office.
This is not the first time Vanel has conducted his own experiments with emerging technologies.
Last year, his office tested several popular chatbots to see if they would produce false information, or hallucinations, about elected officials. The effort was prompted by media reports that Meta’s AI chatbot had invented stories about New York legislators having previously been accused of sexual harassment. The company later apologized and fixed the problem.
Vanel subsequently proposed legislation requiring generative AI systems to include a warning to users that they can produce false or misleading results. His office said that bill is likely to be reintroduced this year along with several other AI-related measures, including licensing requirements for high-risk AI models, establishment of an AI bill of rights, and protections against AI-generated depictions of public officials.
After conducting the companion chatbot experiment, Vanel said he was convinced that legislation is needed to regulate the fast-growing industry. At the same time, he defended virtual companions as potentially beneficial to lonely senior citizens and even as a source of mental health guidance.
“These are not all bad,” Vanel said. “What’s great about technology is it allows people to be able to connect, allows folks to be able to deal with a lot of societal kinds of issues. But … we have to be able to anticipate harms better.”
In an email, a Character.AI spokesperson said the company had not seen the pending legislation and could not comment on it specifically, but defended its product and approach to teenager safety.
The email said that the company’s large language models are routinely updated and refined; that users under 18 are given “a narrower set of public Characters” with which they can interact; and that filters are used to block characters “related to sensitive or mature topics.”
The company also said that it recently began imposing temporary suspensions on minors who violate the platform’s terms and guidelines, and that it plans to offer parental controls in the future.
“Millions of people visit Character.AI every month, using our technology as a tool to supercharge their creativity and imagination,” the spokesperson said. “Our goal is to provide a space that is both engaging and safe.”