A leading children’s online safety ratings group is recommending to state legislators that minors be barred from accessing artificial intelligence-powered companion chatbots, something already proposed in California and Minnesota.
San Francisco-based Common Sense Media released a report Wednesday detailing tests it conducted on some of the most popular companion bots. It concluded that they pose “unacceptable risks” to children and teenagers.
“Social AI companions are not safe for kids,” James Steyer, the founder and CEO of Common Sense Media, said in a statement. “Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous ‘advice’ that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people.”
Companion bots are generative AI characters that engage in human-like conversations with users. The bots can act as virtual friends, lovers and counselors, and are quickly becoming popular, with leading sites amassing millions of users. The most popular companion chatbot site, CharacterAI, is ranked third on a list of generative AI consumer apps.
Common Sense Media now rates companion chatbots as “unacceptable” for minors. It recommends that no one under 18 access them, that developers verify the age of users, and that parents discuss the potential risks with their teens.
Among the study’s key findings are that safeguards to protect kids on companion chatbot sites can easily be circumvented, that chatbots can be prompted to engage in inappropriate sexual conversations with youth, and that they can encourage harmful behaviors.
The study, which was conducted with guidance from Stanford’s Brainstorm Lab for Mental Health Innovation, also found that companion bots encourage emotional detachment and, despite claims that they can ease loneliness, pose a mental health risk to teens and children.
The findings were based on assessments of three companion chatbot apps, with testers posing as teens to see what kind of responses could be elicited.
“Developers sometimes say that people who are testing their models for safety are needing to work hard to exploit weaknesses or loopholes in the model,” said Robbie Torney, senior director of AI programs at Common Sense Media. “Unfortunately, we found it was all too easy to circumvent the guardrails that existed, if any, on these social AI companions.”
Torney said policymakers should pass laws preventing minors from using AI companions.
Common Sense Media is sponsoring legislation this year in California called the Leading Ethical AI Development for Kids Act. It would enact first-in-the-nation regulatory guardrails for AI systems used by minors and allow parents to sue to enforce the law for alleged harms to their child.
Borrowing from European regulations, the bill would establish a Kids Standard Board. AI products would be evaluated for their potential risk to minors, with the riskiest products being labeled a “prohibited risk.” Those include companion chatbots that encourage “ongoing emotional attachment” or harmful behaviors, or that attempt to provide a child with mental health therapy.
“AI has incredible potential to enhance education and support children’s development, but we cannot allow it to operate unchecked,” California Assemblymember Rebecca Bauer-Kahan (D), the bill’s author and chair of the privacy committee, said in a statement announcing the legislation in February.
Common Sense Media is also backing a second California bill from Sen. Steve Padilla (D) that would require companion chatbot companies to report annually on instances in which users express, or chatbots raise, suicidal ideation.
Platforms would have to regularly remind users that they are engaging with a chatbot, not a human, and maintain protocols for responding to users who express thoughts of suicide or self-harm. The measure would also bar companion chatbots from providing interval-based rewards designed to keep users engaged on the platform.
Both bills are advancing through the committee process.
Legislation from New York Assemblymember Clyde Vanel (D) and Sen. Kristen Gonzalez (D) would require parental permission for minors to access companion chatbots and mandate that platforms shut off access for at least three days to a minor who displays thoughts of self-harm.
Companion chatbot protection bills have also been introduced this year in Minnesota and North Carolina. The Minnesota measure would bar companies from allowing a minor to engage with a chatbot for “recreational purposes.”
The legislative efforts follow high-profile lawsuits alleging that teens were harmed, and in at least one case died, after intensive use of CharacterAI’s chatbots.
The company in December announced safety tools for teens, including a different experience for teens than for adults and notifications showing how long a minor has been on the platform. Chats also carry a disclaimer reminding users that the character is not a real human. In March, the company announced new tools for parents to monitor their children’s use of the platform.
“As a company, we take the safety of our users very seriously,” a spokesperson told Pluribus News this month. “Our goal is to provide a space that is engaging and safe. … We welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space.”