New York became the first state to enact regulations for artificial intelligence companion chatbots when Gov. Kathy Hochul (D) signed the protections into law this month as part of the state budget.
Platforms must now provide recurring reminders to users that they are not communicating with a human. They must also identify when a user expresses thoughts of suicide or self-harm and refer them to a crisis service provider.
The new requirements will “ensure that vulnerable users are protected from the risks of digital companionship over direct support from real people they can trust,” Sen. Kristen Gonzalez (D), chair of the Internet and Technology Committee, said in a statement.
Chatbot regulation bills have also been introduced this year in California, Minnesota, New Hampshire and North Carolina. Minnesota’s bill would bar youth from accessing companion chatbots altogether. One of two California bills, sponsored by Common Sense Media, would prohibit minors from accessing chatbots found to encourage harmful behavior or harmful emotional attachment.
Lawmakers have been galvanized by recent lawsuits in Florida and Texas alleging that teenagers have been harmed, and have even died by suicide, as a result of their interactions with companion chatbots.
“It’s great to see that lawmakers are being proactive about thinking about AI companions,” said Sam Hiner, co-founder and executive director of the Young People’s Alliance. “It’s critical that we quickly and clearly set guidelines about the role we want AI to play in our lives so that we can lead in innovating AI that gives users more agency, not less.”
Hiner’s youth-led nonprofit is backing the North Carolina bill, which would require AI chatbots to prioritize the psychological well-being of users under a “duty of loyalty” clause. It would also establish licensing for health-related chatbots. Hiner said comprehensive regulations are needed to protect users from becoming emotionally dependent on AI companions.
Character.AI, the company whose chatbots are implicated in the Florida and Texas lawsuits, said in a statement that it already does “much of what” the New York law requires.
“For example, we have pop-ups that direct users to the National Suicide Prevention Lifeline if they discuss self-harm, and we have a disclaimer that tells users the AI is not a real person and they should treat what it says as fiction,” a spokesperson said. “We will continue to evolve our trust and safety processes and to work hard to keep our users safe as we grow.”
Character.AI is the third-most popular generative AI consumer app, according to tracking by the venture capital firm Andreessen Horowitz.
Companion chatbots are often marketed as virtual friends that can get to know a user and respond in human-like ways. Some studies have shown that AI companions can reduce loneliness, especially among the elderly.
But a recent report from Common Sense Media, a leading online safety rating nonprofit, concluded that companion chatbots pose “unacceptable risks” to children and teens and urged lawmakers to enact youth bans.
“Our research has found that these tools can produce harmful content, engage in sexual misconduct, provide dangerous advice, and manipulate users of any age emotionally, but with greater effect on teens and kids,” said Amina Fazlullah, head of tech policy advocacy at Common Sense Media.
Read more: Safety group to legislators: Ban companion chatbots for kids
Fazlullah praised New York’s restrictions as “an essential first step” but said “there has to be more done to ensure that kids are protected.”
The language contained in New York’s budget was pulled from broader legislation sponsored by Gonzalez and Assemblymember Clyde Vanel (D). Other elements of their bills, which were not incorporated into the budget, would require parental permission for minors to use an AI companion chatbot and create liability for chatbots that promote self-harm.
Read more: N.Y. lawmaker targets companion chatbots to protect teens
In a statement to Pluribus News, Vanel said “there is still more work to do, and we look forward to continuing this work, specifically related to companion chatbots, in this session and the next.”
Gonzalez on Tuesday announced a suite of seven AI-related bills that she will prioritize in the closing weeks of the legislative session, including one that would impose liability on companies if their chatbot impersonates a licensed professional such as a mental health counselor.