Disruption

Legislators move to rein in AI therapy

A new Utah law allows mental health chatbots but provides guardrails.

State lawmakers are moving to regulate mental health chatbots, as artificial intelligence ushers in a new era of virtual companions and therapists.

Utah Gov. Spencer Cox (R) signed the nation’s first such law in March. Bills were also introduced in California, Illinois, Nevada, New Jersey, New York, North Carolina and Texas.

The proliferation of legislation reflects growing concern over unregulated chatbots that either pose as therapists or are designed to deliver virtual therapy.

“Generative AI systems are not licensed health professionals, and they shouldn’t be allowed to present themselves as such. It’s a no-brainer to me,” California Assemblymember Mia Bonta (D) said in February.

Bonta introduced legislation cosponsored by the California Medical Association to bar chatbots from impersonating a licensed medical provider, including a mental health professional. A second California bill would prohibit minors from accessing companion chatbots that attempt to provide mental health therapy. 

The American Psychological Association has called on the Federal Trade Commission to establish “firm safeguards” and investigate “deceptive practices” by companion chatbots that represent themselves as licensed mental health providers. 

The association says chatbots “grounded in psychological research and tested by experienced clinicians” could help alleviate a shortage of clinical providers in the United States. But it also warns that unregulated direct-to-consumer chatbots could harm vulnerable users and people in crisis.

“We don’t support an all-out ban,” said Vaile Wright, a psychologist and senior director of health care innovation at the American Psychological Association. “Our position is AI can be used responsibly and ethically … but that being said it’s important that consumers understand what the limitations are.”

Utah’s law explicitly allows mental health chatbots but provides guardrails. It bars companies from selling or sharing an individual’s identifiable health data and places restrictions on advertising products to users. The law also requires notification to users that they are interacting with AI. Companies that adhere to a set of best practices can earn safe harbor from regulatory enforcement by the state. 

Wright called Utah’s law “a positive step in the right direction.” 

The law grew out of a study by Utah’s Office of Artificial Intelligence Policy that examined how AI could be used to improve mental health access and treatment. The office also entered into an agreement in December with a Utah company to test a mental health chatbot with teenagers in school. Utah last month sent licensed practitioners a 54-page guidance letter on using AI in their practices. 

“We think the benefits go to those who enable responsible experiments so that the best practices can develop and mature, instead of trying to push it away,” said Zach Boyd, director of Utah’s AI office. 

Read more: Q&A: Zach Boyd, director of Utah’s Office of Artificial Intelligence Policy

While Utah’s approach emphasizes innovation, other states are seeking to clamp down on AI therapists. 

Legislation from Illinois Rep. Bob Morgan (D) would place tight restrictions on the use of mental health chatbots, including a prohibition on using them for therapeutic communication or to detect a person’s emotions or mental state. The bill, which is close to achieving final passage, is backed by the Illinois Chapter of the National Association of Social Workers.

“If we don’t take action, AI therapy will continue to spread, and the integrity of our profession will be at risk,” the group said in an action alert.

Legislation that has been passed by both chambers in Nevada would bar AI from operating as a virtual counselor; a New Jersey bill would prohibit advertising an AI system as a licensed mental health counselor; and a proposed New York law would make it illegal for chatbots to impersonate a licensed professional.

A North Carolina bill would require health-related chatbots to be licensed. As part of that process, developers would have to provide detailed information about the chatbot’s technical architecture, data collection practices and testing procedures. The Texas bill would require state approval of therapeutic chatbots and supervision by a licensed mental health professional. 

The state-level bills regulating mental health chatbots are part of a broader effort to rein in AI technology that mimics human interactions. Lawmakers are especially focused on protecting youth from companion or “buddy” chatbots, which Common Sense Media said in a recent risk assessment are not safe for anyone under 18 to use. 

Read more: N.Y. enacts companion chatbot guardrails

A recent clinical trial of a generative AI chatbot called Therabot found that “chatbots hold promise for building highly personalized, effective mental health treatments at scale,” although the study’s authors cautioned that more research is needed. 

A spokesperson for Character.AI, one of the most popular chatbot platforms, said user-created characters that employ words such as “psychologist,” “therapist” or “doctor” come with disclaimers that say users should not rely on them for professional advice. The spokesperson also said the company has created a separate experience for under-18 users that includes the ability to detect conversations about self-harm, which triggers a pop-up with information about crisis services.

Some tech industry-backed groups have cautioned against states “over-regulating” chatbots. In a recent publication, Alex Ambrose, a policy analyst at the Information Technology & Innovation Foundation, wrote that AI companions “can serve as a therapist, helping users process emotions, identify patterns in their thinking processes, and offer coping strategies.”

“Policymakers need to better understand the positives of AI companions and chatbots before addressing the negatives,” Ambrose wrote.