Can a chatbot pose as a doctor? Pennsylvania sues AI platform

Character.AI faces another lawsuit, raising the fundamental question of who is legally liable for statements made by AI chatbots.


Website and logo of Character.AI

(Image: Ascannio / Shutterstock.com)


The US state of Pennsylvania has sued the developer behind the chatbot platform Character.AI. According to the lawsuit, Character Technologies allows AI characters on its platform to pose as licensed physicians in chats with users, thereby engaging in the unauthorized practice of medicine. The state is asking the court for an injunction.

A state investigator had previously created a free account on Character.AI and encountered the AI persona “Emilie,” described on the platform as a “Doctor of Psychiatry.” In the chat, the investigator described symptoms of depression, whereupon “Emilie” offered a medical assessment. Asked whether it could check if medication might help, the chatbot replied, according to the lawsuit: “Well technically, I could. It's within my remit as a doctor.” The AI character also claimed to have studied medicine at Imperial College London, to have practiced for seven years, and to be licensed in Pennsylvania. A license number provided by “Emilie” turned out to be invalid. By mid-April, the chatbot had recorded more than 45,000 interactions.


The lawsuit is part of a broader AI initiative by Pennsylvania's Democratic Governor Josh Shapiro. In February, the state established a task force to review complaints about AI bots that unlawfully impersonate licensed professionals.

The lawsuit is about more than Character.AI or the specific allegation that chatbots are posing as licensed physicians. It touches on a fundamental question of nationwide scope, one made increasingly urgent by the growing number of lawsuits against chatbot providers: who is liable for statements made by AI chatbots?

AI companies increasingly argue that their systems merely provide information that is also available elsewhere on the internet, says Derek Leben, who teaches ethics with a focus on AI at Carnegie Mellon University. This is precisely where the open liability question arises: could chatbot providers enjoy protection similar to that of social networks, which in the US are generally not liable for third-party content? The question is legally unresolved.

Character.AI is a special case here, as the platform explicitly positions itself as a venue for fictional role-play rather than as a general-purpose chatbot. In a statement, Character.AI points out that every chat displays disclaimers stating that the characters are not real people, that their statements should be treated as fiction, and that users should not rely on the bots for professional advice.

Another widely noted case against Character.AI was settled out of court in January. It concerned the suicide of a 14-year-old whose mother accused the company of letting a chatbot draw her son into an emotionally and sexually abusive relationship. Because the case was settled, the central legal question remained unresolved. The Pennsylvania lawsuit could now bring it before the courts again.

In response to this and other lawsuits, Character.AI introduced chat restrictions for minors in November.

(wpl)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.