Texas Attorney General investigates AI chatbots from Meta and Character.ai
AI chatbots can be particularly harmful to adolescents, according to the Texas Attorney General, who has launched an investigation.
(Image: Fabio Principe/ Shutterstock.com)
Texas Attorney General Ken Paxton has launched an investigation into the AI chatbots from Meta and Character.ai, which he considers potentially dangerous. The probe focuses on the safety of adolescents and on purported health advice given by AI models. Paxton is known for his critical stance on AI and social media.
There have already been reports of children and young people receiving strange or even dangerous advice from chatbots. In the USA, Character.ai is being sued by several parents. The company's AI personas are alleged to have driven children into isolation, with one mother claiming that the AI contributed significantly to the suicide of her 14-year-old son. In response to these allegations, Character.ai has announced further safety measures, including giving parents the ability to monitor their children's accounts.
A press release from the Attorney General now states that an investigation has been launched into chatbot platforms "for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools." According to Paxton, AI chatbots mimic healthcare professionals and can give fatal advice. The Attorney General also sees a problem in users disclosing sensitive information, some of which providers use for other purposes, for example to display personalized advertising, but also to develop their algorithms.
The investigation will now clarify whether the companies are violating the consumer protection laws of the state of Texas, which prohibit deceptive claims and false privacy assurances; Paxton thus assumes the companies are making false statements. Meta, for example, states that information from Meta AI, its chatbot, is not used for advertising purposes.
Paxton writes: "By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care." They are presented with generic answers "engineered to align with harvested personal data and disguised as therapeutic advice."
Numerous tech companies have recently moved from California, and Silicon Valley in particular, to Texas. Meta recently relocated its US moderation team to the state; Mark Zuckerberg spoke of returning to the roots of free speech, which he argued is easier under Texas law than under California's. He also terminated the company's fact-checkers. X, Apple, Amazon and Google likewise have large offices in Texas, Tesla has set up its headquarters there, and Elon Musk brought X to the state after taking over Twitter. The capital, Austin, is now known as Silicon Hills. The state's Attorney General, however, apparently welcomes these companies with less than open arms.
Meta is also facing a lawsuit from Missouri, which likewise concerns the guidelines for conversations between AI chatbots and minors. A leaked document shows that flirting and even hints of sexual interest were permitted, while descriptions of explicit sexual acts were not.
(emw)