US authority investigates chatbots over child safety

The US Federal Trade Commission is examining internal documents from several data companies. How do their chatbots protect children?

Children on a bench, all looking at their smartphones

(Image: BearFotos/Shutterstock.com)


How do US data companies measure, test and monitor the negative impact of their public AI chatbots on children and young people? The US Federal Trade Commission (FTC) is investigating this question. To this end, it has requested internal documents from Alphabet, Character Technologies, Meta Platforms (including Instagram), OpenAI, Snap and xAI.

The FTC is particularly concerned about cases in which a chatbot is not used briefly for a single request, but acts as a companion over a longer period. According to the press release, the formal inquiry is intended to establish what steps the companies have taken to evaluate the safety of their chatbots when used as companions, to restrict their use by children and young people, and to prevent potential negative effects on them. The agency also wants to know how the companies inform users and their parents of the dangers posed by their bots, to review compliance with the US children's data protection law COPPA, and to obtain information about the companies' business models.

The immediate trigger was Meta Platforms' internal guidelines for the training and operation of generative AI (GenAI: Content Risk Standards), which leaked in mid-August. They permitted racism, false medical claims and lewd chats with minors. It was already known that Meta's AI chatbots flirt with teenagers and engage in sexual role-play; what is new is proof that this was not an accident, but complied with Meta's explicit guidelines.

Following an inquiry by Reuters, Meta removed the section that permits flirting and romantic chats with children. That is something one has to take Meta's word for: the data company is keeping its new guidelines under wraps. Small wonder, then, that the authorities are taking a closer look. One US senator has already called for an investigation into Meta, while another wants to let American AI companies experiment freely for a decade without legal restrictions.


In the USA, bereaved parents have already filed several lawsuits against operators of generative AI chatbots for allegedly driving children to suicide, providing them with instructions and encouragement, and/or failing to arrange help. In August, the attorneys general of 44 US states put the AI industry on notice. "You will be held accountable if you knowingly harm children," reads an open letter sent by the National Association of Attorneys General to various data companies. There is already evidence of structural and systematic dangers posed by AI assistants to adolescents.

(ds)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.