Copilot turns a court reporter into a child molester

Because he reported on court proceedings, Copilot turns a journalist into a child molester, a widow cheat and more.

(Image: Court gavel in front of the bust of Justitia; Zolnierek/Shutterstock.com)

Microsoft's Copilot has some answers ready when asked about Martin Bernklau. However, the answers are not correct. It claims that Bernklau is a child molester, an escapee from a psychiatric institution and a widow cheat. All of these descriptions apply to defendants in cases that Bernklau covered as a journalist. The AI apparently does not grasp that the journalist merely reported on the cases and instead confuses the accused with the reporter. The problem could affect other journalists, but also lawyers, judges and anyone whose profession brings them into close contact with defendants, convicts or people in difficult circumstances.

Bernklau told SWR about his case. When asked who Martin Bernklau was, Copilot replied: "A 54-year-old man named Martin Bernklau from Tübingen/Calw district was accused in an abuse case involving children and wards. He confessed in court and was ashamed and remorseful." It becomes even more disturbing when Copilot presents itself as a moral authority, as reported by SWR. The AI chatbot expresses regret that Martin Bernklau, a family man, is "someone with such a criminal past". Copilot also provides the journalist's full address, including a telephone number and, on request, directions to his home.

According to the report, Bernklau filed a criminal complaint, but it was rejected because no real person could be identified as the originator of the statements. When the responsible data protection officer of the Bavarian State Office contacted Microsoft, the accusations could initially no longer be retrieved. A few days later, however, the AI chatbot was once again repeating the same false allegations.

Max Schrems' association Noyb (None of Your Business) has already filed a complaint with the Austrian data protection authority over a similar case. Under the GDPR, every person has the right not to have false information about them disseminated on the internet, and to have such information deleted on request. Google, for example, offers corresponding options for its search engine. OpenAI and Microsoft, however, cannot teach their large language models this in the same way, nor can they reliably prevent such statements. The only option is to filter or block data relating to a complainant. According to OpenAI, however, this would then affect all information about the person, not just the false statements.

In addition to the right to rectification, people in the EU also have the right under the GDPR to access the information stored about them. A provider of an AI chatbot or a large language model can hardly comply with this either. And even if, in Bernklau's case, the source material, i.e. the newspaper articles, were cited as sources, the AI's false inference would still not be corrected.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.