ChatGPT my ass: I'm human, you fuckers!

Insults and vulgar language are the last bastions where people can take refuge from AI typewriters like ChatGPT, says Hartmut Gieselmann.

Typewriter keyboard in close-up

The Hammond Multiplex puts Latin and Greek letters on paper

(Image: Daniel AJ Sokolov)

Reading time: 3 min.

(The German version of this article can be found here.)

With the hype around ChatGPT, the fear is growing that tomorrow no one will be able to distinguish AI-generated texts from the work of a real human being. Yesterday's detectors, which estimate the probability that a text was written by a human or a machine from its perplexity and burstiness, fail completely with GPT-3 at the latest.
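The perplexity signal these detectors rely on can be illustrated with a toy unigram language model (a minimal sketch with made-up data; GPT-scale detectors use far larger neural models): text built from words the model finds probable scores a low perplexity, which such tools read as a hint of machine authorship.

```python
import math
from collections import Counter

def unigram_perplexity(text, word_probs, vocab_size=10_000):
    """Perplexity of `text` under a toy unigram model.

    Unseen words receive a uniform smoothed probability of
    1/vocab_size; lower perplexity means the model finds the
    text more predictable.
    """
    words = text.lower().split()
    log_prob = sum(math.log(word_probs.get(w, 1.0 / vocab_size))
                   for w in words)
    return math.exp(-log_prob / len(words))

# Build word probabilities from a tiny illustrative corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
counts = Counter(corpus)
total = sum(counts.values())
word_probs = {w: c / total for w, c in counts.items()}

predictable = unigram_perplexity("the cat sat on the mat", word_probs)
surprising = unigram_perplexity("zeugma perambulates nonchalantly", word_probs)
```

A detector would flag the predictable text as machine-like; the problem is that modern models write so fluently that this gap between human and machine text has largely collapsed.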

Other mathematical approaches, such as comparing the probability of word sequences with machine-generated reformulations, are doomed to failure sooner or later, as are hidden watermarks that exclude certain words during text generation. Often, it is sufficient to simply have another AI rephrase an AI text without these filters. Details on how such recognition tools work are explained by my colleague Wolfgang Stieler in his overview.
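A word-exclusion watermark of the kind mentioned above can be caricatured in a few lines (a hypothetical sketch; real watermarking schemes bias token probabilities rather than banning literal words): the watermarked generator never emits words from a secret list, so a text that uses them probably did not come from that model.

```python
# Hypothetical word-exclusion watermark: the watermarked generator
# never emits words from this secret list.
EXCLUDED = {"moreover", "nevertheless", "henceforth"}

def watermark_intact(text, excluded=EXCLUDED):
    """True if the text contains none of the excluded words,
    i.e. it *could* have come from the watermarked generator."""
    return not (set(text.lower().split()) & excluded)

machine_text = "the service was friendly and the food arrived quickly"
human_text = "nevertheless the service was friendly"
```

This also shows why the attack described above works: have an unfiltered model rephrase the watermarked output, and the excluded words can reappear, erasing the signal.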

The race between increasingly sophisticated AI typewriters and their sleuths could be decided by content analysis, a new study suggests. According to the study, restaurant reviews written by humans and machines differ in whether the texts personally insult the staff and use swear words.
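The study's content signal can be reduced to a crude heuristic (a sketch with an illustrative word list, not the study's actual method): count insults and swear words, and treat their presence as evidence of a human author, since ChatGPT's filters keep them out of its output.

```python
import string

# Illustrative insult/profanity lexicon; a real one would be far larger.
FLAGGED = {"damn", "idiot", "crap", "stupid", "hell"}

def human_score(review):
    """Fraction of words that are insults or swear words.

    Under this heuristic, a nonzero score hints at a human
    author, because the AI's filters suppress such language.
    """
    words = [w.strip(string.punctuation) for w in review.lower().split()]
    hits = sum(w in FLAGGED for w in words)
    return hits / max(len(words), 1)

human_review = "The waiter was a stupid idiot and the soup tasted like crap."
machine_review = "The waiter was inattentive and the soup was disappointing."
```

The obvious weakness is symmetric: a polite human looks like a machine, and a machine with its filters removed looks human.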

Indeed, OpenAI has forbidden ChatGPT from insulting people or producing vulgar and defamatory texts. ChatGPT even refuses to write a detailed description of the plot of the novel "The 120 Days of Sodom" by the Marquis de Sade: "As an artificial intelligence, I am not allowed to distribute explicit or offensive content. It might be better if you consult another source on the subject," the machine's reasoning goes.

According to ChatGPT, there is a blacklist of terms and topics that the AI is not allowed to write about. This list was created by OpenAI and is not publicly available. The training data for ChatGPT was cleaned by clickworkers in Kenya whose hourly wage was less than two US dollars. Constant exposure to offensive and inhumane content from the Internet caused some of them to become mentally ill. The puritanical desire to rid the world of sexual perversions sanctifies the perversion of exploitation.

In the U.S., a debate is already beginning about whether ChatGPT has a "leftist spin" ("liberal spin" would be more accurate), because the machine is willing to write a speech praising Joe Biden, but not one praising Donald Trump. Soon the discourse about cancel culture will also dominate the discussion about the abuse of power by artificial intelligence.

In order to prove that they have written a text without the help of an AI, many people will have little choice but to speak the language that is forbidden to machines. But in the future, will doctoral students really have to pepper their fucking dissertations with goddamn swear words just to avoid being suspected of having misused an AI as a ghostwriter?

AI developers must not only fact-check what their machines say, but also contain "toxic" content adequately without overshooting the mark. Both are unsolved problems that still give humans an advantage over machines.

(hag)