Wikipedia responds to AI flood with new rapid deletion policy

Wikipedia is introducing a new policy to deal with the flood of low-quality AI content. It allows the quick deletion of AI-generated articles.

Wikipedia lettering on a smartphone (Image: Allmy/Shutterstock.com)

The Wikipedia community has adopted a new policy to counteract the growing flood of low-quality, AI-generated articles, so-called "AI slop", as reported by the online magazine 404 Media. The policy is intended to allow administrators to delete obviously machine-generated, unreviewed articles that meet certain criteria without the usual one-week discussion phase.

Normally, deleting a Wikipedia article requires consensus to be reached in a seven-day discussion. However, a speedy deletion procedure already exists for clear-cut cases such as vandalism or pure advertising. This has now been expanded by criterion G15 for unreviewed, machine-generated content. According to the report, an article can be deleted quickly if it fulfills one of two conditions. The first: the text contains wording typical of an AI responding to a user query, such as "As a large language model…" or "Here is your article about…". According to the authors of the guideline, such phrases indicate that the submitter has not even proofread the text.

The second condition for speedy deletion is the use of obviously false or invented sources. This includes references to non-existent books and studies, or links that lead to content with no connection to the topic. As an example, the guideline cites a scientific paper on a beetle species being referenced in an article about computer science.

Ilyas Lebleu, a founding member of the "WikiProject AI Cleanup," described the sheer volume of quickly generated false information to 404 Media as an "existential threat" to Wikipedia's processes, which are designed around lengthy discussion. Previous attempts to regulate AI content often failed because a text can rarely be attributed to an AI beyond doubt. The new, narrowly defined rule is only a "band-aid" that covers the most obvious cases, but it sends a clear signal: unverified content taken straight from an AI is incompatible with the spirit of Wikipedia.

The new policy against AI errors comes at a time when the online encyclopedia is being criticized for quality deficiencies. An investigation by the Frankfurter Allgemeine Sonntagszeitung of 1,000 randomly selected German-language entries found that more than one in three articles has problems: around 20 percent of the pages examined contained outdated information, and almost as many were based on demonstrably false information.

(mack)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.