AI Papers on arXiv: Ban on first offense
The science platform is tightening its rules again: anyone who passes off AI garbage as science will be banned, and their future submissions will be scrutinized more closely.
Only in November did the open platform arXiv present new rules for content from LLMs; now even tougher sanctions are in place. Various unscientific practices in papers published there can trigger an immediate one-year ban. And if an author, once caught, submits further work, it will only be published if it has already appeared in another reputable scientific venue or has been accepted for presentation at a corresponding conference.
ArXiv is thus reversing its previous procedure in these cases. For over three decades, the platform has been popular among scientists primarily because a publication such as a paper or a study does not have to be peer reviewed first. This allowed authors to avoid, at least temporarily, the time-consuming process in which researchers from the same field review the content, usually over months. The resulting publications are "preprints," a term that historically referred to material not yet printed. ArXiv has published almost three million such preprints since 1991.
First human review, then ban
Apparently, this possibility is increasingly being misused in the age of artificial intelligence, across all subject areas. At the end of 2025, arXiv initially stipulated that peer review is always required for computer science papers, which must have been accepted by a conference or journal. Now, in the case of an AI violation, this requirement extends to all other areas as well. There is no warning, but there is human review: as Thomas Dietterich, head of the computer science section at arXiv, told 404 Media, a violation must be documented and internally confirmed by one of arXiv's moderators before a ban is imposed. There is also the possibility of appeal, meaning at least a second review if necessary.
The bans are only to be imposed, as Dietterich wrote on X, given "irrefutable evidence" of improper use of AI in a scientific context. In a thread on the platform, he stressed that the author of a paper is always responsible for its entire content. If false claims from an LLM are found, "it means we cannot trust the entire paper," Dietterich said.
According to Dietterich's posts on X, the punishable violations include "inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content". The penultimate item in that list, in particular, has been observed frequently in supposedly scientific papers: LLMs "hallucinate" fabricated sources to support their claims.
ArXiv spun off from university
The changes at arXiv apparently serve more than the pursuit of sound scientific practice. Until now, the project has been operated primarily by Cornell Tech, a division of Cornell University in New York City. From July 1, 2026, however, arXiv will be spun off into a non-profit organization. Such organizations in the USA depend primarily on donations, for which a good reputation is particularly important. In April 2025, the Trump administration sanctioned Cornell, like numerous other universities, by withholding four billion US dollars in research funding. Since then, universities have been fighting back in court, while projects like arXiv are looking for new sources of funding.
(nie)