Court: ChatGPT in school is deception even without an explicit ban
AI-generated texts risk a "fail" grade. This applies even if school rules don't explicitly name the tool. Court ruling sets precedent.
In a ruling on December 15, the Hamburg Administrative Court clarified that the use of AI tools such as ChatGPT, Gemini, or Claude for unsupervised homework and graded assignments can be considered an attempt to deceive (File No.: 2 E 8786/25). The judges thereby rejected the urgent application of a ninth-grader who had challenged the grading of a reading log with a 6, the lowest mark in the German grading system.
In the case, a student at a Hamburg grammar school was supposed to write a summary of a book read in English class. The assignment could be partially completed at home. After submission, however, the subject teacher noticed a discrepancy: while the reading log showed exceptionally good grammar and expression, the student only achieved a satisfactory performance in a supervised written test on the same topic.
According to the ruling, the student admitted when questioned that he had used ChatGPT to create the reading log. The school subsequently graded the work as an attempt to deceive with a "fail." The boy's father contested this in court, arguing, among other things, that the school had no clear, written rules on AI use.
However, the Administrative Court did not agree. The 2nd Chamber emphasizes that the principle of independence generally applies to academic performance. Anyone who uses a tool that significantly influences this requirement must obtain prior approval. Since ChatGPT takes over central examination aspects such as sentence structure, word choice, and grammar, its use is comparable to receiving help from a third party or copying.
Signaling effect for everyday school life
The judges also clarified that an attempt to deceive exists even if no explicit AI ban has been issued. The instruction to complete the assignment in one's own words ("use your own words") is sufficient to exclude the use of generative AI. Furthermore, "conditional intent" is sufficient for a deliberate act of deception. The student therefore only had to consider it possible and accept that his actions were impermissible.
The Chamber emphasizes: "In the eighth grade, it can also be assumed that even vehemently expressed opinions of parents – here regarding the allegedly 'legally compliant' use of artificial intelligence in school examinations – should be critically questioned and not accepted as correct without consulting teachers or other sources."
The ruling is likely to have far-reaching consequences for schools and universities. Teachers who suspect the use of an AI system in cases of conspicuous performance improvements and intervene can largely consider themselves on safe ground. The burden of proof still lies with the school, but a confession after being confronted with a substantiated suspicion provides a sufficient basis for sanctions. Students and parents should be aware that, in case of doubt, any form of AI support is subject to notification and approval if it concerns part of the graded work. This also applies, for example, if an AI system is "merely" intended to help improve phrasing.
(vbr)