Missing Link: The GPT-fication of university studies

Thesis 3: The dialogue with the machine is annoying.

Conversations with artificial intelligence can turn out tedious and frustrating. You ask a question, wait for the answer, check it, ask follow-up questions – a wearisome back and forth. Just typing or even dictating a long text requires a lot of concentration – and then you are supposed to discuss topics such as mathematics and mechanics this way. Suddenly, the classic web search becomes attractive again.

However, this stress caused by the dialogue with AI could be a self-made problem for people my age who have internalized the AI – and especially the speech recognition – of old: formulate precisely, speak cleanly, make no typos or slips of the tongue. That is no longer necessary. If you are just starting out with AI, you can feed the machine without hesitation: y''+5y=sine or basic switching gain 20 (typos included). ChatGPT & Co. can now digest this too.
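Taken literally (and with the typo corrected), the first of those sloppy queries asks for the solution of the differential equation y'' + 5y = sin x – a sketch of what the machine should recover from it, choosing x as the independent variable:

```latex
y'' + 5y = \sin x
% Homogeneous solution (characteristic roots \pm i\sqrt{5}):
y_h = C_1 \cos(\sqrt{5}\,x) + C_2 \sin(\sqrt{5}\,x)
% Particular ansatz y_p = A \sin x:
-A \sin x + 5A \sin x = \sin x \quad\Rightarrow\quad A = \tfrac{1}{4}
% General solution:
y = C_1 \cos(\sqrt{5}\,x) + C_2 \sin(\sqrt{5}\,x) + \tfrac{1}{4}\sin x
```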

At the touch of a button, the AI delivers lecture notes with the desired content and in the desired style. If you don't like the result, you tell the machine what to change:

Succinct! Practically relevant examples!

My new scripts for the current semester come subsection by subsection from ChatGPT-4o and Claude 3 Opus, albeit with manual changes. Technology does not stop at scripts: the first providers such as Simpleshow and Synthesia produce lecture videos.

The amount of work saved, especially with videos, is likely to be enormous – just think of error corrections and updates. But this automated production signals to students that they, too, may tick off their presentations and assignments in the same way. That some professors might invest the time saved through AI in didactically useful ways is likely to be overlooked.

Perhaps it is psychologically effective *not* to produce scripts and videos using AI. Doing it by hand is now becoming an expensive signal of appreciation – a home-baked, somewhat misshapen cake, so to speak. Will students respect such materials more and engage with them more? Research at least recognizes "teacher enthusiasm" as effective in motivating learning.

The learning platforms commonly used at universities, such as ILIAS and Moodle, are notorious for being little more than PDF dumps. Not least, they are used to hide copied textbook pages from the public in compliance with copyright law. But now AI creates such materials free of third-party rights, customized, and in Arabic, Ukrainian or plain language.

A student of mine has discovered that you can ask Bing Copilot for a math exam in my style. The other big AIs are still playing coy (even though ClaudeBot, for example, has already visited my website thousands of times). The collections of exercises painstakingly built up on learning platforms will have a hard time against an AI that provides detailed feedback and explains solutions.

Google's Gemini has now reached a context length of two million tokens, i.e. several times Tolstoy's "War and Peace". Consequently, the AI can process the entire course of study to date in a single query – all scripts, all assignments, and the examination regulations with the official curriculum. The entire degree programme could take place in a single, continuous chat. Where previous learning platforms at best use *learning analytics* to maintain a rough *learner model*, the AI becomes a partner that accompanies you throughout your studies. Some GPTs from the OpenAI store and Google's LearnLM-Tutor project show where the journey is heading.
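Whether an entire degree programme actually fits into such a context window is easy to estimate. A back-of-envelope sketch – the module count, page counts, and tokens-per-word ratio are all invented round figures:

```python
# Invented rough figures for one degree programme so far:
modules = 30             # courses taken to date
pages_per_module = 100   # lecture notes plus assignments
words_per_page = 500     # dense technical text
tokens_per_word = 1.3    # a common rule of thumb for English text

total_tokens = modules * pages_per_module * words_per_page * tokens_per_word
print(f"{total_tokens:,.0f} tokens")  # 1,950,000 – just under two million
```

Even with generous figures, the whole course of study lands in the order of the two-million-token window.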

That sounds like a data protection drama. However, the data would not have to be trained into the model; it would merely sit in the chat history. Meanwhile, the EU's AI Act has created a new hurdle: AIs that are used to "control the learning process" are considered high-risk systems and are subject to special requirements.

Learning platforms are a way to sink millions of euros: despite featuritis, they do not keep pace with the times, and migrating old content onto new systems requires massive effort. Perhaps what has always been true for students will finally become official: the learning platform is the internet. Always has been.

When a new topic comes up, political circles react reflexively: "A new school subject is needed!" Accordingly, students are expected to deal with the technical details of AI. But does this really make sense?

"Prompt engineering" – coaxing optimal answers from language models through clever input – is an over-hyped art. Will what works with GPT today still work with Claude tomorrow? Meanwhile, the tricks are becoming abstruse, for example opening the prompt with a log entry of the starship Enterprise or hiding intermediate reasoning steps as comments in Python code. Moreover, earlier tricks such as step-by-step thinking are now built into the language models themselves. What are students supposed to learn about such tricks that will not sooner or later be built directly into the software – or learned by it?
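The comment trick can be sketched concretely. The prompt below is hypothetical, no model or API is actually called, and the function merely shows the shape of an answer that follows the instruction:

```python
# Hypothetical prompt illustrating the "reasoning as Python comments" trick;
# no real model is called here.
prompt = (
    "Solve the problem. First write a Python function whose comments carry "
    "your intermediate reasoning step by step, then print the final answer.\n\n"
    "Problem: A 10 uF capacitor is charged to 5 V. How much energy does it store?"
)

# What an answer that follows the instruction might look like:
def stored_energy(capacitance_f: float, voltage_v: float) -> float:
    # Energy stored in a capacitor: E = 1/2 * C * U^2
    # Here: 0.5 * 10e-6 F * (5 V)^2 = 0.5 * 10e-6 * 25 = 1.25e-4 J
    return 0.5 * capacitance_f * voltage_v ** 2

print(stored_energy(10e-6, 5.0))  # 0.000125 joules
```

The hope behind the trick is that reasoning written out as comments steers the model toward a correct final line – exactly the kind of crutch that built-in step-by-step thinking makes redundant.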

The basics of neural networks also make little sense outside of computer science courses. What a single artificial neuron calculates tells you nothing about the interplay of billions of them. Activation functions, backpropagation and convolution may be exciting, but they do not help in everyday use. There is a good reason why the redox reactions in engines or batteries are not part of the driving test.

Perhaps Excel, everyone's cumbersome and confusing data-processing tool, will soon be obsolete because you can have programs tailored by AI. Basic knowledge of Python would then be useful to understand what the machine is programming. Even fine-tuning models is now feasible for non-experts. But when will we entrust such tasks entirely to the machine anyway?
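What such an AI-tailored replacement for a spreadsheet chore might look like – a minimal sketch using only the standard library; the file contents and column names are made up:

```python
import csv
import io
from statistics import mean

# Made-up sample data standing in for a CSV export (e.g. a grade list).
raw = io.StringIO(
    "student,points\n"
    "Ada,82\n"
    "Ben,74\n"
    "Cem,91\n"
)

rows = list(csv.DictReader(raw))
points = [int(r["points"]) for r in rows]

# The kind of summary one would otherwise click together in Excel:
print(f"n = {len(points)}, mean = {mean(points):.1f}, max = {max(points)}")
```

A reader with basic Python can check at a glance what the machine has programmed here – which is precisely the point.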

The current flood of (dis)information is growing into a tsunami thanks to language models: sock puppets (one person appearing under many fake names) no longer have to be operated manually, and the output of research papers is exploding.

You can try to drive out the devil with Beelzebub and have the (dis)information checked by AI. But who trains this surveillance AI, and with what data? Who owns it? Who pays for it, and how? The classic educational ideal, by contrast, is that of the responsible citizen. Such citizens are, of course, not immune to mistakes, and certainly not to indoctrination by schools and universities. But this ideal operates on different time scales than the internet economy, which perhaps makes it more robust.

No new skills are required, but rather the principles of traditional scientific work: proceed in a targeted and methodically controlled manner; strive for objectivity, validity and reliability; substantiate statements; make results transparent and verifiable.

However, the classic question "Who wrote this?" stands no chance in the age of AI: when it comes to calculations or discussions, the AI itself is the source. And if the AI refers to a verifiable human source, you still don't know how far you can trust that source. The fact that Einstein wanted to set the cosmological constant to 0 does not make it true.

What we need more than ever is the ability to test statements as such – given the flood, at least roughly and by spot-checking tricky places: estimation in engineering, the "sanity checks" in programming, a trained intuition that says "looks plausible!" or "something can't be right here!" and can then be followed up on. This, however – especially when it happens intuitively rather than after long pondering and googling – is classic expertise. It requires practice and experience.
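Those sanity checks can be as simple as a few assertions that catch implausible results early. A sketch with invented numbers for an everyday engineering estimate:

```python
def battery_range_km(capacity_kwh: float, consumption_kwh_per_100km: float) -> float:
    """Rough range estimate for an electric car (invented example)."""
    return capacity_kwh / consumption_kwh_per_100km * 100

range_km = battery_range_km(60.0, 18.0)

# Sanity check: a passenger car's range should land in a plausible window.
assert 100 < range_km < 1000, f"implausible range: {range_km} km"
print(f"{range_km:.0f} km")  # roughly 333 km
```

The assertion does not prove the formula correct; it merely flags results that a trained intuition would reject on sight.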

Complaining about AI is complaining at a high level. Five years ago, could you have imagined having discussions with a machine about the content of your studies, including code and formulas? In more and more areas of application, AI is crossing the line into usefulness. To use an analogy: It's technically a small step from an airplane design that only almost takes off to one that really does – a small step with massive consequences.

As a team with AI, I write more programs than ever before. It uses software libraries and code refinements I've never seen before. The results are not perfect, but they are a solid foundation. AI helps me tremendously with writing scripts and course planning documents, as well as inventing assignments. This article was written using Whisper 2 to transcribe a lecture and ChatGPT-4o to convert it to written language. Although I rewrote every sentence, it was still much faster than starting with a blank page.

I enjoy beautiful illustrations and discussing Chinese grammar with the machine. In general, the AI is a Babel fish: I dictate to it in the seminar, and it renders my dictation in linguistically clean form, including formula symbols, on the projected image – with translations for non-native speakers.

Without a crystal ball, it is difficult to say in which areas AI will next cross the line into usefulness and what this means for higher education. Perhaps many areas of application will open up in one fell swoop, for example through AI building better world models and using them to reason and plan, or through learning by observation because training data is becoming scarce.

(nie)