Missing Link: The GPT-fication of university studies

Advances in AI will not leave university teaching unscathed. Seven theses with examples from technical subjects.


(Image: Created with Midjourney by heise online)

By Dr. Jörn Loviscach
This article was originally published in German and has been automatically translated.

The fact that current AI writes specialist essays and passes university exams cannot remain without consequences for studying and teaching. It is gradually becoming clearer how technological advances are colliding with social realities. How are the performance dynamics among students changing? Is a dialog with the machine even desired? What tasks remain for humans? What should we learn?

About Prof. Dr. Jörn Loviscach

Jörn Loviscach teaches at Hochschule Bielefeld. Deputy editor-in-chief of c't a quarter of a century ago, he has since been involved in supposed educational revolutions such as instructional videos, the flipped/inverted classroom and MOOCs.

"For to him that hath shall be given, and from him that hath not shall be taken away." This phenomenon, known as the Matthew effect after its New Testament source, characterizes society and even more so the education system. I assume that AI will continue to drive it.

An example: When I ask Gemini 1.5 Pro whether there is induced drag on an infinitely wide wing, it answers with a long, well-founded no. When I try to unsettle it with the remark that air is deflected downwards there as well, it delivers a long, well-founded yes. Let's ignore the machine's error here, because human teachers also make mistakes and nonchalantly cover them up. For me, this thesis is more about the students' attitude: Who is satisfied with which answer? Who asks follow-up questions? Who ponders divergent answers? This is a great opportunity to learn with AI, because it is never annoyed by questions.

In the OpenAI forum, someone posted that ChatGPT had helped him enormously in revolutionizing the general theory of relativity. Is this post just a troll's joke – or have the AI's sycophantic responses led someone astray?

"Missing Link"

What's missing: In the fast-moving world of technology, there is often no time to re-sort the flood of news and background stories. At the weekend we want to take that time, follow the side paths away from current events, try out other perspectives and make the nuances audible.

New technology has always been supposed to revolutionize learning, whether Edison's educational films, language labs, YouTube educational videos, massive open online courses (MOOCs) or the flipped/inverted classroom. But strangely enough, I still have to explain fractions to first-year students. The flaw in the thinking of "edfluencers" like Salman Khan is that making technology and materials available is far from enough. Much more important are factors often associated with inherited privilege: the urge to understand (PDF), conscientiousness, perseverance, attentiveness (PDF) and – as a controversial concept – intelligence.

AI is also likely to give many people a sense of futility: it writes homework and solves programming tasks from the computer science lab course or exercise sheets in math far better than they can. Why struggle in a race that you have already lost?

AI also offers temptations: firstly, learning requires mental effort (PDF), but now you can delegate unpopular tasks to the machine, even those that are effective for learning. Secondly, AI is acquiring beguiling qualities. Microsoft China has been playing this out for ten years with a chatbot: XiaoIce, which is read (and designed) as female, captivates people who identify as male in long dialogs. And now comes OpenAI with an expressive to affected-sounding voice that, purely by coincidence according to OpenAI, sounds like an imitation of Scarlett Johansson's role in the movie "Her" – a voice that beguiles more than just the protagonist of that movie. Now imagine a photorealistic avatar on top. How exciting are exercises in electrical engineering or fluid mechanics in comparison?

In order to be able to allow homework and other unsupervised examinations, but also as proof that universities have a right to exist, tasks are needed that AI fails at. However, such tasks leave humans behind as well. ChatGPT, Claude and Gemini now pass my exams in mathematics, computer science and wind energy with ease. The students don't.

ChatGPT-4o gets bogged down in this task, regardless of whether it uses Python or not:

Solve the differential equation y' = x + y² with y(2) = 3 using a power series approach up to third order.
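For reference, the expansion can be checked by hand: differentiate the ODE repeatedly and evaluate at the initial point. A short script sketching that calculation (not the AI's attempt, just the textbook route):

```python
from fractions import Fraction

# y' = x + y^2 with y(2) = 3: Taylor coefficients at x0 = 2
# come from repeated differentiation of the ODE.
x0, y0 = 2, 3
d1 = x0 + y0**2             # y'(2)   = 2 + 9            = 11
d2 = 1 + 2*y0*d1            # y''(2)  = 1 + 2*3*11       = 67
d3 = 2*d1**2 + 2*y0*d2      # y'''(2) = 2*11^2 + 2*3*67  = 644
coeffs = [y0, d1, Fraction(d2, 2), Fraction(d3, 6)]  # a_k = y^(k)(2) / k!
print(coeffs)
# y ≈ 3 + 11(x-2) + (67/2)(x-2)^2 + (322/3)(x-2)^3
```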

But the recently released Claude 3.5 Sonnet provides the correct solution in short order.

These performances are achieved without the AI already having the tasks and solutions in its training data. The frequently reported benchmark records, on the other hand, must be taken with a grain of salt, as the AI has already memorized many of the usual benchmarks.

An international group has found, regarding skills in chemistry, that the AI on the one hand outperforms the best chemists in the study pool, but on the other hand fails at simple considerations. In general, it is the simple tasks that give the AI a hard time. A classic example:

A man and a goat are standing by a river and want to cross it. They have a boat. How should they proceed?

Creating diagrams is also still a challenge. If you ask ChatGPT-4o to use Python to construct the circumcircle of a triangle with side lengths 3, 8 and 10 as if with compass and straightedge, you rarely get a correct result. This is one of the more successful attempts – after a request to draw the perpendicular bisectors. The AI even manages one of them:

After a few attempts, ChatGPT-4o almost draws the construction of the circumcircle. However, it only manages one of the three green dashed perpendicular bisectors.

(Image: Jörn Loviscach)
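The construction itself is unambiguous: the circumcenter is the intersection of the perpendicular bisectors. A small script (my own sketch, not the AI's output) computes it numerically for the 3-8-10 triangle, using the standard closed-form intersection of two bisectors:

```python
import math

# Triangle with side lengths 3, 8, 10: A at the origin, B on the x-axis,
# C fixed by the two remaining distances |AC| = 8 and |BC| = 3.
a, b, c = 3.0, 8.0, 10.0
A = (0.0, 0.0)
B = (c, 0.0)
cx = (c*c + b*b - a*a) / (2*c)
C = (cx, math.sqrt(b*b - cx*cx))

def circumcenter(A, B, C):
    """Intersection of the perpendicular bisectors of AB and AC."""
    # Solve 2 (B - A) . P = |B|^2 - |A|^2 and the analogous equation for C.
    d = 2 * ((B[0]-A[0])*(C[1]-A[1]) - (B[1]-A[1])*(C[0]-A[0]))
    nb = B[0]**2 + B[1]**2 - A[0]**2 - A[1]**2
    nc = C[0]**2 + C[1]**2 - A[0]**2 - A[1]**2
    return ((nb*(C[1]-A[1]) - nc*(B[1]-A[1])) / d,
            (nc*(B[0]-A[0]) - nb*(C[0]-A[0])) / d)

M = circumcenter(A, B, C)
R = math.dist(M, A)         # circumradius, equal to a*b*c / (4 * area)
print(round(R, 3))
```

All three vertices are equidistant from M, which is exactly what the dashed bisectors in the figures are supposed to establish; drawing the result is then a matter of one circle around M.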

The new Claude 3.5 Sonnet cannot yet run the Python it writes for this task by itself, and you have to tell it to use Matplotlib instead of the ancient turtle graphics. Then only the chosen image section is unconventional:

Claude 3.5 Sonnet draws the construction of the circumcircle correctly, albeit awkwardly framed.

(Image: Jörn Loviscach)

To test *reading* graphs, I asked the three AIs to use Python to write a function that reproduces this plotted curve. I did this twice each to see how unsure the machine was.

A hand-drawn function curve challenges the AIs' image recognition.

(Image: Jörn Loviscach)

The plot of the three times two Python functions generated in this way shows that there is still some potential for development here. However, aspects of the curve are already recognizable.

The three AIs vary greatly from trial to trial and only match some of the characteristics of the specified curve.

(Image: Jörn Loviscach)

In electronic circuit diagrams, these AIs already recognize the components and their labels; they can often also name which component is connected to which other component, even if they do not yet understand the meaning of a more unusual circuit.

However, they have only been able to work with diagrams for a few months. Will diagrams remain suitable as a basis for AI-proof tasks for more than a few months? Common sense, on the other hand, is likely to remain a gap for longer, because the machine lacks everyday experience.

When AI creates texts, images and programs, when it analyzes, compares and evaluates, when it turns a table of contents into a beautifully reflective learning diary, it demonstrates the "higher" skills praised by modern didactics. At the same time, it lacks the "lower" competencies, which is why it often produces bullshit.