Missing link: Which AI do we want?

From: Hans-Arthur Marsiske
To: Johanna Seibt

What worries me is the observation that such fundamental questions are mostly ignored in the public debate and in political decision-making processes, and are often even denounced as speculative and irrelevant in favor of short-term economic interests. In its first AI strategy, for example, the German government did not even mention so-called "strong AI" - i.e. a general AI similar to human thinking - but instead focused exclusively on application-related "weak AI". Weak AI promises short-term profits, but strong AI could have far more serious consequences in the long term.

As you rightly point out, it forces us to ask ourselves who we are and who we want to be. Such questions, however, are often avoided simply by changing the terminology: instead of "intelligence", people then speak of "algorithm-driven systems" or "data-driven warfare". This feeds the illusion that these are purely technical issues, while the historical dimensions of the upheaval we are experiencing drop out of view entirely.

This also applies to the example of the soccer referee mentioned at the beginning: we talk about "video assistant referees" and thus overlook the fact that we are gradually handing over control of the game to AI. Interestingly enough, this process started with sensor technology: initially, it was the slow-motion replays from different perspectives that put TV viewers in a better position to assess critical situations than the referees on the pitch.

However, it didn't stop there. The positions of players and the ball are now recorded automatically and can be processed into three-dimensional models within seconds to check compliance with the offside rule to the centimeter. You don't have to call this AI yet, but there is clearly more intelligence in this system than in a pure slow-motion replay. The logical endpoint of this development would be a soccer match controlled entirely by AI. I don't think it's too early to ask whether we want that or not, because the later we turn against it, the more difficult doing so is likely to become.

From: Johanna Seibt
To: Hans-Arthur Marsiske

You are quite right: we lack a public discourse on what forms of AI we can actually want. But that is because we don't yet know what we're dealing with. Will AI systems like GPT always remain "stochastic parrots" (Emily Bender), or are we heading towards "general" intelligence comparable to human intelligence? Are there signs of genuine concept formation?

This question should be examined independently of economic interests, but we find ourselves in a peculiar discourse situation in which power and myth are politically intertwined: the tech giants, driving one another on, are accelerating the production of a technology whose workings none of its manufacturers can describe precisely. The incomprehensible complexity of AI systems suddenly shifts the discourse from rationally assessable argument to subjective interpretation.

This is best illustrated by the fierce AI debate in the USA, where a cult of personality has formed around "tech gurus" who make lurid, oracular predictions ("AI will solve all of humanity's problems") without offering explanations the public can understand. The tech companies' risk assessments remain opaque (OpenAI claims to have "involved psychologists and ethicists" in the development of GPT-4, but how remains unclear).

Unfortunately, the UN forum "AI for Good", which organizes webinars and conferences, has also adopted this mode of interaction, a self-congratulatory techno-mythology with no interest in a serious interdisciplinary exchange of knowledge with non-technical disciplines.

The Robophilosophy conferences were intended to provide a counter-impulse here: to remind the humanities, cultural studies, and social sciences of their mission to shape public opinion, and to enable interdisciplinary, technically informed, rational discussion of cultural and existential questions. Whether and how the discursive foundations of democracy can be saved is the central question.

From: Hans-Arthur Marsiske
To: Johanna Seibt

I like the expression "stochastic parrots". It very succinctly captures how the large language models work, using statistical methods to estimate what the next words of a text should be, and at the same time devalues them by implying that they are simply chattering away, without any knowledge of the world the texts are about.

This seems to me to be a common strategy to counter the uncertainty caused by AI: Whenever it outperforms humans at a task, be it chess, Go or recognizing traffic signs, it is often said that this is "just mathematics", not real intelligence. Behind this, I sense an urgent desire that human thinking should remain a unique mystery.

Perhaps this is something like the flip side of the "techno-mythology" you described, a kind of "myth of the human"? Is the public discourse possibly also so difficult because we are currently experiencing yet another deep insult, after Nicolaus Copernicus pushed man out of the center of the universe, Charles Darwin put him on an equal footing with the animals, and Sigmund Freud cast doubt on the power of the ego?

From: Johanna Seibt
To: Hans-Arthur Marsiske

Yes, the long history of the production and reception of "automata", self-moving machines, which reaches back to antiquity and continues in modern robotics and artificial intelligence research, has so far been characterized by a pleasurable amazement at human engineering skill: what humans are capable of!

Today, as our own achievements are overtaken by blind algorithms, we, the citizens and recipients of technology, are struck by a sense of insult, an attack on our own dignity. (This ambivalence between seduction and despair in the encounter with the robotic doppelganger is played out with artistic and technical sophistication in the play "Replik.A", which will be performed as part of the conference.)

I do not believe that the devaluation of human labor can be stopped, for example with a seal of quality "produced by human intelligence", in analogy to "handmade". Unfortunately, international technology producers are still dominated by an unbroken enthusiasm for their own creative power; the parable of the sorcerer's apprentice is brushed aside with references to global economic competition or to an overt "transhumanism".

(mki)