Missing link: Which AI do we want?

Johanna Seibt, organizer of the Robophilosophy conference, on the robotics moment in human cultural history. A correspondence.


(Image: MikeDotta / Shutterstock.com)

By Hans-Arthur Marsiske
This article was originally published in German and has been automatically translated.

Johanna Seibt

(Image: private)

Johanna Seibt is Professor of Philosophy at Aarhus University in Denmark, where the Robophilosophy conference, which she co-founded, will be held for the sixth time from August 20 to 23. In a correspondence with Hans-Arthur Marsiske, she explains some of the issues that will be discussed there.

"Missing Link"

What's missing: In the fast-paced world of technology, we often don't have time to sort through all the news and background information. At the weekend, we take that time to follow side paths away from the daily news cycle, try out other perspectives, and make nuances audible.

From: Hans-Arthur Marsiske
To: Johanna Seibt

Dear Ms. Seibt,
During the European Football Championship, which has just ended, I was struck by the extent to which computer technology is now being used as a matter of course to detect possible breaches of the rules. We are obviously prepared to accept the mistakes of the players on the pitch, but not the fallibility of the human referee. Is this possibly a pattern of how artificial intelligence is gradually - and almost unnoticed - acquiring decision-making powers?

Kind regards,
Hans-Arthur Marsiske

From: Johanna Seibt
To: Hans-Arthur Marsiske

Dear Mr. Marsiske,
That is an interesting observation. There is already a lengthy discussion in the "Philosophy of Sport" on the question of the extent to which technology should support refereeing decisions, whereby a distinction is made between two questions:

First, should we support refereeing decisions by giving the referee access to more accurate data (via "instant replay" from different camera positions)? Second, should AI pattern-recognition algorithms be used to assess that data (infringement or not) independently of the referee?

I imagine - but this would have to be investigated empirically - that many soccer fans would answer both questions in the affirmative: soccer, like many other sports, is about a performance that should be measured as accurately and evaluated as "objectively" as possible if it is to be fair. But that is only because the parameters of a "fair" decision are comparatively simple here: they lie in a single dimension.

The situation is different for decisions with multiple dimensions - in the ethics of AI, this is referred to as the "parity problem". Here is a current example: A project in which ethicist Walter Sinnott-Armstrong is involved is investigating whether moral decisions on the allocation of transplant organs can be supported by AI systems.

Very different factors play a role in such decisions (e.g. the patient's general health, age, and family situation). These factors lie in different parameter dimensions and therefore cannot simply be "offset" against each other (the parameters are "incommensurable"): Would you give a kidney to the 35-year-old family man who is, however, a chain smoker, or to the 65-year-old star pianist who lives alone but is otherwise in excellent general health?

Such decisions with incommensurable parameters require what philosophers call "ethical judgment" (phronesis). Whether it is possible to automate moral decisions has been debated for more than 10 years, and there are some convincing proposals to do so (e.g. by John Sullins, in the book I am editing).

Nevertheless, to come back to your question, I believe and hope that we will put our trust in human rather than machine decisions, at least for a while. Partly because of the still inadequate performance of AI-supported decision-making systems, but also for an existential reason: if I imagine that I would *not* be given a kidney transplant, I would prefer to "owe" my demise to the (possibly wrong) decision of a human being. But the generation of our grandchildren, who will grow up with robots, will perhaps see things very differently.

Best regards,
Johanna Seibt

From: Hans-Arthur Marsiske
To: Johanna Seibt

You touch on many aspects in your reply that would be worth discussing further. But first let me pick up on your last sentence, because it underlines what you write in the announcement of the sixth Robophilosophy Conference. It says: "We are experiencing the robotics moment in human cultural history in which we have to determine who we are and who we want to become - and have now entered a decisive phase." Can you be more specific about what makes this phase special and what might be decided there?

From: Johanna Seibt
To: Hans-Arthur Marsiske

The description of the Robophilosophy 2024 conference takes up a quote from the world-famous techno-anthropologist Sherry Turkle, who will also be speaking at our conference. Turkle has long referred to this "robotic moment" and has described in several books how social media and in particular so-called "social robots" threaten authentic interaction.

Social robots often have a humanoid form and are programmed so that they can act according to the norms of human social interaction. Until now, they have mainly been research tools, and it was questionable whether the robotics industry's announcement that it would be able to develop social robots for all areas of human life was realistic at all.

But now the advances in AI - especially multimodal AI, which can integrate information from text, images and sound and generate concepts - seem to be making context-appropriate social action possible. Several tech giants (Google, Tesla, Amazon) are already working specifically on the production of humanoid "universal" robots with such a practical "common sense". Their design is inspired by the "droids" of Hollywood science fiction films. They are initially to be used as workers in warehouses, but are also described as a way out of the "care crisis".

Should this worry us? Or are we just learning - through the intense discussion of text-processing AI - how to deal with artificial social agents of all kinds, on screen or physically, without threatening core human values?

The Robophilosophy conferences have been the largest events for social science and humanities research on human-robot interaction for 10 years. They are unique in that the possible existential consequences are also specifically discussed here on the basis of empirical research.

Motivated by the progress of AI, RP2024 ("Social Robots With AI: Prospects, Risks and Responsible Methods") will this time focus on the question of how the new level of functionality (the simulation of intelligent action) will change the way we deal with social robots: Will we still manage to refrain from attributing feelings, thoughts, or a "mental inner life" to robots? Or, conversely, at what level of simulated human capability should we attribute them? When should we ascribe something like consciousness, or at least rights, to AI? Should we deliberately limit the AI in robots that our brains automatically classify as "social agents", so that the abilities of a human partner are always experienced as superior in social interaction?

These are key questions that the 122 presentations at the conference and, in particular, eight outstanding plenary speakers will address.