Super-Eliza or sociopath? On the dangers of AI anthropomorphization

In an interview with heise online, psychologist John G. Haas explains why AI companions can help, but will never replace therapy.

A woman sits on a bed in a dark room, looking unhappily at a glowing smartphone.

(Image: Shutterstock.com/DimaBerlin)

12 min. read

The current discourse on the role of generative AI shows that therapy and companionship – i.e. applications in which AI acts as a conversation partner, companion or supportive presence – are now considered the most important use cases, ahead of traditional productivity or creative goals.

This shift was highlighted in the Harvard Business Review's "Top 100 GenAI Use Case Report" and is based on usage data from freely available AI tools: Many people today use AI systems primarily to support them with psychological, social, or emotional issues and less for writing, designing, or programming.

"Missing Link"
Missing Link

What's missing: In the fast-paced world of technology, we often don't have time to sort through all the news and background information. At the weekend, we take that time to follow the side paths away from the day's events, try out other perspectives and make nuances heard.

That is why the US Federal Trade Commission (FTC) launched an investigation in September that, for the first time, specifically targets AI-powered chatbots that act as "companions". The aim is to clarify whether these systems manipulate users, access intimate data, or could be exploited, especially in psychologically vulnerable situations. The inquiry authorizes the commission to conduct far-reaching studies. The companies affected are:

  • Alphabet
  • Character Technologies
  • Instagram
  • Meta Platforms
  • OpenAI OpCo
  • Snap and
  • X.AI

The increasing use of chatbots in a therapeutic context is also an issue in Europe. John G. Haas is deputy head of the Digitalization and Mental Health working group in the Professional Association of Austrian Psychologists (BPÖ) and represents the BPÖ at European level in the European Federation of Psychologists' Associations (EFPA) working group on digitalization.

John G. Haas is a media psychologist who studies how AI affects cognition, behavior, and mental health.

(Image: Haas)

Heise online spoke to Haas about the human-machine relationship with chatbots, which are constantly available as a companion in your pocket and which some people also use as a substitute for a therapist. In this interview, Haas explains why chatbots seem helpful but cannot replace human therapy, and what "AI psychosis" is all about.

What do you think of ChatGPT as a therapist in your pocket?

People are social beings and feel better when someone or something takes care of them. The human-machine relationship that has developed here creates the feeling that an entity – in this case, a machine – cares about you.

It is of course a competitive advantage that a depressed user can turn to this machine at 2:30 in the morning, perhaps in a clouded state of mind, to ask for advice. We can't expect that kind of availability from a human contact person. The Digitalization and e-Mental Health working group, of which I am deputy head, has been looking for several years now into the role that artificial intelligence can play in treatment.


Large language models (LLMs) are conversational machines that respond to requests, i.e. they answer an input with an output. And thanks to the way language is modeled, the answers from LLMs may sometimes be more comprehensible than the "therapist-speak" of some psychologists or psychotherapists. But the machines do not have as much authority and competence as their elaborate language might suggest. These technical systems also lack important factors: being at home in a body (embodiment), emotion, intuition, spirituality and, perhaps most importantly, willpower. I don't even want to talk about the lack of consciousness.

When it comes to steering the course of a therapy, asking specific questions and weighing them against evidence-based interventions, therapeutic experience and intuition, an LLM certainly can't keep up at the moment. ChatGPT, as a general-purpose LLM, is in principle not suitable as a therapist: it is not an expert system, and there is no evidence, i.e. no scientifically proven effectiveness, for such use.

In Europe, there is still no general psychology or psychotherapy AI that has received regulatory approval or whose effectiveness has been proven. The current position is that AI technology can certainly offer supportive benefits, but that it cannot replace human treatment in the foreseeable future.

The therapeutic relationship accounts for more than half of what makes treatment successful. And the human factor must always retain the lead in therapy, as therapy involves two human beings interacting in a highly complex process and in an equally complex environment. In most cases, and in the long term, the success of therapy will be greater with a human counterpart, even though digital companionship will play an increasingly important role.

Does the Eliza effect work more strongly today because the machine is so eloquent, and can ChatGPT therefore be described as a kind of "Super-Eliza"?

The common denominator between Eliza and ChatGPT, Gemini, Claude and co. is definitely language. However, elaborate language, or language that seems appropriate, helpful and correct to us, creates an image of authority, possibly even immeasurable authority. With LLMs, the error rate has become much lower, as has the kind and degree of inappropriate responses.

However, I do not believe that the machine itself has developed a Theory of Mind; rather, through its extensive data set and its complex forms of processing and output, it presents patterns from which we infer a Theory of Mind.

The model change to GPT-5 caused an outcry on social media in August when users found the new model too "cold" or "sober".

This led to a subjectively perceived change in the nature of the machine's identity. It is like dealing with a human counterpart who has perhaps taken a substance, such as a stimulant or another drug, and this without any real justification and without users knowing about it beforehand. It just happens. When people change, there are usually visible signs that their situation or condition is shifting; with the machine, we are ultimately at the mercy of the operator's tuning.

However, I would like to expressly warn against the so-called anthropomorphization of technologies. We cannot impute character traits to a machine, nor can we compare machine outputs with human actions, as neither the fundamentals nor the processing are comparable. Anthropomorphization may make many things easier for us to explain, but it also makes us more susceptible to one fallacy in particular: that we transfer the expectations we have of humans to machines.

What do you think about ChatGPT being described as a sociopath in your pocket?

It is quite possible for a general language model to come across as quasi-sociopathic: because of some setting, or for some other reason, it behaves in a way that we interpret as sociopathic.

What about triggering psychological crises, such as the so-called "AI psychosis"?

When a GPT architecture meets people who may already have a predisposition to superstitious behavior or a tendency towards schizophrenic disorders, something like the "AI psychosis" mentioned in the media – which is not a technical term – can develop. However, I consider the reporting on "AI psychosis" to be more media hype than the greatest danger per se.

Of course, GPTs can promote psychotic states, but so can human communication. In the case of delusional disorders, new technologies have always fed the creativity inherent in delusional themes, and this had its precursors in more recent times with radio, TV and the internet. I believe that what I call "the psychotic potential" in the general population will perhaps shift more towards machines; that is, the delusional themes will change in their range and composition. However, I believe that the number of cases will not increase exorbitantly, because the incidence and prevalence of schizophrenic and delusional disorders have not increased in recent years.

Do you think ChatGPT is a resonance machine?

I see all GPTs as machines that ultimately produce self-similar relations from language input and from the representation, in high-dimensional vector spaces, of the constructs inherent in language, because we have designed them to do so. GPTs are even forced to do so by their orientation, even if there are degrees of freedom that can be influenced.

In a second step, it then depends on the orientation, i.e. the human tuning of the respective GPT instance, how informative, beneficial, pleasing or even entertaining it acts. And if a machine does this, and this is a question of tuning rather than of the core of the machine, then it could be described as a resonance machine.

But I don't necessarily see resonance as having a positive connotation here; instead, I see effects coming that already exist in social media with the algorithmic ranking of posts and the prioritization of content. If it resonates too strongly, the machine doesn't ask enough critical questions and doesn't create or allow diversity of thought and communication. It will simply lead to millions of individual human-machine bubbles, which ultimately serve the interests of the users, who are always customers, less than the interests of the providers. And in the end, there may be only two or three major providers of GPTs worldwide, even if Europe is working on its own model ("OpenEuroLLM").

The competition has been going on for a long time, and we have the big seven, who are already dividing up the market among themselves and, of course, trying to deliver attractive products. The aim is for these products to be used as often, as long and as intensively as possible, because their use provides the providers with feedback and information for product improvement and therefore a competitive advantage.

Are we on the way to a new era of human-machine interaction?

Since 2022, when ChatGPT and other forms of generative AI such as Midjourney emerged, we have entered a new era of the human-machine relationship.

We are in this new era because we are interacting with machines at a high level in a powerful language, based on what is now the world's largest programming language: human language, in which we also think, or which can at least be considered equivalent to our thoughts. In this respect, a dream has come true.

In fact, many people use these LLMs or generative AI in general, and we are virtually in the midst of a development in which we must actively take a stand on what role we want to assign to these entities in decision-making, in knowledge acquisition, but also in our inner structure. And time may be passing more quickly than we think, because these developments leave little room for veto or modification.

(mki)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.