OpenAI study: Emotional effects of ChatGPT use
Manipulation and mental disorders: OpenAI and the MIT Media Lab have investigated the potential emotional effects of ChatGPT use.
(Image: Pasuwan/shutterstock.com)
People with different needs and in different emotional states can react differently to using ChatGPT. That sounds obvious at first, but the study by OpenAI and the MIT Media Lab becomes far more relevant once you compare the effects of AI use with those of social networks on people in different states of mind. At its core, it is about manipulation and mental disorders.
The authors describe the concern as follows: “(…) while an emotionally engaging chatbot can provide support and companionship, it risks manipulating users' socio-affective needs in ways that undermine long-term well-being.” The focus is therefore not on parasocial relationships with a chatbot or socio-psychological aspects as such, but on affective disorders such as depression or anxiety. Specifically, the study examined four psychosocial concepts: loneliness, socialization, emotional dependence, and problematic use.
The authors also address the problem of “social reward hacking”. This is the ability of AI models or their providers to use affective cues to manipulate people.
Two study designs show similar results
The published study looked at both the advanced voice mode and the text chatbot, using two research methods. First, more than three million conversations were examined for affective cues; this analysis took place entirely on the platform and, according to the authors, with due regard for privacy and consent. It also included surveys of 4,000 randomly selected participants about their well-being, and around 6,000 heavy users of the voice mode were observed in their usage over three months.
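The article does not spell out how conversations are screened for affective cues. As a purely illustrative sketch, assuming a simple lexicon-based check (the cue phrases, categories, and threshold below are invented for this example and are not the classifiers used in the study), such a screening step could look like this in Python:

# Hypothetical sketch: flag conversations that contain affective cues.
# Lexicon, categories, and threshold are invented for illustration and
# do not come from the OpenAI/MIT study.

AFFECTIVE_CUES = {
    "loneliness": ["lonely", "alone", "no one to talk to"],
    "dependence": ["can't stop", "need you", "miss you"],
    "distress": ["anxious", "depressed", "hopeless"],
}

def affective_score(conversation: list[str]) -> dict[str, int]:
    """Count how often each cue category appears in a conversation."""
    text = " ".join(conversation).lower()
    return {
        category: sum(text.count(phrase) for phrase in phrases)
        for category, phrases in AFFECTIVE_CUES.items()
    }

def is_affective(conversation: list[str], threshold: int = 2) -> bool:
    """Flag a conversation if any cue category reaches the threshold."""
    return any(n >= threshold for n in affective_score(conversation).values())

messages = ["I feel so lonely lately", "there is no one to talk to"]
print(affective_score(messages))  # {'loneliness': 2, 'dependence': 0, 'distress': 0}
print(is_affective(messages))     # True

In practice, keyword heuristics like these would be far too crude for a study of this scale; the point is only to make the notion of an “affective cue” concrete.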
Second, there was a randomized controlled trial with around 1,000 people; randomized means that participants were assigned to the study conditions at random. Over 28 days, this trial examined the effects of using the text and voice modes on the four concepts, with participants interviewed and asked to assess their own well-being.
One key finding of both study designs is that heavy usage correlates with increased self-reported indicators of dependence: people who used ChatGPT more were also more likely to report signs of emotional dependence and less socialization. At the same time, the authors write, a relatively small number of users accounts for a disproportionately large share of affective cues; most people use ChatGPT in a strictly task-oriented way.
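The reported relationship between usage and dependence is a correlation, not proof of causation. As a minimal illustration with invented numbers (not data from the study), such a check could look like this:

# Hypothetical sketch: correlate daily usage with self-reported
# dependence scores. All numbers are invented for illustration and
# are not data from the OpenAI/MIT study.
from statistics import correlation  # available since Python 3.10

daily_minutes = [5, 10, 15, 30, 45, 60, 90, 120]  # usage per day
dependence = [1, 1, 2, 2, 3, 4, 4, 5]             # survey scale, 1-5

r = correlation(daily_minutes, dependence)
print(f"Pearson r = {r:.2f}")
# A positive r means heavier use co-occurs with higher dependence
# scores; it does not show in which direction the influence runs.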
According to the study, using the voice mode is associated with better emotional well-being as long as usage remains brief. Longer use, and loneliness at the start of the study, were associated with more negative effects. The authors also found, however, that people who tended to seek emotional support were more likely to use the voice mode. They conclude: “Taken together, a complex picture emerges of the effects of language models on the behavior and well-being of users, depending on their respective predispositions and initial emotional state.”
(emw)