AI in psychotherapy: Usage increases, concerns grow

A new survey by the American Psychological Association shows a significant increase in the use of AI among psychologists. This is not just happening in the USA.

Two people sitting opposite each other in conversation. (Image: BlurryMe / Shutterstock.com)

Artificial intelligence has arrived in therapeutic practice. However, the more familiar psychologists become with AI, the more aware they become of its potential risks: nine out of ten psychologists expressed concerns about its use. This was revealed by the American Psychological Association's (APA) annual Practitioner Pulse Survey, which polled more than 1,700 psychologists in September 2025.

According to the survey, 56 percent of psychologists used AI tools to support their work in the past year, a significant increase from 29 percent in 2024. Almost one in three (29 percent) use the technology on a monthly basis.

According to the study, adoption is almost exclusively focused on administrative and support activities. Rather than taking over core therapeutic tasks, the technology primarily serves to make daily work more efficient, respondents said. The most common use cases are assistance with writing emails and similar tasks (around 52 percent), content creation (33 percent), summarizing clinical notes or professional articles (32 percent), and assistance with note-taking (22 percent).

Only a small minority of the psychologists who use AI employ it for sensitive clinical tasks such as diagnostic support (8 percent) or as a chatbot assistant for patients (5 percent).


Around 92 percent of the surveyed psychologists expressed concerns. At the top of the list is the worry about data privacy violations (67 percent, up from 59 percent the previous year). All other concerns have also grown, such as the fear of unforeseeable societal harm (64 percent, up from 54 percent) and bias in algorithms (63 percent). Concern about inaccurate outputs, so-called "hallucinations," has likewise risen, from 44 percent last year to 60 percent.

A report by the Berlin-based therapist platform "It's Complicated" paints a similar picture. In its survey, around half of therapists (51.2 percent) likewise reported experimenting with AI, mainly for content creation and summarizing research. Their main concerns resemble those of the APA respondents: worries about the accuracy of AI tools rank first (around 71 percent), followed by the risk of losing the connection with the client in therapy. Around 60 percent fear data privacy violations. One therapist expressed concern that AI models like ChatGPT, Claude, or Gemini are designed to capture and hold user attention at all costs, noting that they act unpredictably and are controlled by a small group of actors who are barely regulated or accountable.

The platform also surveyed patients' perspectives. According to the survey, over half of respondents (52.4 percent) already use AI for their mental health, mostly in the form of general-purpose chatbots like ChatGPT. Reasons include preparing for sessions, sorting through thoughts, or putting difficult feelings into words. That people turn to ChatGPT as a therapist is largely due to the AI's effect as a "resonance machine." Psychologists such as Michal Kosinski point out that modern language models develop a kind of "theory of mind" by analyzing text patterns: the ability to recognize and mirror human emotions, motivations, and intentions. While the AI does not "feel," it can calculate with astonishing precision what people might feel or think in certain situations, which makes it more effective as a conversation partner.

At the same time, clients share the experts' concerns. Their biggest worries are faulty advice (82.4 percent) and data privacy (73 percent). According to the survey, they want AI to supplement therapy, not replace it. Tools that provide support between sessions, for example by encouraging journaling or helping to find the right therapist, are particularly in demand.

The authors of the report derive recommendations from this, such as keeping humans responsible for decisions. A large portion of the medical community holds the same view, and it is regularly debated whether, as AI use increases, automatisms are nevertheless creeping in; in this context, the term "computer paternalism" has been used in the past.

Furthermore, the authors consider informed consent to be an important instrument. Policymakers, however, are heading in a different direction, for example with the opt-out approach to the electronic patient record. Further legislative initiatives, such as the planned Medical Register Act, cement the intention to automatically transmit additional datasets, not least genetic and wellness data, to the Health Research Data Center as a rule in the future.

Similarly, the founders of "It's Complicated" consider transparency about which data is used, how, and why to be important. It is also regularly criticized that insured people do not know what happens to their data once it is made available to the Health Research Data Center. Finally, the report highlights the importance of privacy and data security.

(mack)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.