AI Chatbot: Ello aims to question thought patterns, not just agree

Ello is a psychological AI companion designed to improve mental well-being. It aims to question its users' thought patterns.

Woman holding a smartphone

(Image: Fabrizio Misson / Shutterstock.com, editing by heise medien)


HelloBetter, a provider of digital health applications (DiGA), has developed the AI chatbot Ello. It is aimed at people who want to strengthen their mental well-being. In an interview with heise online, Dr. Elena Heber, Chief Clinical Officer and co-founder of the company, explains what distinguishes Ello from ChatGPT and the like.

For years, Heber has been responsible for digital mental health care products at HelloBetter, which has since launched six DiGAs. She is currently focusing on the development of the AI-powered companion, which aims to make evidence-based psychological support easily accessible.

heise online: Why did you launch Ello?

Elena Heber: For years, we have observed that people increasingly talk to language models about everyday stresses or relationship problems. This is often unsafe because it is unclear where the data ends up. Furthermore, these models tend to agree with the user and validate everything. With Ello, we want to take a different approach: the chatbot should not just agree, but also constructively question thought patterns and behaviors.

Dr. Elena Heber is responsible for HelloBetter's psychotherapeutic content, leads clinical research programs, and as managing director, steers strategic initiatives for innovative care solutions.

(Image: HelloBetter)

What opportunities and risks do you see, especially regarding possible addiction?

An AI companion naturally creates a different kind of distance than a human coach does. There is little research on this yet, but we want to change that. We know there is a difference. The 24/7 availability is a great opportunity, because Ello is there even when other support services are not available, for example at two in the morning. If Ello recognizes that someone is struggling heavily with loneliness, for instance, the chatbot uses psychological techniques to encourage the person to reconnect with their social network, such as friends and family. So, we use technology to integrate people back into their everyday lives, drawing on proven strategies from psychology.


For me, the primary objective in product development is always improving well-being. Whether a certain regularity of use, which could be described as "addiction," has a negative or positive influence is a gray area. If we saw that someone was withdrawing because of their usage, Ello can address that. In the randomized controlled clinical trial, which we will begin in the first quarter of next year, we will also assess the relationship with Ello. Using a validated questionnaire that includes items such as "I feel dependent on…", we will be able to make a reliable statement about how this relationship develops.

AI companion "Ello" aims to ensure better well-being (4 images)

Overview of past sessions in the Ello app.

Does Ello deliberately avoid an overly personal form of address? How is the chatbot's language designed?

We designed Ello to build empathetic contact. From a scientific perspective, we have defined conversational criteria for Ello to operate by. These include clarity, empathy, trust, safety, and the ability to remember conversation content. Conversations are evaluated to distinguish between helpful and less helpful exchanges, thereby continuously improving conversation quality.

Are there safety mechanisms that intervene if a conversation drifts in a critical direction or if a person obviously needs more help?

Yes, we use a multi-layered architecture in which we combine different language models in a multi-agent structure. This also includes integrated safety agents. If a conversation goes in a direction for which Ello is not designed, safety measures are triggered. Ello then recognizes that the person needs more support and suggests in the conversation that the topic be discussed with a human. In more critical situations, other support services are displayed directly, so that the person can contact someone immediately.

What language models are behind it?

We opted for a two-pronged approach: we work with open and closed-source models as a basis and are developing our own model in parallel. For us, it was important to offer a secure alternative to universal models in a timely manner, which would not have been possible by developing a completely proprietary model from scratch. In the medium term, however, we plan to develop our own language model to achieve even better performance and relevance through targeted optimization.

At OpenAI, users were recently upset because a new version was perceived as less empathetic. How do you ensure that Ello's quality does not suffer after updates?

That is a very important point. Our modular, agent-based architecture allows us to evaluate individual system components in isolation during model updates. We also have an evaluation framework that we use to check, based on conversations, whether the system performs better or worse after a change. Experts evaluate the conversations using various scales, thus ensuring that exactly that does not happen.
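The regression check described above can be sketched in a few lines: expert ratings of the same test dialogues are collected before and after a model swap, and the update only ships if quality has not dropped. The ratings, scale, and threshold below are invented for illustration and are not HelloBetter's actual evaluation framework.

```python
# Minimal sketch of a quality-regression check after a model update.
# Ratings and the tolerance threshold are invented for illustration.

from statistics import mean


def regression_check(before: list[float], after: list[float],
                     max_drop: float = 0.2) -> bool:
    """True if the mean expert rating did not drop by more than max_drop."""
    return mean(after) >= mean(before) - max_drop


# Hypothetical expert ratings (e.g. empathy on a 1-5 scale) for the same
# set of test dialogues, scored before and after the model update.
ratings_old = [4.2, 4.5, 3.9, 4.1]
ratings_new = [4.3, 4.4, 4.0, 4.2]

print(regression_check(ratings_old, ratings_new))  # True: update may ship
```

A real framework would compare several scales (clarity, empathy, safety) separately and use a statistical test rather than a fixed tolerance, but the gate-before-release logic is the same.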

How do you handle user data? Are conversations used for evaluation?

As a medical device manufacturer, we at HelloBetter have extensive experience in handling health data. All data we collect is subject to the same security structure as our digital health applications (DiGA). Users of course give their consent for the data they provide. Evaluation is carried out exclusively on anonymized data, and data transmission is end-to-end encrypted. Everything is hosted on European servers. We see this as a major advantage over providers like OpenAI, where data ends up in the USA. The data is not shared with any third parties and is used solely for the anonymized improvement of Ello's quality.

Is it planned to offer Ello as a digital health application (DiGA) in the future?

DiGAs are intended for diagnosed mental illnesses. Ello is designed for the well-being sector, where there is no mental health diagnosis yet. We cannot yet say whether this technology is also suitable for people with diagnosed depression. We are currently gathering experience in the well-being area. To offer a DiGA, one would have to develop a corresponding medical device.

What is the pricing model for Ello?

Our strategy is to offer the tool through employers or health insurance companies, who in turn make it available to their employees or policyholders. So, we enter into contracts with companies and insurers, allowing people to use it for free. Although we have also made it available in the App Store (editor's note: for 19.99 euros per month), the primary strategy is not direct sales to end customers.

Many health insurance companies already offer prevention programs. Do you see a need there at all?

With traditional offerings, we often see that the usage rate (adherence) is not very high or that they only appeal to a specific target group – often middle-aged women. We have received very positive feedback from health insurance companies and will soon be launching a pilot project with one to clarify these very questions. The prevention market is very large, and the potential is far from exhausted.

(mack)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.