Confer: Signal founder launches secure AI chatbot

Signal founder Moxie Marlinspike has developed a privacy-friendly AI chatbot and warns about how AI providers handle user data.


Conversations with common chatbots end up on the providers' servers. Even if the content is explicitly excluded from training further AI models, it is not end-to-end encrypted, which means OpenAI, Anthropic, Google, and Meta could read along if they wanted to. Now the founder of the Signal messenger wants to offer a chatbot where precisely that does not happen. Confer is intended to shield conversations from prying eyes: "Confer is a service that allows you to pursue ideas without having to expect that they will one day be used against you."

To use the chatbot, you must register and create a passkey, which can be protected by Face ID, fingerprint, or a device PIN, for example. Further keys are derived from it, and they never leave the device, so Confer itself cannot see or use them. Chat requests are therefore encrypted locally before they are sent.
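How exactly Confer derives and uses these keys is laid out in Marlinspike's blog post; the following Python snippet is only a minimal sketch of the general pattern of deriving a request key from a device-bound secret and encrypting locally. The HKDF/AES-GCM choice, the context label, and the device_secret placeholder are illustrative assumptions, not Confer's actual protocol.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical: in practice the secret would be bound to the passkey
# and the platform authenticator; here it is simply random bytes.
device_secret = os.urandom(32)

# Derive a request-encryption key on the device; the secret itself
# never leaves it.
request_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"request-encryption",  # illustrative context label
).derive(device_secret)

# Encrypt the chat prompt locally before it is sent to the server.
nonce = os.urandom(12)
ciphertext = AESGCM(request_key).encrypt(nonce, b"a private question", None)
```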

Confer is structured like the usual AI chatbots.

(Image: Screenshot Confer)

The AI model itself, however, has to run on servers with GPUs, and whoever operates those servers would normally have access to the data processed on them. Confer therefore relies on Confidential Computing and a Trusted Execution Environment (TEE): the code runs in this hardware-backed, isolated environment. The source code is available on GitHub.
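The point of publishing the source code is that clients can check, via the TEE's remote attestation, that the server is really running the audited build. The Python sketch below shows only that basic idea; the measurement value and function names are hypothetical, and real attestation relies on hardware-signed quotes rather than a simple string comparison.

```python
import hashlib

# Hypothetical pinned value: the measurement (hash) of a server
# build produced reproducibly from the published sources.
EXPECTED_MEASUREMENT = hashlib.sha256(b"reproducible-server-build").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Accept the server only if the TEE attests to the expected code."""
    return reported_measurement == EXPECTED_MEASUREMENT

# A client would refuse to talk to a server whose attestation does not
# match, so not even the operator can silently swap in modified code.
if not verify_attestation(EXPECTED_MEASUREMENT):
    raise RuntimeError("attestation mismatch: refusing to send data")
```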

Which AI model Confer uses remains unclear; there is no definitive answer. One can only assume it is one of the open models, such as Meta's Llama, Google's Gemma, or the models of the French company Mistral.

In a blog post, Marlinspike explains how the encryption works. In another post, he warns against using conventional AI chatbots, or rather their providers: anyone who uses them reveals their thoughts, and the providers will store them, use them for AI training, and above all monetize them. It is no secret that OpenAI, for example, is toying with the idea of introducing advertising in ChatGPT to earn money with it. Google and Meta have the advantage of already selling advertising in other services, giving them enough revenue to develop and offer AI services as well.


But they too, Marlinspike fears, will use the information we give chatbots to serve personalized advertising. He goes so far as to predict that providers will convince us we need certain things, drawing on their entire contextual knowledge of us, our thoughts, and our worries. "It will be comparable to a third party paying the therapist to convince us of something."

Beyond the providers themselves, he also warns of authorities and law enforcement reading along. Marlinspike writes: "You get an answer; they get everything."

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.