Vivaldi does not want AI agents in the browser

If you use AI agents in your browser, you let Big Tech decide what you see, says Vivaldi. You are just a passive viewer.

listen Print view
A,Person's,Head,Covered,By,An,Ai-labeled,Dark,Cloud

(Image: photoschmidt/ Shutterstock.com)

2 min. read

All common browser providers make AI agents available. Some AI companies develop browsers that are based on AI agents. Except Vivaldi. The Norwegian company, which develops the browser of the same name, is bucking this trend.

"We don't want you to be reduced to a passive viewer," reads a statement sent out by CEO Jon von Tetzchner. In it, he explains that browsing should help people to discover things, work out ideas and make their own decisions. However, as soon as AI assistants sit between the user and the web, the providers – usually decide big tech – what you see and what you don't see. "Your decision is outsourced."

Vivaldi believes that AI in the browser is hype. People are opting for people instead of hype. The internet will be much less exciting if there are no more discoveries. And Vivaldi repeatedly emphasizes that they are fighting for a better web.

Videos by heise

Vivaldi had already declared a year ago that it did not want to integrate any large language models into the browser – quasi the predecessors of the AI agents. At the time, there was talk of plagiarism, copyright infringements and violations of people's privacy and that LLMs were not suitable as conversation partners. LLMs generate plausible-sounding lies, Vivaldi wrote. Since then, the output of numerous AI models has improved significantly, but it remains the case that entire answers or parts of answers from an AI chatbot or agent can also be simply wrong.

In fact, OpenAI's CEO Sam Altman recently said that the ChatGPT agent should only be used to a limited extent – because it makes mistakes for which there is currently no solution. It is just a chance to try out the future, but obviously not yet ready. Specifically, however, Altman does not warn against hallucinations, i.e. false information, but against malicious actors trying to trick the AI agent – and thus gain access to information such as emails, account data and more.

All AI services are affected by potential attacks using prompt injections, for example. Anthropic has also just published a report on how cybercriminals are abusing the AI chatbot Claude.

(emw)

Don't miss any news – follow us on Facebook, LinkedIn or Mastodon.

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.