AI agents: Where trust matters more than speed

AI agents are set to revolutionize the software market. But in regulated industries like finance and law, human trust remains paramount.

Robot shaking hands with a person

(Image: Willyam Bradberry/Shutterstock.com)


Given the pace of their development, it is widely considered a foregone conclusion that AI agents will revolutionize the software market. Agentic automation threatens jobs or, at best, transforms them into entirely new roles overseeing AI. In certain industries shaped by regulation and liability, however, people remain relaxed. They believe humans still offer something that artificial intelligence does not: trust.

The release of software tools such as OpenAI Codex, Claude Code, or the desktop tool Claude Cowork triggered a minor earthquake in the stock markets in recent months. Industry experts speak of a “SaaS-pocalypse” as investors fled Software-as-a-Service stocks, fearing that AI could call their business models into question.

As the Financial Times reports after speaking with CEOs, investors, and analysts, the outlook is now being viewed in a somewhat more nuanced way. In areas where errors can have devastating consequences, such as finance and law, market observers expect traceability and accountability to remain more important than the increased speed promised by AI deployment. “Almost right is not good enough,” sums up Thomson Reuters CEO Steve Hasker in the report.


In cybersecurity as well, market participants view finding code vulnerabilities as only one part of the equation. AI is primarily good at clearly defined tasks. Despite ever-improving reasoning capabilities, it still struggles to think outside the box. The problem is similar to the tendency of many AI chatbots to tell their users what they want to hear.

In financial services, entrepreneurs like Jean-Baptiste Brian, Co-CEO of the investment firm Hg Capital, consider AI to be “absolutely miserable at judgment.” According to the Financial Times' research, proprietary data, regulation, network effects, and system integration will remain a protective wall for established software providers against the advance of AI for now. The insurance industry also has reservations about jeopardizing customer trust through the use of AI. Hallucinations regarding prices, terms and conditions, and consents would undermine trust.

The AI industry, for its part, does not claim to replace existing software entirely. AI tools like Cowork are meant to complement it, argues Catherine Wu, Anthropic's product lead for Claude Code. This also matches the practice of many employees who already use AI and hope for efficiency gains. Although AI output must still be reviewed by humans, the approach pays off as long as using AI plus that review saves time overall. At least as long as AI models remain affordable.

(mki)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.