AI agents: Popularity is skyrocketing – despite lack of security

AI agents are becoming increasingly popular. However, standards for their safety and behavior are lacking. This is shown by the AI Agent Index 2025.

listen Print view
A robot hand on a keyboard

(Image: kung_tom/Shutterstock.com)

4 min. read
Contents

Agentic AI systems are increasingly capable of handling complex tasks with minimal human intervention. At the same time, there is no consensus regarding the safety and behavioral standards of AI agents. These and other findings are presented in the AI Agent Index 2025, a study by the Computer Science & Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT).

For the AI Agent Index, a team of scientists from various universities examined thirty AI agents. Their research was based on publicly available information or correspondence with developers. The researchers focused on the origin, scope of functions, system architectures, and safety measures of common agentic AI systems.

The conclusion is clear: the popularity of agentic AI systems has massively increased in the past year. According to the study, interest is not limited to the general public. This is evident, for example, in the increase in corresponding Google searches. However, the number of scientific publications on agent-based AI has also risen. It more than doubled in 2025 compared to the period from 2020 to 2024. This is understandable, as such systems did not exist before 2023.

Furthermore, the development of agentic AI systems is progressing rapidly, according to the scientists. Eighty percent of the AI agents examined in the study were released between 2024 and 2025 or received significant updates to their agentic capabilities. At the same time, a significant structural dependency on large AI companies is revealed. The majority of AI agents rely on the model families GPT, Claude, or Gemini from major US AI corporations. These companies are known to be in a race for ever-new AI models – including those with agentic capabilities.

Despite the growing interest in AI agents, important aspects of their development and deployment remain opaque, according to the scientists. There is a lack of publicly available information, especially for researchers or policymakers. Ultimately, all the mentioned models are proprietary, and very little is known about their architecture and training data.

According to the scientists, only four of the thirteen examined AI agents with a high degree of autonomy disclose safety assessments. Twenty-five out of all thirty agentic AI systems published no results from internal product safety investigations whatsoever.

Behavioral standards for AI agents are also insufficiently regulated, according to the AI Agent Index 2025. Many agents simply ignore robots.txt files on websites. The robots.txt is a text file in the root directory of a website that instructs web crawlers which areas can be automatically searched and analyzed and which cannot. Some examined AI agents are even explicitly designed to bypass anti-bot systems, according to the MIT scientists.

Furthermore, the question of responsibility for misconduct and security problems of agentic AI systems is fundamentally difficult. Because most agents are built on the AI models of large developers like OpenAI, Google, or Anthropic, it is unclear who should take responsibility.

Even OpenAI CEO Sam Altman warns against the use of agentic systems. These can be attacked particularly easily, for example, through prompt injections, where agents receive hidden instructions. However, to be used effectively, agents need access to information and a certain degree of freedom – for example, access to emails, calendars, or similar.

(rah)

Don't miss any news – follow us on Facebook, LinkedIn or Mastodon.

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.