Critical vulnerability in Microsoft 365 Copilot shows risk of AI agents

The M365 AI agent could be tricked into releasing sensitive information via email and without a mouse click. Microsoft has now closed the gap.

listen Print view
Magnifying glass on Microsoft 365 and Outlook icon

(Image: IB Photography/Shutterstock.com)

4 min. read

Users of Microsoft 365 Copilot were threatened by a critical security vulnerability for months. The AI assistant for company software could be tricked into disclosing sensitive and other information. All that was needed was an email with cleverly worded instructions, no human mouse click was required. This is because the artificial intelligence (AI) read and processed the email on its own. However, Microsoft has already resolved this problem.

M365 Copilot is the AI assistant for Microsoft 365 applications such as the Office products Word, Excel, PowerPoint, Outlook, and Teams. Thanks to its integration into the company's network, the AI agent based on the large language model GPT-4 from OpenAI also has access to company data, some of which is sensitive. Attackers can take advantage of this, as the AI acts independently and reads and processes emails sent to employees, for example. In contrast to the well-known phishing emails, no mouse click is required here.

The security researchers at Aim Security have uncovered this procedure. However, even without human intervention, exploiting this vulnerability requires special wording within the email, including specially designed links. Also, the instructions for the AI agent should not be too obvious to the human reader so that the attack cannot be quickly traced. Although the Copilot access system stipulates that each employee only has access to their data, this could also include sensitive content.

Microsoft lists this vulnerability, called “EchoLeak” by Aim Security, as CVE-2025-32711 and describes it as “critical”. However, according to the company, it has not yet been exploited and has now been closed. Users of M365 Copilot do not need to take any further action. The publication of a security update is therefore only for transparency purposes. A Microsoft spokesperson told Fortune that “additional in-depth defenses” will be implemented to “further strengthen security”.

Videos by heise

The security researchers discovered the vulnerability back in January of this year and reported it to Microsoft. However, it took the software company around five months to resolve the problem. Adir Gruss, co-founder and Chief Technology Officer of Aim Security, described Microsoft's response time to Fortune as “on the (very) high side”. This may have been because this was a novel vulnerability, and it took time to find and instruct the correct teams and employees for the countermeasures.

According to Gruss, EchoLeak is also likely to affect other AI agents, such as Anthropic's MCP (Model Context Protocol), which links AI assistants with other applications, or Agentforce from Salesforce. These language models could also be similarly manipulated to provide attackers with company data. As companies that are now integrating AI agents into their systems, Gruss would be “terrified”. In his view, this is a fundamental issue, comparable to the software security vulnerabilities of the 1990s, when attackers tried to gain control of laptops or cell phones.

Gruss is therefore calling for AI agents to be structured completely differently. “The fact that agents use trusted and untrusted data in the same 'thought process' is the fundamental design flaw that makes them vulnerable,” he said. “Imagine a person who does everything they read – they would be easy to manipulate.” AI agents and processes should be designed with a clear separation of trusted commands and untrusted data.

(fds)

Don't miss any news – follow us on Facebook, LinkedIn or Mastodon.

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.