AI assistant: Microsoft's Copilot falsified access logs for months
If you asked the virtual co-pilot for a document summary, for example, it sometimes left the access out of the logs. Microsoft kept the problem quiet.
(Image: Tada Images/Shutterstock.com)
Microsoft is going all in on artificial intelligence: the AI assistant Copilot is now an integral part of the cloud-based M365 office suite. Many companies also use the service to process confidential information. It goes without saying that any access to such sensitive documents should be logged. Under certain circumstances, however, Copilot saw things differently. Microsoft knew about the vulnerability for months but only fixed it a few days ago. The company informed neither those affected nor the public.
"Copilot, please summarize the annual report for the second quarter of 2025" – This or something similar could be a typical request. In the "audit log", i.e. the log of all accesses to documents in the Microsoft cloud, a read access to the source document by Copilot then appears. However, if the virtual assistant was queried in a special way about a document stored in M365, this only generated an empty log entry. To generate this behavior, it was sufficient to request that the document not be linked in the response, but merely summarized.
Zack Korman, CTO of a SaaS start-up, noticed this strange behavior in early July 2025. It struck him as problematic: only with complete audit logs can companies enforce their security and compliance requirements and detect when documents leak into unauthorized hands. An overly curious employee, or one bribed by attackers, could exploit the Copilot bug to obtain information undetected. Falsified logs are obviously a security problem.
Security is a process – but which one?
So Korman contacted the Microsoft Security Response Center (MSRC) and trusted that the professionals in Redmond would follow their documented procedure, resolve the problem and inform affected customers. His initial euphoria quickly gave way to disillusionment, though: the MSRC began reproducing the problem just three days after his report, but another three days later, on July 10, the technicians had apparently already rolled out a bug fix – silently.
This contradicted the company's own process description, which prompted Korman to knock on Redmond's door again and ask about the status. On August 2, the software giant announced that it would roll out an update for the M365 cloud two weeks later, on August 17, and that Korman could publish his discovery a day after that. When Korman asked when he would receive a CVE identifier for the vulnerability he had found, the MSRC declined: it generally does not issue CVE IDs for vulnerabilities in cloud products if end customers do not have to take action themselves.
This also clearly contradicted statements the MSRC had made publicly just over a year earlier. Back then, it announced that CVE IDs would in future also be issued for critical vulnerabilities in cloud services, explicitly including cases where customers do not have to take action themselves. The MSRC promised that this would ensure greater transparency after a security disaster in which presumably Chinese attackers had stolen an Azure master key. But back to the Copilot vulnerability: when Korman pointed out this discrepancy, the Microsoft security team pivoted. They understood, the team wrote in a slightly passive-aggressive tone, that he did not have full insight into the process; the vulnerability, however, had only been classified as "important", not "critical", and thus falls below Microsoft's own threshold for assigning a CVE ID.
Faulty audit logs for months? Not worth mentioning!
Korman was surprised once again: no classification of the vulnerability had been communicated to him so far. Normally, the affected company determines such a severity rating together with the discoverer and discusses it if necessary. In this case, however, Microsoft showed as little interest in discussion as in transparency. On August 14, Korman was informed that the company would not only refrain from issuing a CVE ID but also did not plan to inform customers about the vulnerability.
Meanwhile, the discoverer had established that the problem must have existed for considerably longer than originally suspected. Michael Bargury, founder of an AI start-up, had already drawn attention to flaws in the logging of AI file accesses in the Microsoft cloud in a presentation at the Black Hat security conference in August 2024, a year before Korman's discovery. That left the Redmond software giant plenty of time to address the issue, yet it only responded last week.
For companies using M365 and Copilot, this leaves a bad aftertaste. They have to face the fact that their audit logs may have been incorrect for months and that accesses took place which can no longer be traced. Attackers and industrial spies have presumably had the prompt trick in their toolkit since last year's Black Hat conference at the latest, which will only add to the compliance headaches for those affected.
Microsoft proclaimed its "Secure Future Initiative" as a confidence-building measure after last year's Azure disaster, but has been criticized for months over sloppy security patches and miserable communication. heise security founder JĂĽrgen Schmidt summed this up in a commentary with an earthy choice of words: Bullshit. The latest incident only seems to reinforce that impression.
(cku)