EchoLeak: First AI 0-Click vulnerability in Microsoft Copilot

Copilot[German]Security researchers have discovered the first zero-click vulnerability in an AI application. Unsurprisingly for me, this concerns Microsoft 365 Copilot. Attackers could use this vulnerability, known as EchoLeak, to force Microsoft 365 Copilot to exfiltrate data.

Microsoft "imposes" Copilot on all Office users if their administrators do not intervene accordingly. This is particularly problematic because the old GPOs for deactivation no longer work. In addition to the fact that users are suddenly provided with unwanted functions, the question of security also arises.

Is confidential data being leaked unintentionally? Could a security vulnerability put my company environment at risk? These are certainly questions that concern IT managers. Let me remind you of my blog post BlackHat 2024: Remote code execution attack on M365 Copilot via email, where such a scenario was outlined. Now there is the next case.

First AI vulnerability discovered in Copilot

It is an addendum, as AIM Labs already disclosed this EchoLeak vulnerability on June 11, 2025 in the blog post Breaking down 'EchoLeak', the First Zero-Click AI Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot.

Security researchers from the startup Aim Labs have discovered a critical zero-click AI vulnerability, which they have named EchoLeak. The vulnerability is in Copilot, which is being rolled out with Microsoft 365 (M365). The security researchers outline a chain of attack known as "LLM Scope Violation" that can be applied to Copilot to extract information. AIM Labs writes that this new technique for abuse could potentially be used in other RAG-based chatbots and AI agents.

The abbreviation RAG stands for Retrieval-Augmented Generation, and refers to AI-supported chatbots that use an additional layer of information retrieval and generation. The AI chatbot combines the strengths of search engines with the possibilities of large language models (LLMs). Essentially, an LLM is provided with relevant information from an external knowledge database or knowledge base to generate more accurate and contextualized answers.

Normally, it is assumed that only employees of a company have access to Copilot, so that the information retrieved remains within the company. On an ad hoc basis, however, I ask myself what happens if an employee manages to retrieve sensitive information from the HR department about employees' salaries or sensitive emails from the management that have been "auto-evaluated by Copilot" in an internal company LLM?

Remote attacks via email on Copilot

But the EchoLeak vulnerability mentioned above seems even more damgerpis, if I'm interpreting it correctly. The security researchers state that unauthorized third parties can retrieve information from Copilot, even though the M365 Copilot interface is only accessible to company employees.

The security researchers have probably taken the attack method mentioned in my blog post BlackHat 2024: Remote code execution attack on M365 Copilot via email a little further. They write: To successfully carry out an attack, an attacker only needs to send an email to the victim. Unless the sender's email has somehow been restricted at the recipient's end (so that no information is returned to the sender), access to the "Copilot crown jewels" is open.

The background to this is that Copilot is integrated into the Office applications and has access to all of the company's documents. Copilot is supposed to analyze these documents for content so that prompts ("give me all employees with a monthly salary greater than 7,000 Dollar") provide the user with the relevant information.

When an email arrives in Outlook, techniques could be used to access Copilot and instruct this AI solution to return certain information. This information could then be transmitted to the attacker via the active content of the email. All the security mechanisms built into Copilot could be bypassed in the process.

Several vulnerabilities reported

According to the security researchers, they have reported several attack chains on Copilot to Microsoft's MSRC team that can be exploited. As a zero-click AI vulnerability, EchoLeak opens up far-reaching possibilities for data exfiltration and blackmail attacks for motivated threat actors, according to the discoverers.

These attack methods and the resulting possibilities are described in more detail in the security researchers' article linked above. There is also an FAQ with answers as to who could have been affected and what data could have been leaked. However, the discoverers write that they are not aware of any cases of data leakage.

It took Microsoft five months to get the fix

The security researchers reported their discovery (according to this source) to the Microsoft Security Response Center in January 2025. According to the source, it then took a full five months for anything to happen. Sounds much more positive in Microsoft's own words. A Microsoft spokesperson wrote to the source:

"We are grateful to Aim for responsibly reporting the identified issue so that it could be fixed before our customers were impacted. We have already updated our products to mitigate this issue and no action is required for customers. We are also in the process of implementing additional defense-in-depth measures to further strengthen our security measures."

The security researchers announced that those responsible for security at Microsoft categorized the discovery of the attack vector as "groundbreaking". This may be just a stretch of the imagination, because based on my blog post BlackHat 2024: Remote code execution attack on M365 Copilot via email , Microsoft should potentially have been aware of the problem. However, it is possible that the attack variants and the scope were unclear until the report was made. This case shows once again what potential risks lurk in a world in which agents and chatbots continue to develop and are sometimes unleashed on users on a weekly basis – largely unchecked.

My thoughts on this

I am particularly concerned about the security aspect of the above issue, because there is currently a "rat race" between providers to see who is the quickest to destroy everything with AI functions. The user is simply not included in this equation. This makes the whole thing even more explosive.

I have pointed out the energy consumption for these AI solutions and the GDPR and security issues in various blog posts. But the benefits of these AI approaches also seem highly questionable (I have already touched on this before).

I just read an article before on software developer tools: A fifth of IT managers don't see AI as an improvement. In the area of software development, the experience of IT managers with regard to AI-supported software development is very sobering. Only marginal benefits are reported across the board – in other words, the stuff is pretty useless when costs are weighed against benefits. The above article is based on a 2025 State
of AI code quality survey by Qodo.

I also came across the article Enterprises are getting stuck in AI pilot hell, say Chatterbox Labs execs. The editorial team at The Register conducted an interview with Danny Coleman, CEO, and Stuart Battersby, CTO of Chatterbox Labs. They discuss the question of why companies have so far been slow to move from AI pilot tests to production use. The key message: before AI becomes commonplace in companies, business leaders would need to commit to an ongoing safety testing program that is attuned to the nuances of AI models. But if the management has no idea, something like the one outlined above will emerge and everyone will rub their eyes "how could this happen, nobody could have guessed". Or how do you see it?

Similar articles
US Congress bans the use of Microsoft AI solution Copilot
Microsofts Copilot+PC, a privacy and security nightmare – Part 2
Copilot+AI: Recall, a security disaster – AI-assisted theft

This entry was posted in Security, Software and tagged , , . Bookmark the permalink.

Leave a Reply

Your email address will not be published. Required fields are marked *

Note: Please note the rules for commenting on the blog (first comments and linked posts end up in moderation, I release them every few hours, I rigorously delete SEO posts/SPAM).