EchoLeak Vulnerability: Microsoft 365 Copilot Hit by RAG Exploitation

A Vulnerable Retrieval-Augmented Generation (RAG) System Can Obscure an Even Greater Threat

The recent discovery of EchoLeak highlights an even deeper security concern surrounding vulnerable RAG architectures. A critical vulnerability—codenamed CVE-2025-32771—was identified as allowing malicious actors to extract sensitive data via Microsoft 365’s Teams’ Copilot feature. This issue exemplifies how a seemingly isolated flaw can be part of a more complex web of interconnected vulnerabilities, leading to significant data breaches.

In reality, the exploit involves a combination of multiple vulnerabilities working in tandem. The main attack chain encompasses three categories from the OWASP Top 10: LLM01 (prompt injection), LLM02 (leakage of confidential information), and LLM04 (data and model poisoning). Together, these create a potent vector for malicious data exfiltration through AI-assisted platforms.

Data Exfiltration via Poisoned RAG

The core method involves an injection command embedded indirectly within the RAG system itself. Attackers craft one or several emails containing carefully worded instructions designed to appear as natural human correspondence. These emails bypass standard content filters by exploiting how RAG interprets and sources information from external prompts.

To increase the likelihood that Copilot will execute the malicious instruction, adversaries can leverage social engineering tactics—targeting specific knowledge of what kinds of queries the victim is likely to perform. Additionally, if the underlying system uses a vector database to store embedded data, the attacker can maximize their footprint by flooding the latent space with numerous crafted emails or a single, lengthy message divided into multiple smaller segments. Each segment acts as a “chunk,” occupying specific points within the system’s embedding space, thus increasing the attack’s effectiveness.

The primary attack sequence involves these two options: either inserting a link pointing to a malicious server within the prompts or exploiting the way information is fetched. Initially, attackers considered embedding a URL pointing to a malicious server within the prompt, hoping that Copilot would fetch and execute it. However, since Copilot restricts external links—preventing access to outside resources unless explicitly whitelisted—the attack faced hurdles.

Nevertheless, Markdown reference links—like [ref]—can bypass some of these restrictions, serving as an indirect method to deliver malicious payloads. These links can be crafted to appear benign but still facilitate the exfiltration process when processed by the AI.

Using Teams URLs for “Zero-Click” Attacks

To eliminate the need for a user to click on a malicious link, attackers can exploit embedded URLs within Microsoft Teams. For example, generating a special URL that, when loaded by the browser, automatically triggers the retrieval of embedded content—such as images—without any user interaction.

Under normal security policies, images are restricted in origin, only permitted from specific Microsoft-controlled domains. These domains include SharePoint, which is integrated into Microsoft 365’s ecosystem. Some of these URLs allow the system to execute requests with the user’s active permissions, such as retrieving embedded data from SharePoint sites after the user has logged in and accepted an invitation. This means that with minimal user interaction—simply opening a message—the attacker can obtain sensitive data under the guise of legitimate activity.

To bypass this, malicious actors can employ alternate URLs within Teams that perform the same function but do not require any user-triggered actions. These URLs exploit the seamless integration of Teams and SharePoint to stealthily fetch and deliver confidential information.

Another mitigation strategy involves instructing Copilot not to source or reference specific emails, especially suspected malicious ones, thus avoiding dissemination or execution of harmful prompts.

Following the identification of these vulnerabilities, Microsoft rolled out a server-side patch in May that addressed the weaknesses. Official statements indicate no confirmed exploitation to date, but the potential risk underscores the importance of continuous security monitoring.

In summary, the EchoLeak case underscores how vulnerabilities in AI-powered systems—particularly those involving RAG and collaborative platforms like Microsoft 365—can be exploited to execute sophisticated and stealthy data breaches. These exploits leverage indirect injection techniques, social engineering, and platform integrations to circumvent conventional security safeguards, emphasizing the need for ongoing vigilance and robust safeguards in enterprise AI deployments.

Dawn Liphardt

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.