CYBERNEWSMEDIA Network:||
AD · 970×250

Artificial Intelligence

New ‘Reprompt’ Attack Silently Siphons Microsoft Copilot Data

The attack bypassed Copilot’s data leak protections and allowed for session exfiltration even after the Copilot chat was closed. The post New ‘Reprompt’ Attack Silently Siphons Microsoft Copilot Data appeared first on SecurityWeek.

Microsoft Copilot attack

Security researchers at Varonis have discovered a new attack that allowed them to exfiltrate user data from Microsoft Copilot using a single malicious link.

Dubbed Reprompt, the attack bypassed the LLMs data leak protections and allowed for persistent session exfiltration even after the Copilot was closed, Varonis says.

The attack leverages a Parameter 2 Prompt (P2P) injection, a double-request technique, and a chain-request technique to enable continuous, undetectable data exfiltration.

The Reprompt Copilot attack starts with the exploitation of the ‘q’ parameter, which is used on AI platforms to deliver a user’s query or prompt via a URL. All it takes is for the user to click on the link.

“By including a specific question or instruction in the q parameter, developers and users can automatically populate the input field when the page loads, causing the AI system to execute the prompt immediately,” Varonis explains.

A threat actor, the cybersecurity firm notes, could abuse the feature to make Copilot execute unwanted actions. The attack resulted in one-click compromise and, because it leveraged the active user session, it persisted after the chat was closed.

To prevent sensitive information leaks, Copilot typically fetches URLs only if a valid reason has been provided, and reviews and alters sensitive information before returning it.

However, Varonis discovered that the protections only applied to the initial request, and that they could be bypassed by supplying each request multiple times.

The researchers added instructions for Copilot to perform each task twice, which resulted in the LLM leaking user information.

Specifically, they requested it to fetch a URL containing a secret phrase twice. Copilot removed the sensitive information on the first try, but included it in the second response.

Next, the researchers developed a chain request, where Copilot retrieved the new instruction directly from their attack server.

Each request instructed it both to exfiltrate more user information and to fetch another instruction, in a continuous exchange with the server.

This ongoing exchange, Varonis notes, would allow an attacker to exfiltrate as much information as possible, requesting more data based on previous responses.

Furthermore, with all commands sent from the server, hidden in the follow-up requests, victims could not determine what data was leaked after the initial prompt.

“Client-side monitoring tools won’t catch these malicious prompts, because the real data leaks happen dynamically during back-and-forth communication — not from anything obvious in the prompt the user submits,” Varonis says.

Microsoft has resolved the underlying issue. The attack does not affect enterprise customers using Microsoft 365 Copilot, Varonis notes.

“We appreciate Varonis Threat Labs for responsibly reporting this issue. We have rolled out protections that address the scenario described and are implementing additional measures to strengthen safeguards against similar techniques as part of our defense-in-depth approach,” a Microsoft spokesperson told SecurityWeek.

*Updated with statement from Microsoft.

Related: ‘EchoLeak’ AI Attack Enabled Theft of Sensitive Data via Microsoft 365 Copilot

Related: Rethinking Security for Agentic AI

Related: Chrome Extensions With 900,000 Downloads Caught Stealing AI Chats

Related: Militant Groups Are Experimenting With AI, and the Risks Are Expected to Grow

Latest News

CYBERNEWSMEDIAPublisher