For years, the gold standard of online safety has been “don’t click suspicious links.” But recent research has revealed a more disturbing reality: you can be compromised by clicking a perfectly legitimate link to a service you trust.
In January 2026, Varonis Threat Labs disclosed a vulnerability named Reprompt. It allowed attackers to hijack a Microsoft Copilot session and exfiltrate highly personal data, all with a single click and no further interaction from the user.
The Architecture of a Silent Theft
The Reprompt attack didn’t rely on malware or broken code. Instead, it weaponized the very features designed to make AI helpful. The exploit worked through a three-stage “chain” that bypassed traditional security thinking.
- The “Hidden” Prompt – Attackers used a common web feature called the q= parameter. In a standard link like copilot.microsoft.com/?q=[instructions], any text placed in that “q” section is automatically treated as a user prompt the moment the page loads. By sending a target a link with malicious instructions pre-loaded, the attacker forced Copilot to execute those commands the second the user clicked.
- The Double-Request Bypass – Copilot has internal “guardrails” meant to stop it from leaking private info. However, researchers found these filters were primarily focused on the first attempt at a sensitive action. By simply commanding the AI to “repeat this twice,” attackers found that the first request would be blocked while the second attempt frequently slipped through undetected.
- The Unending Conversation – Once the initial “gate” was opened, the AI was told to connect to an attacker-controlled server. This created a “chain-request” where the attacker’s server could issue follow-up instructions based on what the AI had already revealed. Because this back-and-forth happened between the AI and the server, it was invisible to the user and bypassed traditional client-side security tools.
What the AI Could Reveal
Once compromised, the AI could be forced to divulge anything it had access to in that session:
- Personal Identity: Your full name and location clues.
- Active Context: Summaries of files you had recently opened or accessed.
- Privacy Secrets: Past conversation history and upcoming travel or vacation plans.
Crucially, the attack was persistent. Because it lived within your active, authenticated browser session, the data exfiltration could continue even after you closed the Copilot chat tab.
Staying Safe in the “Reprompt” Era: Official Varonis Advice
The “Reprompt” vulnerability has been patched, but the underlying risk, that an AI assistant can be “tricked” into leaking your data via a single click, is a permanent feature of the new AI landscape. Varonis Threat Labs, the researchers who discovered the exploit, have issued a set of specific safety protocols for both everyday users and corporate administrators.
For the Individual User: Trust But Verify
Varonis emphasizes that because this attack uses legitimate Microsoft links, traditional “red flags” like a weird URL might not be present.
- Treat AI Links Like Attachments: You wouldn’t open a random .exe file from an unknown sender. Apply that same logic to links that open AI tools like Copilot. Only click them if you were expecting them from a source you trust.
- Spot the “Pre-Fill”: If you click a link and Copilot opens with a question already typed out or, worse, already generating an answer you didn’t ask for, do not interact with it.
- The Emergency Reset: If your AI begins acting strangely or probing for personal details (like home address or travel plans) out of context:
- Close the browser tab immediately.
- Delete the suspicious chat history.
- Log out of the app and log back in to refresh your session tokens.
For the Organization: Hardening the “Blast Radius”
For businesses, Varonis warns that AI assistants shouldn’t be treated as standard software, but as “insider threats” that have been given keys to the kingdom.
- The “Double-Check” Rule for AI: Developers and vendors must ensure that safety guardrails aren’t just one-time checks. The “Reprompt” attack worked because Copilot stopped checking for data leaks after the first request. Safeguards must be persistent across the entire conversation.
- Enforce Strict “Least Privilege”: If an employee only needs access to the “Marketing” folder, the AI assistant they use should also only be able to see that folder. Organizations must audit their “blast radius”, the total amount of data an AI could theoretically touch if compromised.
- Treat URLs as Untrusted Input: System admins should configure security tools to validate and scrub externally supplied inputs, including “deep links” that attempt to pre-populate AI prompts.
The Bottom Line
The Varonis research proves that “safety filters” are often just speed bumps for a determined attacker. While Microsoft has closed this specific door, the most effective defense remains a skeptical user. In the age of AI, your assistant is only as secure as the last link you clicked.