A vulnerability in the AI code editor Cursor allowed remote attackers to exploit an indirect prompt injection issue to modify sensitive MCP files and execute arbitrary code.
Tracked as CVE-2025-54135 (CVSS score of 8.6), the flaw existed because Cursor did not require user approval when creating a sensitive MCP file.
The security defect allowed an attacker to write a dotfile, such as the .cursor/mcp.json file, through an indirect prompt injection, and then trigger remote code execution (RCE) without the user’s approval.
“If chained with a separate prompt injection vulnerability, this could allow the writing of sensitive MCP files on the host by the agent. This can then be used to directly execute code by adding it as a new MCP server,” Cursor’s advisory reads.
According to Aim Labs, which discovered the bug and called it CurXecute, the issue is that suggested mcp.json edits immediately land on disk and Cursor executes them, before the user accepts or rejects them.
Thus, an attacker can add a standard MCP server that exposes the agent to untrusted data, then supply a prompt that instructs the agent to improve mcp.json, resulting in Cursor launching the MCP server in the modified file, which leads to RCE.
“This happens before the user has any chance to approve or reject the suggestion – providing the attacker with an arbitrary command execution,” Aim Labs underlines.
Any third‑party MCP server that processes external content is susceptible to the attack, including customer support tools, issue trackers, and search engines, Aim Labs says.
Addressed in Cursor version 1.3, this was not the only code execution flaw resolved in the AI agent recently. Another one, tracked as CVE-2025-54136 (CVSS score of 7.2), could have allowed attackers to swap harmless MCP configuration files with malicious commands, without triggering a warning.
“If an attacker has write permissions on a user’s active branches of a source repository that contains existing MCP servers the user has previously approved, or an attacker has arbitrary file-write locally, the attacker can achieve arbitrary code execution,” Cursor notes.
Another indirect prompt injection attack against Cursor was flagged by BackSlash and HiddenLayer. It was related to Cursor’s Auto-Run mode, where commands would be automatically executed, without requesting permissions, and was addressed in Cursor version 1.3.
Users could define a list of commands that the AI agent had to request user permissions to run, but this protection could be bypassed by including the prompt injection in the comment block within a git repository’s Readme.
When the victim clones the repository, Cursor reads the instructions and follows them, which allows the attacker to exfiltrate sensitive information from the system, chain legitimate tools to harvest and exfiltrate files, or perform other malicious actions, without warning the victim, HiddenLayer says.
“We found no fewer than four ways for a compromised agent to bypass the Cursor denylist and execute unauthorized commands,” BackSlash notes.
Related: Flaw in Vibe Coding Platform Base44 Exposed Private Enterprise Applications
Related: The Wild West of Agentic AI – An Attack Surface CISOs Can’t Afford to Ignore
Related: Google Says AI Agent Thwarted Exploitation of Critical Vulnerability
Related: Malicious NPM Packages Target Cursor AI’s macOS Users

