Researchers have shown how popular AI systems can be tricked into processing malicious instructions through an indirect prompt injection attack that involves image scaling.
Image scaling attacks against AI are not a new concept, but experts at cybersecurity research and consulting firm Trail of Bits have now shown how the technique can be leveraged against modern AI systems.
AI products, particularly those that can process large images, often automatically downscale an image before sending it to the core AI model for analysis.
Trail of Bits researchers showed how threat actors can create a specially crafted image that contains a hidden malicious prompt. The attacker’s prompt is invisible in the high-resolution image, but it becomes visible when the image is downscaled by preprocessing algorithms.
The low-resolution image with the visible malicious prompt is passed on to the AI model, which may interpret the message as a legitimate instruction.
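To illustrate the underlying mechanism, here is a minimal Python sketch, not Trail of Bits' actual tooling, of how such a crafted image might work. It assumes the target pipeline downscales with nearest-neighbor resampling at a known factor; the scale factor, image sizes, and the embed helper are all hypothetical, and a real attack would need to match the exact resampling algorithm (nearest-neighbor, bilinear, bicubic, etc.) used by the product's preprocessor.

```python
# Hypothetical sketch of an image scaling attack. Assumes the target
# pipeline uses nearest-neighbor resampling with a known scale factor;
# real preprocessors vary, and the attack must match theirs exactly.
from PIL import Image

SCALE = 8  # assumed downscale factor of the target pipeline

def embed(cover: Image.Image, payload: Image.Image) -> Image.Image:
    """Overwrite only the pixels that nearest-neighbor sampling keeps."""
    out = cover.convert("RGB")
    payload = payload.convert("RGB")
    w, h = payload.size
    for y in range(h):
        for x in range(w):
            # Pillow's NEAREST resize samples roughly the center of each
            # SCALE x SCALE block, so one altered pixel per block controls
            # the downscaled output while staying inconspicuous at full size.
            out.putpixel((x * SCALE + SCALE // 2, y * SCALE + SCALE // 2),
                         payload.getpixel((x, y)))
    return out

cover = Image.new("RGB", (800, 800), "white")    # benign-looking cover image
payload = Image.new("RGB", (100, 100), "white")  # would carry rendered prompt text
crafted = embed(cover, payload)

# What the model receives after preprocessing: the payload re-emerges.
revealed = crafted.resize((100, 100), Image.NEAREST)
```

Real attacks of the kind Trail of Bits demonstrated must be tuned to the specific interpolation behavior of each preprocessing backend, which is considerably more involved than this toy example.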

Trail of Bits demonstrated the potential impact of the attack by hiding text that instructed the AI model to exfiltrate the user’s calendar data.
AI tools are increasingly integrated with other applications and services, particularly in enterprise environments, and researchers regularly show how AI assistants can be abused for sensitive data theft and manipulation through hidden prompts.
Trail of Bits said its image scaling attack works against the Gemini command-line interface (CLI), Gemini’s web and API interfaces, Vertex AI Studio, Google Assistant, Genspark, and likely other products.
In some cases, particularly when a CLI is used, the victim never sees the rescaled image (in which the malicious prompt is visible) before it is processed by the AI model, making the attack even harder to detect.
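For users and defenders, one way to spot such an attack is to reproduce the preprocessing step locally and inspect what the model would actually receive. The sketch below is a generic illustration rather than a feature of any of the affected products; the target resolution and resampling filter are assumptions that would need to match the specific pipeline.

```python
# Defensive sketch: preview an image the way an AI pipeline might see it.
# The size and filter below are assumptions; match them to the actual
# preprocessing of the product in question for a meaningful preview.
from PIL import Image

def preview_downscaled(path: str, size=(512, 512), resample=Image.BICUBIC):
    """Display the image after the kind of rescaling an AI pipeline applies."""
    img = Image.open(path).convert("RGB")
    img.resize(size, resample).show()  # look for text visible only at this size

preview_downscaled("attachment.png")  # hypothetical input file
```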
The security firm has released an open source tool named Anamorpher, which can be used by other researchers to craft and visualize image scaling attacks against AI systems.
Related: OneFlip: An Emerging Threat to AI that Could Make Vehicles Crash and Facial Recognition Fail
Related: GPT-5 Has a Vulnerability: Its Router Can Send You to Older, Less Safe Models
Related: Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ for Enterprise

