A vulnerability in Google’s AI assistant Gemini allowed attackers to leak a victim’s private meetings via Google Calendar events, cybersecurity firm Miggo reports.
The attack involved creating a malicious calendar event and sending an invite to the targeted user.
Using a payload in the Calendar event’s description, the indirect prompt injection attack bypassed Calendar’s privacy controls to access meeting data and create deceptive events without user interaction.
The attack, Miggo explains, abused Calendar’s integration with Gemini, where the AI functions as an assistant, parsing all event data, including titles, times, attendees, and descriptions.
“Because Gemini automatically ingests and interprets event data to be helpful, an attacker who can influence event fields can plant natural language instructions that the model may later execute,” Miggo notes.
The cybersecurity firm discovered it was possible to create a calendar description that would instruct Gemini to summarize a victim’s meetings, including private ones, write the data in the description of a new calendar event, and deliver a harmless response to the user, to hide the malicious actions.
“The payload was syntactically innocuous, meaning it was plausible as a user request. However, it was semantically harmful when executed with the model tool’s permissions,” Miggo notes.
The payload was triggered when the user asked Gemini a question about their schedule, and resulted in the AI creating a new calendar event containing the user’s data in the description. The new calendar event with the victim’s private meeting data was accessible to the attacker, Miggo says.
As the cybersecurity firm notes, the attack was successful because it relied on seemingly innocuous instructions that any user might give to Gemini. The context and intent made it malicious and dangerous.
“This shift shows how simple pattern-based defenses are inadequate. Attackers can hide intent in otherwise benign language and rely on the model’s interpretation of language to determine the exploitability,” Miggo notes.
The cybersecurity firm reported the findings to Google, which confirmed the vulnerability and addressed it.
Related: Vibe Coding Tested: AI Agents Nail SQLi but Fail Miserably on Security Controls
Related: New ‘Reprompt’ Attack Silently Siphons Microsoft Copilot Data
Related: ‘ZombieAgent’ Attack Let Researchers Take Over ChatGPT
Related: Google Patches Gemini Enterprise Vulnerability Exposing Corporate Data

