

Google Addresses Vertex Security Issues After Researchers Weaponize AI Agents

The post Google Addresses Vertex Security Issues After Researchers Weaponize AI Agents appeared first on SecurityWeek.


Palo Alto Networks has shared details about how its researchers weaponized AI agents built on Google Cloud’s Vertex AI development platform.

The research focused on the Vertex Agent Engine and the Agent Development Kit (ADK), which enable developers to create, deploy, manage, and scale AI agents.

The Palo Alto Networks researchers found that these agents could be compromised by attackers and turned into ‘double agents’, enabling various types of malicious activities, including exfiltrating data, creating backdoors, and compromising infrastructure.

One of the main issues uncovered by the researchers concerns the Per-Project, Per-Product Service Agent (P4SA), which is associated with the user-deployed AI agent. A service agent is a service account that enables Google Cloud Platform (GCP) services to access resources.

The problem, according to Palo Alto, is that P4SA has excessive permissions by default. The company’s researchers showed that these permissions could be abused to obtain a GCP service agent’s credentials and leverage them to move from the AI agent’s execution context into the owner’s project and the associated data storage.

“This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into an insider threat,” the researchers explained.

In addition, they showed how an attacker could abuse the compromised P4SA credentials to gain unrestricted access to the Google project that hosts Vertex AI. An attacker could use this access to download container images from private repositories.

“These images form the core of the Vertex AI Reasoning Engine. Gaining access to this proprietary code not only exposes Google’s intellectual property, but also provides an attacker with a blueprint to find further vulnerabilities,” the researchers noted.

They also found that the compromised credentials could be used to access restricted Artifact Registry repositories containing other images that could be useful to attackers, as well as Google Cloud Storage buckets containing potentially sensitive information.

The researchers also came across a file that an attacker may be able to manipulate for remote code execution within the agent’s environment. A threat actor could use this to create a powerful and persistent backdoor.

Palo Alto has shared its findings with Google, and the tech giant has addressed the issue by revising its documentation to point out potential risks. 

Google also recommends using Bring Your Own Service Account (BYOSA) to secure Agent Engine deployments. BYOSA lets users enforce the principle of least privilege by granting the agent only the permissions it requires to function.
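In practice, following that guidance means creating a dedicated, minimally scoped service account and attaching it to the agent at deployment time rather than relying on the default P4SA. A minimal sketch using the gcloud CLI — the project ID, service-account name, and role shown here are illustrative assumptions, and the exact role set depends on what a given agent actually needs:

```shell
# Create a dedicated service account for the agent
# (project and account names are illustrative).
gcloud iam service-accounts create my-agent-sa \
    --project=my-project \
    --display-name="Least-privilege Agent Engine SA"

# Grant only the narrow role(s) the agent needs, e.g. calling Vertex AI.
# Avoid broad roles such as roles/editor on the project.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-agent-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/aiplatform.user"
```

An agent deployed to run as this identity holds only the permissions explicitly granted above, so a compromised agent cannot pivot into the owner's wider project the way the over-permissioned default service agent could.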

Additionally, Google noted that strong, non-overridable controls are in place to prevent service agents from altering production images.

Related: Palo Alto Networks, Google Cloud Strike Multibillion-Dollar AI and Cloud Security Deal

Related: AI Supply Chain Attack Method Demonstrated Against Google, Microsoft Products

Related: AI Systems Vulnerable to Prompt Injection via Image Scaling Attack

