A new report from Grip Security analyzes 23,000 SaaS application environments. Among its findings: 100% of analyzed companies operate SaaS environments with embedded AI; there has been a 490% year-over-year spike in public SaaS attacks; and 80% of documented incidents involve PII and/or customer data.
“But what really surprised me,” says Chad Holmes, product marketing consultant at Grip Security, “is that organizations have an average of 140 AI-enabled SaaS environments.” If an AI-enabled app is breached, any integral agentic AI can be used first to access data from connected systems, then to cascade from that one breach into a breach of every other AI-enabled environment within the organization – and potentially to expand further into AI-enabled environments in other organizations. The result is chaos.
The poster boy example of this cascading chaos is the Salesloft Drift incident (the ‘Great SaaS Breach of 2025’). Ultimately more than 700 organizations were affected, including security firms Cloudflare, Palo Alto Networks, Zscaler and CyberArk. UNC6395 attackers compromised Salesloft’s internal systems, starting with their GitHub repositories and moving from there into the Drift AWS environment. Here they stole the active OAuth and refresh tokens used by customers to connect the Drift Chatbot to local installations of Salesforce and other apps such as Slack.
Armed with the legitimate, pre-approved OAuth tokens, the attackers were able to impersonate Drift and log directly into the Salesforce installations of companies also using the Drift chatbot. One breach of a SaaS app (Drift) cascaded into hundreds of compromises in different companies across the globe.
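To see why a replayed token is so effective, consider how a typical OAuth resource server checks an incoming request: it validates the bearer token itself (expiry, scopes), not the party presenting it. The following is a minimal, hypothetical sketch of that check; the token fields and scope names are illustrative, not taken from Salesforce or Drift.

```python
import time

# Hypothetical sketch: how a typical OAuth resource server checks a
# bearer token. The check validates the token itself, never the party
# presenting it, so a stolen token passes exactly as the legitimate
# integration's would.

def validate_bearer_token(token: dict, required_scope: str) -> bool:
    """Accept the request if the token is unexpired and carries the scope."""
    if token.get("expires_at", 0) <= time.time():
        return False  # expired access token: rejected
    return required_scope in token.get("scopes", [])

# A token exfiltrated from a compromised integration (e.g. a chatbot's
# stored OAuth grant) is still valid when replayed by an attacker:
stolen = {"expires_at": time.time() + 3600, "scopes": ["api", "refresh_token"]}
print(validate_bearer_token(stolen, "api"))  # True: possession equals access
```

In a real deployment, signature and issuer checks are also involved, but none of them identify the human or machine actually holding the token.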
Now it is fair to say that this incident prompted a drive to improve the security of SaaS apps and their AI implementations, but Grip is not convinced it will be enough. “One thing we’re seeing,” comments Holmes, “as we’ve moved outside the traditional perimeter, outside firewalls and network level protections, identity is the new perimeter. The focus is on identity, and if we have that identity, we can log into any environment anywhere.” For attacks against SaaS AI, the key ‘identity’ is a valid OAuth token.
It’s worth briefly examining why, absent proper controls, this is such a threat. Much of it comes down to the current need for speed in business. SaaS developers are tempted to rapidly embed agentic AI within their own products to improve efficiency ahead of the competition, and they don’t always make the implications apparent to their customers. The customer may thus have installed shadow AI without knowing it. ‘Shadow’ describes the use of AI, or autonomous agentic AI, without formal oversight from the IT and security departments – and if the customer is unaware of the AI within the SaaS app, it is automatically ‘shadow’.
Customers adopt these apps too hastily, again to rapidly improve their own efficiency, and often without auditing them. Meanwhile, they have become so accustomed to issuing OAuth tokens that they may do so automatically as required by the SaaS app without considering wider implications from installing shadow AI.
While complexity is the enemy of security, SaaS both disguises and multiplies complexity through poor visibility into its shadow AI. An attacker can often gain better visibility into a SaaS app than its own customers simply by stealing the right OAuth access and/or refresh token (courtesy of the modern infostealer, which can enter, scrape and depart without the victim realizing it). Armed with the right OAuth token, the attacker can enter the app unhindered and start gathering data from whatever other systems are connected to the agentic system, simply by feeding it tailored prompts through the APIs.
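The refresh token mentioned above is the more dangerous theft, because it can be exchanged for fresh access tokens indefinitely until the underlying grant is revoked. Below is a simplified, hypothetical sketch of that refresh flow, using an in-memory stand-in for an authorization server; the class and token values are illustrative only.

```python
import secrets
import time

# Hypothetical sketch of the OAuth refresh grant, with an in-memory
# stand-in for an authorization server. It illustrates why a stolen
# *refresh* token is worse than a stolen access token: it mints fresh
# short-lived access tokens until the grant itself is revoked.

class AuthServer:
    def __init__(self):
        self.revoked = set()  # refresh tokens whose grants were revoked

    def refresh(self, refresh_token):
        if refresh_token in self.revoked:
            return None  # grant revoked: the attack window closes here
        return {  # otherwise mint a brand-new short-lived access token
            "access_token": secrets.token_urlsafe(16),
            "expires_at": time.time() + 3600,
        }

server = AuthServer()
stolen_refresh = "rt-exfiltrated-by-infostealer"  # illustrative value
first = server.refresh(stolen_refresh)   # attacker gets a valid access token
second = server.refresh(stolen_refresh)  # ...and another one later
server.revoked.add(stolen_refresh)       # revocation is the durable fix
print(server.refresh(stolen_refresh))    # None: replay no longer works
```

This mirrors the standard response to token theft: revoking the compromised grants, not merely changing passwords, is what makes previously stolen tokens stop working.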
However, the danger is not limited to the single SaaS app containing the shadow AI’s own environment – it can cascade to other environments within the organization. An ‘IdentityMesh’, the unintentional merging of separate identities into a single unified authentication context, allows the attacker inside one SaaS app to access all other agentic systems within the environment. If one of those has access to third party apps or to shared service accounts used across organizations, the attacker can pivot the attack across multiple organizations – and the cascading effect of a single stolen OAuth token in one company can cause the loss of data in many others.
“AI is not a future risk, nor is it ‘just an IT problem’,” notes the report. “And crucially, governing it is not optional. It is now one of the most influential forces shaping how modern businesses operate and take on risk.”
The report suggests that 2026 may be the worst year yet for SaaS breaches, warning that the increased blast radius may expand further as autonomous workflows outpace existing security controls. There are attempts at regulation, but globally, such regulations are moving in different directions with conflicting mandates, uneven enforcement and increased compliance friction. “AI regulation will get messier before it gets clearer,” says the report.
But the chaos can be brought under control. “The way out is not more policy or slower innovation. It is a shift in how AI is governed in practice,” adds the report. The key is increased visibility into, and understanding of, SaaS shadow AI, and more dynamic governance.
“Leaders who succeed replace static approvals with continuous oversight, discovery, and risk-based controls. AI becomes a managed third-party risk, monitored continuously, aligned to business outcomes, and governed with the same rigor as any critical supplier.”
Learn More at the AI Risk Summit
Related: The Blast Radius Problem: Stolen Credentials Are Weaponizing Agentic AI
Related: Microsoft Highlights Security Risks Introduced by New Agentic AI Feature
Related: Infostealers: The Silent Smash-and-Grab Driving Modern Cybercrime
Related: Security Analysis of Moltbook Agent Network: Bot-to-Bot Prompt Injection and Data Leaks

