

Cyber Insights 2026: API Security – Harder to Secure, Impossible to Ignore

API cybersecurity will be a ping pong ball, battered between the rackets of AI-assisted attackers and AI-assisted defenders.

API Security Insights
SecurityWeek’s Cyber Insights 2026 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their insights. Here we explore securing application programming interfaces (APIs), with the purpose of evaluating what is happening now and preparing cybersecurity teams for what lies ahead in 2026 and beyond.

Application programming interfaces (APIs) are essential to the operation of a connected cyberworld. “APIs have become the connective tissue of modern technology and are part of our entire digital world,” explains Chrissa Constantine, senior cybersecurity solution architect at Black Duck. “Some recent estimates show that approximately 83% of internet traffic flows through APIs, which reflects how APIs are deeply connected in our digital lives.”

Chrissa Constantine, Senior Cybersecurity Solution Architect at Black Duck.

Randolph Barr, CISO at Cequence Security, adds, “In many ways, 2026 will mark a phase in which APIs move from ‘just a delivery mechanism’ to the operational backbone of digital business, especially in a world increasingly dominated by agentic AI and monetization imperatives.”

Anything so ubiquitous and important will attract cyberattacks. In July 2024, Akamai reported that it had monitored 26 billion attacks targeting APIs in June 2024 alone, part of a 49% growth in API attacks from Q1 2023 to Q1 2024.

Here’s the rub. It’s going to get much worse in 2026 – and largely because of agentic AI.

The expanding API attack surface

The primary reason for the increase in API attacks will be a new surge in the number of APIs, and where and how they are used. “We’re now entering a new API boom. The previous wave was driven by cloud adoption, mobile apps, and microservices. Now, the rise of AI agents is fueling a rapid proliferation of APIs, as these systems generate massive, dynamic, and unpredictable requests across enterprise applications and cloud services,” comments Jacob Ideskog, CTO at Curity.

The boom in enterprise use of agentic AI is creating an even bigger boom in the proliferation of APIs.

Neil Roseman, CEO at Invicti, adds, “The rise of agentic AI – AI systems capable of autonomous reasoning and task execution – is multiplying the number of APIs in use. Each agent requires APIs to access data, trigger workflows, and interact across applications. This introduces new challenges: dynamically generated APIs that are difficult to inventory, hidden AI-to-AI communications, and increased risk of sensitive data exposure through model integrations. The result is an even larger, more volatile attack surface that traditional security tools cannot keep up with.”

Enterprises are rushing to harness the autonomous power of AI, often with too much haste and not enough understanding. 

Randolph Barr, CISO at Cequence Security.

Barr explains in more detail: “The business push for APIs is intensifying. Traditional human-mediated interactions – for example, call centers, branch visits, manual workflows – are being replaced by automated, always-on services, as retailers, banks, and other industries race to monetize AI-enabled experiences. That means APIs aren’t just internal glue anymore; they are value streams, with the business logic layer exposed, scaled, and monetized.”

The growing use of agentic AI systems and the way they act autonomously, making decisions and triggering workflows, is ballooning the number of APIs in play. “It isn’t just ‘I expose one billing API’,” he continues, “now there are dozens of APIs that feed data to LLMs or AI agents, accept decisions from AI agents, facilitate orchestration between services and micro-apps, and potentially expose ‘agentic’ endpoints (via autonomous scheduling, procurement, and product configuration).”

Each AI agent implicitly introduces new APIs (tools, services, and data connectors) and multiplies the attack surface. “In short,” he says, “APIs are growing horizontally (more endpoints), vertically (more critical business logic), and contextually (embedded into AI/agent flows).”

The effect of this rapid increase in numbers and complexity, suggests Paul Nguyen, co-founder and co-CEO at Permiso, is that organizations will lose inventory control. “By 2026, most enterprises will be unable to answer basic questions. How many API endpoints exist? How many API credentials are in use? What permissions does each credential have? When were they last rotated? This visibility gap becomes a significant security risk.”
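Nguyen’s basic questions translate directly into an audit check. The sketch below is illustrative only – the credential names, scopes, and 90-day rotation policy are invented, and in practice the records would be exported from a secrets manager or API gateway – but it shows the shape of the answer:

```python
from datetime import date, timedelta

# Hypothetical inventory records; real data would come from a
# secrets manager or API gateway export.
CREDENTIALS = [
    {"id": "svc-billing", "scopes": ["billing:read"], "last_rotated": date(2025, 11, 1)},
    {"id": "svc-agent",   "scopes": ["*"],            "last_rotated": date(2024, 3, 15)},
]

MAX_AGE = timedelta(days=90)  # rotation policy assumed for illustration

def audit(creds, today):
    """Flag credentials that carry wildcard permissions or are overdue for rotation."""
    findings = []
    for c in creds:
        if "*" in c["scopes"]:
            findings.append((c["id"], "wildcard scope"))
        if today - c["last_rotated"] > MAX_AGE:
            findings.append((c["id"], "rotation overdue"))
    return findings

print(audit(CREDENTIALS, date(2026, 1, 1)))
# → [('svc-agent', 'wildcard scope'), ('svc-agent', 'rotation overdue')]
```

An organization that cannot produce even this crude report for all of its credentials is exactly the kind of enterprise Nguyen describes.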

Attacking APIs in 2026

“APIs are the most direct link between users and business logic. Attackers know that weak authentication, business logic flaws, and misconfigurations can open paths straight to sensitive data,” warns Roseman. “Meanwhile, shadow APIs – undocumented, forgotten, or misconfigured endpoints – continue to grow, leaving organizations blind to large portions of their attack surface. As a result, APIs are now the top target for web-based attacks.”

Barr adds that in the rush to deploy AI faster, cheaper and AI-first, the adversary advantage grows and is made worse by legacy assumptions. “Many organizations assume their existing web application firewall (WAF), content delivery network (CDN), or API gateway is sufficient. But API security, especially when APIs embody business logic or autonomous agent workflows, requires deeper behavioral and context-aware controls.”

The AI attack surface spans three distinct layers, each requiring specialized defenses, explains Eleanor Watson, IEEE member and AI ethics engineer. “At the data/model layer: adversaries poison training datasets, inject backdoors into retrieval corpora, and compromise model integrity. At the prompt / tooling layer: attackers deploy jailbreaks, execute indirect prompt injections through documents and websites, and manipulate tool-use chains.”

And, “At the API / systems layer: threats include model extraction, policy cloning, API abuse through chained tool invocations, and polymorphic malware generation using code models.”

The Model Context Protocol (MCP), introduced by Anthropic in 2024, is causing particular concern. “Since launching MCP in November 2024, adoption has been rapid: the community has built thousands of MCP servers, SDKs are available for all major programming languages, and the industry has adopted MCP as the de-facto standard for connecting agents to tools and data,” enthused Anthropic on November 4, 2025.

But while MCP has delivered productivity advantages, it has also aggravated API security issues, compounded by the rising incidence of shadow MCP – that is, MCP servers deployed by employees without the oversight, formal approval, or even knowledge of the IT or security teams.

“In 2026, repositories hosting MCP servers, A2A endpoints, and capability plug-ins will become prime targets. Just as NPM, PyPI, and Docker Hub were exploited to deliver poisoned packages, MCP registries and agent marketplaces will be infiltrated with trojanized service manifests and malicious context providers,” warns Pascal Geenens, VP of cyber threat intelligence at Radware. 

Ariel Parnes, COO at Mitiga and former IDF 8200 cyber unit colonel, warns: “The next major cloud-scale breach won’t start in a misconfigured bucket – it’ll start in an MCP API. As organizations plug AI assistants into enterprise data, these new API layers will expose sensitive systems in unpredictable ways. MCP abuse will emerge in 2026 as the central attack vector connecting SaaS, AI, and data exfiltration campaigns. Most enterprises still lack the visibility and controls needed to secure this growing layer of integration.”

Gianpietro Cutolo, Cloud Threat Researcher at Netskope.

Attackers exploited OAuth and third-party app tokens in the Salesforce and Salesloft incidents. “The same threat pattern is now emerging in AI ecosystems. As AI agents and MCP-based systems increasingly integrate with third-party APIs and cloud services, they inherit OAuth’s weakest links: over-permissive scopes, unclear revocation policies, and hidden data-sharing paths,” warns Gianpietro Cutolo, cloud threat researcher at Netskope.

“These integrations will become prime targets for supply-chain and data-exfiltration attacks, where compromised connectors or poisoned tools allow adversaries to silently pivot across trusted AI platforms and enterprise environments.”
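One concrete defense against the over-permissive scopes Cutolo describes is a periodic scope audit: compare what each integration has been granted against what it actually needs. A minimal sketch, assuming a hypothetical integration inventory (the app and scope names are illustrative, not from any specific provider):

```python
# Minimal scopes each integration actually needs (assumed inventory).
MINIMAL_SCOPES = {"crm-sync": {"contacts.read"}}

# Scopes currently granted, e.g. pulled from an OAuth provider's admin API.
GRANTED = {"crm-sync": {"contacts.read", "contacts.write", "files.read"}}

def excess_scopes(granted, minimal):
    """Report OAuth scopes granted beyond each integration's documented needs."""
    return {app: scopes - minimal.get(app, set())
            for app, scopes in granted.items()
            if scopes - minimal.get(app, set())}

print(excess_scopes(GRANTED, MINIMAL_SCOPES))
# → {'crm-sync': {'contacts.write', 'files.read'}}
```

Every scope in the report is a data-sharing path an attacker inherits along with the token.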

In short, “Agentic AI brings new risks in API sprawl – too many unmanaged or shadow API endpoints, not enough governance – prompt injection and context poisoning (attackers manipulate AI inputs via APIs), and chained API exploits (exploit an AI agent and pivot to target interconnected APIs and systems),” says Black Duck’s Constantine.

George Gerchow, CSO at Bedrock Data and faculty at IANS Research, recommends pairing MCP servers with security posture management (SPM). “SPM and MCP servers serve two fundamentally different but complementary purposes in AI security,” he explains. “MCP servers are components of the AI system that enable capabilities, while SPM is the overarching security strategy that monitors and protects the entire system, including the MCP servers.”

AI will be harnessed to attack APIs in 2026

APIs have been a major attack surface for years – the problem is ongoing. Starting in 2025 and accelerating through 2026 and beyond, the rapid escalation of enterprise agentic AI deployments will multiply the number of APIs and increase the attack surface. That alone suggests that attacks against APIs will grow in 2026.

But the attacks themselves will scale and be more effective through adversaries’ use of their own agentic AI. Barr explains: “Agentic AI means that bad actors can automate reconnaissance, probe API endpoints, chain API calls, test business-logic abuse, and execute campaigns at machine scale. Possession of an API endpoint, particularly a self-service, unconstrained one, becomes a lucrative target. And AI can generate payloads, iterate quickly, bypass simple heuristics, and map dependencies between APIs.”

Furthermore, he continues, “Since APIs support AI / agent flows, attackers may target the agent-API junction; for example, by telling an AI agent to call a vulnerable API in unintended ways or tricking the agent into exposing privileged API access.”

In the past, figuring out which pathways an API would use to access user data required considerable guesswork by attackers. Today, explains Inti De Ceukelaire, chief hacker officer at Intigriti, “AIs are particularly good at predicting how APIs and their parameters will look. Now, these pathways can likely be discovered within minutes.”

“Offense use cases,” continues Constantine, “include adversaries weaponizing AI to automate API enumeration, fuzzing, and credential stuffing at scale. Generative models can craft realistic API requests to bypass filters and imitate legitimate user behavior.”

Moiz Virani, CTO and Co-Founder at Momentum.

“New API issues are emerging,” adds Moiz Virani, CTO and co-founder at Momentum, “particularly around security, such as agent-to-agent (A2A) communication vulnerabilities, where a compromised agent could use its access to attack other agents or systems via the APIs. Furthermore, the sheer volume and speed of API calls generated by autonomous agents make rate limiting, abuse detection, and detailed logging / auditing more complex to manage effectively.”
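Rate limiting agent traffic is one of the controls Virani flags as harder to manage at machine speed. A per-identity token bucket is the classic building block; the sketch below is a simplified illustration (the rate and capacity values are invented, and production limiters would be distributed and keyed per agent identity):

```python
import time

class TokenBucket:
    """Token-bucket limiter: an identity gets `rate` requests per second
    on average, with bursts of up to `capacity` requests."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # roughly the burst capacity
```

The hard part for agentic traffic is not the bucket itself but choosing the identity to key it on: a single compromised agent credential shared across workflows defeats per-user limits.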

The API battlefield in 2026 will be intense. Adversarial use of AI will target all enterprise APIs, whether traditional or newly introduced MCP / agentic APIs. In the latter case, the effect of a successful breach could be dramatic.

Securing APIs in the age of AI

With attacks against APIs set to grow in the coming years, we’re likely to see increased efforts in securing them. API security is not impossible, but it has not yet been achieved. In 2026, the deployment of enterprise agentic AI applications will both increase the adversaries’ attack surface and make exploitation more dramatic.

“There are various ways to protect APIs against attacks and abuse. As applications evolve to be more complex, keeping them secure does require a significant investment. I wouldn’t be so sure that they will be better secured in the future, as opening them up to be mainly used by a company’s AI agent may shift the responsibility of security to the agent rather than the API itself,” warns De Ceukelaire. 

“APIs can absolutely be secured, but not through legacy tools designed for web applications. The next generation of API protection must combine continuous visibility, behavioral analytics, context-driven access, intelligent automation, and developer-native testing,” says Cequence Security’s Barr. “Attackers now blend legitimate API calls with malicious sequences that exploit business logic or abuse agentic workflows. Defenders must employ real-time behavioral analytics that profile normal API usage and detect deviations, such as when an AI agent suddenly makes repetitive data-exfiltration calls, or a session token is reused across unrelated transactions. These runtime analytics can allow defenders to spot subtle misuse before it escalates into a breach.”
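The repetitive data-exfiltration pattern Barr describes can be caught even with a crude counter over access logs. Real behavioral analytics would baseline per-identity norms rather than use a fixed threshold; the sketch below (the endpoints, tokens, and threshold are invented for illustration) shows only the shape of the check:

```python
from collections import Counter

# Hypothetical API access log: (session_token, endpoint) pairs.
LOG = [
    ("tok-a", "/orders"), ("tok-a", "/orders/42"),
    ("tok-b", "/export"), ("tok-b", "/export"), ("tok-b", "/export"),
    ("tok-b", "/export"), ("tok-b", "/export"),
]

EXFIL_THRESHOLD = 4  # assumed baseline: legitimate sessions rarely exceed this

def flag_exfiltration(log, sensitive_prefix="/export"):
    """Flag session tokens making repetitive calls to data-export endpoints."""
    counts = Counter(tok for tok, ep in log if ep.startswith(sensitive_prefix))
    return [tok for tok, n in counts.items() if n >= EXFIL_THRESHOLD]

print(flag_exfiltration(LOG))  # → ['tok-b']
```

The same counting logic extended to which endpoints a token touches would also surface Barr’s second example, a session token reused across unrelated transactions.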

“APIs can be secured, but success starts with visibility. You can’t protect what you don’t know exists,” adds Invicti’s Roseman. “A modern AppSec testing platform provides a multilayered approach to API discovery and vulnerability testing. Discovery is achieved by layering runtime scanning, API management integration, source code repository mining, and production network traffic analysis across internet-facing proxy technologies like F5, NGINX, and Cloudflare.

“Once discovered,” he continues, “dynamic application security testing (DAST) engines validate reachable, exploitable vulnerabilities – covering OWASP Top 10 API risks, common API business logic flaws like BOLA and BFLA, leaking secrets with weak authentication, and traditional web app weaknesses like SQL injection or prompt injection.”
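BOLA (broken object-level authorization) testing of the kind Roseman mentions boils down to requesting an object you do not own and checking whether the API complies. A toy sketch, with an in-memory handler standing in for a real service so the probe logic runs without a network (the record IDs, owners, and handler are all invented):

```python
# Toy data store standing in for a real service's backing database.
RECORDS = {"101": {"owner": "alice"}, "202": {"owner": "bob"}}

def get_record(record_id, authenticated_user):
    """Deliberately vulnerable handler: returns any record without
    checking that the requester owns it."""
    rec = RECORDS.get(record_id)
    return (200, rec) if rec else (404, None)

def bola_probe(handler, other_id, user):
    """Flag BOLA if the user can fetch an object they do not own."""
    status, body = handler(other_id, user)
    return status == 200 and body is not None and body.get("owner") != user

print(bola_probe(get_record, "202", "alice"))  # → True: vulnerable
```

A DAST engine automates the same idea at scale: enumerate object IDs from one account’s traffic, replay them under another account’s credentials, and report every request that succeeds.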

It’s complex, but doable through multi-layered defense. “APIs can be secured through identity governance, but not through technical hardening,” suggests Permiso’s Nguyen. “The security model requires comprehensive discovery of all API credentials in use, permission rightsizing (each credential has only the permissions it actually needs), behavioral monitoring (alerting when credentials are used anomalously), and rapid response capability (revoking compromised credentials).”

Final thoughts

API security is doable but hasn’t yet been done. This problem will escalate in 2026. “APIs will become the most valuable and vulnerable element of digital infrastructure,” warns Radware’s Geenens. “As AI agents begin exchanging data and performing actions independently, API traffic will surge beyond human oversight, exposing new pathways for exploitation. This expansion will push API management into the center of security strategy.”

The problem isn’t unique to APIs – it is part of the great conundrum of the Age of Artificial Intelligence. Enterprise develops and deploys AI for increased business efficiency, while attackers develop and deploy (often the same) AI for increased attack efficiency. Both are effective – so cybersecurity defenders are forced to develop and deploy additional AI to defend enterprise AI from bad actor AI while simultaneously further increasing the attack surface. 

It’s part of the never-ending cycle of attack and defense. Plus ça change, plus c’est la même chose.

Related: SesameOp Malware Abuses OpenAI API

Related: Claude AI APIs Can Be Abused for Data Exfiltration

Related: Exposed Docker APIs Likely Exploited to Build Botnet

Related: Insurance Firm Lemonade Says API Glitch Exposed Some Drivers’ License Numbers
