

AI Emerges as the Hope—and Risk—for Overloaded SOCs

With security teams drowning in alerts, many suppress detection rules and accept hidden risks. AI promises relief through automation and triage, but without human oversight it risks becoming part of the problem.


The problems faced by SOCs are well known, understood, and quantified – but not yet solved.

SMEs get around 500 security alerts every day; larger enterprises receive closer to 3,000. Forty percent of these alerts are never investigated, while 57% of companies suppress their detection rules to lessen the load. Most SOCs cannot cope with the existing alert load, while others reduce it by consciously accepting unknown risk (often in the cloud and identity spheres).

These figures come from a Prophet Security analysis (PDF) that canvassed 282 security leaders (CISOs, security directors, managers, and analysts) from companies with more than 1,000 employees, primarily in the United States.

Fifty-five percent of the same respondents say they already use some form of AI for alert triage and investigation, while 60% plan to evaluate an AI SOC solution within the next year. Moreover, 83% of security leaders today believe that more than half of the SOC workload will be completed by AI in the next three years.

The three main use cases expected for AI in the SOC are ‘alert triage and investigation’ (67% of respondents), ‘detection engineering and tuning’ (65%), and ‘threat hunting’ (64%). ‘Remediation and incident containment’ came in lower, with 43% of respondents. “This suggests that while security leaders recognize AI’s power in identification and analysis, there’s a current tendency to view human intervention as crucial in response and containment phases,” suggests Prophet.

Prophet Security provides Prophet AI – an agentic AI SOC platform aimed at tackling both the causes of alert fatigue in SOC analysts and the problems that result from it.

The underlying cause of alert fatigue is too much data. Prophet notes that organizations have an average of 17 alert-generating tools, while larger companies have more than 20. “Organizations equate more data with better visibility,” comments Marco Giuliani (VP, head of research with ThreatDown at Malwarebytes). But the opposite happens. “Too much data equals zero visibility – analysts simply don’t know where to look anymore, and the signal gets completely lost in the noise.”

Peter Coroneos (founder at Cybermindz) agrees: too much data and too much noise leads to habituation and vigilance decrement. “In other words,” he says, “SOC analysts’ ability to spot the real threat among the false positives declines over time.”

To cope with the noise, 57% of organizations deliberately suppress detection rules, accepting higher risk just to stay afloat, says Francis Odum (founder and CEO of Software Analyst Cyber Research). “When teams suppress alerts, they trade short-term survivability for long-term visibility debt. Every silenced rule becomes a gap attackers can probe and commoditize. Based on SACR research, the remedy is not ‘more analysts,’ but smarter detection engineering and automation.”

Manoj Bhatt (founder at Cyberhash) expands on the suppression issue. “We are finding that large volumes of alerts are flagged – however, most people tune these down to a manageable level which means that alerts might be missed. There is a very real problem that not all alerts are being actioned, and SOC teams are missing them.”

It’s not only the business that is at risk – the human suffers equally. Lisa Ventura (chief executive and founder at AI and Cyber Security Association) continues: “Alert fatigue is crushing the morale and effectiveness of our cyber security professionals.” The people who should be the first line of defense are being worn down by the relentless noise. “They’re becoming desensitized to alerts, rushing through investigations, and frankly, some are leaving the industry altogether because of burnout.”

Alert fatigue is caused by too much data with far too much noise. Manual triaging becomes hit and miss – false positives wear down the analyst while potential false negatives are not investigated. “Alert fatigue really isn’t just a buzzword,” explains Nikki Webb (director at Custodian360). “It burns out analysts and gives organizations a dangerous illusion of safety. Dashboards full of alerts mean nothing if no one has time to investigate them properly.”

Alessandro Di Carlo (senior product manager at ThreatDown) expands on the problem: “The effects are pretty clear: slower triage and response, higher analyst fatigue and turnover, and ultimately a dip in service quality because time is wasted chasing benign events.”

Criminal adoption of AI, and growing skill in using it, are making matters worse: attacks are increasing in speed, complexity, and stealth. This introduces a new problem – a cybersecurity version of the uncertainty principle.

“Consider an organization going through an after-breach forensics process, determining what the vector was and how the breach was conducted,” says Kris Bondi (CEO and co-founder at Mimoto). “Then, it creates a plan of action of how to recognize and respond to this type of attack in the future. In the time it took the organization to go through these steps, the AI-enhanced attack has evolved several times. The organization is preparing for a version of an attack that is generations old.” The more we understand the last attack, the less we know about the next attack – and that’s all down to the criminal use of AI.

The next question, then, is whether defenders can employ their own AI to solve long-standing alert fatigue, its worsening causes, and its potentially disastrous effects. The consensus appears to be, ‘Yes, but only with care…’

Grant Oviatt (Prophet Security co-founder and head of security operations) is an enthusiast. “SOC analysts are overwhelmed with security alerts that need investigation, leading to fatigue and eventually missed detections. AI provides a way to handle repetitive and tedious tasks at a fraction of the time, ultimately freeing up analysts’ time to focus on high-value work.” 

Albert Estevez Polo (field CTO global at Zero Networks) explains the ‘yes’ part of an AI solution. “There are many manual tasks in a SOC and of course AI is good at automating certain types of tasks. In fact, we see more companies built around this concept of AI-SOC just because AI can be called with API, and you can build agents to correlate alerts and discard false positives by implementing other logics. This is great for SOC analysts because now they have an AI-Assistant to do all the homework and save tons of human hours that can finally be used to review the reports/tasks run by the AI Agents.”

SOC AI can reduce the workload to improve human efficiency. This also introduces the ‘but’ part of the solution. “AI acts as a force multiplier in the SOC. It can automate tasks like triage and even perform autonomous investigation, allowing security teams to pivot from reactive alert-handling to more strategic initiatives like threat hunting, cyber resilience planning, and risk mitigation,” explains Nicole Carignan (Senior VP security & AI strategy, and field CISO at Darktrace). 

“However,” she adds, “realizing this benefit requires a workforce that understands how to effectively use, operationalize, govern, and most importantly trust these technologies. It’s not enough to simply deploy an AI solution – security practitioners must understand how the underlying machine learning techniques function, what their strengths and limitations are, and how to evaluate their outputs. Without explainability and trust, AI risks exacerbating alert fatigue rather than solving it.”

It would be a mistake to simply install or create an AI SOC and expect the existing analysts to just get on with it. “SOC analysts must understand how AI models work, their limits, and how to understand AI-driven insights,” adds Casey Ellis (founder at Bugcrowd). “This isn’t about turning analysts into data scientists. It’s about equipping them to work alongside AI effectively – understanding when to trust it, when to question it, and how to leverage it to decrease noise and focus on high-priority threats. Training should focus on integrating AI into workflows, emphasizing its role in augmenting human decision-making rather than replacing it.”

SOC AI will be good at the ‘heavy lifting’ on initial triage, enriching alerts with context, and helping prioritize what really needs human attention. “This could free up our analysts to do what they do best – the complex thinking, strategic analysis, and decision-making that humans excel at,” says Ventura. “However, we need to be honest about AI’s limitations. It’s only as good as the data we train it on, and it can perpetuate biases. More importantly, cybercriminals aren’t sitting still, they’re already working on ways to evade AI detection.”

The current situation is that criminal AI is rapidly increasing the workload on SOC analysts. Those analysts must simultaneously learn and employ their own defensive AI to counter this. Whether the latter can cancel out the former remains an open question. SOC AI is essential, but not a panacea. 

The human wellness side of the SOC will remain paramount. Coroneos comments on this: AI can help in system defense, “for example by filtering noise, clustering alerts and helping to prioritize what matters most.” But it is not a cure-all; attackers are adapting quickly, using AI offensively, and the mental stress on defenders will remain high. “The resolution lies in a hybrid approach by combining AI-driven efficiencies with human resilience strategies such as attention training and vigilance preservation.”

Webb, a user of SOC AI, summarizes: “AI can filter and enrich, but it cannot replace human judgment. In our SOC, every single alert gets human eyes. That is non-negotiable,” she says. “Machines do not and cannot understand nuance, intent, or business context the way an experienced analyst does. We are a long way from trusting AI alone with that responsibility. The future is not about replacing people with AI, it is about AI supporting people. Analysts must stay at the center of SOC operations, because only humans can truly separate noise from risk.”

The conclusion is simple. Prophet’s survey demonstrates that SOCs are not coping, and criminal use of AI will make this worse. Defensive use of AI in the SOC is not optional but a necessity; whether it will give defenders a new advantage or simply rebalance the status quo remains to be seen.

Related: Dropzone AI Raises $37 Million for Autonomous SOC Analyst

Related: SentinelOne’s Purple AI Athena Brings Autonomous Decision-Making to the SOC

Related: Exaforce Banks Hefty $75 Million for AI-Powered SOC Remake

Related: Google Targets SOC Overload With Automated AI Alert and Malware Analysis Tools
