

Cyber Insights 2026: Threat Hunting in an Age of Automation and AI


Threat Hunting
SecurityWeek’s Cyber Insights 2026 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their insights. Here we explore threat hunting as adversaries adopt automation and AI, and how security teams are adapting.

Threat hunting is in flux. What started as a largely reactive skill became proactive and is progressing toward automation.

Threat hunting is the practice of finding threats within the system. It sits between external attack surface management (EASM) and the security operations center (SOC). EASM seeks to thwart attacks by protecting the interface between the network and the internet. If it fails, and an attacker gets into the system, threat hunting seeks to find and monitor the traces left by the adversary so the attack can be neutralized before damage is done. SOC engineers then take new threat hunter findings and build new detection rules for the SIEM.

That’s a theoretical representation – precise details vary between different organizations.

Proactive or reactive?

A common perception of cybersecurity defines defense as necessarily reactive. Defenders are naturally forced into a position of reacting to attacks, while attackers are free to be proactive in their own activity. In many cases this is valid, but the distinction doesn’t fit neatly into threat hunting.

Threat hunting is reactive in that it seeks evidence of an event that has already happened, but proactive in that it doesn’t know what the event was, or even whether it really happened. It assumes a breach but doesn’t know the breach has occurred until it finds evidence.

Understanding how threat hunting differs from reactive security provides a deeper understanding of the role, while hinting at how it will evolve in the future.

“Threat hunting is one of the most proactive actions an analyst can perform,” claims David Norlin, CTO at Lumifi Cyber. “I also argue that free-form threat hunting is perhaps the most effective way at finding unknown threats. It’s unlikely the precise technical means of exploitation will be seen by threat hunting, but exploits and malicious tampering usually leave artifacts and residual signals that can be detected.”

Dave Tyson, chief intelligence officer at iCOUNTER, continues, “Threat hunting assumes a cyber adversary has already infiltrated your environment and is either hiding in the shadows, has implanted a web shell or backdoor, or deployed malware waiting to detonate at a predetermined time. In practice, adversaries often become aware of these discovery efforts and may react defensively, sometimes executing their payloads such as ransomware prematurely.”

In this sense, threat hunting can reverse the traditional role: the defender is proactive, forcing the attacker to become reactive.

The evolution from reactive threat hunting to proactive hunting is explained by Scott Miserendino, VP of engineering, advanced cybersecurity solutions at DataBee. “Traditional hunting often relies on known indicators of compromise (IOCs) and signature-based detection, which means teams are always one step behind attackers. In a world where attack methodologies evolve daily and AI-generated malware can create infinite variants, reactive hunting is no longer enough.

“Proactive threat hunting,” he continues, “starts with behavioral analysis, zero-day malware detection and anomaly detection, not just known signatures. By leveraging machine learning and advanced analytics, security teams can identify patterns that deviate from normal network behavior – such as unusual beaconing, encrypted command-and-control traffic, or file characteristics that suggest malicious intent – even when those threats have never been seen before.”

Anomalous activity within the network is the key. This must include anomalous behavior of accredited identities. A background knowledge of current cyber threat intelligence (CTI) is also important, as championed by Frankie Sclafani, director of cybersecurity enablement at Deepwatch.

“Cyber threat intelligence serves as cybersecurity’s early warning system, aiming to understand the nature and source of attacks, identify adversaries and targets, recognize the presence of existing attacks, and assess the likelihood of imminent attacks. CTI helps defenders prepare for and prevent attacks, rather than merely respond to them,” he says.

Behavioral anomaly detection can trigger a threat hunter’s curiosity, while CTI knowledge can focus attention more deeply. Allison Wikoff, director and Americas lead for global threat intelligence at PwC, adds, “Proactive hunting is about forming scenarios based on threat actor behaviors and testing them before an alert ever fires.”

AI-assisted attacks are so frequent and stealthy that this cannot be achieved without automated assistance, and threat hunting already relies heavily on machine learning anomaly detection. All automation, including that of attacks, is being supercharged by AI – and this is the future of threat hunting.

The continuing rise of automation

Automation in threat hunting already exists: machine learning behavioral analysis both learns the behavioral baseline and then flags divergence from it. Machine learning is a form of artificial intelligence now being enhanced by rapidly improving generative AI, which in turn is being extended by agentic AI.

Most of the cyber world (commercial business, cybersecurity, and cyber attackers) is already on this conveyor belt – but threat hunting may be a bit slower. “Some types of threat hunting can be meaningfully automated, usually within the context of looking for new indicators of known threats that have surfaced within the last few days,” says Norlin.

However, he adds, “There will be no replacing the unpredictability and idle curiosity of a human analyst. This is arguably the best kind of threat hunting – a human roaming around a large dataset in search of something interesting. Humans love novelty, and good threat hunters are largely occupied by this pursuit, whether they consciously know it or not. It’s going to be a long time before AI mimics this inquisitive spirit, if it ever does.”

“Instead of chasing known TTPs, next-generation threat hunters will rely on anomaly-based AI systems trained on historical baselines and user behavior patterns,” says Ariel Parnes, former IDF 8200 cyber unit colonel and COO at Mitiga.

“Successful teams in 2026 will hunt for deviation, not confirmation,” he continues. “The shift from ‘assume breach’ to ‘assume anomaly’ will define the next era of proactive defense, especially across cloud and SaaS environments where logs are fragmented and ephemeral.”

Much of today’s threat hunting is already automated. “Cybersecurity tools and anomaly detection systems are constantly scanning for suspicious patterns,” says Ihar Kliashchou, CTO at Regula.

This is likely to continue and expand through 2026. “Systems establish behavioral baselines for each identity (human and non-human), detect deviations in real-time, and alert analysts. The automation scales to monitor millions of identities continuously. Human threat hunters shift from tactical detection to strategic investigation – validating detections, understanding context, determining response,” expands Jason Martin, co-founder and co-CEO at Permiso.

The limiting factor, he adds, is the setup time. “Behavioral baselines require 60-90 days of baseline data before anomaly detection becomes reliable. Organizations that establish baselines in Q1 2026 will have mature proactive hunting by Q3 2026. Those starting in Q3 will not have reliable detection until late 2026 or Q1 2027.”
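As a rough sketch of the baselining approach Martin describes – illustrative only, with invented numbers and a hypothetical three-sigma threshold, not taken from any vendor’s product – a per-identity baseline can be as simple as a mean and standard deviation learned over the observation window, with alerts fired on large deviations:

```python
from statistics import mean, stdev

def build_baseline(history):
    """Per-identity baseline from historical daily event counts.
    Returns (mean, stdev) learned over the observation window."""
    return mean(history), stdev(history)

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag a day whose count deviates more than `threshold`
    standard deviations from the identity's own baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# ~90 days of roughly stable daily API-call counts for one service identity
history = [100, 102, 98, 101, 99, 103, 97] * 13  # 91 samples
baseline = build_baseline(history)

assert not is_anomalous(baseline, 104)  # within normal variation
assert is_anomalous(baseline, 400)      # sudden spike -> alert an analyst
```

The 60–90 day window Martin cites maps directly onto the length of `history` here: with too few samples, the learned standard deviation is noisy and deviation scores are unreliable.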

The implication is clear. Companies that have not yet started on the automation trail are likely to get burned by bad actors adopting AI automation at a faster rate.

The next shift in automation is likely to be an adoption of agentic AI-assisted threat hunting. Exactly what this means, the extent to which it will be adopted, and the timeline toward it are, however, heavily debated. But in one form or another it is inevitable. Attackers are already developing and adopting full agentic AI models; and the only way that defenders, including threat hunters, can keep up will be through their own agentic systems.

“AI can be used both to detect and to generate threats, making it a double-edged sword. We might soon see AI-powered attacks that adjust tactics in real time, and defensive systems will need to match that level of speed and adaptability,” warns Kliashchou.

For now, agentic AI in threat hunting will be limited to discrete AI agents tackling individual tasks. In some places this has already started. The full agentic capability has an additional AI agent orchestrating and automating the individual agents into one system that will not merely locate behavioral anomalies but will suggest remedial action and have the ability to perform that remediation without human intervention. That, however, is a long way off for now.

“Agentic AI will increase automation in reconnaissance, enrichment and even suggestion of hypotheses, but human oversight will remain critical for context, legal decisions and complex reasoning. Over time, the balance may shift but not to full replacement,” comments Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University.

“Full automation is extremely unlikely to replace human hunters. Humans remain critical for hypothesis-driven investigation, adversary emulation and interpreting ambiguous behaviors,” says Ashley Jess, senior intelligence analyst at Intel 471.

“As agentic AI continues to advance, AI will take on routine and data-intensive tasks, freeing human analysts up to focus on strategic investigations and complex decision-making – a partnership rather than replacement scenario,” adds Devon Kerr, director of threat research at Elastic.

“The role of AI is not to replace hunters but to expand what they can see,” concludes Biswajit De, CTO at CleanStart. “Instead of reviewing isolated alerts, teams will rely on AI agents that continuously evaluate build integrity, verify dependencies, and surface patterns that signal early-stage tampering. Over time, this will make proactive threat hunting more automated, more continuous, and more sensitive to signals that typically appear long before an incident.”

The reason for this almost total rejection of fully autonomous, agentic AI-instigated remediation is the widespread belief that current AI, so good at so many tasks, is poor at understanding business context. It doesn’t understand what it finds.

“AI can tell you what is anomalous; human hunters tell you why it matters. The reason for this divide is simple: AI lacks business context, can’t truly understand attacker motivation, and struggles with the judgment calls that define sophisticated threat hunting,” explains Mitch Davies, senior data scientist and cyber threat research at Arkose Labs.

“Context determines everything – automated response works beautifully when context is clear, like with known malware signatures, but fails spectacularly when context is ambiguous,” he continues.

This doesn’t mean that all autonomous remediation is off the table. It has been an option with standard ML-based anomaly detection systems for years; but is generally restricted to contained or constrained instances – like isolating an endpoint.

“Automated systems can take immediate action,” says Jess, “such as quarantining hosts or isolating compromised endpoints, when high-confidence threats are detected.” The attempt is to mitigate fast-moving threats, like ransomware or infostealers, and reduce the need for human intervention in time-sensitive scenarios.

“Adversaries are also increasingly exploring AI to develop and optimize their kits,” he continues, “so defenders will need to leverage some automation alongside intelligence-driven hunting to keep pace.” 

‘Contain’ is the key word for automated remediation in the near future. “Automated responses in the form of automatic containment will grow for high-confidence detections to reduce dwell time,” says Curran. “Organizations will adopt safety checks, risk thresholds and rollback procedures to avoid business disruption while enabling swift containment.”

The pressure to expand automated remediation is growing, but the dangers are too fierce with current AI. The constant danger we have known from all detection systems continues – the cost of false positives.

“We see this constantly in fraud prevention,” comments Davies, “automated blocking must balance security against customer friction. Block too aggressively, and you’re causing revenue loss and user lockout. The solution is tiered automation: low-risk actions like isolating endpoints or blocking suspicious IPs can be automated, but high-risk actions like taking down production systems always need human oversight.”
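The tiered automation Davies describes can be sketched as a simple routing rule – the action sets, confidence threshold, and labels below are hypothetical illustrations, not any particular SOAR product’s schema:

```python
# Hypothetical action tiers for illustration; real playbooks
# (SOAR rules, EDR policies) vary by organization.
LOW_RISK_ACTIONS = {"isolate_endpoint", "block_ip"}
HIGH_RISK_ACTIONS = {"shutdown_production_system", "revoke_all_sessions"}

def route_response(action, confidence, threshold=0.9):
    """Tiered automation: auto-execute only low-risk containment actions
    backed by high-confidence detections; everything else is queued for
    human review."""
    if action in LOW_RISK_ACTIONS and confidence >= threshold:
        return "automated"
    return "human_review"

assert route_response("block_ip", 0.97) == "automated"
assert route_response("block_ip", 0.60) == "human_review"  # low confidence
assert route_response("shutdown_production_system", 0.99) == "human_review"  # too risky
```

The design point matches Curran’s earlier comment: high-confidence containment happens swiftly and automatically, while risk thresholds keep business-disrupting actions behind a human gate.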

This is the conundrum faced by almost all defensive use of AI. We are hampered by AI’s inability to cater for the intricacies of business environments. If we make one mistake in our use, the consequences could be disastrous for us personally or our company. Attackers have no such concerns. If they make a mistake, it is of little consequence. They simply learn from the mistake and try again.

The result of this lack of consequence for attackers is a rapid adoption of AI. The potential severity of consequence for defenders requires the insistence on human oversight within the AI loop – and that results in delay. Attackers are rapidly becoming too fast for us to detect and stop.

That’s the conundrum. We dare not unleash the full potential of defensive AI, yet sooner or later we must. And all of this will play out over the next couple of years.

Visibility gaps

The visibility gap affects all of cybersecurity. How can you secure what you don’t know about? For threat hunters this translates directly into: how can you monitor and search what you cannot see?

The primary culprits in the visibility gap are shadow IT (now increasingly shadow AI), unapproved software-as-a-service (SaaS) applications, and remote working. All are increasing.

“Shadows complicate hunting by creating blind spots and unauthorized telemetry sources. This is a growing issue as teams adopt new tools rapidly,” comments Curran. “Remote work increases the diversity of endpoints, network contexts and authentication patterns, making baseline-building harder and increasing false positives.”

Ian Ashworth, security operations lead at Fortra, adds, “Unapproved SaaS applications or artificial intelligence (AI) tools create visibility gaps and potential data exposure risks. Environments with remote or hybrid workforces introduce new challenges for threat hunting, as devices outside traditional network boundaries can create visibility gaps and inconsistent logging.”

Shadow AI is worsening the long-standing shadow IT problem. “Shadow AI is just a new class of Shadow IT to manage – but one with significantly more complexity and potential consequences,” comments Melissa Bischoping, director of endpoint security research at Tanium. “Every executive I’ve spoken with has become increasingly concerned about an employee copying and pasting sensitive company data, such as financial information or intellectual property, into an AI chat box that isn’t managed by the organization itself. This creates a risky, muddy opportunity for data spillage.”

It’s not a passing issue – it’s accelerating in 2026. “The reason is simple: it’s easier than ever to spin up SaaS tools, AI services, and cloud resources without IT approval. Generative AI adoption has turbocharged this trend. The impact on threat hunting is severe because you can’t hunt threats on infrastructure you don’t know exists. Shadow AI tools processing sensitive data represent exfiltration vectors you’re not monitoring – massive blind spots in your security posture,” says Arkose Labs’ Davies.

“I think we’re in a phase of extreme acceleration with AI, especially around misuse. We are likely going to see major compromises associated with AI-connected services in email, workplace tools, and AI-enabled SaaS applications,” warns Lumifi’s Norlin. “As soon as we start connecting agents that receive input from the wider world, we are creating new attack surface for exploitation.” 

It’s no different than the waves of SQL injection and other input or injection type attacks we’ve seen in the past, except, he says, “You now have a semi-intelligent, autonomous system with tools at its disposal that can receive input that may not be filtered by any governing system or external gateway. To do their job, they have to be connected to backend sources of data that feed into context. This is ripe for misconfiguration as administrators race them into production and don’t audit the data sources to which they’re connected.”

“The detection approach requires hunting for symptoms: anomalous data flows, unusual API calls, unrecognized authentication patterns, employees using personal accounts for business purposes. But here’s the crucial part – technical controls alone won’t solve this. Shadow IT exists for a reason: official tools are too slow, too restrictive, or don’t meet business needs,” he adds.
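Symptom-based hunting for shadow AI can start with something as mundane as scanning egress logs for traffic to known AI endpoints that are not on the sanctioned list. The sketch below is illustrative only – the log schema, the sanctioned domain, and the domain list are assumptions, not any real organization’s configuration:

```python
# Illustrative only: the sanctioned domain and log format are invented.
SANCTIONED_AI_DOMAINS = {"ai.example-corp.internal"}
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def hunt_shadow_ai(egress_logs):
    """Flag outbound requests to known AI endpoints that are not on the
    sanctioned list -- one observable symptom of shadow AI usage."""
    findings = []
    for entry in egress_logs:
        dest = entry["dest_domain"]
        if dest in KNOWN_AI_DOMAINS and dest not in SANCTIONED_AI_DOMAINS:
            findings.append((entry["user"], dest, entry["bytes_out"]))
    return findings

logs = [
    {"user": "alice", "dest_domain": "api.openai.com", "bytes_out": 48_200},
    {"user": "bob", "dest_domain": "ai.example-corp.internal", "bytes_out": 1_024},
]
assert hunt_shadow_ai(logs) == [("alice", "api.openai.com", 48_200)]
```

As Davies notes, this only catches one symptom: anomalous data volumes, unusual API calls, and personal-account authentication patterns each need their own hunt, and none of it substitutes for making the approved tools the easiest ones to use.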

“If you’re only watching approved infrastructure, you’re missing a huge chunk of your actual attack surface. Shadow AI makes this worse because data exfiltration often looks legitimate (someone copying a file or using an API),” cautions Aimee Cardwell, CISO in Residence with Transcend.

Most people using shadow AI are just trying to get work done faster and don’t realize the risk. “This is why I work so hard to enable the business with easy to use approved solutions. If you make the secure path the path of least resistance, people are more likely to use it,” she adds.

Remote working has been a security concern since before the pandemic, but the practice expanded because of it. It is theoretically more manageable if the organization provides company devices, but that can be very expensive and doesn’t preclude people still using their own unmanaged devices.

“One of the primary ways that remote work impacts threat hunting is by increasing the attack surface – remote workers may be more likely to access enterprise resources via personal devices, or to use enterprise devices to access malicious infrastructure,” explains Jason Baker, managing security consultant, threat intelligence at GuidePoint Security. “Threat hunting is less likely to be achievable against personally owned devices, but enterprise endpoints such as corporate laptops should still be ‘hunt-able’.”

“Remote work can significantly impact threat hunting. Depending on geographic jurisdiction and privacy laws, organizations may have limited ability to collect and analyze user data when employees work remotely or off network, such as from home or hotels. This makes visibility and context more difficult and requires new detection and data governance approaches,” adds iCOUNTER’s Tyson.

The visibility gap cannot be tackled if you don’t know where it exists, so finding it is the first priority. Shining a light into it can make it more accessible to threat hunters, but doing so is not always easy. The light may leave some dark corners, and new visibility gaps may appear that haven’t yet been found. This is one area where the experience, curiosity and imagination of human hunters remain important.

Final Thoughts

Threat hunting is evolving from network-focused to behavior-focused; from reactive to hypothesis-driven; and from human-only to human-AI hybrid, suggests Davies. “The goal isn’t to predict the future perfectly – it’s to get better at recognizing ‘wrong’ faster, even when we don’t know exactly what kind of ‘wrong’ we’re facing.”

AI will continue to enhance detection, correlation, and response, but it’s the human element – understanding behavior, context, and risk – that ensures effective defense, says PwC’s Wikoff. “Ultimately, threat hunting is not just about tools or technology, but about people using those tools to stay one step ahead of adversaries.”

Ashworth adds, “While many aspects can and should be automated, the combination of human expertise and AI-assisted analysis will remain the most effective approach.”

The general view is that threat hunting will adopt more tools and more automation in the future. AI will become widespread, and the use of automatic remediation will increase – but always under human oversight and final control.

That, however, is an idealized view based on threats and threat hunting today. The rapid evolution of AI is disrupting everything, and adversaries are adopting and using AI faster than defenders can defend. A ‘human in the loop’ of defense may be comforting today but will become a liability in the future. Any delay caused by human triaging could become disastrous. There may come a time in the not-too-distant future when human involvement in remediation will necessarily be withdrawn in favor of autonomous agentic AI remediation. At that point, the threat hunter will necessarily evolve further, from proactive tactics to predictive strategy built on autonomous remediation.

Related: Creating an Effective Threat Hunting Program with Limited Resources

Related: Profile of a Threat Hunter

Related: The Wild West of Agentic AI – An Attack Surface CISOs Can’t Afford to Ignore

Related: Beyond GenAI: Why Agentic AI Was the Real Conversation at RSA 2025
