SecurityWeek’s Cyber Insights 2026 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their insights. Here we explore AI-assisted social engineering attacks, with the purpose of evaluating what is happening now and preparing leaders for what lies ahead in 2026 and beyond.
The most successful breaches in 2026 are likely to exploit trust, not vulnerabilities. All courtesy of artificial intelligence (AI).
We’re going to explore how AI-assisted social engineering attacks might evolve from 2026 onward, and how cybersecurity could, and perhaps should, adapt to meet the new challenge. The threat is no longer just to individuals, or even to businesses, but to entire cultures.
Basic changes introduced through AI
We knew at the beginning of 2025 that social engineering would get AI wings. Now, at the beginning of 2026, we are learning just how high those wings can soar.
“What once targeted human error now leverages AI to automate deception at scale,” explains Bojan Simic, CEO at HYPR. “Deepfakes, synthetic backstories and real-time voice or video manipulation are no longer theoretical; they are active, sophisticated threats designed to bypass traditional defenses and exploit trust gaps… They’re happening right now, at scale and with devastating precision.”
There is a strong belief that the basics of social engineering will not change but will simply improve in quality, increase in speed, and scale in quantity. This is partly true, but in at least two areas the change will be large. Some experts believe adversaries will shift from mass phishing to hyper-personalized campaigns; in reality, it will be hyper-personalized campaigns at mass phishing scale. Spear-phishing will be delivered at spray-and-pray levels.
The real mover is the arrival of agentic AI.
“Next year, we may see autonomous adversary agentic AI capable of running entire phishing campaigns. They could independently research and profile potential targets, conduct reconnaissance, craft personalized lures and payloads, and even deploy and manage C2 infrastructure,” says Jan Michael Alcantara, senior threat research engineer at Netskope. “This advancement would further lower the technical barriers for launching sophisticated attacks, allowing more threat actors to participate.”
Roman Karachinsky, CPO at Incode Technologies, adds: “Agentic AI is going to create the same productivity improvements for fraudsters as it does for legitimate users. Millions of malicious agents could continuously mine the internet for faces, voices, and personal data, running autonomous social engineering attacks against employers, family members, and service providers.”
The basic components of advanced AI-assisted social engineering are already in place: near perfect synthetic face and video generation, high quality voice copies, and complex supporting documentation. For the moment, these need to be combined manually; but this won’t last.
“A new era of cyberattacks is dawning, powered by interconnected large language models (LLMs) that specialize in different stages of the attack chain,” explains Carl Froggett, CIO at Deep Instinct. “While a single ‘master’ orchestrator does not exist yet, the building blocks are falling into place as LLMs designed for reconnaissance, social engineering, exploitation, and evasion are already operating independently.”
There’s an additional change worth considering. Social engineering has always been successful because humans are neurologically programmed to trust others – it’s part of the biology that helped our ancestors socialize and survive. Those of us with strong social programming easily fall prey; those with weaker programming can more easily detect something suspicious. Basic psychology has been the trigger used to access our inbuilt trust: urgency, reward, fear of missing out, and so on.

Now AI provides the possibility for deeper psychological massaging. Eleanor Watson, IEEE member and a fellow in ethics in the AI faculty at Singularity University, explains: “AI transforms social engineering from crafted campaigns to dynamically optimized psychological operations. Current systems already automate persona discovery and message optimization in real-time, shifting from generating ‘sticky content’ to developing ‘sticky personas’ – dialogue agents that form emotional bonds before steering user behavior.”
This could develop by manipulating AI’s known tendency to be sycophantic. “The trajectory points toward deepfake voice and video wrapped in consistent, documentable backstories; scalable emotional manipulation; and A/B-tested sycophancy individually tuned to psychological profiles,” she continues. “We’re moving from spear-phishing and vishing to relationship operations where victims actively defend the agents exploiting them.”
Old-style social engineering was effectively ‘here’s the lure, take it or leave it’. AI-assisted attacks could involve multiple approaches psychologically steering the target into an even more trusting state of mind. Clues on how to achieve this could be collected by AI agents trawling and analyzing the target’s social media.
Social engineering in 2026
“We’re already seeing the early versions of this play out,” comments Ariel Parnes, COO at Mitiga and former IDF 8200 cyber unit colonel. “WPP’s CEO was impersonated using a cloned voice, a fake WhatsApp account, and YouTube footage: a coordinated attempt that mimicked a Teams meeting with GCHQ-style manipulation. What once required a spear-phishing campaign now takes minutes with generative AI.”
This 2024 attack failed, but a successful video deepfake scam against the Hong Kong branch of a multinational firm cost it around $25 million. At the end of September 2025, OpenAI released Sora 2, a video generation system that is “more physically accurate, realistic, and more controllable than prior systems.”
OpenAI added, “We’re at the beginning of this journey, but with all of the powerful ways to create and remix content with Sora 2, we see this as the beginning of a completely new era for co-creative experiences.” Just replace ‘co-creative experiences’ with ‘deepfake creations’.
This is important. Through 2026 and beyond, the quality of deepfake social engineering will continuously improve. Criminal professionalism will also improve. Consider SheByte, a new phishing-as-a-service (PhaaS) platform available on the criminal underground (with subscriptions costing around $200).
“It’s a phishing kit that incorporates AI-generated templates to automate the creation and management of phishing websites at scale. These toolkits are becoming more accessible, and we expect this trend to intensify throughout 2026 because criminal operators are continuing to refine and commercialize these platforms,” explains Kevin Gosschalk, founder and CEO at Arkose Labs.
He continues, “Beyond phishing sites, there are sophisticated toolkits designed specifically for fraud that can perfectly spoof voice and video. These aren’t consumer AI tools like ChatGPT being misused; these are purpose-built criminal products engineered for deception.”
Jon Abbott, CEO and co-founder at ThreatAware, adds: “We’re seeing something particularly concerning: native English-speaking cybercriminals from the US, UK, and Canada partnering with Russian ransomware operations. The FBI has confirmed that groups like Scattered Spider (the Hacker Com part of the decentralized English-speaking ‘Community’ of young cybercriminals) are now working with notorious Russian gangs like BlackCat.”
These partnerships combine Western social engineering expertise with Russian technical sophistication and malware capabilities.
Alex Mosher, president and chief revenue officer at Armis, warns: “Artificial intelligence will enable attacks that learn and adapt in real time. Using large language models and gen-AI algorithms, cybercriminals could deploy social engineering based attacks such as phishing emails, messages, and voice deepfakes that adjust tone, language, and content mid-interaction to manipulate victims more effectively. Chains of AI agents will independently identify vulnerabilities, generate exploits, and launch attacks without human oversight, ushering in an era of self-directed cyber offense.”
Keith McCammon, co-founder and chief security officer at Red Canary (acquired by Zscaler), sees the browser overtaking email as phishing’s most exploited entry point in 2026. “With generative AI lowering the cost and complexity of deception, adversaries will use deepfakes, poisoned search results, and fake CAPTCHA [ClickFix] to trick users into executing code directly from the browser. These lures will be almost indistinguishable from legitimate sites, turning the browser into the easiest place to win trust and break it.”

He is not alone in this concern over ClickFix attacks. Archana Manoharan, platform support engineer at CyberProof, also sees a rise in ClickFix attacks. “Social engineering will become more sophisticated, with attackers weaponizing legitimate browser prompts to trick users into executing harmful commands. These techniques bypass traditional security controls by shifting the ‘execution’ step to the user.”
Mark St. John, COO and co-founder at Neon Cyber, warns, “The ever-accelerating ability for AI to mimic brands, applications, human voice and video is going to take fraud in 2026 to new, dystopian levels. What we are witnessing with attacks like the video-driven ClickFix phishing attacks, which are already wildly successful, will be a blueprint for future attacks in which something that seems completely normal, spurred with urgency, will fool not just the indiscriminate user but also the more tech-savvy and aware.”
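To make the ClickFix mechanics concrete, below is a minimal Python sketch of the kind of heuristic an endpoint control might apply to a command a user is about to paste into a terminal or Run dialog. The patterns are illustrative assumptions drawn from publicly reported lures (encoded PowerShell, mshta, piped shell downloads), not a vetted detection ruleset, and real lures will evolve past any static list.

```python
import re

# Minimal, illustrative heuristics only. Real ClickFix lures vary widely;
# these patterns reflect traits commonly reported in public analyses:
# encoded PowerShell, LOLBins such as mshta, and piped web downloads.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+.*-(enc|encodedcommand)", re.IGNORECASE),
    re.compile(r"\bmshta(\.exe)?\s+https?://", re.IGNORECASE),
    re.compile(r"\b(curl|wget)\b.*\|\s*(bash|sh)\b", re.IGNORECASE),
    re.compile(r"\b(iex|invoke-expression)\b", re.IGNORECASE),
    re.compile(r"-windowstyle\s+hidden", re.IGNORECASE),
]

def looks_like_clickfix(command: str) -> list[str]:
    """Return the patterns a pasted command matches; empty list if none."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(command)]

if __name__ == "__main__":
    pasted = "powershell -WindowStyle Hidden -EncodedCommand aQBlAHgA..."
    hits = looks_like_clickfix(pasted)
    if hits:
        print(f"Block and alert: {len(hits)} ClickFix-style indicator(s) matched.")
```

The point of the sketch is where the check runs: because ClickFix shifts execution to the user, the control has to sit at the paste-and-run boundary rather than in the mail gateway.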
McCammon continues, “Phishing will become a real-time, AI-driven numbers game. Adversaries will target thousands of users with adaptive, highly personalized lures, needing only a few victims to reap significant financial reward. Unlike Windows or macOS, browsers act as a joker in the pack. They sit outside the traditional security stack and therefore lack the mature controls and visibility that protect operating systems and endpoints. Recent warnings around ChatGPT’s AI-powered Atlas browser show how this blind spot could also widen as intelligence moves into the browser itself.”
To stay ahead next year, businesses must start treating browsers as critical infrastructure, he suggests. “That means tightening access and identity controls, improving endpoint and cloud-level monitoring, and training users to recognize the new generation of attacks. Awareness alone won’t be enough – defenses rely on both user and system resilience working in concert.”
But it’s not just individual businesses that need to be concerned about AI – entire financial markets could be attacked. “The industry will need to prepare for autonomous trading bots and AI-driven deepfakes that manipulate stock markets, commodities, and cryptocurrency ecosystems,” warns Nadir Izrael, CTO and co-founder at Armis.
He explains, “By impersonating regulators or company executives, AI systems could trigger false earnings reports, disseminate false corporate announcements, falsify investor briefings, or simulate market crashes. The result: global financial instability with seconds-scale losses that human operators cannot contain.”
And entire countries could also be affected. Mosher again: “Cyber operations will increasingly target public trust itself. During election cycles or geopolitical flashpoints, coordinated campaigns using AI-generated content, fabricated news, and deepfakes will aim to manipulate sentiment, divide societies, and destabilize institutions. These attacks will not seek financial gain but rather to erode confidence in governments, corporations, and democratic systems, turning information itself into a weapon of influence.”
Detecting social engineering attacks
The first requirement in stopping any cyberattack is detection. So, the question going forward is: can we detect future AI-enhanced, deepfake-rooted social engineering? Historically, social engineering has been successful against individuals, but less successful against computer tools designed to recognize the process. Without enhanced detection tools, AI-enhanced social engineering could become effectively undetectable.
That leaves just two possibilities: improved deepfake detection tools and advanced people processes – or a huge uptick in social engineering success rates.
The security industry is confident that current deepfake detection tools can distinguish fake from genuine. Leaving aside the fact that the industry must say that (and it is probably true today), the claim holds only for now – and we know that AI constantly improves.
We’re entering the whack-a-mole period: attackers strike in new places with new approaches; defenders learn of the attack, understand it, and whack it. But there is always a window after the mole pops up and before it gets whacked.
Mick Baccio, global security advisor at Cisco Foundation AI, comments, “The best systems will need to combine signal analysis with behavioral context, cross-checking metadata, timing, and narrative consistency. Still, defenses will lag behind the offensive curve.”
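As a rough illustration of Baccio’s point, the sketch below blends a hypothetical media-forensics score with behavioral context into a single risk score. Every field name, weight, and threshold here is an assumption chosen for illustration; a real system would learn its weights from labeled incidents rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    """Illustrative signals for one inbound video/voice interaction."""
    artifact_score: float   # 0-1 output of an assumed upstream forensics model
    known_device: bool      # did the request come from an enrolled device?
    unusual_hour: bool      # outside the purported caller's normal pattern?
    urgency_language: bool  # transcript contains pressure/urgency cues?
    request_sensitive: bool # asks for payments, credentials, data access?

def risk_score(s: CallSignals) -> float:
    """Blend media-forensics output with behavioral context signals.

    Weights are arbitrary placeholders for illustration only.
    """
    score = 0.5 * s.artifact_score
    score += 0.0 if s.known_device else 0.15
    score += 0.1 if s.unusual_hour else 0.0
    score += 0.1 if s.urgency_language else 0.0
    score += 0.15 if s.request_sensitive else 0.0
    return min(score, 1.0)

call = CallSignals(0.4, known_device=False, unusual_hour=True,
                   urgency_language=True, request_sensitive=True)
if risk_score(call) >= 0.6:
    print("Escalate: verify the request out of band before acting.")
```

Note that even a mediocre forensics score (0.4) escalates here once the behavioral context stacks up, which is exactly the cross-checking Baccio describes.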
Paul Nguyen, co-founder and co-CEO at Permiso, adds: “Detection techniques continue evolving but will never keep pace with generation quality. By 2026, deepfake video and audio will be undetectable through technical analysis. Spectrograms will show no artifacts. Video frame analysis will reveal no rendering flaws. The only reliable defense is refusing to authenticate through channels that can be spoofed.”
The insider threat – which has always been difficult to detect – is likely to worsen in 2026. Matthieu Chan Tsin, SVP of resiliency services at Cowbell, comments, “Insider threats are a serious cyber threat because they originate from individuals within an organization who already have authorized access, making them difficult to detect… Insiders can exploit their privileged positions to steal data, disrupt systems, or facilitate external attacks, leading to financial losses, legal issues, and breaches of sensitive information.”
Sumedh Barde, CPO at Simbian, adds: “Deepfakes have been a common problem on the internet pre-2025. In 2025, they entered the workplace, with many incidents of fraud involving adversaries posing as interview candidates or a business partner in a video call.”
He aligns this concern with the insider threat: ‘rogue insiders, employees who hurt their organizations from inside’. “Sometimes they do this on behalf of an external adversary in return for money, whereas others are lone wolves.”
His concern, however, is: “In 2026 these two will converge, with rogue insiders leveraging AI and deepfakes. Employees who have the proclivity to cheat but were afraid will be encouraged to cheat with AI making it easy and deepfakes providing plausible deniability. Any insider has all the business context to customize deepfake attacks to seem much more real than anything we’ve seen in 2025.”
Let’s not forget that foreign states could help place their own people in sensitive industries with the help of AI-fabricated backgrounds. In times of geopolitical unrest, this could be described as a sleeper threat, where there would be nothing to detect until it is too late.
“Because of the success seen by North Korean threat actors (and others), we can expect this trend to continue and accelerate in 2026,” suggests Ryan LaSalle, CEO at Nisos.
Brian Long, CEO and co-founder at Adaptive Security, adds, “We’ve seen this play out in real-world campaigns: North Korean IT workers, posing as legitimate remote developers, have infiltrated global tech companies by building convincing online personas and LinkedIn histories. These aren’t ‘hacks’ in the technical sense – they’re manipulations of human trust.”
Prevention
Eran Barak, co-founder and CEO of MIND, says, “No matter how advanced our defenses become, humans will continue to be the first click in a breach. As social engineering becomes more sophisticated, especially with AI-generated phishing and deepfake impersonation, the only sustainable strategy that an organization can really control is context-aware data control. The next generation of security isn’t about catching bad actors. It’s about eliminating the opportunity.”
Prevention is better than cure; and if the illness is incurable (as AI-enhanced social engineering is likely to become), eliminating the opportunity is essential. This will largely require improved human processes.
“Processes are our best weapon against deepfakes,” suggests Jake Williams, faculty at IANS Research and VP of R&D at Hunter Strategy. “If our processes allow verification of identity based on likeness (for example, recognizing someone by their voice or image), then we’re going to be exploited by deepfakes. Conversely, if we implement processes that forbid identity verification based on someone’s likeness, then deepfakes aren’t a threat.”
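A minimal sketch of Williams’ principle, assuming a pre-registered out-of-band channel directory: likeness is never accepted as proof of identity, and sensitive requests are only actioned after confirmation over a separately enrolled channel. The names and channel types are hypothetical.

```python
# Hypothetical directory of verification channels, enrolled in person
# or during onboarding -- never over the channel being verified.
PRE_REGISTERED_CHANNELS = {
    "cfo@example.com": "authenticator-app",
}

def approve_sensitive_request(requester: str, confirmed_via: str | None) -> bool:
    """Approve only if confirmed via the requester's enrolled channel.

    `confirmed_via` records how confirmation arrived; None means the only
    'evidence' was the live voice/video itself, which this policy ignores.
    """
    enrolled = PRE_REGISTERED_CHANNELS.get(requester)
    return enrolled is not None and confirmed_via == enrolled

# A deepfaked video call alone never passes:
assert not approve_sensitive_request("cfo@example.com", confirmed_via=None)
# Confirmation over the separately enrolled channel does:
assert approve_sensitive_request("cfo@example.com", confirmed_via="authenticator-app")
```

The design choice is the one Williams describes: the policy doesn’t try to judge whether the face or voice is real, it simply refuses to treat likeness as identity at all.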
Patrick Sayler, director of social engineering at NetSPI, agrees with the idea. “Don’t tell salespeople I said this, but a go-to defense against voice cloning is just don’t answer your phone. You can’t be socially engineered if you don’t give the attacker a live audience.”
In practice, prevention will depend on two things: we must change users’ mindset from naturally trusting to naturally distrusting, and we must adapt our workflows to exclude the potential for social engineering.
The former could be promoted by applying the concepts of red teaming and zero trust to humans in a new form of awareness training. Staff training could include deepfake attacks to demonstrate how easily they could be fooled. There are dangers, of course, since staff who aren’t fooled could emerge with a false sense of superiority; but the purpose is to instill zero trust principles into people. Never trust, always verify.
But it’s not traditional zero trust identity verification. “Traditional awareness training won’t stop it. Defensive focus will move from verifying identity to verifying intent,” suggests Mitiga’s Parnes.
The latter, adapted workflows, will be equally important.
“Workflows can be redesigned to encourage detection of deepfake attacks,” says Joe Jones, CEO and co-founder of Pistachio. “For example, businesses should require multiple employees to approve money transfers or data access requests, thus improving the chances that an isolated incident of deception is picked up.
“As threats evolve,” he continues, “it’s likely we’ll see businesses adopt highly specific internal protocols for communication. For instance, by only using specific platforms for internal communication, creating executive passcodes, or a ‘pause and verify’ culture (in which, for example, if calls come from unknown numbers, employees have to verify identities via another method of communication before proceeding).”
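As a deliberately simplified illustration of Jones’ dual-control idea, the sketch below refuses to execute a transfer until two people other than the requester have approved it. The names and the two-approval threshold are assumptions; real treasury systems add amount-based tiers and audit trails.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    """Hypothetical wire-transfer request requiring independent approvals."""
    amount: float
    requested_by: str
    approvals: set[str] = field(default_factory=set)
    required_approvals: int = 2  # policy knob; two is a common minimum

    def approve(self, approver: str) -> None:
        # The requester can never self-approve, so a single deceived
        # (or deepfaked) individual cannot complete the transfer alone.
        if approver == self.requested_by:
            raise ValueError("Requester cannot approve their own transfer.")
        self.approvals.add(approver)

    def executable(self) -> bool:
        return len(self.approvals) >= self.required_approvals

req = TransferRequest(amount=250_000, requested_by="alice")
req.approve("bob")
print(req.executable())   # False: one deceived approver is not enough
req.approve("carol")
print(req.executable())   # True: two independent approvals recorded
```

The value of the workflow is statistical rather than absolute: an attacker now has to deceive several people independently, which multiplies the chance the deception is spotted.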
Summary
We are entering a new era of distrust. While a natural neurological inclination to trust is part of early human survival programming, it must be replaced by distrust if we wish to continue to survive. AI-enhanced social engineering, with undetectable deepfakes and compelling AI-developed backstories, has advanced from cybercriminals attacking individuals, companies, and industries to adversarial nation states attacking entire cultures.
Of course, everything written here could be false. “The best defense against deepfakes isn’t just better detection technology, but building a culture where skepticism is standard and quick reactions give way to careful verification,” warns Audra Streetman, senior threat intelligence analyst at Splunk. “Cybersecurity analysts and journalists alike will need strict vetting standards to confirm the source of online material before trusting it in their work.”
Do you really know what I am? As Ariel Parnes says: “The most successful breaches in 2026 will exploit trust, not vulnerabilities.”
Related: Going Into the Deep End: Social Engineering and the AI Flood
Related: How Social Engineering Sparked a Billion-Dollar Supply Chain Crypto Heist
Related: How Agentic AI will be Weaponized for Social Engineering Attacks
Related: GitHub Warns of North Korean Social Engineering Attacks