Malicious LLMs make it easier for less-skilled threat actors to conduct attacks, and Palo Alto Networks researchers have analyzed two recently launched tools: WormGPT 4 and KawaiiGPT.
Anthropic recently reported that its Claude AI was abused by Chinese cyberspies, with the AI reportedly powering 80-90% of their campaign.
Security researchers and threat actors often find ways to bypass the guardrails of legitimate AI assistants. However, there are some LLMs — known as malicious or dark LLMs — that are specifically designed for malicious purposes and don’t have any of the guardrails that legitimate services have.
While legitimate AI tools can be abused by threat actors to design or boost their campaigns, dark LLMs lower the entry barrier for less-skilled attackers, enabling them to generate phishing emails, write polymorphic malware, and automate reconnaissance.
Palo Alto Networks researchers have conducted a detailed analysis of two such dark LLMs. One of them is WormGPT 4.
The original WormGPT emerged in 2023 and was shut down the same year. WormGPT 4 appeared recently and has been advertised on underground forums and Telegram channels, with sales campaigns observed by Palo Alto Networks in late September.
One month of access to the AI tool costs $50, but for $220 users can acquire ‘lifetime access’, which includes access to the source code.
WormGPT 4 can be used by threat actors to compose convincing phishing messages and other social engineering lures.
The service also provides malware creation functionality. Palo Alto Networks researchers tested it by asking it to create ransomware, and it produced code with file-encrypting functionality, command-and-control support, and a ransom note.
While WormGPT 4 is advertised to users as a “key to an AI without boundaries”, Palo Alto researchers noted, “The developers of WormGPT 4 maintain secrecy regarding its model architecture and training data. They neither confirm nor deny whether they rely on an illicitly fine-tuned or trained LLM or merely persistent jailbreaking techniques”.
The second dark LLM analyzed by Palo Alto researchers is KawaiiGPT, which appears to have emerged in July 2025. KawaiiGPT is freely available on GitHub and easy to set up.
The researchers showed how it can be used to craft convincing social engineering lures, create a script for lateral movement on a Linux host, generate a data exfiltration script, and write a ransom note.
“In contrast to the commercial nature of WormGPT 4, the accessibility of KawaiiGPT is a threat unto itself. The tool is free and publicly available, ensuring that cost is zero barrier to entry for aspiring cybercriminals,” the researchers explained.
They added, “This open-source, community-driven approach has proven highly effective in attracting a loyal user base. The LLM has already self-reported over 500 registered users, with a consistent core of several hundred weekly active users using the platform.”
Palo Alto Networks warned that dark LLMs such as WormGPT 4 and KawaiiGPT represent a “new baseline for digital risk”, mainly driven by the democratization of skill and commercialization of cyberattacks.
“These unrestricted models have fundamentally removed some of the barriers in terms of technical skill required for cybercrime activity. These models grant the power once reserved for more knowledgeable threat actors to virtually anyone with an internet connection and a basic understanding of how to create prompts to achieve their goals,” the security firm explained.

