CYBERNEWSMEDIA Network

Ransomware · Artificial Intelligence

PromptLock Only PoC, but AI-Powered Ransomware Is Real

PromptLock is only a prototype of LLM-orchestrated ransomware, but hackers already use AI in file encryption and extortion attacks. The post PromptLock Only PoC, but AI-Powered Ransomware Is Real appeared first on SecurityWeek.


AI-powered ransomware is here, but it is not the recently discovered PromptLock, which has turned out to be a prototype created by academics at the New York University Tandon School of Engineering.

PromptLock samples were found on VirusTotal in late August, when ESET revealed that the malware relied on OpenAI’s GPT-OSS:20b model, using hardcoded prompts to generate Lua scripts on the fly and to perform various actions on targeted systems.
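The mechanism ESET describes can be sketched as a hardcoded prompt sent to a locally hosted model over Ollama’s OpenAI-compatible API. This is an illustrative approximation, not PromptLock’s actual code: the endpoint URL follows Ollama’s default, and the prompt is a deliberately benign placeholder.

```python
# Sketch of the "hardcoded prompt -> locally generated script" pattern.
# The URL is Ollama's default OpenAI-compatible endpoint; the prompt is benign.
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"
HARDCODED_PROMPT = "Write a Lua function that lists the files in a directory."

def build_request(prompt: str) -> request.Request:
    """Package the hardcoded prompt as an OpenAI-style chat completion call."""
    body = json.dumps({
        "model": "gpt-oss:20b",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(OLLAMA_URL, data=body,
                           headers={"Content-Type": "application/json"})

req = build_request(HARDCODED_PROMPT)
# Sending this request would return model-generated Lua at runtime.
```

Because the operational logic arrives as model output at runtime rather than shipping inside the binary, static signatures have little to match on, which is what makes this pattern notable for defenders.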

Last week, confirmation came that PromptLock is indeed only a proof-of-concept (PoC), after academics from NYU contacted ESET to point to their recent research paper detailing Ransomware 3.0 (PDF), which they call “the first threat model and research prototype of LLM-orchestrated ransomware”.

Ransomware 3.0, the researchers explain, relies on LLMs to orchestrate all phases of its attack chain, adapting to the environment and deploying tailored payloads.

“The system performs reconnaissance, payload generation, and personalized extortion, in a closed-loop attack campaign without human involvement,” the academics explain.

The prototype can be deployed as a seemingly benign LLM-assisted tool that embeds malicious instructions. Once executed, it relies on AI to probe the environment, locate sensitive information, devise and execute an attack vector such as file encryption, and generate personalized extortion notes.

“Distinguishing between legitimate LLM utilities and packages containing hidden malicious instructions will become increasingly difficult. Once deployed, such malware could discover local LLM endpoints, harvest commercial API keys, or connect to its own command-and-control (C&C) server, then prompt an LLM to generate malicious code at runtime,” the academics explain.

However, according to Anthropic’s August 2025 threat intelligence report (PDF), such ransomware attacks are real: the company has disrupted in-the-wild activity that leveraged its Claude Code agentic coding tool to perform all the activities Ransomware 3.0 was devised to demonstrate.

Threat actors leveraged open source intelligence tools and scanning of internet-connected devices to identify targets, then used Claude Code for “reconnaissance, exploitation, lateral movement, and data exfiltration”.

The attackers embedded their preferred TTPs in the CLAUDE.md file, which Claude Code reads to tailor its responses to user preferences, and used the assistant to determine how to penetrate networks, identify data for exfiltration, and craft psychologically targeted ransom notes.
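For context on the mechanism: Claude Code automatically loads a CLAUDE.md file from the project root and treats its contents as standing instructions for the session. A deliberately benign example (the contents below are illustrative, not taken from the attack):

```markdown
<!-- CLAUDE.md sits at the repository root; Claude Code reads it at
     session start and follows it as standing instructions. -->
## Project conventions
- Use Python 3.12 with type hints throughout.
- Run the test suite before proposing any change.
- Prefer standard-library modules over new dependencies.
```

Anthropic’s report describes attackers filling this same file with operational TTPs, effectively turning a convenience feature into a persistent attack playbook.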

“The actor’s systematic approach resulted in the compromise of personal records, including healthcare data, financial information, government credentials, and other sensitive information, with direct ransom demands occasionally exceeding $500,000,” Anthropic’s report shows.

The attackers also relied on Claude Code to create malware packed with anti-detection capabilities, and to analyze the exfiltrated data to determine appropriate ransom amounts, demanded in Bitcoin.

“Claude Code facilitated comprehensive data extraction and analysis across multiple victim organizations. It systematically extracted and analyzed data from various organizations including a defense contractor, healthcare providers, and a financial institution, extracting sensitive information including social security numbers, bank account details, patient information, and ITAR-controlled documentation,” Anthropic said.

The company banned the accounts associated with the observed activity and started developing detections to prevent similar behavior, but “the operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator,” Anthropic notes.

“The reality is that threat actors have been leveraging foundational models to conduct cybercrime for years now. It sounds shocking that modern LLMs can be used to orchestrate all parts of a modern ransomware campaign, but the reality is it’s not difficult to do this, when the attacker breaks the attack up into small task-driven pieces,” Exabeam senior director of security research Steve Povolny said.

“We have to simply assume that attackers can construct large-scale, specific, and complex attack scenarios with dramatically increased speed, in the same way that non-coders can now create enterprise applications and services with little to no prior knowledge. The reality is that the attack methods haven’t fundamentally changed that much; it’s just a whole lot easier, faster and cheaper for attackers,” Povolny added.

Related: Watch Now: Cyber AI & Automation Summit - All Sessions Available On Demand

Related: AI – Implementing the Right Technology for the Right Use Case

Related: Why Are Cybersecurity Automation Projects Failing?

Related: A Sheep in Wolf’s Clothing: Technology Alone Is a Security Facade
