SentinelOne and Censys identified AI infrastructure spanning 175,000 exposed Ollama hosts, operating without the typical guardrails and monitoring that providers implement.
Over 293 days of research, the security firms made 7.23 million observations distributed across 130 countries and 4,032 autonomous system numbers (ASNs), with 23,000 hosts accounting for most of the activity.
Roughly half of the identified hosts could execute code, access APIs, and interact with external systems, SentinelOne says.
The cybersecurity firm explains that a small set of persistent hosts accounted for most of the observed activity. Specifically, 13% of the hosts appeared in more than 100 observations, generating nearly 76% of the activity.
“Conversely, hosts observed exactly once constitute 36% of unique hosts but contribute less than 1% of total observations,” SentinelOne notes.
The hosts that persistently appeared in observations, SentinelOne says, “provide ongoing utility to their operators and, by extension, represent the most attractive and accessible targets for adversaries.”
Looking at infrastructure distribution, the cybersecurity firm notes that 56% of hosts were found on fixed-access telecom networks, including consumer ISPs.
In terms of geographical distribution, China accounted for the largest share of hosts, at approximately 30%, followed by the US, at just over 20%. Within the US, Virginia alone accounted for 18% of the hosts.
While the observed behavior pointed toward multi-model deployments, Llama AI models were the most prevalent, followed by Qwen2, Gemma2, Qwen3, and Nomic-Bert, SentinelOne says.
The cybersecurity firm also discovered that at least 201 hosts were running prompt templates that explicitly remove safety guardrails.
The exposed hosts, SentinelOne says, could be accessed without authorization, monitoring, or billing controls, and could be abused at zero marginal cost to the attackers.
“The victim pays the electricity bill and infrastructure costs while the attacker receives the generated output. For operations requiring volume, such as spam generation, phishing content creation, or disinformation campaigns, this represents a substantial operational advantage,” SentinelOne notes.
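To illustrate why exposure is so cheap to exploit, the sketch below builds the unauthenticated HTTP requests an outsider could send to Ollama's documented REST API: GET /api/tags (lists installed models) and POST /api/generate (runs inference), both served on Ollama's default port 11434 with no built-in authentication. The host address here is a placeholder from the TEST-NET documentation range, and the requests are only constructed, not sent; probing hosts you do not own or have authorization to test is not something this example endorses.

```python
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default listening port


def build_tags_request(host: str, port: int = OLLAMA_PORT) -> urllib.request.Request:
    """Build an unauthenticated GET to /api/tags, which lists the models
    installed on an Ollama host. No token or credential is required."""
    return urllib.request.Request(f"http://{host}:{port}/api/tags", method="GET")


def build_generate_request(
    host: str, model: str, prompt: str, port: int = OLLAMA_PORT
) -> urllib.request.Request:
    """Build an unauthenticated POST to /api/generate. Anyone who can reach
    the port can consume the host's compute while the owner pays the bill."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Placeholder host (TEST-NET-1 address); requests are built, never sent.
req = build_tags_request("192.0.2.10")
print(req.full_url)  # http://192.0.2.10:11434/api/tags
```

Note that neither request carries an API key, session cookie, or billing identifier, which is precisely the gap SentinelOne describes: reaching the port is the only barrier.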
At the same time, these unprotected models could be abused through prompt injection, as the lack of authentication and safety mechanisms means the AI will comply with attackers’ requests, including requests for information retrieval.
Hosts on residential and telecom networks could be abused to launder malicious traffic, while those with vision capabilities could be exploited for indirect prompt injection via images, at scale.
“The exposed Ollama ecosystem represents what we assess to be the early formation of a public compute substrate: a layer of AI infrastructure that is widely distributed, unevenly managed, and only partially attributable, yet persistent enough in specific tiers and locations to constitute a measurable phenomenon,” SentinelOne notes.
A fresh report from Pillar Security shows how a threat actor hijacked and monetized over 30 LLMs as part of Operation Bizarre Bazaar.
Related: LLMs in Attacker Crosshairs, Warns Threat Intel Firm
Related: WormGPT 4 and KawaiiGPT: New Dark LLMs Boost Cybercrime Automation
Related: Cyber Insights 2026: Quantum Computing and the Potential Synergy With Advanced AI
Related: Cyber Insights 2026: Threat Hunting in an Age of Automation and AI

