
Data Exposure Vulnerability Found in Deep Learning Tool Keras

The vulnerability is tracked as CVE-2025-12058 and can be exploited for arbitrary file loading and SSRF attacks. The post Data Exposure Vulnerability Found in Deep Learning Tool Keras appeared first on SecurityWeek.


A vulnerability in the open source library Keras could allow attackers to load arbitrary local files or conduct server-side request forgery (SSRF) attacks.

Keras is a deep learning API that provides a Python interface for artificial neural networks and can also be used as a low-level cross-framework language for building AI models that work with JAX, TensorFlow, and PyTorch.

Tracked as CVE-2025-12058 (CVSS score of 5.9), the medium-severity flaw existed because the library’s StringLookup and IndexLookup preprocessing layers allow file paths or URLs to be used as inputs to define vocabularies.

When Keras reconstructed the layers by loading a serialized model, it would access the referenced file paths during deserialization, without proper validation or restriction, and incorporate the contents of the specified files into the model state.

“This means that even when security features like safe_mode are enabled, a malicious model can still instruct Keras to access local files or external URLs during load time, exposing sensitive data or enabling remote network requests,” Zscaler explains.

According to the company, this behavior bypasses safe deserialization, allowing attackers to read arbitrary local files, exfiltrate information through vocabularies, and conduct SSRF attacks.
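The mechanism can be illustrated with a minimal sketch. This is not actual Keras internals; the `load_layer_config` function below is a hypothetical stand-in for a deserializer that accepts a string-typed vocabulary and eagerly reads the referenced file, which is the vulnerable pattern the report describes.

```python
import os
import tempfile

def load_layer_config(config: dict) -> dict:
    """Toy deserializer mimicking the vulnerable pattern: if the
    vocabulary is a string, treat it as a path and read the file
    at load time, with no validation or restriction."""
    vocab = config.get("vocabulary")
    if isinstance(vocab, str):           # attacker-controlled path
        with open(vocab) as f:           # file read during deserialization
            vocab = f.read().splitlines()
    return {**config, "vocabulary": vocab}

# Simulate a sensitive local file standing in for e.g. an SSH key.
secret = tempfile.NamedTemporaryFile("w", delete=False)
secret.write("TOP-SECRET-KEY\n")
secret.close()

malicious_config = {"name": "lookup", "vocabulary": secret.name}
loaded = load_layer_config(malicious_config)
print(loaded["vocabulary"])  # the file's contents are now model state
os.unlink(secret.name)
```

Once the secret is absorbed into the vocabulary, saving or re-uploading the model carries it along, which is the exfiltration path Zscaler describes.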

In real-world scenarios, attackers could exploit the vulnerability by uploading malicious Keras models with specially crafted vocabulary parameters, such as paths targeting SSH keys, to public repositories.

When a victim downloads and loads the model, their SSH private keys are read into the model’s vocabulary during deserialization. The attacker can then retrieve the keys by redownloading the model or through vocabulary exfiltration.

“Potential impact: complete compromise of victim’s SSH access to servers, code repositories, and cloud infrastructure. Attackers can pivot to active intrusion: clone private repos, inject backdoors or malicious commits into CI/CD, execute code in production, and move laterally,” Zscaler says.

If a malicious model is deployed in cloud environments with instance metadata services, its loading in a VM allows attackers to retrieve IAM credentials and gain full control over an organization’s cloud resources.
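The SSRF angle follows the same pattern: if the loader accepts a URL-typed vocabulary, deserialization becomes a network request originating from the victim's machine. The sketch below mocks the fetch so nothing goes on the wire; `load_vocabulary` and `mock_fetch` are illustrative names, not Keras APIs. The target shown is the standard AWS instance metadata address mentioned in such scenarios.

```python
from urllib.parse import urlparse

def mock_fetch(url: str) -> str:
    # Stand-in for the HTTP GET a vulnerable loader would perform.
    return f"response-from:{urlparse(url).hostname}"

def load_vocabulary(vocab):
    """Toy loader: a URL-typed vocabulary triggers a request at
    model-load time, from inside the victim's network."""
    if isinstance(vocab, str) and vocab.startswith(("http://", "https://")):
        return mock_fetch(vocab)  # request made during deserialization
    return vocab

# A vocabulary pointing at the cloud instance metadata service (IMDS):
ssrf_vocab = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
print(load_vocabulary(ssrf_vocab))
```

On a real VM the response could contain IAM credentials, which is why loading an untrusted model inside a cloud environment is the worst-case scenario here.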

The vulnerability was resolved in Keras version 3.11.4 by embedding vocabulary files directly into the Keras archive and loading them from the archive upon initialization. It also disallows the loading of arbitrary vocabulary files when safe_mode is enabled.
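Beyond upgrading to 3.11.4, defenders can screen untrusted model configs before loading them. The sketch below walks a config dictionary and flags any layer whose vocabulary is a string (a path or URL) rather than an inline token list. The `class_name`/`config` field names follow the common Keras serialization layout, but treat this as a heuristic pre-load check, not an official API.

```python
def find_external_vocabularies(config):
    """Return (class_name, vocabulary) pairs for every layer config
    whose vocabulary is a string, i.e. an external path or URL."""
    hits = []

    def walk(node):
        if isinstance(node, dict):
            inner = node.get("config")
            if isinstance(inner, dict) and isinstance(inner.get("vocabulary"), str):
                hits.append((node.get("class_name"), inner["vocabulary"]))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return hits

suspicious = {
    "class_name": "StringLookup",
    "config": {"vocabulary": "/home/user/.ssh/id_rsa"},
}
print(find_external_vocabularies({"layers": [suspicious]}))
# → [('StringLookup', '/home/user/.ssh/id_rsa')]
```

An empty result does not prove a model is safe, but any hit is a strong signal the model should not be loaded.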

Related: Chrome 142 Update Patches High-Severity Flaws

Related: Cisco Patches Critical Vulnerabilities in Contact Center Appliance

Related: Critical Vulnerabilities Patched in TP-Link’s Omada Gateways

Related: Oracle Releases October 2025 Patches
