
Artificial Intelligence

Easily Exploitable Critical Vulnerabilities Found in Open Source AI/ML Tools

Protect AI warns of a dozen critical vulnerabilities in open source AI/ML tools reported via its bug bounty program. The post Easily Exploitable Critical Vulnerabilities Found in Open Source AI/ML Tools appeared first on SecurityWeek.

A dozen critical vulnerabilities have been discovered in various open source AI/ML tools over the past few months, a new Protect AI report shows.

The AI security firm warns of a total of 32 security defects reported as part of its Huntr AI bug bounty program, including critical-severity issues that could lead to information disclosure, access to restricted resources, privilege escalation, and complete server takeover.

The most severe of these bugs is CVE-2024-22476 (CVSS score of 10), an improper input validation in Intel Neural Compressor software that could allow remote attackers to escalate privileges. The flaw was addressed in mid-May.

A critical-severity issue in ChuanhuChatGPT (CVE-2024-3234) that allowed attackers to steal sensitive files existed because the application used an outdated, vulnerable version of the Gradio open source Python package.

LoLLMs was found vulnerable to a path traversal protection bypass (CVE-2024-3429) leading to arbitrary file reading, which could be exploited to access sensitive data or cause a denial-of-service (DoS) condition.
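Path traversal flaws of this kind typically arise when a user-supplied file name is joined to a base directory without canonicalization, letting sequences like "../" escape the intended folder. The defensive pattern can be sketched as follows (a generic illustration, not LoLLMs' actual code; function and parameter names are hypothetical):

```python
import os

def safe_read(base_dir: str, user_path: str) -> bytes:
    """Read a file only if it resolves inside base_dir."""
    # Canonicalize the requested path relative to the allowed base directory,
    # resolving any "../" components and symlinks.
    requested = os.path.realpath(os.path.join(base_dir, user_path))
    base = os.path.realpath(base_dir)
    # Reject anything that escapes base_dir (e.g. "../../etc/passwd").
    if os.path.commonpath([base, requested]) != base:
        raise PermissionError("path traversal attempt blocked")
    with open(requested, "rb") as f:
        return f.read()
```

The key point is that the check runs on the canonicalized path; validating the raw input string (for instance, stripping "../" once) is exactly the kind of protection that bypasses like this one defeat.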

Two critical-severity vulnerabilities in Qdrant (CVE-2024-3584 and CVE-2024-3829) could allow attackers to write and overwrite arbitrary files on the server, potentially enabling full takeover.

Lunary was found to allow users “to access projects via the API from an organization that they should not have authorization to access”. The issue is tracked as CVE-2024-4146.
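Cross-organization access bugs like this are usually fixed by verifying object ownership on every request rather than trusting the identifier alone. A minimal sketch of that server-side check (hypothetical names and data model, not Lunary's actual code):

```python
class AuthorizationError(Exception):
    pass

def get_project(projects: dict, project_id: str, caller_org_id: str) -> dict:
    """Return a project only if it belongs to the caller's organization.

    `projects` maps project IDs to records carrying an `org_id` field,
    standing in for whatever datastore the API queries.
    """
    project = projects.get(project_id)
    if project is None or project["org_id"] != caller_org_id:
        # Returning the same error for "missing" and "forbidden"
        # avoids leaking which project IDs exist.
        raise AuthorizationError("project not found")
    return project
```

Without this ownership comparison, any authenticated user who guesses or enumerates a valid project ID can read another organization's data.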

Other critical-severity flaws researchers from the Huntr community discovered include: server-side request forgery (SSRF) in AnythingLLM, insecure direct object reference (IDOR) in Lunary, missing authorization and authentication mechanisms in Lunary, improper path sanitization in LoLLMs, path traversal in AnythingLLM, and log injection in the Nvidia Triton Inference Server for Linux.
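Of these classes, SSRF is worth illustrating: a server that fetches user-supplied URLs can be tricked into reaching internal services such as cloud metadata endpoints. A common mitigation is to resolve the host and refuse private, loopback, and link-local addresses before fetching (a generic sketch, not AnythingLLM's actual code):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback, or
    link-local address -- the typical targets of an SSRF attack."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```

Note that robust SSRF defenses also pin the resolved address when making the actual request, since an attacker-controlled DNS record can change between the check and the fetch.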

A dozen other high-severity vulnerabilities were identified and reported in LoLLMs, Lunary, AnythingLLM, Deep Java Library (DJL), Scrapy, and Gradio.

“It is important to note that all vulnerabilities were reported to the maintainers a minimum of 45 days prior to publishing this report, and we continue to work with maintainers to ensure a timely fix prior to publication,” Protect AI notes.

Related: Critical PyTorch Vulnerability Can Lead to Sensitive AI Data Theft

Related: Eight Vulnerabilities Disclosed in the AI Development Supply Chain

Related: Critical Vulnerabilities Found in Open Source AI/ML Platforms

Related: Beware – Your Customer Chatbot is Almost Certainly Insecure: Report
