
From Open Source to OpenAI: The Evolution of Third-Party Risk

From open source libraries to AI-powered coding assistants, speed-driven development is introducing new third-party risks that threat actors are increasingly exploiting. The post From Open Source to OpenAI: The Evolution of Third-Party Risk appeared first on SecurityWeek.


The Silicon Valley mantra to “move fast and break things” prioritizes growth over anything else. Unfortunately, this velocity extends to efficiently introducing vulnerabilities into the software supply chain. From open source software libraries to AI-enabled coding assistants, these tools enable rapid innovations, but they are also enabling attack vectors that threat actors are looking to exploit.

Third-party risks have always been an issue, but they have not always been top of mind. For the past decade, ransomware dominated the headlines and the mindshare of cybersecurity leaders. More recently, nation-state threats and the growing risk of cyberwarfare have come to the forefront. However, regardless of an attacker’s motive or mode of operation, vulnerabilities in the software supply chain remain an attractive target for cyberattacks.

The SolarWinds breach was a wake-up call for the danger of third-party risk. It doesn’t matter how iron-clad your defenses are if an upstream service provider has a secure tunnel into your enterprise. The Log4J vulnerability, Log4Shell, demonstrated how third-party risk can manifest in open source software libraries, which have been widely adopted and implemented into enterprise services. Threat actors wasted no time scanning for these vulnerabilities when they were disclosed.

Lately, threat actors have turned their attention to AI-enabled coding assistants and their tendency to hallucinate factually incorrect responses; in this case, recommending a software library that does not exist. When threat actors identify a hallucinated library name, they register a malicious package under that name, and developers unknowingly integrate these malicious packages into their code. Cybersecurity researchers have dubbed this attack “slopsquatting.”

Consequently, there is a need for both DevOps and cybersecurity to become more proactive about identifying and remediating these risks. DevOps calls this “shift left,” and cybersecurity calls this “left of boom.” Visibility is the foundation of this approach.

A legacy of third-party risk

Third-party risks are hardly a new phenomenon. Examples of supply chain attacks date back at least two decades. Someone infamously attempted to insert a backdoor into the Linux kernel in 2003. However, it wasn’t until the SolarWinds breach in 2020 that organizations started getting serious about third-party risk.

The SolarWinds breach was a sophisticated supply chain attack conducted by a Russian advanced persistent threat (APT). A compromised software update pushed a malicious backdoor to 18,000 customers, enabling the threat actors to access high-value targets, including dozens of U.S. federal agencies.

One year later, Log4Shell, a vulnerability in Log4J, triggered a major supply chain security crisis. Log4J is a popular open source logging library for the Java ecosystem, embedded in hundreds of millions of applications and devices. Vulnerabilities like these are even more of an issue in operational technology (OT) environments, which contain mission-critical assets and legacy technology that are difficult or impossible to patch.

Many organizations leverage open source software libraries because they accelerate innovation and reduce costs, but in the case of Log4J, this convenience came at the cost of leaving their software exposed. For weeks following the disclosure of Log4Shell, organizations scrambled to try to identify which of their vendors had integrated Log4J into their solutions, and vendors had to reassure their customers that everything was under control as they rolled out remediation plans.

Please don’t kill my vibe

“Vibe coding” has emerged as a prominent “killer app” for generative AI. According to GitHub, 97% of developers have used AI coding tools in the past year. However, overreliance on AI-generated code is introducing vulnerabilities into codebases.

Academic researchers have found (PDF) that across 16 leading code-generation models, 19% of all recommended software packages do not exist, and 43% of hallucinated package names recurred every time the same prompt was rerun.

Malicious actors are proactively discovering these hallucinated package names and preemptively registering malicious code under them. A Python Software Foundation developer has dubbed the attack “slopsquatting” for its similarity to cybersquatting and typosquatting.
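
One practical defense is to never install a package name straight from an assistant's suggestion. As a minimal sketch (not drawn from the article, and with hypothetical package names and versions), a pre-install check can reject any name that is not on a reviewed allowlist, so a hallucinated name never reaches `pip install`:

```python
# Sketch: vet requested package names against a pinned allowlist before
# installation. Names/versions below are purely illustrative.

ALLOWED_PACKAGES = {
    "requests": "2.31.0",
    "ccxt": "4.1.22",
}

def vet_requirements(requested: list[str]) -> tuple[list[str], list[str]]:
    """Split requested package names into approved pins and rejects."""
    approved, rejected = [], []
    for name in requested:
        version = ALLOWED_PACKAGES.get(name.lower())
        if version is None:
            rejected.append(name)  # unknown name: possible hallucination
        else:
            approved.append(f"{name.lower()}=={version}")
    return approved, rejected

approved, rejected = vet_requirements(["requests", "ccxt-mexc-futures"])
print(approved)  # ['requests==2.31.0']
print(rejected)  # ['ccxt-mexc-futures']
```

The rejected list would then be triaged by a human before any registry lookup, inverting the default of trusting whatever name the assistant emitted.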

For example, a malicious package named “ccxt-mexc-futures” was registered on PyPI, a public software repository, and downloaded more than 1,000 times. In this instance, the malware modified key operations used for cryptocurrency trading; however, given the recent success of the Shai-Hulud worm replicating across npm, slopsquatting could plausibly serve as the initial foothold for a future high-profile worm.

Beyond the enterprise, before the attack

There are many differences between these examples of third-party risk. The SolarWinds breach was a highly targeted supply chain attack that raised the issue of supply chain integrity. The Log4J vulnerability, considered the most widespread vulnerability ever disclosed at the time, raised the need for supply chain observability. The emergence of slopsquatting attacks demands more oversight of AI-enabled coding assistants.

A common theme here is the need for more visibility. The complexity of software dependencies throughout the supply chain can make it difficult to monitor for vulnerabilities and risks. DevOps needs to “shift left” to become more proactive and transparent about risk so that cybersecurity teams can identify and remediate these risks before they are exploited (i.e., left of boom).

Organizations can audit the provenance of software with Software Bills of Materials (SBOMs) to track each dependency’s origin and version. Static application security testing (SAST) and dynamic application security testing (DAST) can identify vulnerabilities that could be exploited in an attack. Again, visibility is key in this approach. Organizations need to begin with a comprehensive asset inventory in order to assess the business impact of specific applications.
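
To make the SBOM idea concrete, the sketch below parses a toy CycloneDX-style document and locates a dependency by name, the kind of query teams ran during the Log4Shell scramble. The embedded SBOM is invented for illustration, not a real product's inventory:

```python
import json

# Toy CycloneDX-style SBOM; component names/versions are illustrative.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"name": "log4j-core", "version": "2.14.1", "type": "library"},
    {"name": "jackson-databind", "version": "2.15.2", "type": "library"}
  ]
}
"""

def find_component(sbom: dict, name: str) -> list[str]:
    """Return name==version strings for components matching `name`."""
    return [
        f"{c['name']}=={c['version']}"
        for c in sbom.get("components", [])
        if c.get("name") == name
    ]

sbom = json.loads(sbom_json)
print(find_component(sbom, "log4j-core"))  # ['log4j-core==2.14.1']
```

With SBOMs collected from every vendor, the same lookup answers "which of our suppliers ship the vulnerable library?" in seconds rather than weeks.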

Cybersecurity teams can also monitor for indicators of attack (IOAs), such as unusual system behavior or new network connections. Identifying behavioral anomalies like these is an ideal use case for AI, since machine learning excels at recognizing patterns and flagging deviations from normal behavior.
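
As a toy stand-in for such anomaly detection, the sketch below flags hourly outbound connection counts that fall far outside a learned baseline. The counts and threshold are illustrative assumptions; a production system would use a proper ML model over many features:

```python
import statistics

# Baseline of normal hourly outbound connection counts (illustrative).
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: int, threshold: float = 3.0) -> bool:
    """True when the count lies more than `threshold` stdevs from the mean."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(14))  # False: within normal range
print(is_anomalous(90))  # True: possible indicator of attack
```

The z-score here captures the core idea: "normal" is learned from history, and alerts fire on statistical deviation rather than on a known signature.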

When it comes to AI-enabled coding assistants, developers need to be aware of their characteristic risks, such as weak account authentication logic and missing input validation. Developers should embed security requirements into their prompts, such as MFA lockout policies and input sanitization. It is also essential, now more than ever, to mandate human code review and application security testing.
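
As one small example of the input validation a human reviewer should insist on in AI-generated code, the sketch below checks usernames against a strict allowlist pattern instead of ad hoc string checks. The pattern and length limits are assumptions chosen for illustration:

```python
import re

# Strict allowlist: lowercase letter, then 2-31 lowercase letters,
# digits, or underscores. Rejects everything else by default.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def is_valid_username(value: str) -> bool:
    """Accept only usernames matching the allowlist pattern."""
    return bool(USERNAME_RE.fullmatch(value))

print(is_valid_username("alice_01"))           # True
print(is_valid_username("alice; DROP TABLE"))  # False: rejected outright
```

Allowlisting (define what is valid, reject the rest) is generally safer than blocklisting known-bad input, and it is exactly the kind of requirement worth stating explicitly in a prompt and verifying in review.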
