Strengthening AI Supply Chains: Lessons from CVE-2024-34359


In the rapidly advancing field of artificial intelligence, the security of our AI supply chains has never been more critical. The recent CVE-2024-34359 (https://nvd.nist.gov/vuln/detail/CVE-2024-34359) vulnerability, known as "Llama Drama," provides a cautionary tale of how dependencies in AI systems can expose significant security risks.

Generative AI: A Complex Software Ecosystem

Generative AI software represents a new niche in the software landscape. While it is driven by large language models (LLMs), these systems are built upon a diverse array of software components. Beyond the models themselves, generative AI systems integrate numerous libraries, frameworks, and tools that make up the software supply chain. Each of these components plays a vital role in the system's functionality and security.

In-Depth Exploit Explanation

Understanding the Vulnerability:

  • Package Affected: llama-cpp-python, a widely used package that provides Python bindings for llama.cpp and is commonly embedded in applications that load and run AI models.
  • Root Cause: The vulnerability is due to the improper use of the Jinja2 template engine to process model metadata without sandboxing, leading to server-side template injection (SSTI).

Python Exploit Example:

Here’s a technical breakdown of how this vulnerability can be exploited:
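The sketch below illustrates the class of injection involved, not the exact code shipped in llama-cpp-python. It renders a hypothetical malicious chat_template (of the kind an attacker could embed in a model's metadata) with a plain Jinja2 Environment, then shows how Jinja2's ImmutableSandboxedEnvironment rejects the same payload:

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Hypothetical malicious chat_template: the expression escapes the
# template context and reaches Python's os module to run a shell command.
MALICIOUS_TEMPLATE = (
    "{{ self.__init__.__globals__.__builtins__"
    "['__import__']('os').popen('ls -la').read() }}"
)

# Vulnerable pattern: a plain Environment renders untrusted metadata,
# so the injected `ls -la` actually executes on the host.
listing = Environment().from_string(MALICIOUS_TEMPLATE).render()
print(listing)  # directory listing produced by the injected command

# Fixed pattern: an ImmutableSandboxedEnvironment blocks the unsafe
# attribute access and raises SecurityError instead of executing it.
blocked = False
try:
    ImmutableSandboxedEnvironment().from_string(MALICIOUS_TEMPLATE).render()
except SecurityError:
    blocked = True
print("sandbox blocked the payload:", blocked)
```

The payload works because Jinja2's default attribute lookup happily walks dunder attributes (`__init__`, `__globals__`) from the template's own `self` object out to Python builtins; the sandboxed environment refuses that traversal.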

How It Works:

  • Malicious Template: The chat_template field in the model's metadata contains a Jinja2 expression that executes the ls -la command, demonstrating how system commands can be run through unsanitized input.
  • Execution: Because the template is rendered without sandboxing, the injected command executes on the server, potentially exposing sensitive files and directories to the attacker.

Mitigation Measures:

  • Upgrade: Users should upgrade to version 0.2.72 or later of llama-cpp-python, which includes fixes for this issue.
  • Sandboxing: Implement sandboxing to restrict the execution capabilities of templates, preventing unsafe operations.
  • Input Validation: Ensure that all user inputs are thoroughly validated and sanitized to block malicious code.
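The sandboxing and validation measures above can be combined in a small defensive wrapper. The helper below is a hypothetical sketch (render_chat_template is not part of llama-cpp-python's API): it renders model-supplied templates only inside an immutable sandbox and converts any security or syntax violation into a rejection rather than an execution:

```python
from jinja2.exceptions import SecurityError, TemplateSyntaxError
from jinja2.sandbox import ImmutableSandboxedEnvironment

def render_chat_template(template_source: str, **context) -> str:
    """Render an untrusted chat template defensively (illustrative helper)."""
    env = ImmutableSandboxedEnvironment(autoescape=False)
    try:
        return env.from_string(template_source).render(**context)
    except (SecurityError, TemplateSyntaxError) as exc:
        # Surface the violation instead of letting the payload run.
        raise ValueError(f"rejected unsafe chat template: {exc}") from exc

# A benign template renders normally...
print(render_chat_template("Hello, {{ name }}!", name="world"))

# ...while an injection attempt is rejected.
try:
    render_chat_template("{{ self.__init__.__globals__ }}")
except ValueError as exc:
    print(exc)
```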

The Broader Implications for AI Security

Although CVE-2024-34359 was discovered and addressed two months ago, it continues to serve as a powerful lesson in securing AI supply chains. The vulnerability underscores the importance of scrutinizing software dependencies: a single weakness can have cascading effects that compromise entire systems.

By focusing on strengthening our AI supply chains and ensuring all components are regularly audited and updated, we can enhance the resilience and security of AI technologies, enabling them to drive innovation securely and responsibly.
