AI-driven recommendations can inadvertently propagate security vulnerabilities across the software development lifecycle, says Yossi Pik, CTO and Co-Founder of Backslash Security.
When developers rely on AI tools for guidance, there's a risk that they might adopt solutions without fully understanding the underlying dependencies or security implications, Pik continues. This lack of transparency can lead to a chain reaction, where insecure or outdated packages are repeatedly used across multiple projects, exacerbating the security risks. The AI's ability to mask complexity and present solutions with a high degree of confidence further complicates this issue, as developers may overlook the need for thorough validation and testing, assuming the AI's recommendations are inherently safe.
The Veracode 2024 State of Software Security Report reveals a disconcerting trend: Despite the speed and efficiency AI brings to software development, it does not necessarily produce secure code, Chris Wysopal, Co-Founder and Chief Security Evangelist at Veracode, confirms. In fact, other research shows that AI-generated code contains about the same percentage of flaws as code created by humans.
"I'm concerned about security," Mike Loukides, VP of Emerging Tech Content at O'Reilly Media, agrees. "How much of the code in sources like GitHub or StackOverflow was written with security in mind? Probably not very much. I think an AI could be built that would generate secure code, but the first step would be assembling training data that consisted only of secure code. (Or training data that labels code as secure or insecure.) And I don't think anyone has done that."
Ultimately, we will always need the critical analysis derived from the "people factor" of software development in order to anticipate and protect code from today's existing and emerging sophisticated attack techniques, adds Pieter Danhieux, Co-Founder and CEO of Secure Code Warrior.
DEVOPSdigest invited experts across the industry — consultants, analysts and vendors — to comment on how AI can support the software development life cycle (SDLC). In Part 6 of this series, the experts warn of the security risks associated with using AI to help develop software.
UNAPPROVED ASSISTANTS
Developers may decide to use coding assistants they are familiar with from past work, even though those tools may not be approved by IT. This can leave an organization exposed to significant security risks from the generated code, such as data breaches, vulnerabilities, and unauthorized access.
David Brault
Product Marketing Manager, Mendix
UNSAFE DATA
With anything in the technology space, data exposure is one of the top risks any business will be concerned about. There are still questions about how the data gets used depending on the application, where it is stored, what is being learned from it, and so on. So not understanding what data you are giving the system to use, not following corporate data policies, or not having technical controls in place that would allow for safe usage are all challenges to a secure deployment.
Sean Heide
Research Technical Director, Cloud Security Alliance
DATA BREACHES
When integrating AI into software development, security and data privacy emerge as critical challenges. AI systems often require access to large datasets, which can include sensitive and personal information. Ensuring the privacy and security of data is essential to prevent unauthorized access and breaches.
Ed Frederici
CTO, Appfire
Malicious actors who gain unauthorized access, or manipulate model inputs or outputs, can compromise the model's integrity and the confidential data it stores. Strong security measures are key for preventing cyber incidents and ensuring the reliability of LLM applications. Organizations using LLMs should implement holistic data protection strategies, conduct regular security audits, and develop robust incident response plans to mitigate risks.
Ratan Tipirneni
President and CEO, Tigera
AI enables organizations to enhance software development practices by boosting efficiency and reducing cycle times, but its use cannot be at the cost of privacy and data security. Using AI requires guardrails to be in place for it to be implemented responsibly — both for organizations and their customers. Without carefully considering how AI tools store and protect proprietary corporate, customer, and partner data, organizations may be vulnerable to security risks, fines, customer attrition, and reputational damage. This is especially important for organizations in highly regulated environments, such as the public sector, financial services, or healthcare, that must adhere to strict external regulatory and compliance obligations.
David DeSanto
Chief Product Officer, GitLab
OUTDATED OPEN SOURCE SOFTWARE
Outdated open source software (OSS) package recommendations represent a significant security risk posed by GenAI coding tools. Due diligence on suspicious OSS packages often reveals that the recommended versions may be outdated and contain known vulnerabilities, largely due to the rapid pace at which new vulnerabilities are discovered. The core issue stems from the fact that vulnerability databases are continuously updated as new versions are released and new vulnerabilities are identified, while large language models (LLMs) are trained on static datasets, which may not reflect the latest security information.
Yossi Pik
CTO and Co-Founder, Backslash Security
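One practical mitigation is to vet every AI-recommended package version against a live vulnerability database before adopting it. The following minimal Python sketch, which assumes the requests library and the public OSV.dev query API, flags a recommended PyPI version that carries known advisories; it is an illustration, not a complete vetting pipeline.

```python
import requests  # assumes the requests library is available

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV vulnerability database

def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV advisory IDs that affect the given package version, if any."""
    payload = {"package": {"name": package, "ecosystem": ecosystem}, "version": version}
    response = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    response.raise_for_status()
    return [vuln["id"] for vuln in response.json().get("vulns", [])]

if __name__ == "__main__":
    # Vet a package/version an assistant recommended before adding it to the project.
    advisories = known_vulnerabilities("requests", "2.19.1")
    if advisories:
        print("Recommended version has known advisories:", advisories)
    else:
        print("No known advisories for this version.")
```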
One of the biggest challenges in using AI to support development is organizations' inability to identify the lineage of the code AI generates. It's more than likely that AI will pull from open source code to build software, and 82% of open source software components are inherently risky due to security issues, vulnerabilities, code quality, and more. It's critical that organizations have the tools in place to discover software components and continuously assess their integrity, especially components produced by AI.
Javed Hasan
CEO and Co-Founder, Lineaje
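As a small, hedged starting point for that kind of component discovery, the sketch below uses the Python standard library's importlib.metadata to enumerate the distributions installed in an environment; a real pipeline would export this inventory as an SBOM (for example CycloneDX or SPDX) and verify each entry's provenance.

```python
from importlib import metadata  # standard library, Python 3.8+

def installed_components() -> list[tuple[str, str]]:
    """Return (name, version) pairs for every distribution in the current environment."""
    return sorted((dist.metadata["Name"], dist.version) for dist in metadata.distributions())

if __name__ == "__main__":
    # Print a minimal component inventory as a starting point for lineage checks.
    for name, version in installed_components():
        print(f"{name}=={version}")
```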
VULNERABLE DEPENDENCIES
Security-wise, an AI may suggest embedding a vulnerable dependency simply because it is the most frequent one in the code it was trained on. It does seem pretty good at avoiding some basic problems, like SQL injection, though. It cannot be blindly trusted, because cybersecurity is a field of constantly moving targets and attacks.
Mathieu Bellon
Senior Product Manager, GitGuardian
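To make the SQL injection point concrete, the short sketch below (using Python's built-in sqlite3 module) contrasts the unsafe string-built query an assistant should avoid with the parameterized form it typically suggests; it is illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # hostile input

# Unsafe: building the query by string concatenation lets the input rewrite it.
# query = f"SELECT email FROM users WHERE name = '{user_input}'"

# Safer: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] because the injection attempt matches no row
```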
AI-generated guidance can sometimes result in the direct use of indirect OSS packages that are not listed in the manifest. These "phantom package" scenarios occur due to the confidence with which AI models present recommendations. At first glance, these solutions may seem simple, but they often hide the incorrect usage of dependencies. As a result, developers might be unaware of hidden dependencies, which can introduce security vulnerabilities that are not easily detectable through conventional manifest-based dependency checks.
Yossi Pik
CTO and Co-Founder, Backslash Security
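One way to surface such phantom dependencies is to compare what the code actually imports against what the manifest declares. The sketch below is a deliberately simplified illustration for a Python project with a requirements.txt: it collects top-level imports with the standard ast module and reports any that the manifest does not mention (it ignores the mapping between distribution names and import names, which a real tool must handle).

```python
import ast
import pathlib
import sys

def imported_modules(source_dir: str) -> set[str]:
    """Collect top-level module names imported anywhere under source_dir."""
    found = set()
    for path in pathlib.Path(source_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                found.add(node.module.split(".")[0])
    return found

def declared_requirements(manifest: str = "requirements.txt") -> set[str]:
    """Read bare requirement names from a requirements.txt-style manifest."""
    names = set()
    for line in pathlib.Path(manifest).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            names.add(line.split("==")[0].split(">=")[0].strip().lower())
    return names

if __name__ == "__main__":
    stdlib = set(sys.stdlib_module_names)  # available in Python 3.10+
    undeclared = imported_modules("src") - declared_requirements() - stdlib
    print("Imported but not declared in the manifest:", sorted(undeclared))
```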
MODEL EXPLOITATION
Model exploitation occurs when bad actors identify and exploit vulnerabilities within LLMs for nefarious purposes. This can lead to incorrect or harmful outputs from the model, in turn compromising its effectiveness and safety.
Ratan Tipirneni
President and CEO, Tigera
PROMPT INJECTION
When it comes to cybersecurity risks for LLM applications, prompt injection is a serious threat. In prompt injection, attackers manipulate the input prompts to an LLM to generate incorrect or harmful responses. This can compromise the model's integrity and output quality. Safeguards against prompt manipulation include validating and sanitizing all inputs to the model.
Ratan Tipirneni
President and CEO, Tigera
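As a hedged illustration of that kind of input validation, the Python sketch below applies a length limit and a small deny-list of common injection phrasings before user text is placed into a prompt; the patterns and limits are assumptions for the example, and real defenses layer many more controls.

```python
import re

# Illustrative deny-list of phrasings often seen in prompt-injection attempts.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|any|previous) .*instructions", re.IGNORECASE),
    re.compile(r"disregard .*system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]
MAX_INPUT_CHARS = 4000  # assumed limit for this example

def validate_user_input(text: str) -> str:
    """Reject oversized or suspicious input before it reaches the model."""
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("Input exceeds the allowed length")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(text):
            raise ValueError("Input matches a known prompt-injection pattern")
    return text

def build_prompt(user_text: str) -> str:
    """Keep untrusted user content clearly delimited from system instructions."""
    safe_text = validate_user_input(user_text)
    return (
        "System: Answer only questions about the product documentation.\n"
        f"User (untrusted content between markers):\n<<<\n{safe_text}\n>>>"
    )

if __name__ == "__main__":
    print(build_prompt("How do I rotate my API key?"))
```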
TRAINING DATA POISONING
Library poisoning involves a bad actor intentionally corrupting library data to manipulate the AI's responses in malicious ways during model development. For organizations developing their own AI models, it is imperative that development teams carefully monitor and curate data before leveraging AI-based tools.
Chetan Conikee
Co-Founder and CTO, Qwiet AI
Training data poisoning refers to tampering with the data used to train LLMs in an attempt to corrupt the model's learning process. This can alter model outputs, leading to unreliable or biased results. Hyper-vigilance in data sourcing and validation is necessary in order to prevent data poisoning. Countermeasures include using verified and secure data sources, using anomaly detection during training, and constantly monitoring model performance for signs of corruption.
Ratan Tipirneni
President and CEO, Tigera
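To make the anomaly-detection countermeasure concrete, the hypothetical sketch below uses scikit-learn's IsolationForest to flag outlying training samples for human review before they reach the model; the synthetic features and thresholds are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # assumes scikit-learn is installed

rng = np.random.default_rng(0)

# Stand-in for numeric features extracted from training samples
# (for example text length or embedding statistics); a few rows are corrupted.
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
poisoned = rng.normal(loc=8.0, scale=0.5, size=(5, 4))
features = np.vstack([clean, poisoned])

# Flag the most anomalous samples for review before they are used for training.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(features)  # -1 marks suspected outliers
print("Samples flagged for manual review:", np.where(labels == -1)[0])
```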
LEAKING INTELLECTUAL PROPERTY
A risk of using Generative AI in software development is the leaking of intellectual property through the AI prompt. Implementing context-aware filtering can prevent the model from responding to prompts that could lead to IP leakage. It is also vital to train all users and developers interacting with Generative AI tools, making them aware of IP protection risks and policies.
Ed Charbeneau
Developer Advocate, Principal, Progress
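A hedged sketch of that context-aware filtering idea follows: before a prompt leaves the organization, it is scanned for patterns that suggest proprietary material, such as credential-like strings or internal classification markers. The specific patterns are hypothetical examples, not a complete policy.

```python
import re

# Hypothetical indicators of proprietary content; a real policy would be
# maintained by the security team and tuned to the organization's codebase.
IP_PATTERNS = {
    "credential-like string": re.compile(r"(api[_-]?key|secret|password)\s*[:=]\s*\S+", re.IGNORECASE),
    "internal classification marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL[_ ]ONLY)\b"),
    "private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def check_prompt_for_ip(prompt: str) -> list[str]:
    """Return the names of any IP-leakage indicators found in the prompt."""
    return [name for name, pattern in IP_PATTERNS.items() if pattern.search(prompt)]

def send_to_external_model(prompt: str) -> None:
    """Block the prompt if it appears to contain proprietary material."""
    findings = check_prompt_for_ip(prompt)
    if findings:
        raise PermissionError(f"Prompt blocked before leaving the organization: {findings}")
    # ...otherwise hand the prompt to the external Generative AI service...
    print("Prompt allowed.")

if __name__ == "__main__":
    send_to_external_model("Refactor this helper that parses CSV rows.")  # allowed
```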
DEVELOPER EDUCATION
When looking at how to determine ethical AI integration, only humans can (and should) be the ones providing oversight when considering compliance requirements, design, and threat modeling practices for developer teams. That said, we've seen how certain, more traditional, upskilling efforts have not been able to keep pace with a constantly evolving threat environment. This leads to concern about how to address developer education intuitively, evolving it into something that can be tailored to the foundational security skills and advanced techniques individuals need to keep their applications and, more importantly, their organizations, safe and secure.
Pieter Danhieux
Co-Founder and CEO, Secure Code Warrior
Go to: Exploring the Power of AI in Software Development - Part 7: Maturity