As part of DEVOPSdigest's annual list of DevOps predictions, DevSecOps experts — from analysts and consultants to the top vendors — offer thoughtful, insightful, and often controversial predictions on how DevSecOps and related risks and tools will evolve in 2025. Part 3 covers AI security risks.
AI SECURITY THREAT IN 2025
AI as a Double-Edged Sword in Software Security: AI will increasingly help coders, defenders, and attackers accelerate their work. By integrating AI with automated tooling and CI/CD pipelines, developers will be able to quickly identify and fix coding flaws. Defenders can leverage AI's ability to analyze massive amounts of data and identify patterns, accelerating the work of SOC teams and other blue-team operations. Unfortunately, attackers may also use AI to craft sophisticated social engineering attacks, review public code for vulnerabilities, and employ other tactics that will complicate cybersecurity in the near future. We need to learn how to secure AI before broadly deploying it for security purposes.
Christopher Robinson
Chief Security Architect, OpenSSF
Significant increase in software that's developed thanks to AI: By January 2023, 92% of US-based developers were using AI coding tools, so AI-generated software is already here. Developers are becoming more comfortable with it and will be using it more. However, study after study has found that AI-generated code tends to have more vulnerabilities than human-written code, which makes sense — the AI can't fully understand the code, and there's a lot of vulnerable code it's learning from. The most likely solution will be in two parts. First, automation: Projects like the AIxCC competition are working to develop AI tools to find and fix vulnerabilities. Second, we need humans to better understand how to develop secure software so that they can better supervise AI systems. We encourage software developers to take a course, such as our "Developing Secure Software" (LFD121) course, to learn how to develop secure software.
David A. Wheeler
Director of Open Source Supply Chain Security, OpenSSF
AI Governance Will Emerge as a Sprawling Security Challenge: With different regions enforcing cybersecurity regulations at varying speeds, this complex global landscape will force software providers to invest heavily in compliance efforts. Existing AI regulations focus predominantly on ethical guidelines, bias, safety, and disinformation rather than security. In the coming year, AI governance will become a critical concern for both cybersecurity professionals and regulators, particularly as US-based software regulators grapple with drafting standards for this ever-evolving technology.
Sohail Iqbal
Chief Information Security Officer, Veracode
2025 will be the year we really see the challenges of securing AI, both from a technology perspective and from a business risk management perspective, forcing industry and governments to address them. Right now, the industry has only a baseline understanding of how to use AI safely and therefore lacks a full understanding of its risks. The most important action we'll need to take in the coming year is to gain a deeper understanding of AI/ML engines and their journey into production usage. These systems could represent an organization's most vulnerable attack surface, and attackers are already exploring how they can be exploited.
Paul Davis
Field CISO, JFrog
AI SECURITY THREAT: LLM-DRIVEN CODING
In 2025, the rise of LLM-driven development will fundamentally reshape decision-making in coding, prioritizing efficiency and functionality. Developers will rely on LLMs to provide the "best" answer to prompts, often overlooking vulnerabilities in favor of immediate usability. As a result, packages with known vulnerabilities may increasingly find their way into production, as security becomes a smaller factor in the decision-making process. This trend underscores the urgent need for AppSec solutions to proactively identify risks and ensure secure code paths without slowing down innovation. The challenge will be balancing AI-powered speed with robust security, preventing a surge in exploitable vulnerabilities.
Yossi Pik
CTO and Co-Founder, Backslash Security
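The dependency risk described above is straightforward to check mechanically. As an illustrative sketch (not from the article), the snippet below audits a set of pinned packages against an advisory list; the package names, versions, and advisory entries are all hypothetical placeholders, and a real pipeline would instead query a feed such as OSV or the GitHub Advisory Database:

```python
# Hypothetical advisory data keyed by (package, version).
# A real tool would pull this from a vulnerability feed.
ADVISORIES = {
    ("examplelib", "1.2.0"): "CVE-XXXX-0001: unsafe deserialization",
}

def audit(pinned: dict) -> list:
    """Return advisory notes for any pinned package with a known issue."""
    return [
        f"{name}=={version}: {ADVISORIES[(name, version)]}"
        for name, version in pinned.items()
        if (name, version) in ADVISORIES
    ]

# Flag the vulnerable pin; the clean one passes silently.
findings = audit({"examplelib": "1.2.0", "otherlib": "3.4.5"})
print(findings)
```

Running a check like this in CI is one way to keep known-vulnerable packages out of production without slowing developers down.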
AI SECURITY THREAT: GENAI-DRIVEN CODING
GenAI-driven Coding Will Saddle Organizations with More Security Debt: As AI-fueled code velocity increases, the number of vulnerabilities and the level of critical security debt will also grow. With more code created at a rapid pace, developers will become inundated with compliance risks, security alerts, and quality issues. Identifying a solution to help will be key. As security debt grows, so too will the demand for automated security remediation; however, using GenAI to write code is still about two years ahead of using the same technology for security hardening and remediation. This is why, in 2025, we can expect a rapid increase in the adoption of AI-powered remediation to fix vulnerabilities faster and materially reduce security debt.
Chris Wysopal
Co-Founder and Chief Security Evangelist, Veracode
In 2025, the pressure to develop software faster will continue, but speed has become a serious security risk, one that GenAI is only compounding. The more we speed up development and release cycles, with GenAI or otherwise, the more code vulnerabilities are introduced. Next year, organizations must start balancing software development momentum with security. They will need to slow down enough to embed security at every stage of development, not just shift left, to reduce risks and close potential attack entry points.
Karthik Swarnam
Chief Security and Trust Officer, ArmorCode
AI SECURITY THREAT: INJECTION ATTACKS
Injection Attacks Resurface as AI-Generated Code Opens New Vulnerabilities: As AI-driven coding tools become mainstream in 2025, injection attacks are set to make a strong comeback. While AI accelerates development, it frequently generates code with security weaknesses, especially in input validation, creating new vulnerabilities across software systems. This resurgence of injection risks marks a step back to familiar threats, as AI-based tools produce code that may overlook best practices. Organizations must stay vigilant, reinforcing security protocols and validating AI-generated code to mitigate the threat of injection attacks in an increasingly AI-powered development environment.
Randall Degges
Head of Developer & Security Relations, Snyk
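The input-validation gap described above is the classic pattern behind SQL injection. As an illustrative sketch (not from the article), the snippet below contrasts a string-formatted query, the kind an AI assistant can plausibly emit, with a parameterized one, using Python's built-in sqlite3 module and a throwaway in-memory table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # matches every row, leaking data
print(find_user_safe(payload))    # matches nothing
```

Validating AI-generated database code against this one rule, parameters instead of string interpolation, catches a large share of the injection risk the prediction warns about.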
AI SECURITY THREAT: OPEN SOURCE
The rise of AI-driven threats in open source: In 2025, open source software threats will shift from traditional vulnerabilities to AI-generated backdoors and malware embedded in open source packages. With attackers leveraging AI tools to develop and disguise malware within open source code, addressing these new threats will require a significant advancement in security tools to stay ahead of these quickly evolving challenges.
Idan Plotnik
Co-Founder and CEO, Apiiro
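A backdoored package typically means the published artifact no longer matches what maintainers vetted. One basic mitigation is pinning a cryptographic digest for each dependency and rejecting anything that drifts. Below is a minimal sketch using Python's hashlib; the sample bytes and the pinned digest are placeholders, not real package data:

```python
import hashlib

# Digest recorded when the dependency was first vetted
# (computed here from the placeholder bytes for illustration).
EXPECTED_SHA256 = hashlib.sha256(b"trusted package contents").hexdigest()

def verify_artifact(data: bytes, expected: str) -> bool:
    """Accept an artifact only if its digest matches the vetted one."""
    return hashlib.sha256(data).hexdigest() == expected

print(verify_artifact(b"trusted package contents", EXPECTED_SHA256))
print(verify_artifact(b"tampered package contents", EXPECTED_SHA256))
```

Hash pinning alone can't tell whether the originally vetted code was clean, which is why it pairs with provenance tooling such as signed builds and lockfiles.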
AI SECURITY THREAT: API
The API economy is set to experience massive changes by 2025, with AI leading the charge. Simply put, there's no AI without APIs — they're the foundation that makes AI integration possible. As developers continue to explore AI and large language models (LLMs) for innovation, the number of APIs will grow exponentially. In fact, the value of APIs enabling AI is expected to skyrocket by 170% by 2030. But with this growth comes challenges, especially in security. The more advanced technologies like AI become, the more sophisticated attackers get. Over the past year, 55% of organizations dealt with API security incidents, and for 20% of them, remediation costs topped $500,000. What's more, 25% of companies have already faced AI-enhanced API threats, and 75% are worried about what's to come. Tackling these risks will require organizations to focus on complete visibility into their API endpoints and adopt centralized management platforms to stay ahead of attackers.
Marco Palladino
CTO and Co-Founder, Kong
Hardening API Security Must Be a CISO Priority in 2025:
APIs form the backbone of computer-to-computer communications, powering nearly every generative AI application and workplace tool. But as APIs fuel this innovation, they also open the door to increasingly sophisticated cyberattacks. Gartner reports that API breaches result in 10 times more data exposure than the average security incident, underscoring the importance of securing APIs as a top priority, especially as organizations adopt generative AI into workflows.
Prioritizing secure APIs in DevOps not only ensures healthy software development but also reduces the risk of reputational or financial damage. To keep up with the AI revolution and stay ahead of a rapidly expanding threat landscape, organizations must critically evaluate their API security strategy and ensure it is a core component of their DevSecOps mandate.
Rupesh Chokshi
SVP and GM of Application Security, Akamai
AI SECURITY THREAT: KUBERNETES
With flexibility at the forefront, Kubernetes is quickly becoming the de facto platform on which GenAI applications are deployed. Organizations can run Kubernetes for GenAI across various workloads, including virtual machines (VMs), containers, and bare metal servers, or a mixture of all three. Against this backdrop, in 2025 there will be a heightened focus on Kubernetes security.
Ratan Tipirneni
President and CEO, Tigera
Industry News
Sonar announced the release of its latest Long-Term Active (LTA) version, SonarQube Server 2025 Release 1 (2025.1).
Idera announced the launch of Sembi, a multi-brand entity created to unify its premier software quality and security solutions under a single umbrella.
Postman announced the Postman AI Agent Builder, a suite empowering developers to quickly design, test, and deploy intelligent agents by combining LLMs, APIs, and workflows into a unified solution.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of CubeFS.
BrowserStack and Bitrise announced a strategic partnership to revolutionize mobile app quality assurance.
Mendix, a Siemens business, announced the general availability of Mendix 10.18.
Red Hat announced the general availability of Red Hat OpenShift Virtualization Engine, a new edition of Red Hat OpenShift that provides a dedicated way for organizations to access the proven virtualization functionality already available within Red Hat OpenShift.
Contrast Security announced the release of Application Vulnerability Monitoring (AVM), a new capability of Application Detection and Response (ADR).
Red Hat announced the general availability of Red Hat Connectivity Link, a hybrid multicloud application connectivity solution that provides a modern approach to connecting disparate applications and infrastructure.
Appfire announced 7pace Timetracker for Jira is live in the Atlassian Marketplace.
SmartBear announced the availability of SmartBear API Hub featuring HaloAI, an advanced AI-driven capability being introduced across SmartBear's product portfolio, and SmartBear Insight Hub.
Azul announced that the integrated risk management practices for its OpenJDK solutions fully support the stability, resilience and integrity requirements in meeting the European Union’s Digital Operational Resilience Act (DORA) provisions.
OpsVerse announced a significantly enhanced DevOps copilot, Aiden 2.0.