Software development is on the cusp of a massive transformation. New research from GitLab surveying C-level decision-makers shows that 89% of executives expect agentic AI to define industry-standard software development processes within three years. However, this evolution also brings substantial challenges: the research found that 85% of executives recognize that agentic AI will create unprecedented security risks requiring entirely new approaches to security.
Security executives face a challenging reality: they must support AI adoption while simultaneously controlling the security threats it introduces. The urgency is compounded by the fact that 91% of executives plan to expand their AI investments in software development over the next 18 months. Each AI advancement adds complexity to this balancing act.
AI Governance Must Catch Up to Adoption
Most security leaders are painfully aware of the top agentic AI risks cited by respondents: cybersecurity threats (52%), data privacy and security (51%), and maintaining governance (45%). The landscape and even definitions of these risks are evolving and deeply intertwined.
Establishing a governance model for AI is required for organizations to evolve their security strategy alongside emerging AI risks. However, doing so is not straightforward, as AI spans many technology and security domains, from data governance to identity and access management. Nevertheless, almost half of those surveyed admitted their organization has implemented neither regulatory-aligned governance (47%) nor internal policies (48%) for AI.
Industry-wide challenges create obstacles to AI governance, leaving leaders uncertain about where to focus their strategic efforts most effectively. The non-deterministic nature of agents means they can behave in unexpected ways that disrupt existing security boundaries. Adding to this complexity, universal protocols such as Model Context Protocol and Agent2Agent are emerging to streamline data access and improve agent interoperability, but their ecosystem-building capabilities introduce additional security considerations.
But these challenges cannot stop security leaders from prioritizing AI governance. If you're awaiting comprehensive best practices for this dynamic technology, you'll be playing a perpetual game of catch-up. Any organization that avoids AI adoption altogether will still be exposed to AI risk through vendors and shadow AI usage in their environment.
3 Ways to Strengthen AI Governance
The window to prepare is closing rapidly. CISOs can start by establishing AI observability capable of tracking, auditing, and attributing agentic behaviors across environments. Here are a few steps CISOs can take today to reduce AI risk and improve governance:
1. Attribute agent behavior through composite identities
As AI systems proliferate, tracking and securing these non-human identities becomes just as important as managing human user access. One way to achieve this is through composite identities, which link an AI agent's identity with that of the human user directing it. So, when an AI agent attempts to access a resource, you can authenticate and authorize the agent and clearly attribute activity to the responsible human user.
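As a rough illustration of the composite identity idea, the sketch below pairs an agent's non-human identity with the human user directing it, so every authorization decision carries human attribution. The class names, policy shape, and identifiers are hypothetical stand-ins for a real IAM system, not a specific product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class CompositeIdentity:
    """Links an AI agent's identity with the human user directing it."""
    agent_id: str   # non-human identity, e.g. a service account for the agent
    human_id: str   # responsible human user the activity is attributed to

def authorize(identity: CompositeIdentity, resource: str, policy: dict) -> dict:
    """Authenticate/authorize the agent and attribute the access to a human.

    `policy` maps agent_id -> set of resources that agent may access;
    a real deployment would delegate this check to its IAM system.
    """
    granted = resource in policy.get(identity.agent_id, set())
    return {
        "agent": identity.agent_id,
        "on_behalf_of": identity.human_id,  # clear human attribution
        "resource": resource,
        "granted": granted,
        "at": datetime.now(timezone.utc).isoformat(),
    }

# Usage: an agent acting for a developer requests access to a repository.
ident = CompositeIdentity(agent_id="code-agent-7", human_id="alice@example.com")
decision = authorize(ident, "repo:payments", {"code-agent-7": {"repo:payments"}})
```

The key design point is that the agent never acts as an anonymous service account: every audit record produced this way names both the agent and the accountable person.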
2. Continuously monitor agent activity
Operations, development, and security teams need ways to monitor the activities of AI agents across multiple workflows, processes, and systems. It's not enough to know what an agent is doing in your codebase. You also need to be able to monitor its activity in both staging and production environments, as well as in the associated databases and any applications it accesses.
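One way to picture this kind of monitoring is an append-only activity log that tags every agent action with the environment it occurred in, so teams can query by agent or by environment. This is a minimal sketch of the concept with hypothetical names, not a real SIEM or observability API.

```python
import json

class AgentActivityLog:
    """Minimal append-only audit log for AI agent actions across environments.

    Each event records which agent did what, where (codebase, staging,
    production, database, application), enabling per-agent and
    per-environment review.
    """
    def __init__(self):
        self._events = []

    def record(self, agent_id: str, environment: str, action: str, target: str) -> str:
        event = {"agent": agent_id, "env": environment,
                 "action": action, "target": target}
        self._events.append(event)
        # In practice this line would ship the event to a log pipeline/SIEM.
        return json.dumps(event)

    def by_agent(self, agent_id: str) -> list:
        return [e for e in self._events if e["agent"] == agent_id]

    def by_environment(self, environment: str) -> list:
        return [e for e in self._events if e["env"] == environment]

# Usage: the same agent touches staging and production; both are visible.
log = AgentActivityLog()
log.record("code-agent-7", "staging", "write", "orders-db")
log.record("code-agent-7", "production", "read", "billing-api")
prod_events = log.by_environment("production")
```

Because events are queryable along both axes, a security team can answer "what did this agent do everywhere?" as easily as "which agents touched production?"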
3. Foster new skillsets across security
A culture of security now requires AI literacy. Forty-three percent of survey respondents acknowledged a widening AI skills gap, one that is likely to grow unless technical leaders prioritize upskilling teams to understand model behavior, prompt engineering, and how to critically evaluate model inputs and outputs.
Understanding where models are performant versus where their use is suboptimal helps teams avoid unnecessary security risk and technical debt. For example, a model trained on anti-patterns will perform well at detecting those patterns, but will not be effective against logic bugs it has never encountered before. Teams should also recognize that no model can replace human ingenuity. When models fail in domains where security engineers or developers lack expertise, they will not be able to identify the security gaps the model has left behind.
CISOs should consider dedicating a portion of learning and development budgets to continuous technical education. This fosters AI security expertise in-house, allowing newly minted AI champions to educate their peers and reinforce best practices.
When Used Properly, AI Delivers Advantages
Properly implemented AI delivers measurable security outcomes, according to executives who have successfully deployed the technology. Nearly half of respondents (45%) identified AI-powered security capabilities as the primary value driver for AI in software development. Rather than replacing human expertise, AI serves as a force multiplier that spreads security knowledge throughout development organizations. It handles routine security tasks, delivers intelligent coding suggestions, and embeds security context directly into developer workflows. Consider vulnerability explanations, where AI can instantly provide the context developers need to resolve issues without requiring security team intervention. These capabilities collectively produce stronger security postures, lower risk exposure, and deeper cross-team understanding that strengthens developer-security collaboration.
The organizations poised for success will neither reject AI entirely nor adopt it without consideration, but will weave security into their AI strategies today. Establishing basic security frameworks now, despite their current limitations, enables rapid adaptation as the field advances. If those surveyed are correct, the three-year transformation timeline has already begun. Leaders who guide their teams toward security-first AI implementations will capture a competitive advantage that extends well beyond risk management. Ultimately, software security remains inseparable from software quality.
Industry News
Buoyant announced upcoming support for Model Context Protocol (MCP) in Linkerd to extend its core service mesh capabilities to this new type of agentic AI traffic.
Dataminr announced the launch of the Dataminr Developer Portal and an enhanced Software Development Kit (SDK).
Google Cloud announced new capabilities for Vertex AI Agent Builder, focused on solving the developer challenge of moving AI agents from prototype to a scalable, secure production environment.
Prismatic announced the availability of its MCP flow server for production-ready AI integrations.
Aptori announced the general availability of Code-Q (Code Quick Fix), a new agent in its AI-powered security platform that automatically generates, validates and applies code-level remediations for confirmed vulnerabilities.
Perforce Software announced the availability of Long-Term Support (LTS) for Spring Boot and Spring Framework.
Kong announced the general availability of Insomnia 12, the open source API development platform that unifies designing, mocking, debugging, and testing APIs.
Testlio announced an expanded, end-to-end AI testing solution, the latest addition to its managed service portfolio.
Incredibuild announced the acquisition of Kypso, a startup building AI agents for engineering teams.
Sauce Labs announced Sauce AI for Insights, a suite of AI-powered data and analytics capabilities that helps engineering teams analyze, understand, and act on real-time test execution and runtime data to deliver quality releases at speed, while offering enterprise-grade security and compliance controls.
Tray.ai announced Agent Gateway, a new capability in the Tray AI Orchestration platform.
Qovery announced the release of its AI DevOps Copilot, an AI agent that delivers answers, executes complex operations, and anticipates what’s next.
Check Point® Software Technologies Ltd. announced it is working with NVIDIA to deliver an integrated security solution built for AI factories.
Hoop.dev announced a seed investment led by Venture Guides and backed by Y Combinator. Founder and CEO Andrios Robert and his engineering team are rethinking the access paradigm to enable faster, safer application delivery.