Most Organizations Use AI to Generate Code Despite Security Concerns
October 22, 2024

Nearly all (92%) security leaders have concerns about the use of AI-generated code within their organization, according to Organizations Struggle to Secure AI-Generated and Open Source Code, a new report from Venafi.


Source: Venafi

Other key survey findings include:

Tension Between Security and Developer Teams: 83% of security leaders say their developers currently use AI to generate code, with 57% saying it has become common practice. However, 72% feel they have no choice but to allow developers to use AI to remain competitive, and 63% have considered banning the use of AI in coding due to the security risks.

Inability to Secure at AI Speed: 63% of survey respondents report it is impossible for security teams to keep up with AI-powered developers. As a result, security leaders feel like they are losing control and that businesses are being put at risk, with 78% believing AI-developed code will lead to a security reckoning and 59% losing sleep over the security implications of AI.

Governance Gaps: Nearly two-thirds (63%) of security leaders think it is impossible to govern the safe use of AI in their organization, as they do not have visibility into where AI is being used. Despite these concerns, fewer than half of companies (47%) have policies in place to ensure the safe use of AI within development environments.

"Security teams are stuck between a rock and a hard place in a new world where AI writes code. Developers are already supercharged by AI and won't give up their superpowers. And attackers are infiltrating our ranks — recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg," said Kevin Bocek, Chief Innovation Officer at Venafi. "Anyone today with an LLM can write code, opening an entirely new front. It's the code that matters, whether it is your developers hyper-coding with AI, infiltrating foreign agents or someone in finance getting code from an LLM trained on who knows what. So it's the code that matters! We have to authenticate code wherever it comes from."

The Open Source Trust Dilemma

When looking at specific concerns around developers using AI to write or generate code, security leaders cited three top concerns:

1. Developers would become over-reliant on AI, leading to lower standards.

2. AI-written code will not be effectively quality checked.

3. AI will use dated open source libraries that have not been well-maintained.

The research also highlights that it is not only AI's use of open source that could present challenges to security teams:

Open Source Overload: On average, security leaders estimate 61% of their applications use open source. This over-reliance on open source could present potential risks, given that 86% of respondents believe open source code encourages speed rather than security best practice among developers.

Vexing Verification: 90% of security leaders trust code in open source libraries, with 43% saying they have complete trust — yet 75% say it is impossible to verify the security of every line of open source code. As a result, 92% of security leaders believe code signing should be used to ensure open source code can be trusted.

"The recent CrowdStrike outage shows the impact of how fast code goes from developer to worldwide meltdown," Bocek adds. "Code now can come from anywhere, including AI and foreign agents. There is only going to be more sources of code, not fewer. Authenticating code, applications and workloads based on its identity to ensure that it has not changed and is approved for use is our best shot today and tomorrow. We need to use the CrowdStrike outage as the perfect example of future challenges, not a passing one-off."

Maintaining the code signing chain of trust can help organizations prevent unauthorized code execution, while also scaling their operations to keep up with developer use of AI and open source technologies.
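The sign-at-build, verify-at-deploy flow described above can be sketched in a few lines. This is a simplified stand-in: it uses an HMAC from Python's standard library rather than the asymmetric signatures and certificate chains real code-signing systems rely on, and the key and artifact names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key. In a real code-signing pipeline this would be an
# asymmetric private key held in an HSM or CI secrets vault, not a shared secret.
SIGNING_KEY = b"build-pipeline-secret"

def sign(artifact: bytes) -> str:
    """Build time: produce a signature over the artifact's contents."""
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()

def verify(artifact: bytes, signature: str) -> bool:
    """Deploy time: refuse to run code whose signature does not match."""
    expected = hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(expected, signature)

artifact = b"release-v1.2.3 binary contents"
sig = sign(artifact)

print(verify(artifact, sig))          # True: untampered artifact
print(verify(artifact + b"x", sig))   # False: any modification breaks the signature
```

The key property this illustrates is the one the report emphasizes: verification does not care whether the code came from a developer, an AI assistant, or a third-party library, only that the artifact is unchanged since an approved party signed it.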

"In a world where AI and open source are as powerful as they are unpredictable, code signing becomes a business' foundational line of defense," Bocek concludes. "But for this protection to hold, the code signing process must be as strong as it is secure. It's not just about blocking malicious code — organizations need to ensure that every line of code comes from a trusted source, validating digital signatures against and guaranteeing that nothing has been tampered with since it was signed. The good news is that code signing is used just about everywhere — the bad news is it is most often left unprotected by security teams who can help keep it safe."

Methodology: Venafi surveyed 800 security decision-makers across the US, UK, Germany and France.


Industry News

October 21, 2024

AWS announced the general availability of Amazon Aurora PostgreSQL-Compatible Edition and Amazon DynamoDB zero-ETL integrations with Amazon Redshift.

October 21, 2024

The Open Mainframe Project, an open source initiative that enables collaboration across the mainframe community to develop shared tool sets and resources, announces two new projects that will redefine developer experience on the mainframe.

October 21, 2024

Sonar acquired Structure101, a pioneer in code structure analysis, to further the company's promise of enabling all developers and organizations to improve the quality and security of their code, whether AI-generated or human-written.

October 17, 2024

Progress announced the latest release of Progress® Flowmon®, the network observability platform with AI-powered detection for cyberthreats, anomalies and fast access to actionable insights for greater network and application performance across hybrid cloud ecosystems.

October 17, 2024

Mirantis announced the release of Mirantis OpenStack for Kubernetes (MOSK) 24.3, which delivers enterprise-ready and fully supported OpenStack Caracal, featuring enhancements tailored for artificial intelligence (AI) and high-performance computing (HPC).

October 17, 2024

StreamNative announced a managed Apache Flink BYOC product offering will be available to StreamNative customers in private preview.

October 17, 2024

Gluware announced a series of new offerings and capabilities that will help network engineers, operators and automation developers deliver network security, AI-readiness, and performance assurance better, faster and more affordably, using flawless intent-based intelligent network automation.

October 17, 2024

Sonar released SonarQube 10.7 with AI-driven features and expanded support for new and existing languages and frameworks.

October 16, 2024

Red Hat announced a collaboration with Lenovo to deliver Red Hat Enterprise Linux AI (RHEL AI) on Lenovo ThinkSystem SR675 V3 servers.

October 16, 2024

mabl announced the general availability of GenAI Assertions.

October 16, 2024

Amplitude announced Web Experimentation – a new product that makes it easy for product managers, marketers, and growth leaders to A/B test and personalize web experiences.

October 16, 2024

Resourcely released a free tier of its tool for configuring and deploying cloud resources.

October 15, 2024

The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of KubeEdge.

October 15, 2024

Perforce Software announced its AI-driven strategy, covering four AI-driven pillars across the testing lifecycle: test creation, execution, analysis and maintenance, across all main environments: web, mobile and packaged applications.

October 15, 2024

OutSystems announced Mentor, a full software development lifecycle (SDLC) digital worker, enabling app generation, delivery, and monitoring, all powered by low-code and GenAI.