Exploring the Power of AI in Software Development - Part 4: Challenges
October 31, 2024

Pete Goldin
DEVOPSdigest

"The promise and perils of Artificial Intelligence (AI) has been dominating the headlines with everyone from software developers to students working on ways to integrate it into their daily processes," says Casey Ciniello, App Builder, Reveal and Slingshot Senior Product Manager at Infragistics. "This burgeoning interest in AI is borne out by the fifth annual Reveal 2024 Top Software Development Challenges survey from Infragistics, which found that the biggest software development challenge in 2024 will be incorporating AI into the development process (40.7%)."

Early adopters who adapt their development processes to leverage AI's potential are likely to gain a competitive edge in productivity, quality, and speed to market, adds Ramprakash Ramamoorthy, Director of AI Research at ManageEngine. However, it's essential to be mindful of the challenges and risks associated with AI adoption and take proactive steps to address them.

DEVOPSdigest invited experts across the industry — consultants, analysts and vendors — to comment on how AI can support the software development life cycle (SDLC). In Part 4 of this series, the experts warn of the many limitations, challenges and risks associated with using AI to help develop software.

Interestingly, some of the same topics discussed by experts as advantages in Part 3 of the series are also cited here as challenges for developers using AI, such as democratization and training new developers. In other cases, some of these challenges directly oppose the advantages posed in Part 3, such as increased vs. reduced code quality or costs. These dichotomies may result from the fact that this technology is still relatively new, and the industry is still trying to figure out the ultimate impacts of AI — both positive and negative.

BLIND FAITH IN AI

Human developers wrongly assume that the code written by AI models is better than what they can create, despite evidence to the contrary.
Chris Wysopal
Co-founder and CTO, Veracode

Because AI can produce results rapidly, have we developed a blind trust in its validity? It is true that AI can produce code that looks safe, but that doesn't mean it will perform safely. Unintentionally implementing insecure or buggy code can lead to a multitude of issues — from security risks to lengthy stints spent problem-solving and testing. Despite huge advancements and gains in AI, it can still be delusional, and its work needs to be double-checked.
Scott Willson
Head of Product Marketing, xtype
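
Willson's point about code that looks safe but doesn't perform safely is easy to make concrete. The sketch below is hypothetical, not output from any particular model: the first function reads cleanly and passes a casual test, yet it is open to SQL injection; the second is the fix a human reviewer should insist on.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks tidy and works in a quick manual test, but interpolating user
    # input into SQL allows injection (e.g. username = "x' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # A parameterized query lets the driver handle quoting and escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```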

Blind faith in AI can be a dangerous approach. Relying too heavily on AI-generated code without thorough human review can lead to security vulnerabilities, data privacy and performance issues, and maintainability challenges.
Dotan Nahum
Head of Developer-First Security, Check Point Software Technologies

Blindly adopting AI for your software development teams will lead to chaos. For example, I've heard some talk about using AI to generate code, and then more AI to test that generated code and then just believe the tests and deploy it. This is a recipe for disaster.
Arthur Hicken
Chief Evangelist, Parasoft

The immediate risk is that AI might not always solve software engineering problems in an optimal or even correct way. Just blindly relying on its output can put our products in jeopardy. However, this is not a new risk. Just taking unvetted information from a page like StackOverflow can result in similar problems. Generative AI just takes this problem to a whole new level due to the vast amount of new unique information produced every day.
Matej Bukovinski
CTO, PSPDFKit

The use of these tools depends very heavily on what kind of developer you are. If you are a junior engineer, the misleading nature and occasional inaccuracy of AI tools can be harmful if you don't have a gut feeling of what's right and wrong with your code. Automation bias is a nasty phenomenon where humans often believe that an automated process (such as AI) is generally correct when it can be just as incorrect as any human might be.
Phillip Carter
Principal Product Manager, Honeycomb

OVER-RELIANCE

One of the biggest risks to developers is "complacency" and "over-reliance": becoming overly dependent on such tools can lead to a lack of understanding of the code and a failure to grow as a programmer and developer.
Cassius Rhue
VP, Customer Experience, SIOS Technology

One of the risks we face is overdependence on the tool itself. AI is a knowledgeable partner, but it doesn't have all the answers. It doesn't have the years or decades of experience of a staff- or architect-level engineer. This overdependence on AI could provide a false sense of security. Just like a human, AI can be wrong.
Sterling Chin
Senior Developer Advocate, Postman

One of the risks of using AI to support software development is that developers may get too comfortable using AI and end up over-relying on it, maybe even overlooking some errors. Sauce Labs' Developers Behaving Badly 2023 survey found that 61% of developers admit to using untested code generated by ChatGPT.
Marcus Merrell
Principal Test Strategist, Sauce Labs

I'm concerned that developers will get "lazy" and rely too much on copying and pasting from LLM output. AI can reduce the need to look stuff up, but at least right now — and I think for the foreseeable future — developers are still going to be responsible for understanding the darker corners of programming languages and libraries.
Mike Loukides
VP of Emerging Tech Content, O'Reilly Media

The risk of challenges occurring when the application is deployed to a production environment increases if AI, in its current state, is relied on heavily for code development. If an individual lacks the proper knowledge and experience in the language and the domain the application addresses, and AI provides the majority of the code base, you could find yourself in a situation where the application breaks in production and you have little to no knowledge of how to address the issue. The result could be significant downtime, which costs the business time and money. Relying entirely on AI for code development may provide short-term benefits, but the long-term risk increases if the provided code is not vetted and understood.
Karl Cardenas
Director, Docs & Education,Spectro Cloud

Eventually, we'll likely see mistakes with a global impact because people will get too comfortable and reliant on AI. It's important that guardrails are put in place as well as consistent testing and monitoring. We need to constantly be evaluating and asking: Is it accurate? Is it good enough? Is it complete?
Udi Weinberg
Director of Product Management, Research and Development, OpenText

LACK OF TRANSPARENCY

There are risks of a lack of transparency regarding the models' decision-making. AI models often operate as "black boxes," making it difficult to understand how they arrived at specific code suggestions or decisions and blurring the lines of accountability.
Dotan Nahum
Head of Developer-First Security, Check Point Software Technologies

CODE QUALITY

One of the biggest challenges and risks to a company's code when using AI is the quality of the code that it produces, which can potentially exacerbate tech debt.
Andrea Malagodi
CIO, Sonar

The quality of code produced by AI models depends on two significant factors. One is the training data provided to the model: if the model lacks a proper foundation in the programming language, the results will likely be unsatisfactory. The other is the prompt provided by the user: prompts that outline precise requirements with proper context result in higher-quality output. The responsibility for ensuring the model is properly trained on the programming language in use, and for providing good-quality prompts, rests on the user's shoulders.
Karl Cardenas
Director, Docs & Education,Spectro Cloud

AI is only as good as its training. Models used to generate code may be trained on a combination of both well-written code and substandard code. This can lead to inconsistency in the code's quality, creating inefficient code or code using outdated practices. The accuracy of AI-generated code must be thoroughly validated to avoid increasing technical debt by incorporating AI-generated code that may be effective now but isn't future-proof and becomes challenging to modify later.
David Brault
Product Marketing Manager, Mendix
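
Brault's warning about outdated practices in training data is easy to picture. In this hypothetical sketch, the first function follows a pattern common in older tutorials a model may have learned from; the second shows a current standard-library alternative.

```python
import hashlib
import secrets

def hash_password_outdated(password: str) -> str:
    # A pattern still abundant in older training code: unsalted MD5,
    # which is fast to brute-force and long considered insecure.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_current(password: str) -> str:
    # A modern approach using only the standard library: a random salt
    # plus a deliberately slow key-derivation function (PBKDF2).
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt.hex() + ":" + digest.hex()
```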

Developers need to apply their expertise to verify and refine AI outputs, ensuring the code is accurate and effective.
Tom Hodgson
Innovation Tech Lead, Redgate

CODE DEFECTS AND BUGS

While AI can produce lines of code quickly, it could also introduce defects or poor patterns of code if not properly reviewed and tested for quality. Without these checks in place, like static code analysis, developers and others using AI could unknowingly be contributing to technical debt in their efforts to speed up their work.
Andrea Malagodi
CIO, Sonar
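
The kind of static check Malagodi mentions need not be elaborate. Below is a minimal, hypothetical rule built on Python's ast module: it flags calls to eval and exec, one category of risky pattern that AI-generated code can introduce without the author noticing.

```python
import ast

RISKY_CALLS = {"eval", "exec"}  # a deliberately small example rule set

def risky_call_lines(source: str) -> list[int]:
    """Return the line numbers where risky built-ins are called."""
    return [
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id in RISKY_CALLS
    ]

snippet = "result = eval(user_input)\nprint(result)\n"
assert risky_call_lines(snippet) == [1]
```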

AI-generated code may appear syntactically correct but still harbor subtle bugs or functional issues, making it difficult to identify and resolve these problems effectively.
Faiz Khan
CEO, Wanclouds
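
Khan's "syntactically correct but subtly wrong" scenario has many classic shapes in Python. The hypothetical function below runs without error and passes a single-call test, yet it silently shares state across calls; the second version is the conventional fix.

```python
def append_tag_buggy(tag: str, tags: list = []) -> list:
    # Valid syntax, but the default list is created once and shared
    # across every call, so earlier tags leak into later results.
    tags.append(tag)
    return tags

def append_tag_fixed(tag: str, tags: list | None = None) -> list:
    # Use None as the sentinel and build a fresh list per call.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

assert append_tag_buggy("a") == ["a"]
assert append_tag_buggy("b") == ["a", "b"]  # surprising shared state
assert append_tag_fixed("b") == ["b"]
```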

One big challenge is automating bug generation. A recent survey estimated that 40% of the code generated by AI has bugs. Using GenAI to write code makes it even more important to create and automate a thorough testing regime for functional, security, and compliance testing.
David Brooks
SVP of Evangelism, Copado
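
A minimal sketch of the kind of automated regime Brooks calls for: a property-based test exercises generated code across thousands of inputs rather than one happy-path example. This assumes pytest with the hypothesis library, and a hypothetical slugify function standing in for AI-generated code under review.

```python
import re
from hypothesis import given, strategies as st

def slugify(text: str) -> str:
    # Stand-in for an AI-generated function being gated before merge.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

@given(st.text())
def test_slug_contains_only_safe_characters(text):
    # A functional property checked across many generated inputs.
    assert re.fullmatch(r"[a-z0-9-]*", slugify(text))

@given(st.text())
def test_slugify_is_idempotent(text):
    # Applying the function twice should change nothing.
    assert slugify(slugify(text)) == slugify(text)
```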

While humans produce bugs and errors, AI-generated bugs that make their way into production cause more unease. These bugs can be subtle, and the "confidence" of the AI in producing the code may not trigger an engineering "spidey sense" that a change is risky and needs further review. A cursory LGTM is never enough when reviewing code, especially if it has been heavily modified with AI. This is particularly risky if you let AI work autonomously as an agent. The best application of AI is still being close to the developer in the IDE, helping with development work, not as a free-roaming agent working entirely on its own.
Michael Webster
Principal Software Engineer, CircleCI

DEBUGGING

If developers leverage AI too much where they don't understand the code, it'll be difficult to debug. You see this already when developers cut and paste code.
Patrick Doran
CTO, Synchronoss

TESTING

Adoption of AI tools in the development workforce will most probably speed up delivery of software solutions. The challenge will be controlling the quality of those solutions, because the velocity of the testing process must increase to keep pace.
Igor Kirilenko
Chief Product Officer, Parasoft

LIMITED TO SNIPPETS

AI has advanced to the point at which it can now create code snippets from prompts and reduce the workload of software developers. However, these code snippets may not be suitable for large software projects. The challenge of extending a large code base with new features requires a fundamental understanding of the software architecture that underpins the code base and ensures a coherent design. This understanding usually requires software developers to digest a software architecture specification and interact with key architects to build an intuitive understanding of the guiding principles. Since it is not clear that AI is ready to do this, all AI-generated code must be carefully evaluated to verify that it adheres to the architecture's guidelines. Otherwise, a large code base could quickly become unmaintainable. As a result, it may be better to restrict today's AI-generated code to small projects with short lifetimes.
Dr. William Bain
CEO, ScaleOut Software
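
Part of the architectural conformance Dr. Bain describes can be automated. The sketch below is simplified and hypothetical, assuming a src/ tree with one package per layer: using only the standard library, it fails a build when a module in the domain layer imports from layers the architecture forbids, a form of drift that AI-generated patches can easily introduce.

```python
import ast
import sys
from pathlib import Path

# Hypothetical layering rules: each layer may not import the listed packages.
FORBIDDEN = {"domain": {"infrastructure", "api"}}

def layering_violations(src_root: Path) -> list[str]:
    """Report imports that cross boundaries the architecture forbids."""
    violations = []
    for layer, banned in FORBIDDEN.items():
        for path in (src_root / layer).rglob("*.py"):
            for node in ast.walk(ast.parse(path.read_text())):
                # Collect top-level package names from both import forms.
                if isinstance(node, ast.Import):
                    names = [alias.name.split(".")[0] for alias in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    names = [node.module.split(".")[0]]
                else:
                    continue
                violations += [f"{path}: imports {n}" for n in names if n in banned]
    return violations

if __name__ == "__main__":
    problems = layering_violations(Path("src"))
    print("\n".join(problems) or "layering OK")
    sys.exit(1 if problems else 0)
```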

Check back tomorrow for Part 5, covering more challenges of AI in the SDLC.

Pete Goldin is Editor and Publisher of DEVOPSdigest
