Exploring the Power of AI in Software Development - Part 5: More Challenges
November 01, 2024

Pete Goldin
DEVOPSdigest

In Part 5 of this series, the experts warn of even more limitations, challenges and risks associated with using AI to help develop software.

BIAS

AI bias is a significant known issue: biases in training data can paint an inaccurate picture that negatively impacts the application experience.
Shomron Jacob
Head of Applied Machine Learning & Platform, Iterate.ai

When AI models are trained on biased datasets, the code they generate can perpetuate those biases, creating discriminatory or unfair outcomes for certain groups of users.
Dotan Nahum
Head of Developer-First Security, Check Point Software Technologies

There is a risk of AI tools being trained on biased or incomplete datasets, which could, unfortunately, lead to biased or suboptimal code generation. Ensuring that AI tools are trained on diverse and comprehensive datasets is critical to mitigating this risk.
Jobin Kuruvilla
Head of the DevOps Practice, Adaptavist

HALLUCINATIONS

The biggest risk is a hallucination, or AI simply getting it wrong. If you ask a question, AI will always give an answer, but the answer isn't always right, and the model may have been trained on biased data that affects the code and decision-making. Since it's not always accurate and you need to be extremely specific, it's important to prep it with as much information as possible by fine-tuning the prompt. Think of AI as a very smart person on their first day at the company: it's up to you to give it the proper training and context to be successful.
Udi Weinberg
Director of Product Management, Research and Development, OpenText
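
To make that "first day at the company" advice concrete, here is a minimal sketch of a context-rich prompt. All of the project details, conventions, and names below are hypothetical illustrations, not recommendations from the experts quoted here.

# A minimal sketch of context-rich prompting in Python.
# The service name, stack, and conventions are hypothetical.
context = (
    "You are assisting on 'orders-service', a Python 3.12 microservice "
    "using FastAPI and PostgreSQL. Follow PEP 8, use type hints everywhere, "
    "and raise the domain exceptions defined in app/errors.py instead of "
    "returning error codes."
)

task = "Write a function that cancels an order and releases its inventory hold."

# Sending the combined prompt, rather than the bare task, supplies the
# "training and context" the quote describes: stack, conventions, and
# error-handling style the model could not otherwise know.
prompt = f"{context}\n\nTask: {task}"
print(prompt)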

An LLM hallucination occurs when a large language model (LLM) generates a response that is factually incorrect, nonsensical, or disconnected from the input prompt. Hallucinations are a byproduct of language models' probabilistic nature: they generate responses based on patterns learned from vast datasets rather than on factual understanding.
Michael Webster
Principal Software Engineer, CircleCI
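
Webster's point about probabilistic generation can be illustrated with a toy example. The vocabulary and probabilities below are invented for illustration; real models sample over tens of thousands of tokens.

import random

# Toy next-token distribution a model might assign after the prompt
# "The capital of Australia is". The probabilities are invented.
next_token_probs = {"Canberra": 0.55, "Sydney": 0.35, "Melbourne": 0.10}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# The model samples from the distribution rather than looking up a fact,
# so in this toy case it confidently emits the wrong city 45% of the time.
print(random.choices(tokens, weights=weights, k=1)[0])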

AI hallucinations are often unintentional: models with access to large amounts of public information will take every input as factual. This highlights one of AI's weak points and underscores why a human element is still required in the development of code. AI cannot easily differentiate between fact and opinion, leading to compelling outputs that are factually incorrect.
Chetan Conikee
Co-Founder and CTO, Qwiet AI

By definition, all large language models are always "hallucinating" the next word. The implication is that any use of AI will produce answers delivered with the confidence of a 16-year-old who thinks they have the world figured out, even if they have the knowledge of a 3-year-old on the topic. The risk, therefore, is getting answers that look right but aren't at all. In the next few years we're going to see code that promises things it can't deliver, documentation claiming features exist that don't, and even companies claiming they meet compliance standards they don't actually meet, all because AI is going to be leveraged, poorly, in development.
Kendall Miller
Chief Extrovert, Axiom

I wouldn't blindly trust the code, just as Microsoft itself advises against using LLMs to create legally binding materials.
Jon Collins
Analyst, Gigaom

AI DOESN'T SOLVE PROBLEMS

One misconception about software development is that writing code is the task or obstacle to overcome. In reality, software development is about solving problems for specific business goals and objectives. While AI boosts developer productivity, it still requires people who understand the business's domain, how the software relates to its goals, and the problems the software aims to solve.
Ed Charbeneau
Developer Advocate, Principal, Progress

In the developer world, writing code isn't necessarily the hardest part of the job. It's the art of problem solving. What is my technical or business problem and what is the solution? How do I achieve a positive outcome? AI doesn't do that. AI can efficiently generate the code, but the developer still needs to understand what's been generated and "where" it's going.
Robert Rea
CTO, Graylog

AI LACKS CREATIVITY

If we want to understand how AI is going to change software development, it's helpful to first recognize what AI can't do. Actually writing code is a small part of an engineer's job; good software development involves far more thinking than doing. Sure, AI can generate code, but it can't think for you, which means it can't think before executing code, and that brings great risk. This is the fundamental gap between humans and machines, and humans will always have the edge here. Humans will increasingly focus on more novel and creative work, while AI takes on the routine, undifferentiated heavy lifting. AI isn't going to reinvent the wheel of software development.
Shub Jain
Co-Founder and CTO, Auquan

AI DOESN'T UNDERSTAND INTENT BEHIND THE CODE

Using AI to support software development comes with challenges, such as mistaking outliers for actual problems developers should care about. Discerning between "weird" and "bad" is a nuance that still needs the attention of a human developer. It reflects AI's lack of contextual understanding: AI cannot fully grasp the intent behind the code.
Phil Gervasi
Director of Technical Evangelism, Kentik
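
The "weird" versus "bad" distinction is easy to demonstrate. In the hypothetical sketch below, a simple statistical check flags an outlier, but only a human who knows that a marketing campaign just launched can say whether the anomaly is actually a problem.

from statistics import mean, stdev

# Hourly request counts; the final spike is a product launch, not an outage.
requests = [980, 1010, 995, 1023, 1001, 987, 4950]

baseline = requests[:-1]
mu, sigma = mean(baseline), stdev(baseline)
z = (requests[-1] - mu) / sigma

# The z-score test can only say "weird"; deciding "bad" requires the
# business context that the quote argues AI lacks.
print(f"z={z:.1f}:", "anomaly" if abs(z) > 3 else "normal")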

DEPENDENCY MANAGEMENT

In large applications, there is often a complex taxonomy of libraries and frameworks. AI in its current form is not very sophisticated at adapting to an organization's frameworks.
Chris Du Toit
Head of Developer Relations, Gravitee

VOLUME OF OUTPUT

Code was never the biggest issue; complexity was. A significant risk is that we create large quantities of correct code and applications, all of which will need to be managed; they may solve certain problems, but they could duplicate effort, require securing, diverge from the original need, and so on. We need to avoid becoming the sorcerer's apprentice from the outset. Success should be measured in outcomes, not in volume of output.
Jon Collins
Analyst, Gigaom

LACK OF DOCUMENTATION

AI-generated code tends to lack proper documentation and readability, making development and debugging more challenging.
Todd McNeal
Director of Product Management, SmartBear
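
A small, hypothetical illustration of that readability gap: generated code often arrives in the first form below, while maintainable code looks like the second.

# The terse, undocumented style a code assistant often produces:
def f(xs, t):
    return [x for x in xs if x[1] > t]

# The same logic after a developer adds names, types, and a docstring:
def filter_scores_above(records: list[tuple[str, float]],
                        threshold: float) -> list[tuple[str, float]]:
    """Return the (name, score) records whose score exceeds threshold."""
    return [r for r in records if r[1] > threshold]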

ETHICAL QUESTIONS

Using AI to support development raises important ethical considerations. Ensuring AI-generated code is free from biases is crucial to prevent unintended consequences. Balancing AI's efficiency with ethical practices is essential to maintain trust and integrity in software development.
Pavan Belagatti
Technology Evangelist, SingleStore

There is a huge question of professional responsibility and ethics with AI-generated code. We expect developers to be professional and, in some cases, legally liable for negligence in the software they build. However, this system depends a lot on human and organizational safeguards like separation of duties, two-person rules, and continuing education. With AI, it might seem tempting to use it to maximum capacity, but it is harder to scale these organizational safeguards. It opens a lot of potential issues when it comes to standards, reviews, and the safety and security of software.
Michael Webster
Principal Software Engineer, CircleCI

INTELLECTUAL PROPERTY VIOLATIONS

Intellectual property violations pose a risk, as AI tools might inadvertently reproduce or suggest code that closely resembles proprietary work, which could result in legal complications.
Faiz Khan
CEO, Wanclouds

One of the troubling aspects of generative AI is the potential reuse of intellectual property during the creation process, and code generation is no exception. Developers may unknowingly use a coding assistant that generates code violating intellectual property law, exposing the organization to legal risk.
David Brault
Product Marketing Manager, Mendix

AI can reproduce code that already exists on the internet without identifying it as such, raising concerns about copyright infringement and license violations. It's important to know what data and sources the AI used in code generation.
Shourabh Rawat
Senior Director, Machine Learning, SymphonyAI

INCREASED COSTS

AI in development still needs human oversight, because LLMs specifically, and AI generally, can hallucinate and create code that might break or perform actions that create risk for the business. AI-generated code can have bugs and security vulnerabilities, be inefficient, and fail to properly address the functional requirements of the task. Without proper software engineering practices and quality standards in place, increased use of AI-generated code can raise a project's overall long-term maintenance costs.
Shourabh Rawat
Senior Director, Machine Learning, SymphonyAI
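
A hypothetical sketch of the kind of oversight this requires: both functions below return the right answer in a demo, but the first carries a classic SQL injection flaw that a reviewer should catch.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

def find_user_unsafe(name: str):
    # Plausible generated code: works in a demo, but string interpolation
    # means name = "x' OR '1'='1" dumps the whole table (SQL injection).
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver escapes the value, closing the hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()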

SKILLS GAP

Integrating AI solutions into existing workflows may require significant adjustments and skill development, posing a potential barrier to adoption. This can stall development cycles (when the intention was to speed production), so leaders should ensure the AI tools they invest in integrate easily with their existing tech stack and are compatible with existing team talent.
Rahul Pradhan
VP of Product and Strategy, Couchbase

AI raises significant questions about skill gaps and job displacement. The widespread adoption of AI in development can worry employees, particularly developers performing repetitive or easily automatable tasks. Addressing the potential skill gap and providing opportunities for upskilling will be critical to integrating AI successfully with existing processes and teams.
Dotan Nahum
Head of Developer-First Security, Check Point Software Technologies

DEMOCRATIZATION

As generative AI tools make code creation more accessible, extending software development even to non-technical users, there is a growing risk to the quality and security of our software systems. Non-technical users may not grasp the intricacies of coding, creating code without understanding its potential long-term consequences.
Todd McNeal
Director of Product Management, SmartBear

TRAINING NEW DEVELOPERS

AI is a challenge for new developers who need training. Traditionally, new developers learned from seniors by doing all of the grunt work. If AI does all of the grunt work, then how will new developers learn?
David Brooks
SVP of Evangelism, Copado

The overuse of these tools can hinder the growth and learning curve of junior engineers who rely heavily on them instead of sitting and thinking through the problem at hand. While it's important for people earlier in their careers to use newer tools, it's equally important to go through the motions of doing many things manually, to understand what automations can offer and, ultimately, where they might go wrong.
Phillip Carter
Principal Product Manager, Honeycomb

An over-reliance on AI negatively impacts the learning curve for younger engineers because it prevents them from understanding the root causes of issues and from developing problem-solving skills. We must balance AI usage with traditional learning methods to ensure newer engineers gain a deep understanding of software development fundamentals.
Shub Jain
Co-Founder and CTO, Auquan

What worries me more than wholesale automation is a scenario where senior developers learn to use AI well enough to automate beginner-level development tasks. If this happens in enough places, companies might not invest as much in junior talent. That would disrupt the mentorship model through which senior developers pass down knowledge to junior developers, creating a generational skills gap. Limits on current model capabilities, and the number of use cases where AI doesn't work, make this a less imminent scenario, but it is something to consider. The current situation with COBOL, where developers are entering retirement while many mission-critical systems still use the language and are getting harder to support, is a good analogy for this scenario.
Michael Webster
Principal Software Engineer, CircleCI

I see AI widening the gap between junior and senior developers. Seniors will know how to use AI well; juniors won't, and in addition to becoming competent programmers, they will also need to learn how to use AI well. The difficulty of using AI well is almost always understated. Let's say AI outputs a function you need. It works, but it's very inefficient. Do you just pass it on, or do you know enough to realize that it's inefficient? Do you know enough to know whether you care that it's inefficient? If you just check the function into your source repo, you're never going to learn. That's a big issue.
Mike Loukides
VP of Emerging Tech Content, O'Reilly Media
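
Loukides' scenario is easy to make concrete. In the hypothetical example below, both functions are correct; only a reviewer who understands algorithmic complexity will notice that the first is quadratic and the second linear.

def has_duplicates_quadratic(items: list) -> bool:
    # Correct, and plausible assistant output: it compares every pair,
    # so it is O(n^2) and crawls on large inputs.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items: list) -> bool:
    # Same result via a set in O(n), the improvement you only make
    # if you recognize the inefficiency in the first place.
    return len(set(items)) != len(items)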

Go to: Exploring the Power of AI in Software Development - Part 6: Security Challenges

Pete Goldin is Editor and Publisher of DEVOPSdigest