It seems that 2024 is the year that AI is infiltrating the world. Everywhere you turn, companies are announcing AI features, rolling out new AI models for specific use cases, and promising that AI will revolutionize everything. GitHub led the way with its Copilot product, which helps developers automate writing code. A recent industry survey described Copilot as a tool that "is like pair programming with a junior programmer." Considering frontend development has long been a maligned skillset in the industry, it's an open question how the domain will be affected as AI continues to mature.
Frontend developers are responsible for the code powering interfaces that users directly interact with through mobile and web UIs. As such, there's enormous pressure on them to avoid bugs, consider all users, and ensure the user experience is reliable and usable. In the past decade, frontend devs have also seen a massive explosion in code complexity; the adoption of frameworks like Vue, Angular, and React means that devs are working in large codebases with many dependencies and abstractions. On top of this, their code is executed on users' devices, where countless variables are at play: different OS and browser versions, screen sizes, and CPU and memory availability.
With all these pressures at play, the question of whether AI is a useful tool for frontend development is critical; it's the difference between reducing toil and making a complex job even harder. Let's look at three responsibilities inherent in frontend engineering to see how AI could help or hinder the work.
Feature Development
For frontend developers working on web apps, adding or updating features is a core job responsibility. Every new feature is a mix of generic tasks (creating a component and setting up the boilerplate structure) and more specific ones (all of the business logic and UI). For most features, the generic tasks are at most 10% of the work; the bulk is building the thing that doesn't exist yet.
When considering whether AI is useful for a given context, the right question is: how much of this task can I assume is shaped like something in the corpus of training data the AI model was built on? The details matter, too: an AI is not necessarily generating code modeled on your codebase (some newer products are starting to do this, but not all offer it); it is generating code modeled on all the codebases in its training data.
Any developer who has worked at a few companies (or even on a few teams within the same company) knows — to steal a phrase from Tolstoy — that every service's codebase is unhappy in its own way. These factors — the global nature of AI model training and the unique specifics of your codebase — mean that automation of feature development will be sketchy at best. Expect a lot of hallucinations: function arguments that make no sense, references to variables that aren't there, instantiations of packages that aren't available. If you're using AI products for this type of work, pay close attention. Don't attempt to have them write large blocks of code; start with smaller tasks. Review the code and ensure it does what you want. As with any tool, you can't expect perfection out of the box: you need to learn what works well and what doesn't, and you always need to be willing to sign off on the output, since, ultimately, it has your name on it.
Accessibility
The issues above can have a compounding effect that is worth considering. A key responsibility for web and mobile developers is ensuring that every person who wants to use their UI can, which requires ensuring that accessibility standards are met. If you're automating your feature development code, you're going to run into two issues with AI.
The first is simply that accessibility is not yet a domain where we can be prescriptive about how to make a specific feature accessible. Accessibility experts use their knowledge of a variety of user needs (kinds of disability and the UX that best works for them) to evaluate a given feature and determine how to make it accessible. There are some foundational rules — images and icons should have text descriptions, interactive elements should be focusable and in a reasonable order, etc. — but often it requires skilled reasoning about what a user's needs are and how to meet them effectively through code. The contextual nature of accessibility means AI will, at best, get you started and, at worst, accidentally introduce barriers to access through the same simple mistakes that human developers regularly make.
The AI would do this because of the nature of model training: current-generation AI products cannot solve a problem that isn't accurately modeled in their training data. The unfortunate reality of the web is that it is horribly inaccessible at large, and the state of mobile apps is not much better. This is terrible for the millions of disabled people who are trying to navigate the modern world, and it is terrifying when we imagine a world with much more AI-generated code.
Disabled people are constantly encountering experiences that are inaccessible and prevent them from achieving their goals on the site or in the app. These failures will only be magnified in a future built on AI trained on the code that exists now. The resulting models will be great at generating code modeled on the current state of web and mobile apps, which means the inaccessibility that is widespread today will be perpetuated and expanded through AI generation.
To prevent that, devs will need to be vigilant about testing to ensure accessibility remains intact and no regressions are shipped.
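Some of that vigilance can be automated. Below is a toy TypeScript sketch of an accessibility regression check that scans an HTML string for two of the foundational rules mentioned above. This is an illustration only — the function name and rules are invented for this example, and real projects would reach for a dedicated audit tool such as axe-core rather than regexes:

```typescript
// Toy accessibility regression check (illustrative sketch, not a real audit
// tool). It flags two common failures: images without text descriptions and
// clickable <div>s that screen readers won't announce as buttons.

interface A11yIssue {
  rule: string;
  snippet: string;
}

function checkMarkup(html: string): A11yIssue[] {
  const issues: A11yIssue[] = [];

  // Rule 1: every <img> needs a text description (an alt attribute).
  for (const tag of html.match(/<img\b[^>]*>/gi) ?? []) {
    if (!/\balt\s*=/.test(tag)) {
      issues.push({ rule: "img-missing-alt", snippet: tag });
    }
  }

  // Rule 2: a <div> with a click handler is neither focusable nor announced
  // as interactive unless it is given an explicit button role.
  for (const tag of html.match(/<div\b[^>]*onclick[^>]*>/gi) ?? []) {
    if (!/\brole\s*=\s*["']button["']/.test(tag)) {
      issues.push({ rule: "div-as-button", snippet: tag });
    }
  }

  return issues;
}

const report = checkMarkup(
  `<img src="logo.png"><div onclick="save()">Save</div>` +
    `<img src="icon.png" alt="Settings">`,
);
console.log(report.map((i) => i.rule)); // logs ["img-missing-alt", "div-as-button"]
```

A check like this can run in CI on rendered component output, so a regression fails the build instead of shipping to users.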
QA and Testing
Thankfully, developers have a robust set of practices available to them to ensure known issues aren't replicated. Once you have a defined set of requirements and functionality, you can write tests to ensure the requirements are met and the functionality is available.
Here we find a really promising area for AI to improve the work. Testing is one of the toils of frontend development: the repetitive work of writing code that validates that a given component or feature does what it's supposed to. It is also a shape of problem AI could genuinely be useful for. Tests are often highly repetitive in structure, and the assertions are all based on explicit requirements or functionality. Plus, tests are executed in a protected environment that passes or fails, so it's easy to know whether the code is doing what you want.
Except, of course, many devs who have written tests have learned that a passing test doesn't always mean the test is actually proving something works. Here, the comment that Copilot is "like pair programming with a junior programmer" is helpful to keep in mind. The key to success with this type of tool is extra attention to detail. AI products could be immensely helpful for writing test suites and improving code coverage, but extra care will need to be taken to ensure that every test is actually testing and asserting the things it claims to. One thing the current generation of AI products is great at is coming up with edge cases: all the unexpected ways that a thing can break. Ensuring these cases are covered and don't regress is a key goal of software testing, so this sounds like a great way to leverage these products.
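To make this concrete, here is a hypothetical UI helper alongside the kind of repetitive, assertion-heavy suite an AI assistant might draft, edge cases included. The function and the minimal harness are invented for this sketch; the human's job is reviewing each assertion to confirm it really proves what it claims:

```typescript
// A hypothetical UI helper: truncate a label to fit a fixed-width element,
// appending an ellipsis when text is cut off.
function truncateLabel(label: string, max: number): string {
  if (max <= 1) return label.slice(0, max); // no room for an ellipsis
  return label.length <= max ? label : label.slice(0, max - 1) + "…";
}

// Minimal test harness (no framework) so the example is self-contained.
function assertEqual(actual: string, expected: string, name: string): void {
  if (actual !== expected) {
    throw new Error(`${name}: expected "${expected}", got "${actual}"`);
  }
  console.log(`ok - ${name}`);
}

// The happy path...
assertEqual(truncateLabel("Settings", 20), "Settings", "short label untouched");
assertEqual(
  truncateLabel("Notification preferences", 10),
  "Notificat…",
  "long label truncated",
);

// ...and the edge cases an AI assistant is good at enumerating.
assertEqual(truncateLabel("", 5), "", "empty string");
assertEqual(truncateLabel("abc", 3), "abc", "exactly at the limit");
assertEqual(truncateLabel("abc", 1), "a", "max of one");
assertEqual(truncateLabel("abc", 0), "", "max of zero");
```

Each case passes or fails deterministically, which is exactly the protected environment that makes generated tests easy to verify — provided a human confirms the expected values are actually correct and not just whatever the implementation happens to return.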
Conclusion
AI is booming in popularity, but as we've seen in these three examples, frontend developers who adopt it may find value but should be aware of the risks. An AI is only as good as its underlying model. Knowing what kinds of problems a model was trained on and what data is likely to actually be in its training corpus is immensely helpful for thinking about its usefulness for a given task.
Industry News
GitHub is making GitHub Advanced Security (GHAS) more accessible for developers and teams of all sizes.
ArmorCode announced the enhanced ArmorCode Partner Program, highlighting its goal to achieve a 100 percent channel-first sales model.
Parasoft is showcasing its latest product innovations at embedded world Exhibition, booth 4-318, including new GenAI integration with Microsoft Visual Studio Code (VS Code) to optimize test automation of safety-critical applications while reducing development time, cost, and risk.
JFrog announced general availability of its integration with NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform.
CloudCasa by Catalogic announced an integration with SUSE® Rancher Prime via a new Rancher Prime Extension.
MacStadium announced the extended availability of Orka Cluster 3.2, establishing the market’s first enterprise-grade macOS virtualization solution available across multiple deployment options.
JFrog is partnering with Hugging Face, host of a repository of public machine learning (ML) models — the Hugging Face Hub — designed to achieve more robust security scans and analysis for every ML model in their library.
Copado launched DevOps Automation Agent on Salesforce's AgentExchange, a global ecosystem marketplace powered by AppExchange for leading partners building new third-party agents and agent actions for Agentforce.
Harness completed its merger with Traceable, effective March 4, 2025.
JFrog released JFrog ML, an MLOps solution as part of the JFrog Platform designed to enable development teams, data scientists and ML engineers to quickly develop and deploy enterprise-ready AI applications at scale.
Progress announced the addition of Web Application Firewall (WAF) functionality to Progress® MOVEit® Cloud managed file transfer (MFT) solution.
Couchbase launched Couchbase Edge Server, an offline-first, lightweight database server and sync solution designed to provide low latency data access, consolidation, storage and processing for applications in resource-constrained edge environments.
Sonatype announced end-to-end AI Software Composition Analysis (AI SCA) capabilities that enable enterprises to harness the full potential of AI.
Aviatrix® announced the launch of the Aviatrix Kubernetes Firewall.