It seems that 2024 is the year AI is infiltrating the world. Everywhere you turn, companies are announcing AI features, rolling out new AI models for specific use cases, and promising that AI will revolutionize everything. GitHub led the way with its Copilot product, which helps developers automate writing code. A recent industry survey described Copilot as a tool that "is like pair programming with a junior programmer." Considering that frontend development has long been a maligned skill set in the industry, it's an open question how the domain will be affected as AI continues to mature.
Frontend developers are responsible for the code powering interfaces that users directly interact with through mobile and web UIs. As such, there's enormous pressure on them to avoid bugs, consider all users, and ensure the user experience is reliable and usable. In the past decade, frontend devs have also seen a massive explosion in code complexity; the adoption of frameworks like Vue, Angular, and React means that devs are working in large codebases with many dependencies and abstractions. On top of this, their code is executed on users' devices, which have an untold number of variables at play across different OS and browser versions, screen sizes, and CPU and memory availability.
With all these pressures at play, the question of whether AI is a useful tool for frontend development is critical; it's the difference between reducing toil and making a complex job even harder. Let's look at three responsibilities inherent in frontend engineering to see how AI could help or hinder the work.
Feature Development
For frontend developers working on web apps, adding or updating features is a core job responsibility. Every new feature is a mix of generic tasks (creating a component, setting up the boilerplate structure) and far more specific tasks (all of the business logic and UI). For most features, the generic tasks are at most 10% of the work; the bulk is building the thing that doesn't exist yet.
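To make the split concrete, here is a minimal sketch of the kind of generic scaffolding an AI assistant tends to produce reliably; the component, props, and class names are all hypothetical:

import React from 'react';

// Hypothetical feature: a card showing a user's name and avatar.
// This scaffolding is the "generic 10%" an assistant handles well.
type UserCardProps = {
  name: string;
  avatarUrl: string;
};

export function UserCard({ name, avatarUrl }: UserCardProps) {
  return (
    <div className="user-card">
      <img src={avatarUrl} alt={`Avatar for ${name}`} />
      <span>{name}</span>
    </div>
  );
}

The remaining 90% (business logic, data fetching, error states, design-system styling) is specific to your product, which is exactly the part a model trained on other people's code knows least about.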
When considering whether AI is useful for a given context, the right question is: how much of this task can I assume is shaped like something in the corpus of training data the AI model was built on? The details matter, too; an AI is not necessarily generating code modeled on your codebase (some newer products are starting to do this, but not all offer it). It is generating code modeled on every codebase in its training data.
Any developer who has worked at a few companies (or even on a few teams in the same company) knows — to steal a phrase from Tolstoy — that every service's codebase is unhappy in its own way. These factors, the global nature of the AI model training and the unique specifics of your codebase, mean that automation of feature development will be sketchy at best. Expect a lot of hallucinations: function arguments that make no sense, references to variables that aren't there, instantiations of packages that aren't available. If you're using AI products for this type of work, pay close attention. Don't attempt to have it write large blocks of code; start with smaller tasks. Review the code and ensure it does what you want. Just like any tool, you can't expect perfection out of the box: you need to learn what works well and what doesn't, and you always need to be willing to sign off on the output, since, ultimately, it has your name on it.
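Here is a deliberately broken, hypothetical example of the kind of suggestion to watch for; every name in it is invented for illustration:

import React from 'react';
// Plausible-looking AI suggestion for "format the order date."
// Each flagged line hallucinates something that doesn't exist here:
import { formatDate } from 'date-utils'; // package not in package.json

type Order = { created_at: string };

export function OrderSummary({ order }: { order: Order }) {
  // invented second and third arguments to formatDate, and
  // order.createdAt, when this codebase uses order.created_at
  return <p>{formatDate(order.createdAt, 'LONG', true)}</p>;
}

Each mistake is small and easy to skim past in review, which is why smaller, carefully checked tasks are the safer starting point.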
Accessibility
The issues above can have a compounding effect that is worth considering. A key responsibility for web and mobile developers is ensuring that every person who wants to use their UI can, which requires ensuring that accessibility standards are met. If you're automating your feature development code, you're going to run into two issues with AI.
The first is simply that accessibility is not yet a domain where we can be prescriptive about how to make a specific feature accessible. Accessibility experts use their knowledge of a variety of user needs (kinds of disability and the UX that best works for them) to evaluate a given feature and determine how to make it accessible. There are some foundational rules — images and icons should have text descriptions, interaction elements should be focusable and in a reasonable order, etc. — but often it requires skilled reasoning about what a user's needs are and how to effectively meet them through code. The contextual nature of accessibility means AI will, at best, get you started, and at worst, accidentally introduce barriers to access through simple mistakes that human developers regularly make.
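The foundational rules are mechanical enough to show in code. As a sketch (the component names are hypothetical), compare an icon button written two ways:

import React from 'react';

// Inaccessible: a div is not keyboard-focusable and announces
// nothing to screen readers.
export function BadDeleteButton({ onDelete }: { onDelete: () => void }) {
  return <div onClick={onDelete}>🗑</div>;
}

// Accessible: a real <button> is focusable by default, and
// aria-label gives the icon a text description.
export function DeleteButton({ onDelete }: { onDelete: () => void }) {
  return (
    <button type="button" aria-label="Delete item" onClick={onDelete}>
      🗑
    </button>
  );
}

An assistant trained mostly on the first pattern will keep producing the first pattern, and the harder contextual judgments (is "Delete item" the right label here? where should focus go after deletion?) still require a human.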
AI would introduce those barriers because of the nature of model training: current-generation AI products cannot solve a problem that isn't accurately modeled in their training data. The unfortunate reality is that the web at large is horribly inaccessible, and the state of mobile apps is not much better. This is terrible for the millions of disabled people trying to navigate the modern world, and it is terrifying when we imagine a world with much more AI-generated code.
Disabled people constantly encounter experiences that are inaccessible and prevent them from achieving their goals on a site or in an app. Those experiences will only be amplified in a future built on AI trained on the code that exists now. Models trained on the current state of web and mobile apps will be great at generating code just like it, which means today's widespread inaccessibility will be perpetuated and expanded through AI generation.
To prevent that, devs will need to be vigilant about testing, ensuring accessibility is preserved and no regressions are shipped.
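One concrete guardrail, sketched here assuming a React codebase using Jest, React Testing Library, the jest-axe package, and the hypothetical DeleteButton from earlier, is an automated accessibility check in the test suite:

import React from 'react';
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { DeleteButton } from './DeleteButton';

expect.extend(toHaveNoViolations);

it('has no detectable accessibility violations', async () => {
  const { container } = render(<DeleteButton onDelete={() => {}} />);
  // axe flags mechanical failures (missing labels, bad contrast),
  // but it cannot judge whether the experience works for real users.
  expect(await axe(container)).toHaveNoViolations();
});

Automated checks like this catch only a fraction of real-world barriers, so they complement, rather than replace, review by people who understand users' needs.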
QA and Testing
Thankfully, developers have a robust set of practices available to them to ensure known issues aren't replicated. Once you have a defined set of requirements and functionality, you can write tests to ensure the requirements are met and the functionality is available.
Here we find a genuinely promising area for AI to improve the work. Testing is one of the toils of frontend development: the repetitive work of writing code that validates that a given component or feature does what it's supposed to. This is a shape of problem AI can genuinely help with. Tests are often highly repetitive in structure, their assertions map to explicit requirements or functionality, and they execute in a controlled environment that either passes or fails, so it's easy to know whether the code is doing what you want.
Except, of course, many devs who have written tests have learned that a passing test doesn't always mean the test is actually proving something works. Here, the comment that Copilot is "like pair programming with a junior programmer" is helpful to keep in mind. The key to success with this type of tool is extra attention to detail. AI products could be immensely helpful for writing test suites and improving code coverage, but extra care will need to be paid to ensure that every test actually tests and asserts the things it claims to. One thing the current generation of AI products is great at is coming up with edge cases: all the unexpected ways that a thing can break. Ensuring these cases are covered and don't regress is a key goal of software testing, so this is a natural way to leverage these products.
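As a sketch of the difference (assuming Jest with @testing-library/jest-dom matchers, React Testing Library, and the hypothetical UserCard from earlier), compare a vacuous test with tests that assert the actual requirement and an edge case:

import React from 'react';
import { render, screen } from '@testing-library/react';
import { UserCard } from './UserCard';

// Vacuous: passes even if the component renders nothing useful.
it('renders', () => {
  render(<UserCard name="Ada" avatarUrl="/ada.png" />);
});

// Meaningful: asserts the stated requirement.
it('shows the user name', () => {
  render(<UserCard name="Ada" avatarUrl="/ada.png" />);
  expect(screen.getByText('Ada')).toBeInTheDocument();
});

// Edge case of the kind an assistant is good at suggesting:
// what happens when a required string is empty?
it('still renders an image when the name is empty', () => {
  render(<UserCard name="" avatarUrl="/ada.png" />);
  expect(screen.getByRole('img')).toBeInTheDocument();
});

The first test inflates coverage numbers without proving anything; the review effort goes into making sure generated tests look like the second and third.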
Conclusion
AI is booming in popularity, but as these three examples show, frontend developers who adopt it may find real value while needing to stay aware of the risks. An AI is only as good as its underlying model: knowing what kinds of problems a model was trained on, and what data is likely to be in its training corpus, is immensely helpful for judging its usefulness for a given task.
Industry News
CyberArk announced the launch of FuzzyAI, an open-source framework that helps organizations identify and address AI model vulnerabilities, like guardrail bypassing and harmful output generation, in cloud-hosted and in-house AI models.
Grid Dynamics announced the launch of its developer portal.
LTIMindtree announced a strategic partnership with GitHub.
Solace announced the addition of micro-integrations to its event-driven integration and streaming platform, Solace PubSub+ Platform.
GitGuardian has unveiled its NHI Security strategy, a transformative approach to securing the explosive growth of non-human identities (NHIs) and the secrets they depend on.
Linkerd announced the release of Linkerd 2.17, a new version of Linkerd that introduces several major new features to the project: egress traffic visibility and control; rate limiting; and federated services, a powerful new multicluster primitive that combines services running in multiple clusters into a single logical service.
Amazon Web Services (AWS) announced new capabilities for Amazon Q Developer, a generative AI assistant for software development, that take the undifferentiated heavy lifting out of complex and time-consuming application migration and modernization projects, saving customers and partners time and money.
OpenText announced a strategic partnership with Secure Code Warrior to integrate its dynamic learning platform into the OpenText Fortify application security product suite.
Salesforce announced a series of updates for Heroku, a platform as a service (PaaS) offering that enables teams to build, deploy, and scale modern applications entirely in the cloud.
Onapsis announced the expansion of its Control product line to include a new bundle that enhances application security testing capabilities for SAP Business Technology Platform (BTP).
Amazon Web Services announced new enhancements to Amazon Q Developer, including agents that automate unit testing, documentation, and code reviews to help developers build faster across the entire software development process, and a capability to help users address operational issues in a fraction of the time.
Amazon Web Services (AWS) and GitLab announced an integrated offering that brings together GitLab Duo with Amazon Q.
Tenable announced the release of Tenable Patch Management, an autonomous patch solution built to quickly and effectively close vulnerability exposures in a unified solution.
SurrealDB announced the launch of Surreal Cloud, a Database-as-a-Service (DBaaS) offering.
Check Point® Software Technologies Ltd. announced that Infinity XDR/XPR achieved a 100% detection rate in the rigorous 2024 MITRE ATT&CK® Evaluations.