AI's Impact on Frontend Development: Feature Development, Accessibility, QA and Testing
September 24, 2024

Winston Hearn
Honeycomb

It seems that 2024 is the year AI is infiltrating the world. Everywhere you turn, companies are announcing AI features, rolling out new AI models for specific use cases, and promising that AI will revolutionize everything. GitHub led the way with its Copilot product, which helps developers automate writing code. A recent industry survey described Copilot as a tool that "is like pair programming with a junior programmer." Considering frontend development has long been a maligned skillset in the industry, it's an open question how the domain will be affected as AI continues to mature.

Frontend developers are responsible for the code powering the interfaces that users directly interact with through mobile and web UIs. As such, there's enormous pressure on them to avoid bugs, consider all users, and ensure the user experience is reliable and usable. In the past decade, frontend devs have also seen a massive explosion in code complexity; the adoption of frameworks like Vue, Angular, and React means devs are working in large codebases with many dependencies and abstractions. On top of this, their code executes on users' devices, which introduce countless variables: different OS and browser versions, screen sizes, and available CPU and memory.

With all these pressures at play, the question of whether AI is a useful tool for frontend development is critical; it's the difference between reducing toil and making a complex job even harder. Let's look at three responsibilities inherent in frontend engineering to see how AI could help or hinder the work.

Feature Development

For frontend developers working on web apps, adding or updating features is a core job responsibility. Every new feature is going to be a mix of generic tasks (creating a component and setting up the boilerplate structure) and many more specific tasks (all of the business logic and UI). For most features, the generic tasks are at most 10% of the work; the bulk of the work is going to be all the tasks for building the thing that doesn't exist yet.
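
To make that split concrete, here is a minimal sketch of the generic portion in a hypothetical React codebase. The component, type, and prop names (UserList, User, teamId) are invented for illustration, and the feature-specific business logic is deliberately left as a stub.

    // The generic scaffolding: props, local state, and an effect hook.
    // All names here are hypothetical, invented for illustration.
    import { useEffect, useState } from 'react';

    type User = { id: string; name: string };

    export function UserList({ teamId }: { teamId: string }) {
      const [users, setUsers] = useState<User[]>([]);

      useEffect(() => {
        // The other ~90% lives here: fetching, error handling, empty and
        // loading states, permissions, and everything specific to the feature.
        setUsers([]);
      }, [teamId]);

      return (
        <ul>
          {users.map((user) => (
            <li key={user.id}>{user.name}</li>
          ))}
        </ul>
      );
    }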

When considering whether AI is useful for a given context, the right question is: how much of this task can I assume is shaped like something in the corpus of training data the AI model was built on? The details matter, too; an AI is not necessarily generating code modeled on your codebase (some newer products are starting to do this, but not all offer it); it is generating code modeled on all the codebases in its training data.

Any developer who has worked at a few companies (or even on a few teams in the same company) knows, to steal a phrase from Tolstoy, that every service's codebase is unhappy in its own way. These two factors, the global nature of the model's training and the unique specifics of your codebase, mean that automating feature development will be sketchy at best. Expect a lot of hallucinations: function arguments that make no sense, references to variables that aren't there, instantiations of packages that aren't available. If you're using AI products for this type of work, pay close attention. Don't attempt to have it write large blocks of code; start with smaller tasks. Review the code and ensure it does what you want. As with any tool, you can't expect perfection out of the box: you need to learn what works well and what doesn't, and you always need to be willing to sign off on the output, since, ultimately, it has your name on it.
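
To show what those failure modes look like in practice, here is a deliberately broken sketch. It will not compile, every identifier in it is invented, and each comment flags one of the hallucinations described above.

    // Intentionally broken, for illustration only. All names are hypothetical.
    import { useFetch } from 'react-data-hooks'; // package that isn't available

    export function OrderSummary({ orderId }: { orderId: string }) {
      // Function arguments that make no sense: these options don't exist
      const { data } = useFetch(orderId, { retries: 'always', cache: 42 });

      // Reference to a variable that isn't there: currentUser is never
      // defined or imported anywhere in this file
      return <p>{currentUser.name}'s order total: {data.total}</p>;
    }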

Accessibility

The issues above can compound in ways that are worth considering. A key responsibility for web and mobile developers is ensuring that every person who wants to use their UI can, which means meeting accessibility standards. If you're using AI to automate feature development, you'll run into two issues.

The first is that accessibility is not yet a domain where we can be prescriptive about how to make a specific feature accessible. Accessibility experts use their knowledge of a variety of user needs (kinds of disability and the UX that works best for each) to evaluate a given feature and determine how to make it accessible. There are some foundational rules (images and icons should have text descriptions, interactive elements should be focusable and in a sensible order, and so on), but often it requires skilled reasoning about what a user's needs are and how to effectively meet them through code. The contextual nature of accessibility means AI will, at best, get you started, and at worst, accidentally introduce barriers to access through the simple mistakes human developers regularly make.
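
A minimal sketch of those foundational rules, assuming a React setup; the SearchForm component and its wording are invented for illustration:

    // A visible label, a native focusable control, and a decorative icon
    // hidden from screen readers. SearchForm is a hypothetical component.
    export function SearchForm({ onSearch }: { onSearch: (query: string) => void }) {
      return (
        <form
          onSubmit={(event) => {
            event.preventDefault();
            onSearch(new FormData(event.currentTarget).get('q') as string);
          }}
        >
          {/* A real label, not placeholder text, so assistive tech announces it */}
          <label htmlFor="search-input">Search</label>
          <input id="search-input" name="q" type="search" />

          {/* A native button is focusable and keyboard-operable by default;
              the icon is decorative because the text already describes it */}
          <button type="submit">
            <img src="/icons/search.svg" alt="" aria-hidden="true" />
            Search
          </button>
        </form>
      );
    }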

The second issue stems from the nature of model training: current-generation AI products cannot solve a problem that isn't accurately modeled in their training data. The unfortunate reality of the web is that it is horribly inaccessible at large, and the state of mobile apps is not much better. This is terrible for the millions of disabled people trying to navigate the modern world, and it is terrifying when we imagine a world with much more AI-generated code.

Disabled people constantly encounter experiences that are inaccessible and prevent them from achieving their goals on a site or in an app. Those experiences will only be amplified in a future built on AI trained on the code that exists now. The resulting models will be great at generating code that mirrors the current state of web and mobile apps, which means the inaccessibility that is widespread today will be perpetuated and expanded through AI generation.

To prevent that, devs will need to be vigilant in testing to ensure accessibility holds and no regressions ship. One way to automate part of that vigilance is sketched below.
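
Assuming a Jest and React Testing Library setup, the jest-axe package can add an automated accessibility check to the suite. It catches only a machine-detectable subset of issues, so it supplements rather than replaces expert review; SearchForm is the hypothetical component sketched earlier.

    // An accessibility regression test using jest-axe.
    import { render } from '@testing-library/react';
    import { axe, toHaveNoViolations } from 'jest-axe';
    import { SearchForm } from './SearchForm'; // hypothetical component

    expect.extend(toHaveNoViolations);

    it('has no detectable accessibility violations', async () => {
      const { container } = render(<SearchForm onSearch={() => {}} />);
      expect(await axe(container)).toHaveNoViolations();
    });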

QA and Testing

Thankfully, developers have a robust set of practices available to them to ensure known issues aren't replicated. Once you have a defined set of requirements and functionality, you can write tests to ensure the requirements are met and the functionality is available.

Here we find a genuinely promising area for AI. Testing is one of the toils of frontend development: the repetitive work of writing code that validates that a given component or feature does what it's supposed to. That is exactly the shape of problem AI can be useful for. Tests are often very repetitive in structure, their assertions map to explicit requirements or functionality, and they execute in a controlled environment that either passes or fails, so it's easy to know whether the code is doing what you want.
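
That repetitive shape is easy to see in a typical component test, sketched here assuming Jest and React Testing Library. The Banner component and its requirements are invented for illustration.

    // The shape most component tests share: render, interact, assert
    // against an explicit requirement. Banner is a hypothetical component.
    import { render, screen } from '@testing-library/react';
    import userEvent from '@testing-library/user-event';
    import '@testing-library/jest-dom';
    import { Banner } from './Banner'; // hypothetical component

    it('shows the message it was given', () => {
      render(<Banner message="Deploy finished" />);
      expect(screen.getByText('Deploy finished')).toBeInTheDocument();
    });

    it('disappears when dismissed', async () => {
      render(<Banner message="Deploy finished" />);
      await userEvent.click(screen.getByRole('button', { name: /close/i }));
      expect(screen.queryByText('Deploy finished')).not.toBeInTheDocument();
    });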

Except, of course, many devs who have written tests have learned that sometimes a passing test doesn't mean the test is actually proving something works. Here, the comment that Copilot is "like pair programming with a junior programmer" is helpful to keep in mind. The key to success with this type of tool is extra attention to detail. AI products could be immensely helpful for writing test suites and improving code coverage, but extra care will need to be paid to ensure that every test is actually testing and asserting the things it claims to. One thing that the current gen of AI products is great at is coming up with edge cases; all the unexpected ways that a thing can break. Ensuring these cases are covered and don't regress is a key goal of software testing, so this sounds like a great way to leverage these products.
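
A classic version of that trap, sketched with a hypothetical async saveDraft function: the first test can pass no matter what, because the unawaited assertion fires after Jest has already reported the test green.

    import { saveDraft } from './drafts'; // hypothetical async API

    // Can pass even when saveDraft is broken: the promise is never awaited,
    // so the assertion (and any failure) runs after the test has finished.
    it('saves the draft', () => {
      saveDraft('hello').then((result) => {
        expect(result.ok).toBe(true);
      });
    });

    // Actually proves something: the test fails if saveDraft breaks.
    it('saves the draft', async () => {
      const result = await saveDraft('hello');
      expect(result.ok).toBe(true);
    });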

Conclusion

AI is booming in popularity, but as these three examples show, frontend developers who adopt it may find value while needing to stay aware of the risks. An AI is only as good as its underlying model. Knowing what kinds of problems a model was trained on, and what data is likely to actually be in its training corpus, is immensely helpful for thinking about its usefulness for a given task.

Winston Hearn is Senior Product Manager for Frontend Development at Honeycomb
