What's Trending in Software Quality? It Depends on Who You Ask
November 29, 2022

Noel Wurst
SmartBear

Think about your favorite desktop or mobile app for a moment. It probably has a pretty nice user experience. Whatever that software was designed to do, or designed to help you do, it does. And when you want it to! And on whatever device(s) you currently own.

None of that is easy.


The people who create, test, and deliver those favorite, can't-live-without apps do so under immense pressure to deliver the next release even faster and at an even higher quality than the last. SmartBear conducted a survey to learn the methodologies, practices, and tools used by the software testing professionals worldwide who build, validate, and deliver software. In doing so, we got a glimpse into emerging trends, fading trends of the past, and what the future may hold for software quality across industries and geographies.

What did we discover? First, keeping pace with the increasing rate of release cycles (a shift we saw this year even among teams on quarterly and yearly cycles) is a continuous challenge.

The major theme that keeps rising to the top is visibility: the need for engineering teams to see the "whole picture" of software quality. Whether it's the root cause of actual or potential bugs, or performance issues caught in pre-production before they become performance issues in production, increased visibility is needed throughout the software development lifecycle (SDLC). Visibility and insight into the data gathered throughout the SDLC let teams retain quality while improving efficiency and increasing velocity, rather than sacrificing one for the other. Development teams have traditionally looked at post-production data to determine how well their applications are performing. Increasingly, however, that data is only part of the story teams need to be able to tell.

The pace of business, that is, demands from both inside and outside an organization, is simply too fast to wait for post-production data to show how well applications are performing. By taking more proactive steps, teams can use pre-production data to maintain and scale quality. This shift-left approach to data science and data storytelling gives developers and testers the confidence to increase release and deployment velocity. That's the big picture.

Test automation coverage rebounded after a dip last year. Overall, the bell curve has shifted toward more automation: more than half of respondents fall in the 25-75% automation range, and 42% report that more than half of their tests are automated. The biggest challenge to test automation has consistently remained a high frequency of application changes, while respondents named a lack of time as the biggest challenge to their overall testing initiatives.

Half of respondents reported spending more than 70% of their week testing, and three-quarters said they spend over 50%. Despite not having enough time to test, nearly two-thirds reported being satisfied or very satisfied with their testing processes. Those who spend the least time testing reported being the least satisfied with their testing processes.

Web apps and APIs continue to be heavily tested, and mobile app testing continues to trend upward. The three testing practices respondents said they perform most were manual testing (78%), regression testing (66%), and automated testing (62%), followed by end-to-end, integration, and exploratory testing.
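To ground those practices, here is a minimal sketch of what an automated regression check for a web app might look like, written with Python's pytest conventions and the requests library. The base URL, endpoints, and expected values are hypothetical illustrations, not drawn from the survey.

    # Hypothetical regression checks, runnable with pytest.
    # The service URL and expected behavior are illustrative assumptions.
    import requests

    BASE_URL = "https://app.example.com"  # hypothetical application under test

    def test_login_page_still_loads():
        # Regression guard: the login page should keep returning HTTP 200.
        response = requests.get(f"{BASE_URL}/login", timeout=5)
        assert response.status_code == 200

    def test_search_still_returns_results():
        # Regression guard: a known-good query should keep returning results.
        response = requests.get(f"{BASE_URL}/api/search", params={"q": "widgets"}, timeout=5)
        assert response.status_code == 200
        assert len(response.json()["results"]) > 0

Checks like these are typically run on every commit, which is one way automation keeps pace with the high frequency of application changes respondents cited.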

For APIs, 41% reported that their top concern is functionality, which comes as no surprise. This year, security (24%) is the second-highest concern, while availability (22%) and performance (13%) dropped to the bottom.
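As an illustration of what checking those concerns can look like in practice, here is a short Python sketch that asserts both functionality (the response has the expected shape) and a simple performance budget for a single endpoint. The endpoint, fields, and 500 ms threshold are assumptions made for the example, not survey findings.

    # Hypothetical API checks covering functionality and performance.
    import time
    import requests

    API_URL = "https://api.example.com/users/42"  # illustrative endpoint

    def test_user_endpoint_functionality():
        # Functionality: the endpoint should return the expected fields.
        response = requests.get(API_URL, timeout=5)
        assert response.status_code == 200
        body = response.json()
        assert "id" in body and "email" in body

    def test_user_endpoint_latency_budget():
        # Performance: fail the run if latency exceeds an agreed budget.
        start = time.perf_counter()
        requests.get(API_URL, timeout=5)
        elapsed = time.perf_counter() - start
        assert elapsed < 0.5  # 500 ms budget, an arbitrary illustrative threshold

Security and availability concerns, by contrast, are usually addressed with separate tooling such as scanners and uptime monitors rather than inside functional test suites.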

The question of which roles write, perform, and manage software testing, whether developers, testers, or a hybrid of both (just don't leave it exclusively to your users!), continues to shift, but the critical importance of testing is more widely understood than ever. Whether an organization is looking to boost software quality, security, user experience, accessibility, or brand loyalty, a robust, holistic approach to software testing is mandatory, not something to be sacrificed for "speed."

Increased visibility into the actual and potential impacts of code changes, test results, and load is becoming the path forward for organizations looking to create a true competitive advantage in any industry. The product a company provides to its customers will continue to take the lion's share of credit for such an advantage, but the real innovators are unlocking advantages at every stage of the SDLC, long before that product ever makes it out the door.

Methodology: SmartBear gathered input on a variety of topics around software testing, including development and delivery models, testing and quality assurance challenges and practices, test management and performance testing trends, and API and UI testing tools and techniques. More than 1,500 manual testers, automation engineers, developers, consultants, QA managers, and analysts responded to a 61-question survey over the course of five weeks. This year's survey was well balanced, with respondents from North America, Asia, and Europe.

Noel Wurst is Software Quality Evangelist at SmartBear.