Early Lifecycle Performance Testing and Optimization – Without the Grief
March 15, 2012

Steve Tack
Dynatrace

Today’s consumers have high expectations for exceptional website and web application speed, including during peak traffic periods like the holidays for retailers. A recent survey shows that almost 90 percent of consumers believe it is important for websites and web applications to work well during peak traffic times. When they don’t, these consumers take action quickly: 75 percent who experience poor performance during peak periods go to a competitor’s site, and 86 percent are less likely to return to the website. Worse yet, many consumers flock to social networks, where they spread the word about their disappointing web experience to the masses.

The majority of website visitors now expect websites and web applications to load in two seconds or less, and it has been estimated that for each additional two seconds of response time, abandonment rates jump by a further eight percent.

With so much riding on performance, you can’t afford to treat your real users as crash test dummies. If you leave performance testing to the final, pre-production stages of development, and leave it to the testers alone, you’re in danger of doing just that.

You’ve got to make sure that your end-users are happy, and this means building performance considerations into the entire application lifecycle and conducting testing throughout the development process, not just at the end.

But the thought of adding yet more performance testing cycles to an already overstretched delivery team often elicits the same reaction as the five stages of grief: denial, anger, bargaining, depression and, finally, acceptance.

A good leader recognizes that this will be the reaction from their team and works to empower the team members to overcome it as follows:

Denial sets in when team members feel that the risks are not as great as you make out: perhaps they think that operations will be able to tune the servers to optimize performance; perhaps the use of proven third-party technology leads to overconfidence; or, in the worst case, they assume they can use end users as beta testers.

There are too many performance landmines in the application delivery chain to leave this to chance. Bad database calls, too much synchronization, memory leaks, bloated and poorly designed web front-ends, incorrect traffic estimates, poorly provisioned hardware, misconfigured CDNs and load balancers, and problematic third parties all force you to take action.
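To make that concrete, here is a minimal sketch, not taken from the article, of how lightweight timing instrumentation added during development can surface one of these landmines, such as a slow or chatty database call, long before a formal test cycle. The function name and the 200 ms budget are illustrative assumptions.

```python
# A minimal sketch: a timing decorator that flags slow calls (for example,
# a chatty database query) during development. The function name and the
# 200 ms budget are illustrative assumptions, not prescribed values.
import functools
import logging
import time

logging.basicConfig(level=logging.WARNING)

def warn_if_slow(budget_ms=200):
    """Log a warning whenever the wrapped call exceeds its time budget."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                if elapsed_ms > budget_ms:
                    logging.warning("%s took %.1f ms (budget %d ms)",
                                    func.__name__, elapsed_ms, budget_ms)
        return wrapper
    return decorator

@warn_if_slow(budget_ms=200)
def load_order_history(customer_id):
    # Stand-in for a real data-access call; a poorly indexed query or an
    # N+1 access pattern would trip the warning above.
    time.sleep(0.35)  # simulate a slow backend call
    return []

if __name__ == "__main__":
    load_order_history(42)
```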

But forcing the team to address the situation elicits anger as members consider the work required to test and confront questions like how to test, what tools to use and where the budget will come from. Teams also start to ask how they can get actionable results in the limited amount of time left.

It’s easy to become overwhelmed at this stage, and depression sets in as it all seems too much to do with the limited time and resources the team has before the go-live date.

Acceptance begins when developers realize that they simply can’t afford not to build performance considerations into the application lifecycle.

It can be a huge mistake to leave performance validation solely to testing teams at the end of product development. Performance must be an integral requirement for all development and for every new feature.
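One way to treat performance as a requirement rather than an afterthought is to express a time budget as an ordinary automated test, so a regression fails the build alongside functional failures. The following is a hypothetical sketch: checkout_total() and the 50 ms budget stand in for whatever feature and target a team agrees on.

```python
# A minimal sketch, assuming a hypothetical checkout_total() feature and a
# 50 ms budget agreed with stakeholders. The point is that a performance
# requirement can fail the build just like a functional requirement.
import time
import unittest

def checkout_total(prices):
    """Feature under test; real code would be far more involved."""
    return sum(prices)

class CheckoutPerformanceTest(unittest.TestCase):
    BUDGET_SECONDS = 0.05  # the agreed response-time budget

    def test_checkout_total_meets_budget(self):
        cart = list(range(10_000))
        start = time.perf_counter()
        checkout_total(cart)
        elapsed = time.perf_counter() - start
        self.assertLess(
            elapsed, self.BUDGET_SECONDS,
            f"checkout_total took {elapsed:.3f}s, budget {self.BUDGET_SECONDS}s")

if __name__ == "__main__":
    unittest.main()
```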

If we take testing seriously enough to integrate it into the entire application lifecycle, we can deliver potentially shippable code with high quality and great performance and stay ahead of the competition. This truly needs to be done; otherwise, organizations risk ending up with great ideas and great features that fail due to poor performance.

The good news is that testing tools are more affordable and easier to use than ever before. Simple SaaS-based load testing tools now exist with pay-as-you-go models that eliminate costly upfront hardware and software that sits unused between testing cycles. Some solutions now offer developer-friendly diagnostic capabilities that improve collaboration between QA and development, drastically shorten problem resolution time and enable development to build performance testing approaches into earlier stages of the lifecycle with little to no resource overhead. The ability to layer these capabilities into a siloed organization provides an incremental approach to building performance into the application lifecycle and gaining acceptance across all performance stakeholders.
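For teams without such tooling yet, even a few lines of script can approximate the kind of check these services automate: measuring how an endpoint behaves under a handful of concurrent virtual users. The sketch below is an illustration under assumed values, not any vendor's product; the URL, user count and request volume are placeholders.

```python
# A minimal sketch of a do-it-yourself load check: drive concurrent
# "virtual users" at a URL using only the standard library and report
# latency percentiles. URL, user count and request volume are placeholders.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # replace with the endpoint under test
VIRTUAL_USERS = 5
REQUESTS_PER_USER = 10

def timed_request(_):
    """Fetch the URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def run_load_test():
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        latencies = sorted(pool.map(timed_request,
                                    range(VIRTUAL_USERS * REQUESTS_PER_USER)))
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"requests: {len(latencies)}  "
          f"median: {statistics.median(latencies):.3f}s  p95: {p95:.3f}s")

if __name__ == "__main__":
    run_load_test()
```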

Steve Tack is Chief Technology Officer of Compuware's Application Performance Management (APM) business where he leads the expansion of the company's APM product portfolio and market presence. He is a software and IT services veteran with expertise in application and web performance management, SaaS, cloud computing, end-user experience monitoring and mobile applications. Steve is a frequent speaker at industry conferences and his articles have appeared in a variety of business and technology publications.