5 Keys for Effective Mobile Performance Testing
May 06, 2013

Eran Kinsbruner
Perfecto by Perforce

While mobile is quickly becoming the de facto platform for many of the business-critical applications deployed by banks, insurance companies and other enterprise organizations, ensuring an optimal end-user experience demands a robust mobile performance testing environment.

Accordingly, mobile enterprises require an environment that can provide insight into the key mobile application performance indicators - such as response time, availability and business-critical transactions - across the various devices (and operating systems) in use on different networks and carriers.

Let's begin by taking a look at the current situation, which clearly underscores the biggest challenge for enterprises in terms of ensuring successful mobile business apps.

According to recent surveys, end users are highly conscious of application performance – in some cases even more so than of functionality. The gaps in performance across different networks and locations are evident; hence the need to performance-test the application before launch, on real devices, and to gather sufficient insight to optimize it.

Based on our experience with mobile enterprises, building an efficient mobile performance test strategy should consist (at a minimum) of the following five pillars:

1. Defining the supported devices and operating systems

Mobile devices have a significant impact on application performance. Smartphones and tablets are, in essence, small computing devices that deliver powerful capabilities. At the same time, however, they are highly constrained in terms of resources. The problem is that end users expect and demand the same level of performance (if not better) from their mobile apps as they get on their desktop computers. Therefore, selecting the right mix of mobile devices to test on prior to launch is one of the basic criteria for effective performance testing.

In our experience, different devices (and even the same device running a different OS version) can respond very differently to degraded network conditions or to server load.

Note that in order to stay in sync with your user community, the list of devices supported by your application should be dynamic and change in response to market trends (new devices, OS versions, etc.). This list should be updated on a quarterly basis, and your testing plan should take this into account.
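
To illustrate, a device test matrix can be as simple as a ranked list of device/OS combinations with their market share, from which you select the smallest set that covers your target audience. The following is a minimal sketch in Python; the device names and market-share figures are illustrative assumptions, not real data:

```python
# A sketch of a device test matrix. Device names and market-share
# percentages are illustrative assumptions only.
DEVICE_MATRIX = [
    # (device, os_version, market_share_pct)
    ("iPhone 5",          "iOS 6.1",     22.0),
    ("Samsung Galaxy S4", "Android 4.2", 18.0),
    ("Samsung Galaxy S3", "Android 4.1", 14.0),
    ("iPhone 4S",         "iOS 6.1",     11.0),
    ("HTC One",           "Android 4.1",  6.0),
]

def devices_to_test(matrix, coverage_target=60.0):
    """Pick the smallest device set that covers the target market share."""
    selected, covered = [], 0.0
    for device, os_version, share in sorted(matrix, key=lambda row: -row[2]):
        if covered >= coverage_target:
            break
        selected.append((device, os_version))
        covered += share
    return selected, covered

if __name__ == "__main__":
    devices, covered = devices_to_test(DEVICE_MATRIX)
    print(f"Test on {devices} ({covered:.0f}% market coverage)")
```

Keeping the matrix as data rather than prose makes the quarterly refresh a one-line change that flows straight into the test plan.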

2. Selecting the key business transactions

Select the functions of your application that users care about the most, and focus your testing on them using realistic and clear KPIs. In the initial testing stage, it is recommended to isolate your test environment and see how these business transactions perform in a “clean room” environment. (Subsequently, these scenarios should also be tested in a production-like environment with interruptions such as incoming calls, messages, etc.)
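
In practice, these KPIs can be captured as explicit, testable thresholds per transaction. The sketch below assumes hypothetical transaction names and budget values for a banking-style app:

```python
# A sketch of per-transaction KPI thresholds for a banking-style app.
# Transaction names and budget values are illustrative assumptions.
KPI_THRESHOLDS = {
    # transaction: (max_response_time_s, min_availability_pct)
    "login":          (3.0, 99.9),
    "check_balance":  (2.0, 99.5),
    "transfer_funds": (5.0, 99.9),
}

def meets_kpi(transaction, response_time_s, availability_pct):
    """True if a measured run stays inside the transaction's KPI budget."""
    max_rt, min_avail = KPI_THRESHOLDS[transaction]
    return response_time_s <= max_rt and availability_pct >= min_avail

# Example: a 2.4s login with 99.95% availability passes its budget.
assert meets_kpi("login", 2.4, 99.95)
```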

3. Simulating various network conditions

Mobile applications may behave differently depending on the device, the OS and the type of carrier network infrastructure/technology (3G, 4G, WiFi). According to Shunra, an authority in network virtualization for software testing, “Production network conditions such as inconsistent bandwidth, high jitter, increased latency and packet loss all work to degrade application performance."

Thus, it is imperative to analyze the impact of network conditions in your mobile performance testing scenarios. Your performance team should simulate such network conditions, measure the user-facing KPIs (see above), and capture the network traffic (as a PCAP file), which can serve both as input to the optimization analysis phase and as a baseline for the server load test script.
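
One common way to simulate such conditions (outside of commercial tools such as Shunra's) is Linux's tc/netem on a test gateway, combined with tcpdump for the PCAP capture. The following sketch assumes a gateway interface name and illustrative profile values, and must run with root privileges:

```python
# A sketch of emulating degraded network profiles on a Linux test
# gateway with tc/netem, and capturing traffic to a PCAP with tcpdump.
# The interface name and profile values are assumptions; requires root.
import subprocess

NETWORK_PROFILES = {
    # profile: (delay, jitter, packet_loss) -- illustrative values
    "3g":   ("300ms", "50ms", "1%"),
    "4g":   ("100ms", "20ms", "0.5%"),
    "wifi": ("30ms",  "5ms",  "0.1%"),
}

def apply_profile(interface, profile):
    """Shape all traffic on the interface to match the chosen profile."""
    delay, jitter, loss = NETWORK_PROFILES[profile]
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface, "root",
         "netem", "delay", delay, jitter, "loss", loss],
        check=True)

def clear_profile(interface):
    """Remove the netem shaping and restore normal conditions."""
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"],
                   check=True)

def start_capture(interface, pcap_path):
    """Start tcpdump in the background; call .terminate() when done."""
    return subprocess.Popen(["tcpdump", "-i", interface, "-w", pcap_path])
```

Running the same KPI measurements under each profile quickly exposes how much of a transaction's response time is network-bound versus server- or device-bound.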

4. Building server load

Under significant load, servers begin to behave in ways that may impact the end-user experience. For instance, packets sent to the mobile device may be delayed, lost or delivered out of order. The server load test measures the mobile end-user experience on a real device while significant load is applied to the server farm.

Measurements on real devices are taken by running the same tests repeatedly throughout the test session and tracking the KPIs users care about. You can make the synthetic traffic load “mobile-relevant” by converting the captured PCAP file into a load script. Make sure you are able to isolate device issues from network issues and from application functional defects, which may be specific to a device or mobile OS.
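
As a rough illustration of the PCAP-to-load-script step, the sketch below extracts plain-HTTP GET requests from a capture using scapy and replays them concurrently. It is deliberately crude; a real load tool (JMeter, LoadRunner, etc.) handles sessions, timing and protocol details far more robustly, and HTTPS traffic cannot be replayed this way:

```python
# A crude sketch of the PCAP-to-load-script step: extract HTTP GET
# requests with scapy and replay them from a thread pool. File paths
# and concurrency numbers are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
import requests
from scapy.all import rdpcap, TCP, Raw

def extract_get_urls(pcap_path):
    """Pull plain-HTTP GET request URLs out of a capture file."""
    urls = []
    for pkt in rdpcap(pcap_path):
        if not (pkt.haslayer(TCP) and pkt.haslayer(Raw)):
            continue
        payload = bytes(pkt[Raw]).decode("latin-1", errors="ignore")
        if not payload.startswith("GET "):
            continue
        path = payload.split(" ")[1]
        host = next((line.split(":", 1)[1].strip()
                     for line in payload.splitlines()
                     if line.lower().startswith("host:")), None)
        if host:
            urls.append(f"http://{host}{path}")
    return urls

def replay_load(urls, concurrency=50, iterations=10):
    """Fire the captured requests repeatedly to build server load."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(iterations):
            list(pool.map(lambda u: requests.get(u, timeout=10), urls))
```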

5. Analyzing, debugging and optimizing

The output of a performance cycle based on the steps above is typically a detailed report showing the key statistics (response time, availability and others) per transaction and per real mobile device under different conditions. Such a report should highlight the bottlenecks in terms of network traffic, detailed device vitals, etc. Your performance team needs to analyze and triage these reports in order to pinpoint the root cause of each performance issue, which may be a network-related problem or an issue with the device itself.
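
A simple way to start that triage is to aggregate the raw measurements per transaction and device and flag the combinations that breach their KPI budget. The sketch below assumes a flat list of measurement records and an illustrative 3-second budget:

```python
# A sketch of the triage step: aggregate raw measurements per
# (transaction, device) pair and flag KPI breaches. The record fields
# and the 3-second budget are illustrative assumptions.
from collections import defaultdict
from statistics import mean

def triage(measurements, budget_s=3.0):
    """measurements: dicts with 'transaction', 'device', 'response_time_s'."""
    groups = defaultdict(list)
    for m in measurements:
        groups[(m["transaction"], m["device"])].append(m["response_time_s"])
    report = []
    for (txn, device), times in groups.items():
        ordered = sorted(times)
        p95 = ordered[min(int(len(ordered) * 0.95), len(ordered) - 1)]
        report.append({"transaction": txn, "device": device,
                       "avg_s": round(mean(times), 2),
                       "p95_s": round(p95, 2),
                       "breach": p95 > budget_s})
    # Worst offenders first, so the team starts with the real bottlenecks.
    return sorted(report, key=lambda r: r["p95_s"], reverse=True)
```

Sorting by the 95th percentile rather than the average keeps intermittent, device-specific slowdowns from hiding behind a healthy-looking mean.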

Bottom Line

As the mobile market continues to grow, new operating systems and a growing number of vendors are joining this rapidly expanding ecosystem. In the world of enterprise mobility, mobile performance testing continues to increase in importance and has necessarily become a key part of the software development life cycle. By employing the right testing strategy and tools, and enabling access to the widest variety of real devices and simulated networks, your organization can meet the challenges of mobile application performance, ensure end-user satisfaction and optimize business results.

Eran Kinsbruner is Chief Evangelist of Test Automation at Perfecto by Perforce