As mobile quickly becomes the de facto platform for many of the business-critical applications deployed by banks, insurance companies and other enterprise organizations, ensuring an optimal end user experience demands a robust mobile performance testing environment.
Accordingly, mobile enterprises require an environment that provides insight into the key mobile application performance indicators - such as response time, availability and business-critical transactions - on the various devices (and operating systems) in use across different networks and carriers.
Let's begin by taking a look at the current situation, which underscores the biggest challenge enterprises face in delivering successful mobile business apps.
According to recent surveys, end users are very conscious of application performance – in some cases even more so than functionality. Performance gaps across different networks and locations are well documented, underscoring the need to performance-test the application on real devices before launch and to gather sufficient insight to optimize it.
Based on our experience with mobile enterprises, building an efficient mobile performance test strategy should consist (at a minimum) of the following five pillars:
1. Defining the supported devices and operating systems
Mobile devices have a significant impact on application performance. Smartphones and tablets are, in essence, small computing devices that deliver powerful capabilities; at the same time, they are highly constrained in terms of resources. The problem is that end users expect and demand the same level of performance (if not better) from their mobile apps as they get on their desktop computers. Therefore, selecting the right mix of mobile devices to test on prior to launch is one of the basic criteria for effective performance testing.
In our experience, different devices (and even the same device running a different OS version) may respond very differently to degraded network conditions or to server load.
Note that in order to stay in sync with your user community, the list of devices supported by your application should be dynamic and change in response to market trends (new devices, OS versions, etc.). This list should be updated on a quarterly basis, and your testing plan should take this into account.
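For illustration, here is a minimal sketch of how such a device/OS matrix might drive automated tests, assuming a pytest-based rig; the device entries, the run_on_device hook and the 3-second threshold are placeholders, not recommendations:

```python
import pytest

# Illustrative device/OS support matrix -- example entries only;
# refresh this list quarterly to track market trends.
DEVICE_MATRIX = [
    ("iPhone 15", "iOS 17.5"),
    ("iPhone 13", "iOS 16.7"),
    ("Pixel 8", "Android 14"),
    ("Galaxy S22", "Android 13"),
]

def run_on_device(device, os_version, transaction):
    """Hypothetical hook into your device cloud or test rig (e.g. an
    Appium session); returns the measured transaction time in seconds."""
    raise NotImplementedError("wire this to your real-device test rig")

@pytest.mark.parametrize("device,os_version", DEVICE_MATRIX)
def test_login_response_time(device, os_version):
    elapsed = run_on_device(device, os_version, transaction="login")
    assert elapsed < 3.0  # example KPI: login completes within 3 seconds
```

Keeping the matrix as plain data makes the quarterly refresh a one-line change rather than a test rewrite.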
2. Selecting the key business transactions
Select the functions of your application that users care about the most, and focus on testing against them using realistic and clear KPIs. In the initial testing stage, it is recommended to isolate your test environment and see how these business transactions work in a “clean room” environment. (Subsequently, these scenarios should also be tested in a production-like environment with interrupts such as incoming calls, messages, etc.).
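As a sketch of what "realistic and clear KPIs" can look like in practice, consider the targets below; the transaction names and thresholds are illustrative assumptions, not benchmarks, so substitute the transactions your own users care about:

```python
# Illustrative KPI targets for the key business transactions
# (names and thresholds are assumptions -- adjust to your application).
KEY_TRANSACTIONS = {
    "login":    {"max_response_s": 3.0, "min_availability_pct": 99.5},
    "search":   {"max_response_s": 2.0, "min_availability_pct": 99.0},
    "checkout": {"max_response_s": 5.0, "min_availability_pct": 99.9},
}

def meets_kpi(txn, response_s, availability_pct):
    """Check one measured result against the transaction's KPI targets."""
    kpi = KEY_TRANSACTIONS[txn]
    return (response_s <= kpi["max_response_s"]
            and availability_pct >= kpi["min_availability_pct"])
```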
3. Simulating various network conditions
Depending on the device and OS, mobile applications may behave differently on different carrier network infrastructures and technologies (3G, 4G, WiFi). According to Shunra, an authority in network virtualization for software testing, “Production network conditions such as inconsistent bandwidth, high jitter, increased latency and packet loss all work to degrade application performance."
Thus, it is imperative to analyze the impact of network conditions in your mobile performance testing scenarios. Your Performance team should simulate such network conditions, measure the user-facing KPIs (see above), and capture the network traffic (a PCAP file), which can serve as input to the optimization analysis phase as well as a baseline for the server load test script.
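One widely available way to simulate such conditions is Linux traffic control (tc) with the netem queueing discipline. The sketch below assumes a Linux test host, root privileges and an interface named eth0; the 3G-like numbers are illustrative only:

```python
import subprocess

def apply_network_profile(interface="eth0", delay_ms=300, jitter_ms=50,
                          loss_pct=1.0, rate_kbit=750):
    """Shape the test host's interface with tc/netem to emulate a degraded,
    3G-like network (assumes Linux and root; adjust values per profile)."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%",
         "rate", f"{rate_kbit}kbit"],
        check=True,
    )

def clear_network_profile(interface="eth0"):
    """Remove the emulated impairment and restore normal networking."""
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"],
                   check=True)
```

Commercial network virtualization tools (such as Shunra's) provide richer carrier profiles; netem is simply a common baseline for this kind of simulation.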
4. Building server load
When presented with material load, servers begin to perform in a way that may impact the end user experience. For instance, packets sent to the mobile device may be delayed, lost, de-sequenced, etc. The server load test measures the mobile end user experience on a real device while applying material load on the server farm.
Measurements are taken by running recurring tests on real devices throughout the test session and measuring the KPIs users care about. You can make the synthetic traffic load “mobile-relevant” by converting the captured PCAP file into a load script. Make sure you are able to isolate device issues from network issues and from application functional defects, which may be specific to a device or mobile OS.
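A minimal sketch of that PCAP-to-load-script conversion, assuming the capture contains plain (unencrypted) HTTP and using the scapy and requests libraries; TLS traffic cannot be replayed this way, and the concurrency numbers are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
import requests                               # pip install requests
from scapy.all import rdpcap, TCP, Raw        # pip install scapy

def extract_get_paths(pcap_file):
    """Pull request paths for plain-HTTP GETs recorded in the sniffer PCAP."""
    paths = []
    for pkt in rdpcap(pcap_file):
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw])
            if payload.startswith(b"GET "):
                paths.append(payload.split(b" ")[1].decode(errors="replace"))
    return paths

def replay(base_url, paths, concurrency=50, rounds=10):
    """Apply synthetic, 'mobile-relevant' load by replaying captured paths."""
    def hit(path):
        r = requests.get(base_url + path, timeout=30)
        return r.status_code, r.elapsed.total_seconds()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(hit, paths * rounds))
```

Because the load script is derived from real device traffic, the request mix on the server matches what actual mobile users generate, rather than a hand-written approximation.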
5. Analyzing, debugging and optimizing
The output of the performance cycle based on the steps above would typically be a detailed report showing the key statistics (response time, availability and others) per transaction and per real mobile device under different conditions. Such a report should highlight the bottlenecks in terms of network traffic, detailed device vitals, etc. Your Performance team needs to analyze and triage these reports in order to pinpoint the root cause of any performance issues (which may be network-related or tied to the device itself).
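A minimal sketch of that triage step, assuming test results are available as (device, transaction, response time, success) samples; the field layout is an assumption about how your tooling exports results:

```python
import statistics
from collections import defaultdict

def summarize(samples):
    """Print median/p90 response time and availability per (device, transaction),
    given an iterable of (device, txn, response_time_s, success) samples."""
    groups = defaultdict(list)   # successful response times per (device, txn)
    ok = defaultdict(int)
    total = defaultdict(int)
    for device, txn, rt, success in samples:
        key = (device, txn)
        total[key] += 1
        if success:
            ok[key] += 1
            groups[key].append(rt)
    for key, times in sorted(groups.items()):
        p90 = statistics.quantiles(times, n=10)[-1] if len(times) > 1 else times[0]
        availability = 100.0 * ok[key] / total[key]
        print(f"{key[0]:>12} {key[1]:>10}  median={statistics.median(times):.2f}s  "
              f"p90={p90:.2f}s  availability={availability:.1f}%")
```

Breaking the numbers out per device and per transaction is what lets you separate device-specific regressions from server-side or network bottlenecks.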
Bottom Line
As the mobile market continues to grow, new operating systems and a growing number of vendors are joining this rapidly expanding ecosystem. In the world of enterprise mobility, the importance of mobile performance testing continues to increase and has necessarily become a key part of the software development life cycle. By employing the right testing strategy and tools, and enabling access to the widest variety of real devices and simulated networks, your organization can meet the challenges of mobile application performance, assure end-user satisfaction and optimize business results.
Industry News
MacStadium announced the launch of Orka Engine, the latest addition to its Orka product line.
Elastic announced its AI ecosystem to help enterprise developers accelerate building and deploying their Retrieval Augmented Generation (RAG) applications.
Red Hat introduced new capabilities and enhancements for Red Hat OpenShift, a hybrid cloud application platform powered by Kubernetes, as well as the technology preview of Red Hat OpenShift Lightspeed.
Traefik Labs announced API Sandbox as a Service to streamline and accelerate mock API development, and Traefik Proxy v3.2.
Kubiya announced Captain Kubernetes, an AI-powered teammate designed to simplify Kubernetes management with natural language interaction and autonomous, self-healing capabilities.
Solo.io is donating its open source API Gateway, Gloo Gateway, to the Cloud Native Computing Foundation (CNCF) to further its mission of building a complete omni-gateway connectivity solution.
LaunchDarkly announced a new approach to software delivery—Guarded Releases—that empowers organizations to ship with confidence and manage risk proactively.
Diagrid announced details of the upcoming release of Dapr 1.15, a Cloud Native Computing Foundation project maintained by Diagrid, Microsoft, Intel, Alibaba, and others.
Fermyon™ Technologies announced the release of Spin 3.0, enabling enterprises to quickly move toward more sophisticated production applications based on WebAssembly (Wasm).
Mirantis announced Mirantis Kubernetes Engine (MKE) 4, the latest evolution in its long-established product line that sets the standard for secure enterprise Kubernetes.
Cequence Security announced the launch of its new API Security Assessment Services.
Pulumi announced improvements including major updates to the EKS provider supporting Amazon Linux 2023 and Security Groups for pods, the release of Pulumi Kubernetes Operator 2.0 with dedicated workspace pods, Pulumi ESC integration with External Secrets Operator, and a new Kubernetes-native deployment agent for enhanced security and scalability.
Loft Labs announced the public beta of vCluster Cloud, a managed solution that simplifies and reduces the costs of Kubernetes clusters.
DevZero announced DXI (Developer Experience Index), an initiative aimed at transforming developer productivity by unifying engineering throughput and operational metrics.
Horizon3.ai announced the release of NodeZero™ Kubernetes Pentesting, a new capability available to all NodeZero users.