Mobile is quickly becoming the de facto platform for many of the business-critical applications deployed by banks, insurance companies, and other enterprise organizations. Ensuring an optimal end-user experience therefore demands a robust mobile performance testing environment.
Accordingly, mobile enterprises require an environment that provides insight into the key mobile application performance indicators, such as response time, availability, and the health of business-critical transactions, across the various devices (and operating systems) in use on different networks and carriers.
Let's begin with the current situation, which clearly underscores the biggest challenge enterprises face in delivering successful mobile business apps.
Recent surveys suggest that end users are highly conscious of application performance, in some cases even more so than functionality. Performance gaps across different networks and locations are evident, which makes it essential to performance-test the application on real devices before launch and to gather enough insight to optimize it.
Based on our experience with mobile enterprises, an efficient mobile performance test strategy should rest (at a minimum) on the following five pillars:
1. Defining the supported devices and operating systems
Mobile devices have a significant impact on application performance. Smartphones and tablets are, in essence, small computing devices that deliver powerful capabilities, yet they are highly constrained in terms of resources. The problem is that end users expect and demand the same level of performance from their mobile apps (if not better) as they get on their desktop computers. Selecting the right mix of mobile devices to test on prior to launch is therefore one of the basic criteria for effective performance testing.
In our experience, different devices (and even the same device running a different OS version) can respond very differently to degraded network conditions or to server load.
Note that in order to stay in sync with your user community, the list of devices supported by your application should be dynamic, changing in response to market trends (new devices, OS versions, etc.). Update this list quarterly, and make sure your testing plan takes it into account, as the sketch below illustrates.
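To make this concrete, here is a minimal sketch of a device test matrix kept as data and refreshed quarterly. The device names and market-share figures are hypothetical placeholders; a real matrix would be derived from your own usage analytics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Device:
    model: str           # handset model name
    os: str              # operating system family
    os_version: str      # version actually seen in the field
    market_share: float  # fraction of your user base (hypothetical)

# Hypothetical snapshot of the user base; refresh quarterly from analytics.
FLEET = [
    Device("Galaxy S24", "Android", "14", 0.22),
    Device("Pixel 8", "Android", "14", 0.09),
    Device("iPhone 15", "iOS", "17.4", 0.31),
    Device("iPhone 13", "iOS", "16.7", 0.18),
]

def test_matrix(fleet, min_share=0.05):
    """Select devices that cover enough of the user base to justify testing."""
    return [d for d in fleet if d.market_share >= min_share]

if __name__ == "__main__":
    for d in test_matrix(FLEET):
        print(f"{d.model} ({d.os} {d.os_version}): {d.market_share:.0%}")
```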
2. Selecting the key business transactions
Select the functions of your application that users care about the most, and focus your testing on them using clear, realistic KPIs. In the initial testing stage, it is recommended to isolate your test environment and see how these business transactions behave in a “clean room” environment. (Subsequently, these scenarios should also be tested in a production-like environment with interruptions such as incoming calls, messages, etc.) A sketch of such KPI definitions follows.
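As an illustration, here is a minimal sketch of how key transactions and their KPI thresholds might be declared and checked. The transaction names and thresholds are hypothetical; real targets should come from your own baselines.

```python
# Hypothetical KPI targets for the transactions users care about most.
KEY_TRANSACTIONS = {
    "login":            {"p95_response_s": 2.0, "availability": 0.999},
    "account_overview": {"p95_response_s": 3.0, "availability": 0.995},
    "transfer_funds":   {"p95_response_s": 4.0, "availability": 0.999},
}

def check_kpis(transaction, p95_response_s, availability):
    """Return the list of KPI violations for one measured transaction."""
    target = KEY_TRANSACTIONS[transaction]
    violations = []
    if p95_response_s > target["p95_response_s"]:
        violations.append(
            f"p95 {p95_response_s:.1f}s exceeds {target['p95_response_s']}s")
    if availability < target["availability"]:
        violations.append(
            f"availability {availability:.3f} below {target['availability']}")
    return violations

# Example: a login p95 of 2.4 s against a 2.0 s target reports one violation.
print(check_kpis("login", p95_response_s=2.4, availability=0.9995))
```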
3. Simulating various network conditions
Mobile applications may behave differently depending on the device, the OS, and the type of carrier network infrastructure/technology (3G, 4G, WiFi). According to Shunra, an authority in network virtualization for software testing, “Production network conditions such as inconsistent bandwidth, high jitter, increased latency and packet loss all work to degrade application performance.”
Thus, it is imperative to analyze the impact of network conditions in your mobile performance testing scenarios. Your Performance team should simulate such network conditions, measure the user-facing KPIs (see above), and capture the network traffic (as a PCAP file), which can serve both as input to the optimization analysis phase and as a baseline for the server load test script. One way to emulate degraded networks is sketched below.
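As one possible approach, the following sketch uses the Linux tc/netem facility to emulate 3G-like latency, jitter, and loss, and tcpdump to capture the resulting traffic. It assumes a Linux gateway with root privileges on the path to the device under test; the interface name eth0 is a placeholder.

```python
import subprocess

IFACE = "eth0"  # placeholder: the interface carrying the device's traffic

def degrade_network(delay_ms=100, jitter_ms=20, loss_pct=1.0):
    """Apply latency, jitter, and packet loss with Linux tc/netem (needs root)."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms", "loss", f"{loss_pct}%"],
        check=True)

def restore_network():
    """Remove the netem qdisc, restoring normal network conditions."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root", "netem"],
                   check=True)

def capture_traffic(pcap_path, seconds=60):
    """Record the test run's traffic to a PCAP file for later analysis."""
    subprocess.run(["timeout", str(seconds), "tcpdump", "-i", IFACE,
                    "-w", pcap_path])

if __name__ == "__main__":
    degrade_network()
    try:
        capture_traffic("baseline_3g.pcap", seconds=30)
        # ... drive the key business transactions on the real device here ...
    finally:
        restore_network()
```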
4. Building server load
Under material load, servers begin to perform in ways that may impact the end-user experience: packets sent to the mobile device may be delayed, lost, delivered out of order, and so on. The server load test measures the mobile end-user experience on a real device while material load is applied to the server farm.
Measurements on real devices are taken by running the same tests repeatedly throughout the test session and tracking the KPIs users care about. You can make the synthetic traffic load “mobile-relevant” by converting the captured PCAP file into a load script. Make sure you are able to isolate device issues from network issues and from application functional defects, which may be specific to a device or mobile OS. A load-script sketch follows.
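As one example, the sketch below uses Locust (https://locust.io), a common open-source load tool, to generate server load shaped like the key transactions. The endpoints and weights are hypothetical stand-ins for the requests observed in the captured PCAP.

```python
from locust import HttpUser, task, between

class MobileAppUser(HttpUser):
    # Pause 1-5 seconds between tasks to approximate human pacing.
    wait_time = between(1, 5)

    @task(3)  # weight: the most frequent transaction in the captured traffic
    def account_overview(self):
        self.client.get("/api/accounts/overview")

    @task(1)
    def transfer_funds(self):
        self.client.post("/api/transfers",
                         json={"to": "12345", "amount": 10.0})

# Run with, e.g.:  locust -f loadtest.py --host https://staging.example.com
```

While a script like this ramps up load on the server farm, the recurring real-device tests keep measuring the user-facing KPIs, so you can correlate server load with the experience on each device.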
5. Analyzing, debugging and optimizing
The output of a performance cycle based on the steps above would typically be a detailed report showing the key statistics (response time, availability, and others) per transaction and per real mobile device under different conditions. Such a report should highlight the bottlenecks in terms of network traffic, detailed device vitals, etc. Your Performance team needs to analyze and triage these reports in order to pinpoint the root cause of each performance issue, whether it lies in the network or in the device itself. A small analysis sketch follows.
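As a small illustration of that triage, the sketch below computes the summary statistics such a report would contain from raw response-time samples, using only the Python standard library. The sample values are hypothetical.

```python
import statistics

def summarize(transaction, samples_s):
    """Summarize response-time samples (in seconds) for one transaction."""
    cuts = statistics.quantiles(samples_s, n=100)  # 99 percentile cut points
    return {
        "transaction": transaction,
        "median_s": round(statistics.median(samples_s), 2),
        "p95_s": round(cuts[94], 2),  # 95th percentile
        "max_s": max(samples_s),
    }

# Hypothetical samples from one device under degraded 3G conditions.
print(summarize("login", [1.2, 1.4, 1.1, 2.8, 1.3, 1.5, 3.9, 1.2, 1.6, 1.4]))
```

Comparing such summaries across devices, OS versions, and network profiles is what separates a device-specific defect from a network-related slowdown.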
Bottom Line
As the mobile market continues to grow, new operating systems and vendors keep joining an already expansive ecosystem. In the world of enterprise mobility, mobile performance testing has necessarily become a key part of the software development life cycle, and its importance continues to increase. By employing the right testing strategy and tools, and by enabling access to the widest variety of real devices and simulated networks, your organization can meet the challenges of mobile application performance, assure end-user satisfaction, and optimize business results.
Industry News
OutSystems announced the general availability (GA) of Mentor on OutSystems Developer Cloud (ODC).
Kurrent announced availability of public internet access on its managed service, Kurrent Cloud, streamlining the connectivity process and empowering developers with ease of use.
MacStadium highlighted its major enterprise partnerships and technical innovations over the past year. This momentum underscores MacStadium’s commitment to innovation, customer success and leadership in the Apple enterprise ecosystem as the company prepares for continued expansion in the coming months.
Traefik Labs announced the integration of its Traefik Proxy with the Nutanix Kubernetes Platform® (NKP) solution.
Perforce Software announced the launch of AI Validation, a new capability within its Perfecto continuous testing platform for web and mobile applications.
Mirantis announced the launch of Rockoon, an open-source project that simplifies OpenStack management on Kubernetes.
Endor Labs announced a new feature, AI Model Discovery, enabling organizations to discover the AI models already in use across their applications, and to set and enforce security policies over which models are permitted.
Qt Group is launching Qt AI Assistant, an experimental tool for streamlining cross-platform user interface (UI) development.
Sonatype announced its integration with Buy with AWS, a new feature now available through AWS Marketplace.
Endor Labs, Aikido Security, Arnica, Amplify, Kodem, Legit, Mobb and Orca Security have launched Opengrep to ensure static code analysis remains truly open, accessible and innovative for everyone.
Progress announced the launch of Progress Data Cloud, a managed Data Platform as a Service designed to simplify enterprise data and artificial intelligence (AI) operations in the cloud.
Sonar announced the release of its latest Long-Term Active (LTA) version, SonarQube Server 2025 Release 1 (2025.1).
Idera announced the launch of Sembi, a multi-brand entity created to unify its premier software quality and security solutions under a single umbrella.
Postman announced the Postman AI Agent Builder, a suite empowering developers to quickly design, test, and deploy intelligent agents by combining LLMs, APIs, and workflows into a unified solution.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of CubeFS.