APMdigest asked experts from across Application Performance Management (APM) and related markets for their recommendations on the best ways to ensure application performance before app rollout. The second set of six recommendations includes testing and analytics.
Start with Part 1 of this list
7. PERFORMANCE TESTING EARLY IN THE DEVELOPMENT LIFECYCLE
The best way to minimize the chances of performance defects creeping into production is to implement a comprehensive performance assurance strategy across IT. Assess the performance risk of every new project and change request as early as you can in the application lifecycle. Make performance testing a mandatory quality gate for all releases.
Ian Molyneaux
Head of Performance, Intechnica
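As a minimal illustration of such a quality gate, the Python sketch below measures request latencies against a target endpoint and fails the build when the 95th percentile exceeds a budget. The endpoint, sample count, and budget are hypothetical placeholders, not values suggested by the contributor.

```python
"""Minimal sketch of a performance quality gate for a release pipeline.

The endpoint, sample size, and latency budget are illustrative placeholders.
"""
import statistics
import sys
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"   # hypothetical endpoint
SAMPLES = 50
P95_BUDGET_MS = 300                           # hypothetical budget

def measure_once(url: str) -> float:
    """Return the wall-clock latency of one request, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10):
        pass
    return (time.perf_counter() - start) * 1000

def main() -> int:
    latencies = sorted(measure_once(TARGET_URL) for _ in range(SAMPLES))
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    print(f"p95={p95:.1f} ms, median={statistics.median(latencies):.1f} ms")
    # Fail the build (non-zero exit) if the latency budget is exceeded.
    return 1 if p95 > P95_BUDGET_MS else 0

if __name__ == "__main__":
    sys.exit(main())
```

Wired into a release pipeline, a non-zero exit from a script like this blocks the release until the regression is investigated.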
Ensure app performance before rollout by enabling continuous user testing, performance testing and load testing as early in the development cycle as possible. Doing this means getting the most basic end-to-end functionality of any app up and running as quickly as possible, even if it's against mock back-end services. This allows the business, testers, developers and operations to see the whole picture and avoid performance surprises by interacting with a working 3D model of the product while working towards the "minimum likable product" or first release.
Ken Godskind
Chief Blogger and Analyst, APMexaminer.com
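One way to get that basic end-to-end flow running early is to stand up mock back-end services. The stand-alone Python stub below, with invented routes and payloads, is a minimal sketch of that idea.

```python
"""Minimal sketch of a mock back-end service so an app's end-to-end flow can
be exercised before the real services exist. Routes and payloads are
illustrative."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for real back-end services (hypothetical).
MOCK_ROUTES = {
    "/api/orders": [{"id": 1, "status": "shipped"}],
    "/api/customers": [{"id": 42, "name": "Test User"}],
}

class MockBackend(BaseHTTPRequestHandler):
    def do_GET(self):
        body = MOCK_ROUTES.get(self.path)
        if body is None:
            self.send_error(404, "no mock defined for this route")
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Front-end code, testers and load scripts can all point at this stub.
    HTTPServer(("0.0.0.0", 8081), MockBackend).serve_forever()
```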
Too often performance monitoring is added as an afterthought instead of being baked into the application during the deployment process. Then, when problems arise, they must be resolved without reference to performance baselines or structured data on infrastructure dependencies. To ensure top performance, don't wait to add performance monitoring!
Rajesh Krishnan
Director of Marketing, GroundWork
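A minimal sketch of baking baselines into deployment might look like the following Python script, which samples response times at deploy time and records them alongside infrastructure-dependency metadata. The endpoints and dependency names are invented for illustration.

```python
"""Sketch: capture a performance baseline as part of the deployment step so
later incidents can be compared against known-good numbers. Endpoints and
dependency names are illustrative placeholders."""
import json
import time
import urllib.request
from datetime import datetime, timezone

ENDPOINTS = ["http://localhost:8080/health", "http://localhost:8080/api/orders"]
DEPENDENCIES = {"db": "postgres-primary", "cache": "redis-01"}   # hypothetical

def mean_latency_ms(url: str, samples: int = 20) -> float:
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10):
            pass
        total += time.perf_counter() - start
    return round(total / samples * 1000, 1)

baseline = {
    "captured_at": datetime.now(timezone.utc).isoformat(),
    "dependencies": DEPENDENCIES,
    "mean_latency_ms": {url: mean_latency_ms(url) for url in ENDPOINTS},
}

# Store the baseline alongside the release artifacts for later comparison.
with open("performance_baseline.json", "w") as fh:
    json.dump(baseline, fh, indent=2)
print(json.dumps(baseline, indent=2))
```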
8. LOAD TESTING
Load testing with APM is a must. You need to make sure that all of the tiers of your application – including server-side applications and microservices, the JavaScript layer running in a browser (if a web app), and the native client layer (if an iOS or Android app) – perform well when subjected to multiple users. It is particularly helpful to use production load profiles to accurately determine what amount of load to use in your tests.
Al Sargent
Sr. Director, Product Marketing, New Relic
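For the server-side tier, a load test driven by a production load profile might be sketched with Locust (assuming the locust package is available) roughly as follows; the paths, task weights, and think times are illustrative stand-ins for a real profile.

```python
"""Minimal Locust sketch for load testing the server-side tier. Paths and
task weights stand in for a production load profile and are illustrative."""
from locust import HttpUser, task, between

class ProductionProfileUser(HttpUser):
    # Think time between requests, roughly matching observed user behaviour.
    wait_time = between(1, 5)

    @task(8)                      # ~80% of traffic in the assumed profile
    def browse_catalog(self):
        self.client.get("/api/products")

    @task(2)                      # ~20% of traffic in the assumed profile
    def view_cart(self):
        self.client.get("/api/cart")

# Run with e.g.:  locust -f locustfile.py --host https://staging.example.com
```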
Undoubtedly the best process to ensure peak performance in production is to load test the application with production-like load in a QA environment. This is actually easier said than done, primarily because QA and production environments are very different in most organizations in terms of server resources, amount of data and, perhaps, network configuration. But if you don't want any surprises in production, it makes sense to thoroughly vet the application in a lower environment. A reliable load testing tool and an APM tool are a must. Be meticulous in recording and reporting performance metrics.
Karun Subramanian
Application Support Expert, www.karunsubramanian.com
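As one way to drive production-like load against a QA environment while meticulously recording metrics, the standard-library Python sketch below fires concurrent requests, writes every sample to CSV, and prints summary percentiles. The URL and concurrency figures are placeholders.

```python
"""Sketch of driving load against a QA environment while recording
per-request metrics. URL and concurrency values are illustrative."""
import csv
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://qa.example.com/api/orders"     # hypothetical QA endpoint
CONCURRENCY, REQUESTS = 20, 400

def timed_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=30) as resp:
            status = resp.status
    except Exception:
        status = "error"
    return (time.time(), status, (time.perf_counter() - start) * 1000)

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_request, range(REQUESTS)))

# Record every sample so the run can be reported and compared later.
with open("qa_load_results.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["timestamp", "status", "latency_ms"])
    writer.writerows(results)

latencies = sorted(r[2] for r in results if r[1] != "error")
print(f"median={statistics.median(latencies):.0f} ms, "
      f"p95={latencies[int(0.95 * (len(latencies) - 1))]:.0f} ms")
```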
Establish a test environment to simulate user transactions with user loads for capacity and stress testing. Trace transactions to identify real-time performance bottlenecks via dynamic code instrumentation for library, method, or SQL invocation. And correlate application degradation and failures with the infrastructure (network or storage) to determine root causes.
Sridhar Iyengar
VP Product Management, ManageEngine
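A lightweight flavor of such transaction tracing can be sketched with a timing decorator that logs slow method or SQL invocations, as below; the threshold and example function are invented, and a commercial APM agent would instrument code far more deeply.

```python
"""Sketch of lightweight transaction tracing: a decorator that times method
or SQL invocations and logs the slow ones. Threshold and example function
are illustrative."""
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
SLOW_MS = 200   # hypothetical slow-call threshold

def traced(func):
    """Wrap a callable and record how long each invocation takes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > SLOW_MS:
                logging.warning("SLOW %s took %.1f ms", func.__qualname__, elapsed_ms)
            else:
                logging.info("%s took %.1f ms", func.__qualname__, elapsed_ms)
    return wrapper

@traced
def fetch_orders(customer_id: int):
    # Placeholder for a real SQL call, e.g. cursor.execute(...)
    time.sleep(0.05)
    return [{"customer": customer_id, "order": 1}]

if __name__ == "__main__":
    fetch_orders(42)
```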
Ensure that the appropriate tooling is in place, ideally in Dev, Test and Production, to provide clear visibility and rapid triage of application performance under load.
Ian Molyneaux
Head of Performance, Intechnica
9. DATABASE TESTING
Everyone understands the importance of optimizing application performance. But if you ignore the performance of the database that drives your application, your end user's experience will suffer. Setting performance baselines for your database, and monitoring them as you roll out your application, is absolutely essential. This may include running production system stress tests to ensure your database can handle the new data loads, setting thresholds to catch inefficient or poorly performing queries, and tracking real user response times to ensure a consistent user experience throughout the rollout process.
Josh Stephens
VP of Product Strategy, Idera
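As a small illustration of query-level baselines, the sketch below times a couple of queries against an in-memory SQLite stand-in and compares them to assumed budgets; the queries, data, and thresholds are placeholders rather than anything from the contributor.

```python
"""Sketch of checking database queries against latency baselines before
rollout. Uses an in-memory SQLite database as a stand-in; queries and
thresholds are illustrative placeholders."""
import sqlite3
import time

# Baseline budgets (ms) you would set from a known-good release.
QUERY_BASELINES = {
    "orders_by_customer": ("SELECT * FROM orders WHERE customer_id = 1", 50),
    "daily_totals": ("SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id", 100),
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i % 10, i * 1.5) for i in range(10_000)])

for name, (sql, budget_ms) in QUERY_BASELINES.items():
    start = time.perf_counter()
    conn.execute(sql).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    verdict = "OK" if elapsed_ms <= budget_ms else "OVER BASELINE"
    print(f"{name}: {elapsed_ms:.1f} ms (budget {budget_ms} ms) -> {verdict}")
```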
10. CLOUD TESTING
Organizations adopting SaaS apps like Office 365 or Google Apps often don't realize that their internet connectivity isn't up to the increased traffic. This can totally derail their migration to the cloud. To avoid this, teams need to thoroughly test cloud app availability and performance from each of their locations before, during, and after roll-out begins, so they can detect and correct configuration and bandwidth issues.
Patrick Carey
VP Product Management and Marketing, Exoprise
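A rudimentary version of such location-based testing could be a probe script run from each office, like the Python sketch below; the endpoints shown are examples, and a real deployment would schedule the probes and trend the results over time.

```python
"""Sketch of a probe each office location could run before, during and after
a cloud migration to spot availability or bandwidth problems. The endpoint
list is illustrative."""
import time
import urllib.request

CLOUD_ENDPOINTS = [
    "https://outlook.office365.com",   # example SaaS endpoints
    "https://www.googleapis.com",
]

def probe(url: str) -> None:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=15) as resp:
            elapsed = (time.perf_counter() - start) * 1000
            print(f"{url}: HTTP {resp.status} in {elapsed:.0f} ms")
    except Exception as exc:
        print(f"{url}: UNREACHABLE ({exc})")

if __name__ == "__main__":
    # Run this from each branch office and compare results across locations.
    for endpoint in CLOUD_ENDPOINTS:
        probe(endpoint)
```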
11. IT OPERATIONS ANALYTICS (ITOA)
Ensuring optimum application performance requires implementing a tool that provides real-time, automated data collection with deep analytics insights that allow for swift remediation. Prior to deployment, this kind of performance analytics solution can also be valuable in forecasting future capacity demands, and serve as the single source of truth by which DevOps and IT administrators collaborate more closely to ensure lean operational team processes are factored into new product rollout plans and designs.
Atchison Frazer
VP of Marketing, Xangati
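To give a flavor of the capacity-forecasting side, the sketch below fits a linear trend to invented weekly utilization figures and projects when an assumed 80% ceiling would be hit (it relies on statistics.linear_regression, available in Python 3.10+).

```python
"""Sketch of forecasting future capacity demand from collected utilization
data. The sample data points and the 80% ceiling are invented for
illustration. Requires Python 3.10+."""
from statistics import linear_regression

# Weekly peak CPU utilization (%) collected by a monitoring tool (illustrative).
weeks = list(range(1, 13))
peak_cpu = [41, 43, 44, 47, 49, 50, 53, 55, 56, 59, 61, 63]

slope, intercept = linear_regression(weeks, peak_cpu)

# Project forward and flag when the assumed capacity ceiling is reached.
CEILING = 80.0
for week in range(13, 40):
    projected = slope * week + intercept
    if projected >= CEILING:
        print(f"Projected to hit {CEILING}% CPU around week {week} "
              f"({projected:.1f}%); plan extra capacity before rollout.")
        break
else:
    print("No capacity ceiling reached within the forecast horizon.")
```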
Between new projects and updates to existing ones, DevOps teams can deploy up to 20 applications each day! To ensure optimal performance, rolling out this code requires testing for glitches, which can be tedious and time consuming. Automating this process with machine learning-powered anomaly detection software allows DevOps teams to identify issues in real time before the apps go live. This eliminates the need to write numerous rules and set several thresholds in the hope of anticipating all potential problems, making the process not only faster but also more accurate. Because the technology flags all anomalous behavior, you can find and fix issues you didn't even think to look for – not just those related to a pre-determined set of KPIs.
Kevin Conklin
VP of Marketing, Prelert
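A toy version of this idea, assuming scikit-learn is available, is sketched below: an Isolation Forest flags outlying response times in a synthetic series without any hand-written rules or thresholds. The data is invented, and the model choice is merely illustrative of the machine-learning approach described.

```python
"""Sketch of machine-learning anomaly detection over pre-production metrics
(assumes scikit-learn). The synthetic response-time series is invented; in
practice you would feed metrics from your test runs."""
import random
from sklearn.ensemble import IsolationForest

random.seed(0)
# Mostly normal response times (ms), with a few injected spikes.
response_times = [random.gauss(120, 10) for _ in range(500)]
response_times[100] = 900
response_times[350] = 1200

X = [[value] for value in response_times]           # one feature per sample
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)                       # -1 marks an anomaly

anomalies = [(i, round(v)) for i, (v, label) in enumerate(zip(response_times, labels))
             if label == -1]
print("Flagged samples (index, ms):", anomalies)
```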
There have been many approaches that have sought to ensure application performance, such as Performance Testing and Capacity Management. However, despite automated tools for streamlining the release management effort, errors can still slip into the process. The fact that you first release to a test environment and then to production doesn't ensure that these releases are consistent, since there are many configurations and dependencies specific to each environment. Release Validation based on IT Operations Analytics is an essential step toward ensuring application performance. By analyzing consistency across production and pre-production environments, you can make certain that your planning and testing efforts are based on the right configuration. And once a change is certified in pre-production, verifying what was actually deployed lets you identify any changes that were not implemented through the automated deployment tool.
Sasha Gilenson
CEO, Evolven
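A bare-bones release-validation check of this kind can be as simple as diffing the configuration collected from each environment, as in the Python sketch below; the configuration keys and drifted values are invented placeholders.

```python
"""Sketch of a release-validation check that compares configuration across
pre-production and production. The configuration dictionaries are invented;
in practice they would be collected from each environment."""

preprod_config = {
    "jvm.heap": "4g",
    "db.pool_size": "50",
    "feature.new_checkout": "on",
    "cache.ttl_seconds": "300",
}
prod_config = {
    "jvm.heap": "2g",              # drifted from what was certified
    "db.pool_size": "50",
    "cache.ttl_seconds": "300",    # feature.new_checkout missing in prod
}

def diff_environments(certified: dict, actual: dict) -> None:
    """Print every key whose value differs between the two environments."""
    for key in sorted(set(certified) | set(actual)):
        left, right = certified.get(key), actual.get(key)
        if left != right:
            print(f"DRIFT {key}: pre-prod={left!r} prod={right!r}")

if __name__ == "__main__":
    diff_environments(preprod_config, prod_config)
```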
Dynamic virtualized environments are already straining the capabilities of legacy monitoring tools, and container technology, while speeding deployment of new code, will also increase time spent troubleshooting exponentially. So what can DevOps teams do to prevent glitches from making it into production? Incident management tools, used to monitor events and alerts across the stack in production, continue to be extended into the QA environment to help catch change-related snafus before they affect the service or application in production. These tools, employing autonomics, machine learning and advanced analytics, are well suited to the pre-production phase where “black swan” incidents are most common. They can identify anomalous activity without reliance on burdensome rules or models, and are ideally suited to the dynamic nature of container technology. As IT Operations Analytics (ITOA) tools become more sophisticated, this will prove critical to predicting problems during the QA stage, saving considerable time and resources. And with it, the promise and scale of container technology will be fulfilled.
Phil Tee
Chairman, CEO and Co-Founder, Moogsoft
12. LOG ANALYSIS
How do you ensure optimum application performance before the app goes live? Try creating and monitoring application logs to troubleshoot pre-production issues.
Sridhar Iyengar
VP Product Management, ManageEngine
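One minimal way to get analyzable logs in place before go-live is structured, JSON-per-line application logging, sketched below with Python's standard logging module; the file name, fields, and example component are illustrative choices.

```python
"""Sketch of wiring structured application logs into a pre-production build
so issues can be traced before go-live. File name and fields are
illustrative choices."""
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, easy for analytics tools to parse."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "component": record.name,
            "message": record.getMessage(),
            "duration_ms": getattr(record, "duration_ms", None),
        })

handler = logging.FileHandler("app-preprod.log")
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

log = logging.getLogger("checkout")
start = time.perf_counter()
time.sleep(0.05)                                  # placeholder for real work
log.info("order submitted",
         extra={"duration_ms": (time.perf_counter() - start) * 1000})
```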
Offering analytics on log data before day one of your application's ship date is critical to determining which components are contributing to an issue, and it allows you to react quickly.
Brandon Hale
Product Manager, Advanced Networking, SevOne
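Building on that kind of structured log, a first pass at log analytics can be a short script that counts errors and averages durations per component, as sketched below with invented sample lines.

```python
"""Sketch of simple analytics over application log data to see which
components contribute to a problem. The sample log lines are invented and
mirror the JSON-per-line format sketched above."""
import json
from collections import Counter, defaultdict

sample_lines = [
    '{"level": "ERROR", "component": "checkout", "duration_ms": 2400}',
    '{"level": "INFO",  "component": "catalog",  "duration_ms": 80}',
    '{"level": "ERROR", "component": "checkout", "duration_ms": 1900}',
    '{"level": "WARN",  "component": "payments", "duration_ms": 650}',
]

errors_by_component = Counter()
durations = defaultdict(list)

for line in sample_lines:                      # in practice: open("app-preprod.log")
    record = json.loads(line)
    durations[record["component"]].append(record.get("duration_ms") or 0)
    if record["level"] == "ERROR":
        errors_by_component[record["component"]] += 1

for component, count in errors_by_component.most_common():
    avg = sum(durations[component]) / len(durations[component])
    print(f"{component}: {count} errors, avg {avg:.0f} ms")
```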