The end of 2019 is in sight, which makes this the perfect time to review the financial impact that DevOps has had on your business so far. Formulating your lessons learned will help you make the right adjustments and get the most out of 2020.
For many organizations, it is challenging to quantify the return on investment in adopting DevOps processes. This may be due to a variety of reasons, most notably a lack of consistent data, a focus on metrics that aren't tied to goals or process issues, and data that isn't collected quickly enough to enable real-time feedback loops. You want to validate the success of your DevOps transformation, but you need to figure out how to put a numeric value on it as a whole.
Among the many benefits of DevOps, two in particular stand out: the ability to deliver value more quickly, and to do so with consistently high quality and security. These translate into direct benefits for both your business and your customers.
A primary enabler of these benefits is automation, especially test automation. With the "shift-right" of testing toward production and the "shift-left" to earlier in development, automated testing is a fundamental and pervasive DevOps activity: it directly impacts ROI and provides concrete metrics that allow organizations to monitor and improve that return.
How to Quantify ROI
The nature of DevOps means that we can measure the effect of our actions very quickly. With updates delivered into production almost continuously, it's possible to directly measure the impact of a software update on the end-user and show how a change to the software delivery process affects the efficiency of the software delivery pipeline.
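As a simple illustration, two such pipeline-level signals, deployment frequency and lead time from commit to production, can be computed from a handful of deployment records. The sketch below uses invented timestamps and a hypothetical record format; in practice you would pull this data from your CI/CD tooling.

```python
# A rough sketch (not a definitive implementation): deployment frequency and
# lead time computed from a few invented deployment records. In practice,
# pull these timestamps from your CI/CD tooling.
from datetime import datetime
from statistics import mean

# Hypothetical records: when a change was committed and when it reached production.
deployments = [
    {"committed": datetime(2019, 12, 2, 10, 0), "deployed": datetime(2019, 12, 2, 16, 30)},
    {"committed": datetime(2019, 12, 3, 9, 15), "deployed": datetime(2019, 12, 3, 11, 45)},
    {"committed": datetime(2019, 12, 4, 14, 0), "deployed": datetime(2019, 12, 5, 10, 0)},
]

# Lead time: hours from commit to production for each change.
lead_times_h = [(d["deployed"] - d["committed"]).total_seconds() / 3600
                for d in deployments]

# Deployment frequency over the observed window (at least one day).
days_observed = max((deployments[-1]["deployed"] - deployments[0]["committed"]).days, 1)

print(f"Deployments per day: {len(deployments) / days_observed:.1f}")
print(f"Mean lead time (commit to production): {mean(lead_times_h):.1f} h")
```

Tracked before and after a process change, numbers like these show whether the change actually made the pipeline more efficient.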
Since testing-related metrics can be directly linked to value, they should be tracked by product teams and aggregated to guide strategy. CIOs and other senior executives must connect and map this data across their portfolios to optimize business decisions and align execution with planning. Here are three categories of measurements that will help you understand how testing is impacting your business outcomes.
Delivery Speed
The quicker the team is able to deliver a user story into production, the more productive the team is. In many organizations, throughput is hampered by manual testing, which is typically time-consuming and error-prone. While there is still a place for manual exploratory testing, repetitive checking is an ideal candidate for automation. After the initial investment in automating a test, it can be run quickly every time a code change is made, delivering significant productivity benefits.
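To make that concrete, here is a minimal sketch of the kind of repetitive check that pays to automate: a small pytest test that runs in seconds on every commit. The discount function and its rules are invented purely for illustration; a real suite would import your application code instead of defining it inline.

```python
# test_discount.py -- a minimal sketch of a repetitive check worth automating.
# The apply_discount function and its rules are invented for illustration;
# a real suite would import application code instead of defining it here.
import pytest


def apply_discount(total: float, percent: float) -> float:
    """Stand-in for application code: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)


@pytest.mark.parametrize(
    "total, percent, expected",
    [
        (100.0, 0, 100.0),   # no discount
        (100.0, 10, 90.0),   # typical case
        (80.0, 25, 60.0),    # larger discount
    ],
)
def test_apply_discount(total, percent, expected):
    assert apply_discount(total, percent) == expected


def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```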
While automating tests and measuring the time a test takes to run is a good start, teams should also be looking at considerations such as:
■ Are your testing environment and framework sitting idle much of the time? Think about ways to maximize utilization so you get the most out of that investment.
■ How long does it take to provision testing environments? Can you reduce that time by automating the deployment and configuration of the environment and its test data?
■ How long does it take to recover from a failure, whether it's caused by an issue with the testing framework, a bug in the software, or a configuration problem? How can you reduce that time? (A rough measurement sketch follows this list.)
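One way to start answering these questions is to derive a few simple durations from your pipeline's own records. The sketch below assumes a hypothetical export with timestamps per run; adapt the field names to whatever your CI/CD system actually provides.

```python
# A rough sketch of how the questions above could be quantified from CI/CD
# pipeline records. The record format (timestamps per run) is hypothetical;
# substitute whatever your CI system actually exports.
from datetime import datetime
from statistics import mean

# Hypothetical export: one dict per pipeline run.
runs = [
    {
        "env_requested": datetime(2019, 12, 2, 9, 0),
        "env_ready": datetime(2019, 12, 2, 9, 22),      # provisioning finished
        "tests_started": datetime(2019, 12, 2, 9, 25),
        "tests_finished": datetime(2019, 12, 2, 9, 55),
        "failed": True,
        "recovered_at": datetime(2019, 12, 2, 11, 10),  # next green run
    },
    {
        "env_requested": datetime(2019, 12, 3, 14, 0),
        "env_ready": datetime(2019, 12, 3, 14, 18),
        "tests_started": datetime(2019, 12, 3, 14, 20),
        "tests_finished": datetime(2019, 12, 3, 14, 48),
        "failed": False,
        "recovered_at": None,
    },
]

def minutes(delta):
    return delta.total_seconds() / 60

# How long does it take to provision a test environment?
provisioning = [minutes(r["env_ready"] - r["env_requested"]) for r in runs]
print(f"Mean provisioning time: {mean(provisioning):.0f} min")

# How long do the tests themselves occupy the environment? Comparing this
# with the environment's total reserved hours gives a rough idle fraction.
test_time = [minutes(r["tests_finished"] - r["tests_started"]) for r in runs]
print(f"Mean test run time: {mean(test_time):.0f} min")

# How long does it take to recover from a failed run (a crude MTTR)?
recovery = [minutes(r["recovered_at"] - r["tests_finished"])
            for r in runs if r["failed"] and r["recovered_at"]]
if recovery:
    print(f"Mean time to recover: {mean(recovery):.0f} min")
```

Tracked over time, these numbers show whether investments in automation and environment management are actually paying off.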
Quality Level
Testing by itself is not enough. With limited resources, we have to ensure that whatever we do delivers maximum value. In addition to testing-related data that can be captured during development, there are some useful quality-related metrics that can be captured from production and used to optimize testing in development. Two key measurements that should be regularly evaluated are:
■ Does your testing cover the most critical parts of the application? Your efforts should be weighted toward the parts of the application that are used most frequently and the key processes that end-users perform within it.
■ How many defects are escaping into production, and what is their impact? Use that information to reduce the number of high-impact defects that reach the end-user undetected and unresolved, and to minimize downtime in production. (A simple escape-rate calculation is sketched after this list.)
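Both measurements boil down to simple ratios. The sketch below uses invented counts and feature names; the real inputs would come from your defect tracker and your production analytics.

```python
# A back-of-the-envelope sketch of the two quality signals discussed above.
# The counts are illustrative placeholders; source them from your defect
# tracker and your production analytics.

# Defect escape rate: share of all defects found only after release.
defects_found_in_testing = 46
defects_escaped_to_production = 4
escape_rate = defects_escaped_to_production / (
    defects_found_in_testing + defects_escaped_to_production
)
print(f"Defect escape rate: {escape_rate:.1%}")   # 8.0%

# Does test effort follow real usage? Compare each feature's share of
# production traffic with its share of automated test cases.
usage_share = {"checkout": 0.55, "search": 0.30, "reports": 0.15}
test_share = {"checkout": 0.25, "search": 0.25, "reports": 0.50}

for feature in usage_share:
    gap = test_share[feature] - usage_share[feature]
    flag = "over-tested" if gap > 0 else "under-tested"
    print(f"{feature}: usage {usage_share[feature]:.0%}, "
          f"tests {test_share[feature]:.0%} ({flag})")
```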
Value to the End-User
No matter how quickly you deliver a flawless feature to your customers, if they don't want it, don't need it, or don't use it, it has no value. The work that went into developing, testing, delivering and operating the feature becomes waste, which DevOps abhors. Avoid this by keeping your end-user involved throughout your software delivery process. Understand their needs, and continuously reevaluate and adapt to changing requirements to ensure that you deliver the right thing:
■ Once a feature is in production, monitor how it's being used, solicit feedback, and use that information to guide further investment.
■ If end-users are simply unaware of the feature, consider how you can proactively tell them about new features and behaviors so that your investment delivers its full return. (One way to estimate feature adoption is sketched below.)
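As a small illustration, feature adoption can be estimated from production usage events. The event log and feature names below are hypothetical stand-ins for whatever your product analytics or application telemetry captures.

```python
# A minimal sketch of measuring whether a newly released feature is actually
# used. The event log below is hypothetical; in practice these events would
# come from your product analytics or application telemetry.
from collections import Counter

# One event per (user, feature) interaction captured in production.
events = [
    ("alice", "export_pdf"), ("bob", "export_pdf"), ("alice", "export_pdf"),
    ("carol", "search"), ("dave", "search"), ("erin", "search"),
]

active_users = {user for user, _ in events}

# Count each user at most once per feature.
users_per_feature = Counter(feature for _, feature in set(events))

for feature, users in users_per_feature.items():
    adoption = users / len(active_users)
    print(f"{feature}: {adoption:.0%} of active users")
```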
Look Back to Look Forward
While it's true that you can't manage something that you aren't measuring, just capturing measurements for the sake of it will not help you improve. Consider the metrics that will help you deliver more value to your customers, and make sure that you are monitoring them on an ongoing basis. Focus on consistently and reliably obtaining relevant and actionable data and use that information to improve your bottom line in 2020.
Industry News
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced significant momentum around cloud native training and certifications with the addition of three new project-centric certifications and a series of new Platform Engineering-specific certifications.
Red Hat announced the latest version of Red Hat OpenShift AI, its artificial intelligence (AI) and machine learning (ML) platform built on Red Hat OpenShift that enables enterprises to create and deliver AI-enabled applications at scale across the hybrid cloud.
Salesforce announced agentic lifecycle management tools to automate Agentforce testing, prototype agents in secure Sandbox environments, and transparently manage usage at scale.
OpenText™ unveiled Cloud Editions (CE) 24.4, presenting a suite of transformative advancements in Business Cloud, AI, and Technology to empower the future of AI-driven knowledge work.
Red Hat announced new capabilities and enhancements for Red Hat Developer Hub, Red Hat’s enterprise-grade developer portal based on the Backstage project.
Pegasystems announced the availability of new AI-driven legacy discovery capabilities in Pega GenAI Blueprint™ to accelerate the daunting task of modernizing legacy systems that hold organizations back.
Tricentis launched enhanced cloud capabilities for its flagship solution, Tricentis Tosca, bringing enterprise-ready end-to-end test automation to the cloud.
Rafay Systems announced new platform advancements that help enterprises and GPU cloud providers deliver developer-friendly consumption workflows for GPU infrastructure.
Apiiro introduced Code-to-Runtime, a new capability using Apiiro’s deep code analysis (DCA) technology to map software architecture and trace all types of software components, including APIs, open source software (OSS), and containers, to code owners while enriching them with business impact.
Zesty announced the launch of Kompass, its automated Kubernetes optimization platform.
MacStadium announced the launch of Orka Engine, the latest addition to its Orka product line.
Elastic announced its AI ecosystem to help enterprise developers accelerate building and deploying their Retrieval Augmented Generation (RAG) applications.
Red Hat introduced new capabilities and enhancements for Red Hat OpenShift, a hybrid cloud application platform powered by Kubernetes, as well as the technology preview of Red Hat OpenShift Lightspeed.
Traefik Labs announced API Sandbox as a Service to streamline and accelerate mock API development, and Traefik Proxy v3.2.