As the old adage goes, what gets measured gets done. Measurement is the key enabler of any DevOps transformation, and yet it's an oft-neglected aspect of projects. Organizations struggle to get beyond the starting blocks when learning how to measure DevOps. As a result, in today's blog, I will share important DevOps metrics your team can use to get started on your journey to measuring positive change.
A challenge I frequently hear from DevOps teams is that there is no clear starting place, no benchmark to begin measuring from. They ask, "If you don't know where you are starting, how do you measure improvement from that place?" My advice is to simply start. Start measuring and your yardstick will appear. You will see, and be able to show, improvement over time. Instead of measures like "the release made it out to production," you'll start being able to report on DevOps metrics that meaningfully impact the business.
Why DevOps Metrics Matter
DevOps metrics are important because they inform data-driven decisions that guide continuous improvement efforts. And, with the right measures, you can link DevOps improvements to measurable impact on larger goals like digital transformation efforts. The DevOps Research and Assessment (DORA) group helpfully provides us with clear metrics to track, and even more insights with its latest report, Accelerate State of DevOps 2019.
DORA's Research-Driven Guidelines
Over the past six years, DORA has worked to develop four DevOps measurements indicative of an organization's software delivery performance and its ability to meet its DevOps goals. This year the group has enriched its research by identifying the capabilities that drive improvement in each of these four key areas. Using DORA's four key metrics as a foundation, let's explore the options and tools available for gathering DevOps metrics.
Deployment Frequency
This metric gauges the throughput of your software delivery process, telling you how often and how quickly new services or features are deployed to production. It says quite a bit about the effectiveness of your process. For example, if there are bottlenecks in the process, measuring deployment frequency will help you unearth them by prompting key questions such as:
■ Are there unnecessary steps in the process or are these steps in the wrong order?
■ What can we automate?
■ Are we the right team to manage this part of the process?
■ Do upstream issues exist that affect our responsiveness?
■ Do we have access to the tools we need to ensure timely deployments?
Over time, deployment frequency should hold steady or increase. In the spirit of continuous improvement, decreases or dips should be reviewed closely to identify (and, where possible, remediate) the root cause. DORA identifies elite performers as those able to deploy on demand, multiple times a day. Conversely, low performers deploy once every one to six months.
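To make this concrete, here is a minimal sketch in Python of counting deployments per ISO week, assuming you can export production deployment timestamps from your CI/CD tool (the timestamps below are purely illustrative):

from collections import Counter
from datetime import datetime

# Hypothetical export of production deployment timestamps from a CI/CD tool.
deployments = [
    datetime(2019, 11, 4, 9, 30),
    datetime(2019, 11, 4, 15, 10),
    datetime(2019, 11, 6, 11, 45),
    datetime(2019, 11, 12, 10, 5),
]

def deployments_per_week(timestamps):
    # Group deployment timestamps by ISO (year, week) and count them.
    weeks = Counter()
    for ts in timestamps:
        iso = ts.isocalendar()
        weeks[(iso[0], iso[1])] += 1
    return dict(weeks)

print(deployments_per_week(deployments))  # e.g. {(2019, 45): 3, (2019, 46): 1}

Plotted week over week, a dip in this count is the trigger to go looking for the kinds of bottlenecks listed above.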
Lead Time for Code Changes
Along with deployment frequency, this metric measures the throughput of the software delivery process. DORA recommends measuring lead time for code changes from the point when code is checked in to the point it is released. This measure can also help you gauge the efficiency of your processes, the effectiveness of your supporting systems, and the general capabilities of your development team. For example, lengthy lead times can unearth inefficiencies in the development process or deployment bottlenecks.
As your team becomes more familiar and efficient with its DevOps processes, you should expect to see your lead time for changes decrease over time. Elite performers' lead time is less than one day, whereas low performers need between one and six months.
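A similarly minimal sketch, assuming you can pair each change's check-in timestamp with its production release timestamp (the pairs below are illustrative):

from datetime import datetime
from statistics import median

# Hypothetical (checked_in, released) timestamp pairs from the VCS and CD pipeline.
changes = [
    (datetime(2019, 11, 4, 9, 0),  datetime(2019, 11, 4, 17, 30)),
    (datetime(2019, 11, 5, 14, 0), datetime(2019, 11, 7, 10, 15)),
    (datetime(2019, 11, 6, 8, 45), datetime(2019, 11, 6, 16, 0)),
]

def median_lead_time(pairs):
    # Median elapsed time from code check-in to production release.
    return median(released - checked_in for checked_in, released in pairs)

print(median_lead_time(changes))  # 8:30:00 for this sample, under DORA's one-day elite threshold

Using the median rather than the mean keeps one unusually slow change from masking the typical experience.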
DevOps Change Failure Rate
DORA flags the change failure rate as a measure of the quality of the release process. It gets to the heart of how many application or service changes, builds, or deployments create a service issue severe enough to require remediation. Ideally, the change failure rate is managed down as close to zero as possible. And, indeed, all but low performers have a change failure rate between zero and 15%.
The IT ticket system is an effective tool for measuring failure rates, tracking for each change whether it succeeded, the impact of any failure, and any required remediation. For example, your ticket system can report whether an approved change led to a service outage that required a rollback.
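A minimal sketch of the calculation, assuming your ticket system can export each change with a flag indicating whether it required remediation (the records below are illustrative):

# Hypothetical change records exported from an IT ticket system.
changes = [
    {"id": "CHG-101", "required_remediation": False},
    {"id": "CHG-102", "required_remediation": True},
    {"id": "CHG-103", "required_remediation": False},
    {"id": "CHG-104", "required_remediation": False},
]

def change_failure_rate(records):
    # Percentage of changes that caused a service issue needing remediation.
    failed = sum(1 for r in records if r["required_remediation"])
    return 100.0 * failed / len(records)

print(f"{change_failure_rate(changes):.1f}%")  # 25.0% for this toy sample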
Time to Restore Service
Once a service-impacting incident is detected, how long does it take to remediate and restore the service? The answer to this question measures system stability. Naturally, you'll want to restore services as quickly as possible, as the cost of outages to the business can be extreme: a Fortune 1000 survey by IDC found that the average cost of an infrastructure failure is $100,000 per hour.
When it comes to this measure, DORA research finds a significant gap between elite and low performers. Elite organizations are able to restore services on average in less than one hour whereas low performers report taking between one week and one month. High and medium performers are able to restore service within a day.
If you issue tickets for system repairs, your ticket system should be able to report on time to restore service, and tracking this metric will give you a distinct trend line illustrating progress over time. That is just one way to measure it; often, the monitoring tools that come with your cloud resources will give you this information. In the best scenarios, failures are self-healing and fail over in milliseconds.
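As one possible starting point, here is a minimal sketch that computes mean time to restore from (detected, restored) timestamp pairs, whether they come from tickets or from monitoring data (the incidents below are illustrative):

from datetime import datetime
from statistics import mean

# Hypothetical service-impacting incidents: (detected, restored) timestamp pairs.
incidents = [
    (datetime(2019, 10, 2, 3, 15), datetime(2019, 10, 2, 4, 0)),
    (datetime(2019, 10, 18, 14, 20), datetime(2019, 10, 18, 14, 50)),
    (datetime(2019, 11, 1, 22, 5), datetime(2019, 11, 2, 0, 35)),
]

def mean_time_to_restore_hours(pairs):
    # Average elapsed time, in hours, from detection to restoration of service.
    seconds = [(restored - detected).total_seconds() for detected, restored in pairs]
    return mean(seconds) / 3600

print(f"{mean_time_to_restore_hours(incidents):.2f} hours")  # 1.25 hours for this sample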
Business Impact
While these four metrics are a very helpful starting place to measure DevOps improvement and success, it is absolutely critical that teams take the initiative to link them to the business. For example, increased deployment frequency allows the DevOps team to address new customer requests faster, growing customer satisfaction. Tracking key metrics matters to the business, and even more so if you can show how DevOps processes are driving improvement over time that directly impacts key corporate goals.
Some tools allow for value stream mapping, which directly ties code changes to the features released. In some cases, e.g. retail applications, you can directly tie the introduction of new features to revenue impact.
DevOps Dashboard
With these four key metrics in hand, you are now in a position to build a dashboard for ongoing tracking and reporting. There is a range of commonly used DevOps metrics dashboard tools available, both commercial and open source, suitable for most needs and budgets.
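As a trivial illustration of what such a dashboard feeds on, the four values can be assembled into a single summary row per reporting period (the figures below are placeholders, not benchmarks, and assume helpers like those sketched above):

# Hypothetical weekly summary assembled from the four DORA metrics.
weekly_report = {
    "deployments_per_week": 12,
    "median_lead_time_hours": 8.5,
    "change_failure_rate_pct": 4.2,
    "mean_time_to_restore_hours": 1.25,
}

for metric, value in weekly_report.items():
    print(f"{metric:30} {value}")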
DORA's four key metrics will not only allow you to show progress and highlight areas for improvement for the DevOps team; because they are common across the industry, they also let you benchmark your team against its peers for external validation of its progress. And you'll have a genuine, numbers-based response when your boss drops by to ask how the team is progressing. Most importantly, with this data in hand, you will be prepared to change course quickly when something you have built isn't delivering a benefit, to leverage the insights you gain from your experiments, and to capitalize on your successes, helping the business reach its ultimate goals.