Key Benchmarks and Considerations for High-Performing Developer Teams
August 24, 2022

Michael Stahnke
CircleCI

In software delivery, there is no question that speed is important. When software teams move fast, good things happen and business value is delivered more frequently.

But, speed comes with a tradeoff: complexity.

As this complexity grows, how can engineering teams succeed?

After analyzing millions of workflows from more than 50,000 organizations across the world, I've outlined some ways teams can start optimizing their software delivery for high performance.

Identify and Meet These 4 Benchmarks

To help teams optimize their software operations for efficiency, CircleCI's latest State of Software Delivery Report examined more than two years of data from over a quarter of a billion workflows, representing more than 50,000 organizations, to gain insights into the DevOps practices used by software teams globally. As a result, the research identified four key benchmarks that the most successful engineering teams routinely meet:

Throughput: Prioritize being in a state of deploy-readiness most or all of the time, rather than the number of workflows run.

Duration: Reach workflow durations of five to ten minutes on average.

Mean Time to Recovery: Recover from any failed runs by fixing or reverting in under an hour.

Success Rate: Achieve success rates above 90% for the default branch of an application.

Every software team is different. However, the software delivery patterns observed on our platform, especially the data points from top delivery teams, show key similarities that suggest valuable benchmarks for teams to use as goals.

Now let's break down what these four benchmarks really mean.

First, Throughput: the number of workflow runs matters less than being in a deploy-ready state most, if not all, of the time. Rather than counting workflow runs, the most successful teams prioritize being deploy-ready.

The second item that teams should focus on is Duration, which is the time it takes for a workflow to run. Most successful teams achieve workflow durations of five to ten minutes on average.

Third, Mean Time to Recovery describes how long it takes for a workflow to become successful again after a failure. The data shows that teams that recover from failed runs in under an hour are the most resilient.

And finally, Success Rate is the number of successful runs divided by the total number of runs over a period of time. The most successful engineering teams achieve success rates above 90%.
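To make these definitions concrete, here is a minimal Python sketch of how a team might compute duration, mean time to recovery, and success rate from its own workflow history. The WorkflowRun record and its fields are illustrative assumptions for this example, not a CircleCI API schema.

from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class WorkflowRun:
    # Illustrative record shape for this sketch, not a CircleCI API schema.
    started_at: datetime
    finished_at: datetime
    status: str  # "success" or "failed"

def average_duration_minutes(runs):
    # Duration benchmark: aim for five to ten minutes on average.
    return mean((r.finished_at - r.started_at).total_seconds() / 60 for r in runs)

def success_rate(runs):
    # Success Rate benchmark: successful runs / total runs, aim for above 90%.
    return sum(r.status == "success" for r in runs) / len(runs)

def mean_time_to_recovery_minutes(runs):
    # MTTR benchmark: average time from a failed run to the next green run
    # on the same branch, aim for under an hour.
    ordered = sorted(runs, key=lambda r: r.finished_at)
    recoveries, broken_since = [], None
    for run in ordered:
        if run.status != "success" and broken_since is None:
            broken_since = run.finished_at        # pipeline just went red
        elif run.status == "success" and broken_since is not None:
            recoveries.append((run.finished_at - broken_since).total_seconds() / 60)
            broken_since = None                   # pipeline is green again
    return mean(recoveries) if recoveries else 0.0

Throughput is deliberately not reduced to a single function here; as noted above, it is less about counting runs than about how often the default branch is in a deployable state.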

Prioritize Team Structure and Culture

Prioritizing team structure and culture is essential to improving software delivery metrics. While the ideal team structure and culture will vary depending on organizational goals, keeping developers in flow is key to keeping them as productive as possible. That means scheduling meetings at times that don't conflict with peak productivity hours, which the data shows are between 6 a.m. and 7 a.m. PT on Wednesdays.

It is equally important to get the number of people on your team right. Three out of four of our key metrics show a correlation between larger team size and better engineering performance. The research shows the ideal number of code contributors to aim for is between five and twenty, depending on your team's goals, the scope of your responsibilities, and other variables. A larger team is also the best way to avoid burnout, which is especially important to consider at a time when developer talent is coveted.

Test, Test, Test

Regardless of team size, teams that prioritize test-driven development (TDD) can confidently rely on their tooling during market swings, seasonal fluctuations, and times of uncertainty, such as the pandemic. TDD helps companies ensure bad code gets caught and resolved, and that organizations remain safe and resilient.

TDD pairs extensive testing and quality checks with systems that prevent bad code from reaching production. If a faulty change enters your pipeline, these checks act as a fail-safe even when headcount is low. That is the key to keeping defects out of production and staying competitive, regardless of team size.
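As a small, hypothetical illustration of the test-first loop, the Python example below writes the tests before the code they exercise; wired into CI, a failing test here is exactly the fail-safe described above, since the change cannot merge to the default branch until the tests pass. The pricing module and apply_discount function are invented for this sketch.

# test_pricing.py -- written first; it fails until apply_discount exists.
import pytest
from pricing import apply_discount

def test_discount_reduces_price():
    assert apply_discount(100.0, 0.2) == 80.0

def test_invalid_discount_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 1.5)  # more than 100% off should be rejected

# pricing.py -- the implementation added afterward to make the tests pass.
def apply_discount(price, rate):
    if not 0.0 <= rate <= 1.0:
        raise ValueError("discount rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)

Run on every commit, checks like these turn "bad code never reaches production" from a policy into something the pipeline enforces automatically.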

Great software delivery is a constant loop, not a linear process. The goal for developer teams isn't simply to ship updates to an application, but to continuously innovate on the software while preventing the introduction of faulty changes. Great developer teams that meet the benchmarks outlined above help businesses differentiate from their competitors and deliver digital products to consumers as fast as the market demands and as often as it changes.

Michael Stahnke is VP of Platform at CircleCI