Outcomes, Not Outputs: Making Measurement Meaningful
February 08, 2024

Dave Laribee
Nerd/Noir

During British colonial rule in India, authorities in the snake-ridden city of Delhi offered a bounty for dead cobras. The offer backfired when people recognized the incentive as a business opportunity and started breeding cobras; when the scheme was scrapped, the now-worthless snakes were released, leaving the city with more cobras than before. This came to be known as the "Cobra Effect."

I've seen a similar thing happen when leaders set core metrics to evaluate individuals and product and engineering teams. As Goodhart's Law puts it, when a measure becomes a target, it ceases to be a good measure — and that's exactly when metrics get abused.

Or the metrics are opaque and arbitrary because developers aren't involved in setting the measurement strategy. Or they're irrelevant: a target like number of commits treats software development like manufacturing work, as if writing code were the same as launching a new model by the end of the year.

No matter the issue, metrics are rarely the best way to understand how the work developers are doing drives the business forward.

Of course, teams need some way to quantify progress. So how can leaders measure and understand how well individuals and teams are working?

Outcomes are a better way to gauge success than outputs. Outcomes shift the focus away from arbitrary numbers like lines of code and toward the real impact on customers and industries.

How Metrics Fall Short

Contrary to what McKinsey consultants claim, we have plenty of well-established metrics for software development.

Most of our established metrics measure productivity (how much a team or individual produces in a given timeframe) or performance (the degree of skill with which tasks are completed). We can use velocity, deployment frequency, or any number of other metrics to illustrate what our teams are doing.
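Both of those metrics reduce to simple arithmetic over delivery data. The sketch below is purely illustrative — the sprint points and deployment dates are hypothetical, and real teams would pull these from their tracking and CI/CD systems:

```python
from datetime import date

# Hypothetical delivery data: story points completed per sprint,
# and the dates of each production deployment in the same window.
points_per_sprint = [21, 18, 24, 19]
deploy_dates = [date(2024, 1, d) for d in (3, 5, 9, 12, 16, 19, 24, 30)]

# Velocity: average story points completed per sprint.
velocity = sum(points_per_sprint) / len(points_per_sprint)

# Deployment frequency: deployments per week over the observed window.
days = (max(deploy_dates) - min(deploy_dates)).days or 1
deploys_per_week = len(deploy_dates) / (days / 7)

print(f"velocity: {velocity:.1f} points/sprint")
print(f"deployment frequency: {deploys_per_week:.1f} deploys/week")
```

The numbers themselves are easy to produce — which is exactly the point of the paragraphs that follow: computing them tells you what happened, not why.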

But these metrics don't help us understand what's going wrong when the numbers aren't what we expect. Metrics don't tell us if certain code is difficult to work with or if clunky release processes impact productivity.

As the Cobra Effect demonstrates, metrics can also create perverse incentives. Developers may not be breeding snakes to earn a bounty, but they may game metrics if pressured to meet them, such as by taking on only simple tasks to pad velocity.

This is especially true if metrics are employed solely for the benefit of leadership. Individuals are expected to carry out a strategy with no insight into its development and no avenue to provide feedback.

Worst of all, metrics don't actually tell us whether developer work matters. A team might execute an activity repeatedly with proficiency, even excellence — but if that activity isn't relevant to organizational goals, then what's the point?

Work to Achieve Outcomes, Not to Juice Metrics

When I'm helping a large-scale software development organization transform how they work, one of my main goals is to shift their focus from metrics to outcomes. Leading with outcomes is a more effective way to assess performance and productivity.

An outcome is a desired result: not the lines of code a team will produce, but the effect that code will have on users or the business.

Outcomes help developers understand their work in terms of value. Too often, developers don't understand exactly who they're building for or why. Outcomes contribute to a stronger mental model of a customer or user and show how technology is expected to drive business results.

We're not suggesting throwing out metrics altogether. Instead, we're using outcomes to determine which metrics we should look at based on what we're trying to achieve. Then we use those metrics — which may not be the same for every team in the organization — to gauge progress.

How to Start Measuring in Terms of Outcomes

Outcomes often begin as a hunch. We take a page from the product manager's discovery playbook to get from gut feeling to validated outcome: we ask questions, gather data, and sift through insights to make sure that our outcome is relevant and valuable. The ideal outcome is small and achievable, yielding a path toward larger ones.

For example, say an engineering manager thinks her team could deliver value at a more predictable pace. She validates her hunch with data from project tracking software, which shows a wide variance in story size.

Then she uses a developer experience platform to dig deeper into drivers that influence productivity and assess her team against industry benchmarks.

She discovers that her team has low satisfaction scores on requirements quality and batch size, indicating that they're struggling to release early and often. She scopes her original outcome to something more concrete and immediate: "We work on small stories to ensure a consistent pace of delivery."

Now she can work backward with her team to find the contextual metrics that indicate progress toward those outcomes — like decreased average story size and increased iteration completion percentage.
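Those two contextual metrics take only a few lines to track. This is a minimal sketch with made-up iteration data, just to show the calculation — the story sizes and completion flags would come from the team's actual tracker:

```python
from statistics import mean, pstdev

# Hypothetical iteration data: size in points for each story taken on,
# and whether each story was completed within the iteration.
story_sizes = [1, 2, 3, 2, 1, 8, 2, 3, 1, 2]
completed = [True, True, True, True, True, False, True, True, True, False]

avg_size = mean(story_sizes)
size_spread = pstdev(story_sizes)  # high spread signals inconsistent slicing
completion_pct = 100 * sum(completed) / len(completed)

print(f"average story size: {avg_size:.1f} points (spread {size_spread:.1f})")
print(f"iteration completion: {completion_pct:.0f}%")
```

Watching average size trend down and completion percentage trend up over several iterations is what signals progress toward the outcome — a single snapshot proves little.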

Once the initial outcome is achieved, the team can shift to a new outcome — which likely means new metrics. This iterative process facilitates continuous improvement.

A Consensus-Driven Approach to Delivering Real Value

Notice that in the example above, the engineering manager works with her team to develop their strategy for measuring productivity. The shift from metrics to outcomes isn't just about pointing a team toward results that meaningfully help users or businesses. It's about creating a culture of transparency by involving developers in the process of defining what success means for individuals and the team.

That includes educating developers on what outcomes are and why they matter, as well as actively soliciting and responding to feedback. Done right, this shift — from metrics to outcomes, and from top-down mandate to bottom-up empowerment — will give developers a new level of agency in innovating and solving problems.

When developers have the opportunity, resources, and motivation to improve, everyone wins: teams, leaders, and the people and companies who will eventually use our products.

Dave Laribee is Co-Founder and CEO of Nerd/Noir.