Outcomes, Not Outputs: Making Measurement Meaningful
February 08, 2024

Dave Laribee
Nerd/Noir

During British colonial rule in India, authorities in the snake-ridden city of Delhi offered rewards for dead cobras. The offer backfired when people recognized the incentive as a business opportunity and started breeding cobras, resulting in overpopulation when the scheme ended. This came to be known as the "Cobra Effect."

I've seen a similar thing happen when leaders set core metrics to evaluate individuals and product and engineering teams. Metrics get abused when they become targets; as Goodhart's Law puts it, when a measure becomes a target, it ceases to be a good measure.

Or they're opaque and arbitrary because developers aren't involved in setting the measurement strategy. Or they're irrelevant, better suited to describing manufacturing work (launch a new model by the end of the year) than software development (hit a target number of commits).

No matter the issue, metrics are rarely the best way to understand how the work developers are doing drives the business forward.

Of course, teams need some way to quantify progress. So how can leaders measure and understand how well individuals and teams are working?

Outcomes are a better way to gauge success than outputs. Outcomes shift the focus away from arbitrary numbers like lines of code and toward the real impact on customers and industries.

How Metrics Fall Short

Contrary to what McKinsey consultants claim, we have plenty of well-established metrics for software development.

Most of our established metrics measure productivity (how much a team or individual produces in a given timeframe) or performance (the degree of skill with which tasks are completed). We can use velocity, deployment frequency, or any number of other metrics to illustrate what our teams are doing.
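To make one of those concrete: deployment frequency is simple to derive from a log of production deploys. Here is a minimal sketch in Python, using made-up deploy dates rather than any real system's data:

```python
from datetime import date

# Hypothetical deploy log: one entry per production deployment.
deploys = [
    date(2024, 1, 2), date(2024, 1, 4), date(2024, 1, 4),
    date(2024, 1, 9), date(2024, 1, 15), date(2024, 1, 16),
]

# Deployment frequency over the observation window, in deploys per week.
window_days = (max(deploys) - min(deploys)).days + 1
deploys_per_week = len(deploys) / (window_days / 7)
print(f"{deploys_per_week:.1f} deploys/week")  # 2.8 deploys/week
```

The number is easy to produce, which is exactly the point of the next paragraph: it tells you what happened, not why.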

But these metrics don't help us understand what's going wrong when the numbers aren't what we expect. Metrics don't tell us if certain code is difficult to work with or if clunky release processes impact productivity.

As the Cobra Effect demonstrates, metrics can also create perverse incentives. Developers may not be breeding snakes to earn a bounty, but they may game metrics if pressured to meet them, such as by taking on only simple tasks to pad velocity.

This is especially true if metrics are employed solely for the benefit of leadership. Individuals are expected to carry out a strategy with no insight into its development and no avenue to provide feedback.

Worst of all, metrics don't actually tell us whether developer work matters. A team might execute an activity repeatedly with proficiency, even excellence — but if that activity isn't relevant to organizational goals, then what's the point?

Work to Achieve Outcomes, Not to Juice Metrics

When I'm helping a large-scale software development organization transform how they work, one of my main goals is to shift their focus from metrics to outcomes. Leading with outcomes is a more effective way to assess performance and productivity.

An outcome is a result you set out to bring about: not the lines of code a team will produce, but the effect that code will have on users or the business.

Outcomes help developers understand their work in terms of value. Too often, developers don't understand exactly who they're building for or why. Outcomes contribute to a stronger mental model of a customer or user and show how technology is expected to drive business results.

We're not suggesting throwing out metrics altogether. Instead, we're using outcomes to determine which metrics we should look at based on what we're trying to achieve. Then we use those metrics — which may not be the same for every team in the organization — to gauge progress.

How to Start Measuring in Terms of Outcomes

Outcomes often begin as a hunch. We take a page from the product manager's discovery playbook to get from gut feeling to validated outcome: we ask questions, gather data, and sift through insights to make sure that our outcome is relevant and valuable. The ideal outcome is small and achievable, yielding a path toward larger ones.

For example, say an engineering manager thinks her team could deliver value at a more predictable pace. She validates her hunch with data from project tracking software, which shows a wide variance in story size.
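Her variance check might look something like this sketch, assuming a hypothetical export of story sizes (in points) pulled from the tracker:

```python
import statistics

# Hypothetical story sizes, in points, exported from project tracking software.
sizes = [1, 2, 13, 3, 8, 1, 21, 2]

mean = statistics.mean(sizes)       # average story size
stdev = statistics.pstdev(sizes)    # population standard deviation

# A standard deviation larger than the mean signals very uneven story sizing.
print(f"mean {mean:.1f} points, stdev {stdev:.1f} points")
```

With a spread like this, the hunch holds up: some stories are an order of magnitude larger than others, which makes delivery pace hard to predict.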

Then she uses a developer experience platform to dig deeper into drivers that influence productivity and assess her team against industry benchmarks.

She discovers that her team has low satisfaction scores on requirements quality and batch size, indicating that they're struggling to release early and often. She scopes her original outcome to something more concrete and immediate: "We work on small stories to ensure a consistent pace of delivery."

Now she can work backward with her team to find the contextual metrics that indicate progress toward that outcome — like decreased average story size and increased iteration completion percentage.
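Both of those contextual metrics fall out of the same tracker data. A sketch, assuming a hypothetical iteration export with story points and completion status:

```python
# Hypothetical iteration export: story points and whether each story finished.
stories = [
    {"points": 2, "done": True},
    {"points": 1, "done": True},
    {"points": 8, "done": False},
    {"points": 3, "done": True},
    {"points": 2, "done": False},
]

avg_story_size = sum(s["points"] for s in stories) / len(stories)
completion_pct = 100 * sum(s["done"] for s in stories) / len(stories)

print(f"avg story size: {avg_story_size:.1f} points")   # 3.2 points
print(f"iteration completion: {completion_pct:.0f}%")   # 60%
```

Tracked iteration over iteration, a falling average story size and a rising completion percentage are evidence the team is moving toward "we work on small stories to ensure a consistent pace of delivery."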

Once the initial outcome is achieved, the team can shift to a new outcome — which likely means new metrics. This iterative process facilitates continuous improvement.

A Consensus-Driven Approach to Delivering Real Value

Notice that in the example above, the engineering manager works with her team to develop their strategy for measuring productivity. The shift from metrics to outcomes isn't just about pointing a team toward results that meaningfully help users or businesses. It's about creating a culture of transparency by involving developers in the process of defining what success means for individuals and the team.

That includes educating developers on what outcomes are and why they matter, as well as actively soliciting and responding to feedback. Done right, this shift — from metrics to outcomes, and from top-down mandate to bottom-up empowerment — will give developers a new level of agency in innovating and solving problems.

When developers have the opportunity, resources, and motivation to improve, everyone wins: teams, leaders, and the people and companies who will eventually use our products.

Dave Laribee is Co-Founder and CEO of Nerd/Noir.