From Quantity to Outcomes: Rethinking How We Measure Progress in Modern Delivery
Why flow and qualitative insights outperform traditional metrics—and how breaking lifelong “more is better” conditioning transforms teams and organizations
Kathy Lovan · April 15, 2026
From a young age, we’re taught to equate “more” with “better,” and we carry that belief into adulthood.
As kids, we absorb the message that quantity defines our worth:
More extracurriculars mean you’re more accomplished
More gold stars from teachers mean you’re a “good kid”
More trophies mean you’re more talented
More advanced classes mean you’re “ahead”
More people knowing your name at school means you’re seen as “somebody”
From grades to friendships to material things, the pattern is the same: more is always presented as better. More achievements, more recognition, more belongings—quantity becomes the metric for value.
That mindset doesn’t disappear when we grow up—it evolves. The scoreboard changes, but the message stays the same: quantity still equals worth.
More hours worked mean you’re more dedicated
More promotions mean you’re more successful
More projects on your plate mean you’re more valuable
More money earned means you’re more accomplished
More square footage in your home means you’re doing better in life
In the workplace, this mindset shapes which metrics get attention—and which ones actually matter. In every project I’m brought in to rescue, every crisis I’m asked to help fix, it becomes immediately clear: the wrong metrics are being tracked. Everything is based on quantity—how much, how many, how often—because that’s the default we’ve all been conditioned to trust. But without meaningful qualitative measures, you end up with misguided conversations, misguided decisions, and misguided outcomes.
And this is the real challenge: we have to unlearn what we’ve been taught our whole lives. We have to push back against the conditioning that told us quantity defines value.
Counting Without Understanding
Recognizing this shift isn’t just a leadership responsibility in the traditional sense. Culture is shaped by the behaviors, actions, and results we reward, not by job titles. If you help set expectations, guide decisions, or shape any team’s rhythm, you’re already leading. And that means you share responsibility for noticing when the focus has shifted from meaningful progress to measuring activity and “busyness.”
These are the metrics I see most often, the ones that show up repeatedly in status reports, executive decks, and team and departmental meetings—each one signaling a culture centered on output rather than outcomes that matter:
Number of story points estimated vs. completed
Number of features delivered
Number of projects completed
Total dollars attached to a project
Amount of upfront project plan definition: tasks, dates, and “known” risks
Number of Scrum teams launched
Percentage code coverage
Number of customer support tickets closed
Utilization percentage (how “busy” people are)
We lean on these measures because they’re familiar, easy to count, and they’ve been treated for years as the “proper” way to track progress. They reward volume, not understanding. None of them tell us whether the work actually created value or how it moved through the system. They create the illusion of progress while masking delays, rework, bottlenecks, and the real experience of the people doing the work. In other words, they tell us how much we produced, how much capacity and bandwidth we consumed, and how much money we invested—not what actually happened.
Even Ron Jeffries, who says he may have invented story points, has apologized for them, which says a lot about how far some measures have drifted from their original intent.
What Real Change Requires
Recognizing the wrong metrics is only half the work. The real change comes from a two-part shift: the cultural habits that shape behavior, and the technical choices that shape what we measure. Cultural shifts without technical changes become performative; technical changes without cultural shifts result in mechanical adherence rather than meaningful change.
Cultural Shift
The behaviors, incentives, and unwritten rules that quietly dictate what people pay attention to have to change.
It begins with:
Leadership choosing substance over optics—asking harder questions, challenging the comfort of volume-based measures, and responding to transparency with curiosity instead of punishment or retreating at the first sign of pushback.
Rethinking how performance is measured altogether. As long as metrics and rewards are tied to individual targets, people will naturally optimize for themselves instead of the team. It creates a culture where everyone protects their own scorecard, even though teams and organizations only succeed—or fail—together. The culture has to reflect that reality, and so do the metrics: collective outcomes over isolated output.
Resisting the urge to cosmetically “fix” problems by rebaselining projects to reset expectations or defaulting to renegotiating scope, budget, and timelines the moment things start to slip. Those moves might make status reports look better, but they do nothing to address what’s actually going wrong. The same goes for falling back on requests for more people, more hours, more oversight, more process. Adding people and bureaucracy to problems only masks the underlying issues. Instead, this is the moment to pause and engage in genuine root-cause analysis, because the quick, comfortable fix often costs more in the long run than taking the time to get to the actual source of the problem.
Technical Shift
The technical shift is about changing what we measure and how we interpret it. It’s not about throwing out quantitative metrics; it’s about pairing them with qualitative and time-based context so the numbers actually mean something.
Start by:
Measuring cycle time, work in progress, and the age of work in progress. In other words, flow metrics. Successful teams and organizations are comfortable with the discipline of balancing work in progress and deliberately managing it daily so it never goes stale (the longer work sits unfinished, the riskier it becomes and the less likely it is to still be valuable). There’s no universal “good” or “bad” number—only what’s right for a team, a product, and the customer. The goal is to find the pace that’s sustainable, where quality doesn’t erode and people aren’t burning out just to keep the numbers pretty. And because every team’s context, constraints, and domain are different, these metrics should never be used to compare teams against each other. Flow metrics are indicators that guide discussion and learning, not scores to hit.
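To make this concrete, here is a minimal sketch of how these flow metrics can be derived from nothing more than each work item’s start and finish dates. The item IDs and dates below are invented for illustration; they don’t come from any particular tool.

```python
from datetime import date

# Hypothetical work items: (id, started, finished); finished is None if still in progress.
items = [
    ("A-1", date(2026, 3, 2), date(2026, 3, 6)),
    ("A-2", date(2026, 3, 3), date(2026, 3, 17)),
    ("A-3", date(2026, 3, 10), None),
    ("A-4", date(2026, 3, 20), None),
]

today = date(2026, 3, 24)

# Cycle time: calendar days from start to finish, for completed items only.
cycle_times = [(done - start).days for _, start, done in items if done]

# Work in progress and its age: items started but not yet finished.
wip = [(wid, (today - start).days) for wid, start, done in items if done is None]

print("Cycle times (days):", cycle_times)
print("Average cycle time:", sum(cycle_times) / len(cycle_times))
for wid, age in wip:
    print(f"{wid} has been in progress for {age} days")
```

The same handful of numbers feeds the charts teams actually discuss: a cycle-time scatter plot and an aging-WIP view that flags items drifting toward staleness before they become risks.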
Measuring stakeholder and customer engagement, experience, and satisfaction in a way that’s actually meaningful—not the generic surveys, vanity scores, or once‑a‑quarter check‑ins that tell us nothing. The most important context can’t come from inside the organization. If all our validation loops stay internal, we end up grading our own homework. The real signal has to come from the outside—from customers, users, partners, and the environment we’re building for—so our metrics reflect reality rather than our own assumptions. I’m talking about thoughtful, qualitative signals that reveal whether we’re solving real problems, whether expectations are aligned, and whether the experience matches what people actually need. These measures should capture depth, not volume: the quality of interactions, the clarity of feedback, the consistency of engagement, and the degree to which people feel their needs are understood and met. When done well, this kind of measurement becomes a source of truth that numbers alone can’t provide.
Automating data collection and reporting wherever possible. When metrics update themselves and live in dashboards that anyone in the organization can access, they’re far more likely to be used—and used often. Manual reporting creates lag, distortion, and selective visibility; automated data removes delays, rework, and human filtering—you get a reliable, real-time picture of the work rather than a curated snapshot. This also means going beyond the default, built-in widgets provided by tools like Jira or Azure DevOps. While convenient, they are not built for understanding flow, aging work, systemic bottlenecks, or cross-team behavior over time. Dashboards must be intentionally designed around the questions people actually need to answer—not just what is easy to display.
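As a sketch of what “automated, no human filtering” can mean in practice, a scheduled job can turn raw status-transition events (however your tool exports them; the event shape and statuses below are assumptions, not any specific tool’s schema) into a daily WIP series a dashboard reads directly:

```python
from datetime import date, timedelta

# Hypothetical export of status transitions: (item_id, date, new_status).
events = [
    ("A-1", date(2026, 3, 2), "In Progress"),
    ("A-2", date(2026, 3, 3), "In Progress"),
    ("A-3", date(2026, 3, 4), "In Progress"),
    ("A-1", date(2026, 3, 6), "Done"),
    ("A-2", date(2026, 3, 7), "Done"),
]

def daily_wip(events, start, end):
    """Count items in progress on each day from start to end (inclusive)."""
    series = {}
    ordered = sorted(events, key=lambda e: e[1])
    day = start
    while day <= end:
        open_items = set()
        # Replay every transition up to and including this day.
        for item, when, status in ordered:
            if when <= day:
                if status == "In Progress":
                    open_items.add(item)
                elif status == "Done":
                    open_items.discard(item)
        series[day] = len(open_items)
        day += timedelta(days=1)
    return series

series = daily_wip(events, date(2026, 3, 2), date(2026, 3, 7))
for day, count in series.items():
    print(day, count)
```

Because the series is rebuilt from the event log on every run, there is no manually curated snapshot to drift out of date—the dashboard always reflects what the system of record actually recorded.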
Choosing to See Differently
In the end, moving from output to outcomes means going against a lifetime of conditioning that told us more activity, more detail, and more quantity equal “better.” Making that shift takes intentional change and a real appetite for learning—and yes, failing—because missteps are part of the shift, but doing nothing locks in the status quo. It means no longer reaching for the same old metrics while expecting different results. It’s about choosing transparency over theater, learning over defensiveness, and real signals over curated stories. When organizations move away from quantity‑driven habits and start looking at outcomes and time‑based indicators, they finally see what’s actually happening in their environment and delivery pipeline—not just what appears to be happening.
Take the examples in this article and deliberately apply both the cultural and technical shifts to every report, every retrospective, every team huddle, every conversation, and every departmental meeting. It’s ongoing work—progress happens through consistent practice and choosing to keep seeing and responding more clearly over time. I say this because I’ve lived it—I’ve been part of projects that struggled, and I’ve helped turn them around by doing exactly this. When you do it consistently and visibly, you’ll see the evolution take place.