Cycle Time: The One Metric Every PM Should Track
I used to report velocity to stakeholders. Then I realized velocity measures effort, not outcomes. A team can have rising velocity and declining delivery. The metric that actually matters is cycle time.
What cycle time tells you
Cycle time is the duration from when work starts to when it's deployed. Not when it's "done" in Jira — when it's in production, delivering value.
This single metric captures everything: development speed, code review efficiency, testing bottlenecks, deployment friction, and process overhead. When cycle time increases, something in your pipeline is degrading. When it decreases, you're genuinely getting faster.
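As a concrete sketch of the definition above: cycle time is just the elapsed time between two timestamps, work started and change deployed. A minimal illustration (the function name and ISO timestamps are my own, not from any particular tool):

```python
from datetime import datetime

def cycle_time_days(started_at: str, deployed_at: str) -> float:
    """Elapsed days from work starting to the change running in
    production -- not merely being marked 'done' in the tracker."""
    start = datetime.fromisoformat(started_at)
    deploy = datetime.fromisoformat(deployed_at)
    return (deploy - start).total_seconds() / 86400

# A ticket started Monday 9am and deployed Friday 9am: 4.0 days.
print(cycle_time_days("2024-03-04T09:00", "2024-03-08T09:00"))  # → 4.0
```

The important choice is the endpoint: measuring to deployment, not to a status change, is what makes the number capture review, testing, and release friction.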
How I use it
Trend over absolutes. I don't care whether our cycle time is 4 days or 6 days in isolation; I care about the direction. A team that went from 8 days to 5 days over a quarter is improving. A team holding steady at 3 days is probably already healthy.
Segment by work type. Bug fixes should have shorter cycle times than features. If they don't, your prioritization process is broken. We track cycle time separately for bugs, features, and tech debt.
Investigate outliers. When a ticket takes 3x the average, I want to know why. Not to blame anyone — to find systemic issues. Our longest cycle times consistently traced back to unclear requirements. That was a PM problem, not an engineering problem.
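The segmentation and outlier steps above can be sketched in a few lines. The ticket data and the 3x-average threshold are illustrative assumptions, not output from any real tracker:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical tickets: (work_type, cycle_time_days)
tickets = [
    ("bug", 1.0), ("bug", 2.0), ("feature", 4.0),
    ("feature", 5.0), ("feature", 15.0), ("tech-debt", 3.0),
]

# Segment: average cycle time per work type.
by_type = defaultdict(list)
for work_type, days in tickets:
    by_type[work_type].append(days)
averages = {t: mean(v) for t, v in by_type.items()}

# Investigate outliers: anything at 3x the overall average or worse.
overall = mean(d for _, d in tickets)
outliers = [(t, d) for t, d in tickets if d >= 3 * overall]
```

With this sample data, bugs average 1.5 days against 8.0 for features, and the 15-day feature is flagged for a (blameless) look at what went wrong.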
What it replaced
We stopped reporting story points to leadership entirely. Instead, they see: average cycle time (trending), throughput (tickets completed per week), and deployment frequency. These three metrics tell the delivery story without the theater of sprint burndowns.
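The throughput figure in that report is the simplest of the three to compute: completions divided by the reporting window. A toy sketch with made-up completion dates:

```python
from datetime import date

# Hypothetical tickets completed over a two-week reporting window.
completed = [
    date(2024, 3, 4), date(2024, 3, 5), date(2024, 3, 6),
    date(2024, 3, 11), date(2024, 3, 13),
]

weeks = 2
throughput = len(completed) / weeks  # tickets completed per week
```

Deployment frequency works the same way, just counting production deploys instead of ticket completions.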
The conversation it enables
When a stakeholder asks "why is this taking so long?" I can show them exactly where time is spent. Code review takes 2 days. QA takes 1.5 days. The actual development was 1 day. Now we're having a productive conversation about process, not blame.
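That stage-by-stage answer falls out of the same timestamps, taken per pipeline transition instead of end to end. A sketch using the numbers from the example above (the event names and timestamps are hypothetical):

```python
from datetime import datetime

# Hypothetical stage-transition timestamps for one ticket.
events = {
    "started":   "2024-03-04T09:00",
    "in_review": "2024-03-05T09:00",
    "in_qa":     "2024-03-07T09:00",
    "deployed":  "2024-03-08T21:00",
}

stages = ["development", "code_review", "qa"]
order = ["started", "in_review", "in_qa", "deployed"]

# Days spent in each stage: the gap between consecutive transitions.
breakdown = {
    stage: (datetime.fromisoformat(events[end])
            - datetime.fromisoformat(events[start])).total_seconds() / 86400
    for stage, start, end in zip(stages, order, order[1:])
}
# breakdown == {"development": 1.0, "code_review": 2.0, "qa": 1.5}
```

One day of development inside a 4.5-day cycle time: exactly the breakdown that moves the conversation from "engineering is slow" to "review and QA are where the time goes."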
Measure what matters. Cycle time matters.