The ‘Essence’ of Agile Metrics. Part 2.

Let’s go back to the example I gave at the end of my last post. What is wrong with measuring story volatility? On the surface it appears fine. We can see how well the product owner staged the sprint and also how well the team is interpreting stories for execution. For example, if they “signed up” for 10 stories, but only delivered seven because of story (requirement) volatility, then they have a lot of improvement to do…right? Well, that could be the case.

But it could also be the case that the team trimmed those stories because they were superfluous to the customer’s needs – meaning they were essentially waste. Another positive interpretation could be that the team got creative and figured out a way to meet the spirit of the sprint goal with fewer stories. In fact, they may even have trimmed out some previous code in order to simplify the application – so the overall related LOC count might be quite low too.

Can you see the point I’m making? I find that leading indicator metrics can sometimes be more prone to metrics dysfunction than trailing, results oriented metrics.

In his book, Measuring and Managing Performance in Organizations, Robert Austin coined and explored the term metrics dysfunction. In simple terms, it occurs when the act of collecting a metric influences the very behavior being measured, distorting the result. One of the classic examples is measuring software testers by the number of bugs they find. In this case, testers report more trivial, non-germane bugs simply to hit the metric – and in the meantime, they miss the truly important bugs. Clearly not what the measurer intended!

In our example, the leading metrics might influence the team to avoid changing stories that need to be changed, simply to keep them consistent with the original plan. I can think of nothing more destructive to a team’s agile mindset than metrics that drive them to game the numbers instead of doing the right thing.

So, what are some unhealthy leading metrics to avoid within agile teams?

  • Planning quality – Measuring how well you planned. An important part of this is change control, i.e. measuring how little or much change is ongoing.
  • Requirement quality – Measuring whether each requirement meets some sort of baseline definition of completeness. Or that each has a specific number of acceptance tests.
  • Estimation quality – I’ve sort of beaten this one to death in the article. Effectively it’s anything that tries to measure variance between estimates and actual effort.
  • Arbitrary results – Counting LOC produced, or bugs found, or requirements written – virtually anything materially produced by the team, counted in a way that ignores the quality of the result and the application of common sense, since not all results need the same level of attention and thinking.

Conversely, what are some healthier trailing metrics to concentrate on within agile teams?

  • Team agitation levels – capturing multi-tasking events, # of committed hours per sprint and variance from expected levels, recruiting support for team members
  • Team velocity levels – trending over time, improvement trending (implying improved team dynamics), paying attention when team composition changes
  • Impediment handling – #’s per team, avg. time to resolve, # that impacted the Sprint Goals
  • Retrospective actions – is the team improving themselves based on retrospective results, how many improvements per sprint, average time to resolve
  • Escapes per sprint – bugs found post-sprint, adherence levels to Done-ness Criteria
  • Sprint Success / Failure – from the perspective of the Sprint Goal. Not so much focused at a Story or Task completeness level, but at an overall work delivered level
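Several of the trailing metrics above boil down to simple aggregates over per-sprint data. Here is a minimal Python sketch of how a team might track a few of them; the data and field names are entirely illustrative, not something the post prescribes:

```python
from statistics import mean

# Hypothetical per-sprint records; values and field names are made up for illustration.
sprints = [
    {"velocity": 21, "escaped_bugs": 4, "impediment_resolution_days": [2, 5]},
    {"velocity": 24, "escaped_bugs": 3, "impediment_resolution_days": [1]},
    {"velocity": 26, "escaped_bugs": 1, "impediment_resolution_days": [3, 2]},
]

def velocity_trend(history):
    """Delta between the most recent and earliest sprint velocity (positive = improving)."""
    return history[-1]["velocity"] - history[0]["velocity"]

def avg_impediment_resolution(history):
    """Average days to resolve an impediment, pooled across sprints."""
    days = [d for s in history for d in s["impediment_resolution_days"]]
    return mean(days)

def escapes_per_sprint(history):
    """Average bugs found after the sprint ended (a signal on Done-ness adherence)."""
    return mean(s["escaped_bugs"] for s in history)

print(velocity_trend(sprints))             # 5
print(avg_impediment_resolution(sprints))  # 2.6
```

The point is less the arithmetic than the habit: the team reviews these numbers together, day by day, rather than having them collected about the team from outside.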

One of the things you’ll notice with trailing indicator metrics is that there is an inherent real-time nature to them. We want to be sampling them (observing, discussing, driving action) each and every day during each iteration or sprint.

It’s this level of team awareness of their performance (metrics) and active engagement towards continuous improvement that is a key for functional metrics in an agile context.
