One of the last things Mike Cohn recommends in his book Succeeding with Agile is that a company should have a set of criteria for judging how a team is doing. Only by measuring the team against this set of criteria a year or so later can you tell whether the team has actually achieved “continuous improvement”.
He says most companies fall into the trap of measuring only one aspect to see if a team is improving or not. For example, one might look at the number of issues reported by customers for a particular release and see if this number reduces in further releases. However, a team can cut the number of issues by drastically cutting down on the number of features delivered.
Hence, Mike suggests that any team should be measured against a set of criteria. Some criteria can be measured immediately, while others can be measured only after some time (post-event). Some of these criteria might be -
1) Number of features (immediate) – He suggests keeping the statistics simple. Even though some features are large and some are small, it is difficult to compare one feature with another, so he suggests just counting them.
2) Number of issues found by customers (post-event) – I think our team is currently focusing waaay too much on this criterion. I find that we are paranoid about adding any new features even in the second sprint! What if we find issues later? So, soon small becomes medium and medium becomes large….
3) Percentage of product backlog reduced after release (immediate) – Suppose the backlog has 100 items and this release covered 15; then the figure is 15%. Ideally, this figure should stay steady or increase. If it falls, it means some of the backlog items will never be reached.
4) Percentage of test cases automated (immediate) – I think this is an excellent criterion to measure. It will clearly show any improvement made by the team.
5) Percentage of user stories implemented using pair programming (immediate) – Should be a decent number, increasing steadily at first but stabilizing at a certain stage.
I completely agree that measurement is required before we can improve. In fact, I have always believed that the only way to improve our productivity is to measure it first. I had earlier designed a complicated process for measuring a team’s productivity. However, after reading this book, I realise that it has a few flaws which can be fixed using the general guidelines Mike provides. Here is the new, updated method to measure the productivity of a team.
I will first list the criteria that I think need to be measured:
- Value Points completed - Mike Cohn suggested just counting the number of features, but he does not know about Value points, does he? Every feature should have points assigned to it by the product owner: if a feature is highly valued by customers, it gets more points; otherwise it gets fewer. The difference from my older plan is that points are assigned at the feature level by the product owner, so it is less work for the PO and easier to implement.
- Number of automated test cases added - Automated testing is a very important part of the agile development methodology. Measuring this matters especially in Pivotal CRM, where we support lots of environments.
- Number of issues found by regression QA - The higher the number, the more time is wasted in rework. Ideally, no issues should be found by regression QA.
- Keeping up with the release date - This is one criterion I think we are already tracking. We need to give it a high weightage and not let it slip while trying to improve on the other factors.
- Number of SIs that require code changes (post-event) - This reveals a gap in testing and is an important criterion to measure.
I don’t want to assign weightages in percentages, since some criteria, such as a delay in the release, should hurt the team’s productivity score much more than others. Also, I would not like to include the post-event criterion (no. of SIs), since it would then become difficult to arrive at a score immediately after the release.
Here is my suggestion to arrive at the Team’s Productivity Score -
Let’s say a team did 75 value points worth of work. Regression QA found 5 issues, and the team automated 50 test cases. They were also late by one week (5 working days).
Now the team’s productivity score would be ((75 – 5) + 50) / 5 = 24, where the divisor is the number of days the release slipped.
Another team that does the same work but releases on time would score ((75 – 5) + 50) / 1 = 120 points, since an on-time release divides by 1. This gives immense weightage to the release date, which I think is fair because it is what affects the customer.
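The scoring rule above can be sketched in a few lines. This is just my reading of the two worked examples; in particular, the function name and the `max(days_late, 1)` divisor (so that an on-time release divides by 1) are my own assumptions, not something spelled out in the post:

```python
def productivity_score(value_points, regression_issues, automated_tests, days_late):
    """Team productivity score: (value points - regression QA issues
    + automated test cases) divided by days late.

    Assumption: an on-time release (days_late = 0) divides by 1,
    matching the on-time example in the text.
    """
    divisor = max(days_late, 1)
    return (value_points - regression_issues + automated_tests) / divisor

# The two example teams from the text:
late_team = productivity_score(75, 5, 50, days_late=5)     # -> 24.0
on_time_team = productivity_score(75, 5, 50, days_late=0)  # -> 120.0
```

Note how a single week of slippage drops the same body of work from 120 points to 24, which is exactly the heavy release-date weighting the formula is designed to produce.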
Each automated test case is worth one value point, since once a test case has been automated, that functionality cannot quietly break in future releases. This encourages teams to come up with innovative means of automating.
The number of SIs filed is not considered in the formula, since it is post-event. A team that regularly delivers buggy features will eventually get bogged down releasing hot fixes rather than providing new features, so this criterion is automatically accounted for by the formula above.
The greatest problem with measuring the productivity of a knowledge worker has been that once employees figure out what management is tracking, we usually find a way to artificially boost those numbers. However, with the above formula, as a developer, I don’t see an easy way to boost my numbers without actually improving my productivity.
I invite you to blast a big loophole in the above formula. Would you be happy if management started measuring your productivity this way? Do comment!