There’s a lot that could be said about metrics. I’m quite skeptical in general about the value metrics give you in product development or in running a department or organization. At the same time I feel that metrics can help you understand the health and status of your group, organization, or project, and see what effect the changes you implement have on performance. Over the years I have used a lot of different software metrics, targeting both product development performance and code and design quality. Most of them have been quite complex, and in reality they have given me little value or understanding of how things are really working.
But I have also used a few that I feel have helped me see things during product development, metrics that say something about performance and also point you to possible improvement areas. Below I briefly describe a few that I like.
- The happiness and stress indexes tell you about the health of the team
- The code duplication and test coverage tell you about the health of the code base
- The release burn down and lead time tell you about the health of the project goal
I read about the Happiness index in a blog post by Henrik Kniberg. It is an index measuring the level of happiness in a group or organization at a given moment. The level of happiness says quite a lot about a group and how well everything is going regarding its goals, and I find it a very nice metric for monitoring the team during product development. A happy team is productive and works toward its mutual goal of delivering the product. At least that is my belief. We all want to contribute and do our best to deliver on our promises. If something stops us from succeeding, the happiness level will decrease. And a team where everybody has positive feelings and a good time at work will also have a lot of energy and belief in what they are trying to accomplish together.
In one project I let the team members update the happiness index at the end of the daily scrum meeting. Each member updated the chart and briefly explained the reason for their current index value. It became a good way to understand how everybody felt about the project and their commitments. When someone’s index dropped we talked about it immediately, so we could resolve a lot of issues early on and, where possible, help each other mitigate the reason for a low happiness value.
When someone reported a value outside the normal variation I wrote a short note about the reason for it. This was useful for people outside the team looking at the index: they learned about impediments just by glancing at the paper. But it was also useful at retrospectives, where the index gave the team a history of the past iterations, helping them remember what had gone well and not so well.
In the same project where I used the Happiness index I also used a Stress index. I wanted to monitor how much negative stress the team felt, and to catch any growing stress about the project goal early on.
There was some correlation between the Stress and Happiness indexes, as can be seen in the picture. The diagram displays both indexes, aggregated from each team member’s index values for each day. The green line at the top is the Happiness index and the red line at the bottom is the Stress index. When something happened that was outside the team’s control, stress increased and happiness decreased. This was useful to show and discuss at the sprint review, as a way to make organizational impediments more visible and better understood by stakeholders.
In some cases, on an individual level, the two indexes did not follow each other. Some members reported a very high stress index but also a high happiness index, signaling that they enjoyed their work but had a lot to do, in most cases too much. Thanks to the metric and the discussions during the scrum meetings we found this out quite early and could discuss how to mitigate the situation. I think we would have noticed it in the Happiness index after a while, but the Stress index let us catch the problem earlier, which was very nice.
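The aggregation itself is simple: average each member’s daily values into one team value per index. A minimal sketch in Python; the member names, the 1–5 scale, and the sample data are my assumptions for illustration, not taken from the project:

```python
from statistics import mean

# Hypothetical daily check-in data (names, scale and values invented).
daily_checkins = {
    "2024-03-11": [
        {"member": "A", "happiness": 4, "stress": 2},
        {"member": "B", "happiness": 5, "stress": 1},
        {"member": "C", "happiness": 3, "stress": 4},
    ],
    "2024-03-12": [
        {"member": "A", "happiness": 4, "stress": 2},
        {"member": "B", "happiness": 2, "stress": 5},  # outside normal variation: note the reason
        {"member": "C", "happiness": 3, "stress": 3},
    ],
}

def aggregate(checkins):
    """Average per-member values into one team index per day."""
    return {
        day: {
            "happiness": round(mean(e["happiness"] for e in entries), 1),
            "stress": round(mean(e["stress"] for e in entries), 1),
        }
        for day, entries in checkins.items()
    }

team_index = aggregate(daily_checkins)
```

Each day then contributes one point to the green happiness line and one to the red stress line.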
A couple of projects I have worked in have used the tool Simian to measure the level of code duplication. The level of duplication in a code base is the best measure of quality that I know of. High duplication is a strong sign that you need to do something about the current design and implementation. The tool gives the teams a great way of finding duplications and hints at areas that need attention. We had integrated the tool into the CI environment, so we got frequent feedback on how the code base developed and could react early to any new duplication. Highly recommended!
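To make concrete what such a check does, here is a toy sketch in Python. This is not Simian’s algorithm (a real tool normalizes identifiers, whitespace, and so on); it only shows the core idea of flagging identical runs of lines that occur in more than one place:

```python
from collections import defaultdict

def find_duplicates(files, window=6):
    """Toy duplication check: report any run of `window` identical,
    non-blank lines that appears in more than one location.
    `files` maps a file name to its source text."""
    seen = defaultdict(list)  # line chunk -> list of (file, start line)
    for name, text in files.items():
        lines = [line.strip() for line in text.splitlines()]
        for i in range(len(lines) - window + 1):
            chunk = tuple(lines[i:i + window])
            if all(chunk):  # skip windows containing blank lines
                seen[chunk].append((name, i + 1))
    return {chunk: locs for chunk, locs in seen.items() if len(locs) > 1}
```

Wired into CI, a report like this (or Simian’s real one) turns rising duplication into immediate, visible feedback.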
I prefer and believe in TDD as the way to drive my design and coding. The technique helps you create a design with a lot of small, loosely coupled components with clear responsibilities. Using TDD you will get very high test coverage of the code base, but even so it is easy to miss important areas or branches. Coverage tools that report your coverage and highlight missing areas are therefore really great and recommended. You and the team can assess the coverage to see if you have missed any important areas, and you also have a nice tool that shows whether your coverage starts to decline for some reason.
Release burn down
Burn down charts are really great at showing the status toward a goal. As I have worked with Scrum, I have used sprint burn down charts. But in the organization I was at previously I didn’t find them so useful. We didn’t have releases to customers after each sprint, only internal ones to our product owners and other internal stakeholders. The projects were generally quite long, most of them between one and two years. Under these conditions I moved away from creating and estimating a sprint backlog, and focused instead on getting a flow of implemented and integrated stories into the upcoming release. We estimated the stories relatively and broke them down into activities when we implemented the functionality, but we did not estimate them further than that. To monitor our progress we used release burn downs that included all work and stories targeted for the current release. This is an extremely important and valuable metric, both for the team and for all stakeholders. It is easy both to interpret and to update, which is great.
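The release burn down boils down to one number per check-in: how much targeted work remains. A minimal sketch in Python, where the story points and dates are invented for illustration:

```python
from datetime import date

# Hypothetical backlog: relatively estimated stories targeted at the release.
stories = [
    {"points": 5, "done": date(2024, 2, 1)},
    {"points": 3, "done": date(2024, 2, 15)},
    {"points": 8, "done": None},  # still in progress
    {"points": 5, "done": None},
]

def remaining_points(stories, as_of):
    """Work left toward the release on a given date."""
    return sum(s["points"] for s in stories
               if s["done"] is None or s["done"] > as_of)

# One data point per sprint review gives the burn down line:
for day in (date(2024, 1, 31), date(2024, 2, 14), date(2024, 2, 28)):
    print(day, remaining_points(stories, day))
```

Plotting those numbers over time gives the burn down line that the team and stakeholders read at a glance.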
Lead time is one of the recommended metrics if you are using Kanban. You use it to measure how long it takes for things to get through your system. A thing may be a user story or something else that specifies a bit of functionality valuable to a stakeholder or user. This is a great metric that says quite a lot about how smoothly your team is working and collaborating to get user stories implemented as quickly as possible.
There are many different uses for the metric. For example, you can use it to measure the impact of your improvements, to see whether they really improved the way you are working. Using the metric you may find that your improvements only attacked the symptoms, with little real effect on throughput as a result. You can also use it to communicate expected delivery times for your user stories: if you calculate the average lead time, you can tell stakeholders what delivery times they may expect for stories just started.
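Calculating that average needs nothing more than start and finish dates for completed stories. A small sketch, with invented dates:

```python
from datetime import date, timedelta
from statistics import mean

# Hypothetical (started, done) dates for recently completed stories.
completed = [
    (date(2024, 3, 1), date(2024, 3, 8)),
    (date(2024, 3, 4), date(2024, 3, 9)),
    (date(2024, 3, 6), date(2024, 3, 18)),
]

def average_lead_time(stories):
    """Mean calendar days from 'started' to 'done'."""
    return mean((done - started).days for started, done in stories)

avg = average_lead_time(completed)
# Communicate an expectation for a story started today:
expected_done = date.today() + timedelta(days=round(avg))
```

Tracking the same average before and after a process change is one way to see whether the change actually shortened the path through the system.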
It is great to have a small set of metrics that you and the team can use to measure different aspects of your work. The ones I have described here measure both soft and hard things in the project, and they are also quite easy to collect and use. Used as a system of measurements they will help you see how things are working from different angles, which can help you avoid sub-optimizations that look good in the short run but burn you over the longer timeframe. For example, you may decrease your lead time by hacking and pushing out code, but this will most definitely increase the level of code duplication and lower the test coverage of the code base.
Besides the metrics listed above there are a few others I would like to try out someday: a technical debt index, a collaboration level index, and maybe also a release confidence index. All of these could be asked for during the scrum meeting; they could give you an early indication of when the team members’ feelings about a subject are changing, and serve as a tool to start discussing the reasons for it.