Depth of Kanban – A Good Coaching Tool

 

I really got inspired reading Håkan Fors’ article “Are the Kanban practices in the right order?”. Not only did he link to a presentation Johan Nordin and I gave at Lean Kanban Central Europe 2011 (which is always good), he also presented a way to visualize the maturity – or depth – of a Kanban implementation using a radar/spider/Kiviat diagram.

The Depth of Kanban Graph

This came at a time when I was struggling to grasp the status of the 50+ teams that Johan and I had kick-started within Sandvik IT. We saw that some teams were more mature than others and were improving on their own, while others still struggled to get the basics in place. Of course that was to be expected, with so many different teams existing in so many different contexts. But how could we see and understand what each team needed to go further?

A simple and elegant solution to the problem is the “Depth of Kanban” radar/spider/Kiviat diagram. It is a graph with one axis for each of the Kanban practices (visualize, limit WIP, manage flow, explicit policies, feedback loops, improve). It helps answer the following questions:

  • How deeply does each team “implement” each of the Kanban principles? 
  • What has been the impact so far from using Kanban?
  • What is the one smart thing to focus on right now to go further?
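As an aside for readers who want to sketch such a spider for their own team: below is a minimal, hypothetical rendering example in Python/matplotlib. The axis names follow the practices listed above, but the depth values and the scale are made up for illustration; they are not the real assessment.

```python
import numpy as np
import matplotlib.pyplot as plt

# The six Kanban practices used as axes of the spider.
practices = ["Visualize", "Limit WIP", "Manage flow",
             "Explicit policies", "Feedback loops", "Improve"]
# Hypothetical depth values for one team (0 = not started, higher = deeper).
depth = [4, 2, 3, 3, 1, 2]

# Spread the axes evenly around the circle and close the polygon.
angles = np.linspace(0, 2 * np.pi, len(practices), endpoint=False).tolist()
values = depth + depth[:1]
angles = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(practices)
ax.set_title("Depth of Kanban (example team)")
plt.show()
```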

 

My Version of the Graph

In this blog entry I want to present my take on the “Depth of Kanban”. I have based my “Depth of Kanban” spider on Håkan’s graph as well as on David J. Anderson’s, and adapted them to fit the context of Sandvik IT. I present here the latest version of the graph (which has gone through many revisions), but I believe it needs more work to reach elegance and simplicity. Do you have ideas to make it better? Let’s hear them!

Here is the graph (click for bigger):

Here are the current questions to answer (yes/no) to understand the depth of the implementation (click for bigger):
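To give a feeling for how the yes/no questions could relate to the axes, here is a small, hypothetical sketch. The questions are represented only as anonymous yes/no answers (the real ones are in the sheet above), and the scoring rule, one point per “yes”, is just one possible interpretation; in practice the coach moderates the answers rather than counting them mechanically.

```python
# Hypothetical yes/no answers per practice; the real questions live in the
# sheet above and should be moderated by a coach, not scored mechanically.
answers = {
    "Visualize":         [True, True, True, False],
    "Limit WIP":         [True, False],
    "Manage flow":       [True, True, False, False],
    "Explicit policies": [True, True, True],
    "Feedback loops":    [False, False, False],
    "Improve":           [True, False, False, False, False],
}

def depth_per_practice(answers):
    """One possible scoring: depth on an axis = number of 'yes' answers.
    Axes with more questions naturally get a longer scale."""
    return {practice: sum(yes_no) for practice, yes_no in answers.items()}

print(depth_per_practice(answers))
# e.g. {'Visualize': 3, 'Limit WIP': 1, 'Manage flow': 2, ...}
```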

Analysis

What are these different colors used for?
The primary goal for the use of Kanban at Sandvik IT is to engage the teams to improve, continuously and in a sustainable way. Now, in order to start improving, a team must have some minimum capability to see and understand what is going on (clarity of execution) as well as to know what to aim for (clarity of purpose). The red area on the graph is my attempt to define the minimal depth a team must reach in order to start improving on its own. While the team is “in the red” it cannot improve. That is a clear signal to the coach that action is needed a.s.a.p. The other colors indicate further “levels” of depth, the greener the better, called “Improving Sustainably” (you want the team to be there a.s.a.p.), “Excellence” and “Lean”. Note that the Kanban Kick-start (as it will be described in the coming field guide) will aim at getting the team directly into the “Improving Sustainably” area. Though, depending on the context, some teams might not manage to maintain that condition and fall back into the red.
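To make the “in the red” rule concrete, here is a hypothetical sketch. The thresholds below are invented (the real boundary is drawn on the graph itself), but the logic is the same: a team is “in the red” as soon as any axis falls below its minimum depth.

```python
# Invented minimum depth per practice marking the edge of the red area;
# the real boundary comes from the graph, not from these numbers.
red_threshold = {"Visualize": 2, "Limit WIP": 1, "Manage flow": 1,
                 "Explicit policies": 1, "Feedback loops": 1, "Improve": 1}

def in_the_red(depth):
    """A team is 'in the red' (and cannot yet improve on its own)
    if any axis is below its minimum depth."""
    return any(depth[p] < red_threshold[p] for p in red_threshold)

team = {"Visualize": 3, "Limit WIP": 0, "Manage flow": 2,
        "Explicit policies": 2, "Feedback loops": 1, "Improve": 1}
print(in_the_red(team))  # True -> a clear signal that the coach needs to act a.s.a.p.
```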

So rather than a sequential implementation of Kanban (visualize, then limit WIP, etc. etc.), would you say it might be better to implement all the reds, then all the yellows, then all the greens? (Question by Philip Ledgerwood)
It depends on what you are after when implementing a Kanban system. Teams that need to get in control of their situation need to focus on visualization, teams that have worked “ad hoc” for ages probably need to focus most on policies first, while others drowning in WIP must – of course – limit WIP. So, your current condition dictates how you will grow your Kanban system. What I am after is to give the teams I coach the capability to improve/evolve on their own. That is where the “red” area comes from: a team needs, at the very least, these attributes to be able to understand its current condition and evolve from there. Then, the other colored areas are simply there as a reminder to the team: does it make sense for us to go further with one principle when we have neglected others? As the cost of going “deeper” is probably higher than the cost of addressing “surface” principles, and as the principles affect each other, the RoI is probably better when addressing all “orange” first, then all “light green”, etc. So, yes, I am advocating a “spiral” model for implementing Kanban rather than a linear model.

Why do the axes have different scales?
Some principles are more demanding than others, and that should be reflected in the graph.

Some questions you have there are sooo wrong!
Good that you’ve noticed that! Let’s fix it together (give me feedback).

Isn’t there too much detail?
Probably! This is the very first version and I wanted to have “everything” in it. It is a coaching tool and the coach is needed to moderate the discussion and explain some of the concepts. So, currently this tool is not made for the teams to evaluate themselves. I will continue to use the tool and improve it (simplify it). Future versions will be posted on this blog.

Are those the Toyota Kata’s five questions I see there?
Yes! I have renamed David’s Improve category to Effects (seeing evidence of…) and used Improve for the improvement kata. I felt that it better matched our approach at Sandvik IT of using Kanban as a tool for bootstrapping continuous improvement.

How do you use the graph?
I use the graph in three ways (sometimes all at once):

 
  • To present what Kanban is to a team with hands-on experience. Kanban can be presented in many ways. I have found this description to be very useful to a team that has used Kanban for a while; for green teams it manages to be too abstract and too concrete at the same time.
  • To find out the depth of a Kanban implementation. I usually sit down with the team lead or the whole team and go through the questions. It is important for the coach to moderate the answers from the team to get constructive answers. For example: Team “Yeah, we definitely get great feedback from the customer!”, Coach “In what form and how often?”, Team “Every 6 months the customer answers the question ‘How pleased are you with the team?’ (graded 1 – 5)!”. The coach may in this case answer ‘no’ for the team (even if the team thinks it is a ‘yes’) and explain the rationale behind it.
  • To inspire a team to improve. The “depth” on its own is actually not that interesting; what is really interesting is what we do about it. The team may get some improvement ideas simply by going through the questions. But you can get into situations where the team is really inspired to improve its visualization when it does not even limit WIP. So, to help the team evolve in the right direction, you could use the graph to bring balance to the team’s Kanban system by setting target conditions related to areas that have been left “behind”. Here are some examples from two teams:
Team that needs to get better at creating more feedback loops and making policies explicit.
Team that improved dramatically over 2 months by using the improvement kata to get better policies and flow. Now the focus must be on limiting WIP.


If I took the exact questions and used them against the most mature Kanban implementation I’ve been part of we’d be in red (mostly) and yellow, with the sole exception of WIP limits. Would that mean that the implementation was shallow? By no means. It’s just the model that is broken. (Comment by Pawel Brodzinski).
True! You will get confusing/useless results from this tool if you use it as an evaluation tool or a compliance tool, especially in contexts other than the one I designed it for (enterprise).
But that is not the intent and it is not how I use it. I use it as a coaching tool to trigger discussions with the team related to different aspects of Kanban. The coach moderates the discussions with the intent to understand the team’s current condition and what the team could focus on improving. So, depending on the team and its context, I have had situations where some questions were not relevant (e.g. work size is never an issue, no upstream or downstream partners, work items are never dependent) or plain wrong (e.g. we will never be able to do swarming). Other times, a team does not “do” certain practices as described but their purpose is fulfilled anyway (e.g. no daily meetings, but instead plenty of interaction with other team members during the whole day). In practice this can result in me giving a “high score” regardless of what the actual template (the questions) says. This means that the questions are simply guidelines and the coach is *key* to understanding the team and adapting to its context.
To summarize: it is not my intention to create an equivalent of the Scrum Nokia test. This tool is not an evaluation or compliance tool. It is a coaching tool to find the areas a team could focus its improvements on to get a balanced team.


Wow, cool! Now you can actually compare two teams to set the right salaries based on how much they score!
STOP! The goal here is not to compare teams with each other. It really is only a coaching tool for each team to understand the next smartest improvement to make towards a well-functioning Kanban system. Considering the wide variety of team compositions, contexts, challenges and technologies, I really think it is plain wrong (dangerous, irresponsible, insane?) to use the “depth of Kanban” to rank teams for management comparisons. The only valid usage is to compare a team with itself over a period of time. Did we get better? If not, why? If yes, why?
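If you do want to track something numerically, the only comparison I consider valid is the one above: the team against itself over time. A hypothetical sketch (with invented numbers) of what that could look like:

```python
def change_per_practice(before, after):
    """Per-axis change between two assessments of the *same* team."""
    return {practice: after[practice] - before[practice] for practice in before}

march = {"Visualize": 3, "Limit WIP": 1, "Manage flow": 2,
         "Explicit policies": 2, "Feedback loops": 1, "Improve": 1}
may   = {"Visualize": 4, "Limit WIP": 1, "Manage flow": 3,
         "Explicit policies": 3, "Feedback loops": 1, "Improve": 3}

print(change_per_practice(march, may))
# The numbers only prompt the real questions: did we get better? If yes, why? If not, why?
```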

 

I am confused here: there are both an “Improve” and an “Effects” axis on the graph; what’s the difference?
  • “Improve” really is about installing a habit of improvement within the team. That is to say, the team understands its current condition (based on visualization, measurements, etc.), knows where it is heading (the target condition) and also has a way/method to systematically get closer to the target by removing the impediments in the way. The Kanban method in itself provides a good base for understanding the current condition. The trick is to make the team use that to drive improvements. Note that the Kanban method provides a natural target condition: limit WIP. Though, this could be anything else, either very generic (better quality, speed, etc.) or very specific (e.g. using the “depth of kanban” to identify the next target condition).
  • “Effects” is only about what new behaviors the team has exhibited and sustained since starting to use the kanban system. They can be seen as side effects or the results/symptoms of the team using kanban. This means that it could be sufficient to measure the maturity of a (kanban) team by simply looking at/measuring the “Effects”. The rest of the assessment gives you input into what might be missing and therefore explains why the team hasn’t shown the desired overall effects.

 

A Good Coaching Tool

To conclude, I find the “Depth of Kanban” graph to be a really useful coaching tool. It has helped me re-ignite the desire to improve in teams that had reached a plateau. It has also helped me present Kanban in a different/provocative way to experienced teams (and get some “Aha!” reactions). Finally, it is a good tool for me to understand the maturity of each team and what to focus my coaching on.

As I said, I am very interested in getting feedback on that one. So, don’t hesitate!

Here are the slides:

9 responses on “Depth of Kanban – A Good Coaching Tool”

  1. Hi Christophe,
    Very interesting post… I would like to try it out. Is it based on an Excel file, and are you willing to share it?

    Alex

  2. I have a problem with such an approach. I mean if I took it and tried to apply it I'd end up with things like:

    – We don't have standups (because we don't need them). It doesn't matter that we are co-located, have no-culture, run retros and generally kick ass in terms of day-to-day communication. I guess it means that we suck with feedback loops and managing flow big time. A bummer.

    – We don't care about estimation and often have little control over how the work is broken down so it seems we won't shine with explicit policies either. Ouch.

    – But wait, classes of service were always something very natural for us and not only do we use them but we basically started with them on day one. Now, I'm confused.

    Should I continue?

    Such explicit order may only make sense in the context of a single organization, and even then not all the time. Bring the model to a different context and it doesn't make sense anymore.

    Except of course for having some sort of compliance metric, which for specific organizations may bear some value, although I'd question whether the value is real. I mean, how much does it differ from the Nokia test then?

    If I took the exact questions and used them against the most mature Kanban implementation I've been part of we'd be in red (mostly) and yellow, with the sole exception of WIP limits. Would that mean that the implementation was shallow? By no means. It's just the model that is broken.

  3. Hi Pawel, thanks for your comment!
    And you are absolutely right: you will get confusing/useless results from this tool if you use it as an _evaluation_ tool or a _compliance_ tool, especially in contexts other than the one I designed it for (enterprise).
    But that is not the intent and it is not how I use it. I use it as a *coaching* tool to trigger *discussions* with the team related to different aspects of Kanban. The coach *moderates* the discussions with the intent to understand the team’s current condition and what the team could focus on improving. So, depending on the team and its context, I have had situations where some questions were not relevant (e.g. work size is never an issue, no upstream or downstream partners, work items are never dependent) or plain wrong (e.g. we will never be able to do swarming). Other times, a team does not “do” certain practices as described but their purpose is fulfilled anyway (e.g. no daily meetings, but instead plenty of interaction with other team members during the whole day). In practice this can result in me giving a “high score” regardless of what the actual template (the questions) says. This means that the questions are simply guidelines and the coach is *key* to understanding the team and adapting to its context.
    To summarize: it is not my intention to create an equivalent of the Scrum Nokia test. This tool is not an evaluation or compliance tool. It is a coaching tool to find the areas a team could focus its improvements on to get a balanced team.
    Thanks again for your comment, as it helps me refine how to describe the tool in the best way! I will update the post to reflect this discussion.

  4. The questions that we have and their sequence need not necessarily be the same as given here. They should be adjusted to reflect the realities of your situation. But as a concept this is excellent. The more difficult part is going to be preventing this from becoming a comparison tool (however good the intentions are).

  5. Hi Hrishikesh, thanks for your comment. Right now I address the risk of it becoming a comparison tool by simply not advertising it to top management. It is a tool for the coach and the teams.
    My colleague Johan Nordin is experimenting with another coaching tool for managers. More on that later on!

  6. Hi Chris,
    Thank you for the great work. Could you please help me to differentiate between "Improve" and "Effects"? I'm a bit confused at this point. Thanks.
    Quang
