Tag Archives: planning

A Scrum Product Owner Checklist as a mind map


If you wonder what a Scrum Product Owner needs to do, here’s the checklist (in the form of a mind map) for you!

read more »

Facilitating the Elephant Carpaccio Exercise


One of the best exercises I know of for learning and practicing User Story slicing techniques is the so-called Elephant Carpaccio exercise. At Spotify it is something of a staple, as it is nowadays often used when introducing new employees.

The exercise is about creating a quoting application which includes different markets, tax and discounts. If you have not done this before, your initial slices will probably be pretty large. The aha moment is when you realize how SMALL you can actually make them. You can dry run this exercise by only creating and discussing the backlog. It’s also perfectly feasible to do it for real by programming the application; even Excel can be used for that.
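To get a feel for just how thin the slices can be, here is a minimal, purely hypothetical Python sketch (the exercise prescribes no particular language; Excel works just as well) of what the first two slices of the quoting application might look like: first a bare quantity-times-price quote, then one hard-coded tax rate for a single market.

```python
# Hypothetical first slices of the Elephant Carpaccio quoting application.

# Slice 1: a quote is just quantity * unit price. No tax, no discounts, no markets.
def quote_v1(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

# Slice 2: add a single hard-coded tax rate for one market (rate chosen for illustration only).
TAX_RATE = 0.25

def quote_v2(quantity: int, unit_price: float) -> float:
    subtotal = quantity * unit_price
    return subtotal * (1 + TAX_RATE)

if __name__ == "__main__":
    print(quote_v1(10, 5.0))  # 50.0
    print(quote_v2(10, 5.0))  # 62.5
```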

Henrik Kniberg has written an excellent guide on how to facilitate this exercise. Here are my slides, based on that guide, to make it a little bit easier to remember and run in a classroom.

Let the User Story Flow


One of my biggest surprises when I first met the squads I was going to work with at Spotify was that none of them were using User Stories. At first I observed to see what their alternative was. Unfortunately there was none. Instead, most of the work got done as big chunks of work (what I would tend to call Epics) that were sliced into to-do lists of tasks (named that way by the developers) and also divided according to platform.

Squad focus on technical tasks

A typical board contained one or more business cases, with lanes for each developer/platform containing the tasks that were executed upon. These big “busses” sat on the board blocking other work for weeks, which of course meant there needed to be one or more emergency lanes for all expedite work (in the long run, most of the work).

This is a setup that does not foster collaboration, focus on value, or the art of the possible. From an agile fluency point of view I would say it is a way of working that does not even reach fluency level 1 (Christian and I will describe agile fluency in more depth in a follow-up blog post). In my experience, focusing on User Stories is a great way of fostering the above values and reaching fluency level 1.

read more »

Time vs Story Points Estimation


One of the most common questions we get is whether to estimate in time or in points. It may seem like points are used only “to avoid thinking about time” and that they are essentially the same thing. Wrong.

Let us use a travel metaphor to give you an idea of how we think about this.

read more »

Crisp for breakfast?


What’s this? An alien invasion? A publicity stunt? A way of sneaking Crisp into your cereals?

You make the call. We Go Lean at breakfast, lunch and dinner! 🙂

This picture was forwarded by Troy Magennis, a Monte Carlo pioneer.

PS: Troy is coming to Stockholm to show us how to use Monte Carlo simulation to forecast software delivery. A small revolution compared to how we do it today. If you are a program manager, manager, project manager, or someone who needs to answer the question “when can I get my stuff?”, this is the course to join.
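For those who have not seen Monte Carlo forecasting before, here is a minimal sketch of the general idea (my own illustration with made-up numbers, not Troy’s course material): re-sample weekly throughput from your own history many times and look at the distribution of completion times.

```python
import random

# Made-up historical throughput: backlog items finished per week.
historical_throughput = [3, 5, 2, 4, 6, 3, 4]
backlog_size = 40      # items left to deliver (assumed)
simulations = 10_000

def weeks_to_finish(backlog: int) -> int:
    """Simulate one possible future by re-sampling past weekly throughput."""
    weeks, remaining = 0, backlog
    while remaining > 0:
        remaining -= random.choice(historical_throughput)
        weeks += 1
    return weeks

results = sorted(weeks_to_finish(backlog_size) for _ in range(simulations))
print("50% of simulations finish within", results[int(0.50 * simulations)], "weeks")
print("85% of simulations finish within", results[int(0.85 * simulations)], "weeks")
```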

Addressing critical in-deliveries from subcontractors


In software, one of our favorite tools for dealing with uncertainty is iterations. But are they always the best option?

This last week I got the question twice: how do you address critical in-deliveries from subcontractors? For example: hardware, preparation of land, machinery, buildings or third-party platform updates. How can these be addressed? Do iterations hold the answer? Are there better options?

Let me introduce lean flow thinking and show how it can be used to improve the outcome of critical third-party in-deliveries in your projects.

Project with in-delivery

read more »

Doing 45-minute sprint planning


  1. Hand out the epics (we typically have 2-3 per week)
  2. Assign one senior and one junior for each epic
  3. Form a group per epic by letting the rest of the team choose based on interest, but allow no more than four members per group
  4. Let each epic team do breakdown and size estimation
  5. Swap epics between the groups and review (we typically let one person stay and explain)

45min sprint planning

This has worked out really well. The energy and intensity of the discussions were raised significantly. So far we have managed to keep our 45-minute time window almost every time. A bonus is that the review almost every time results in a changed task or a re-estimation, thereby showing the value of a second opinion from an outside angle.

Multi-team sprint planning


Here are the slides from my session "Multi-team sprint planning" from Scrum Gathering 2008 in Stockholm.

Here is all the other material from the Scrum Gathering. Interesting stuff!

Planning ahead in Scrum


In the Toyota Production System (TPS), action takes place through the Plan-Do-Check-Act cycle.

Plan-Do-Check-Act

If we relate this to Scrum we see a lot of emphasis on the Do, Check and Act parts, but not as much on the Plan part. Planning is usually explained as an activity that takes place on the sprint planning day, where both the team and the product owner participate.

However, how do we deal with situations like:

  • stories that require a multi team effort
  • interfaces to third party systems

…and given a definition of done of “running in production”, are we answering questions like:

  • are we implementing the right solution to our problem?
  • is our delivery useful to our recipients?
  • have we got access to necessary resources?
  • can we test? can we deploy?

Basically, we need to continuously run a thought process during the sprint before we do the implementation. Typically, a development team spends around 20% of its time planning for the implementation (Mary Poppendieck).

Thinking ahead
The deliverable of the thought process is “the story has business value and is estimatable”.
So how do we go about it?

Option 1: Use the Scrum master as an analyst
Appoint the Scrum master as the person responsible for forward thinking.

Pros:

  • The SM is already a contact point
  • He does not suffer as much from interruptions as other team members

Cons:

  • Not always the right technical person for solutions
  • The team can feel overrun

Option 2: Use an external analyst
Get a dedicated analyst into the team who always looks forward.

Pros:

  • Relieves the team from communication stress

Cons:

  • Adds an extra handover step (waste)
  • Needs to be a superb communicator with the team
  • Analysts tend to lean towards a business focus
  • Communication issues remain hidden

Option 3: Assign a "look ahead" story for each sprint
Insert a story which makes the next sprint’s stories estimatable.
The team can pick tasks from this story in the same way as from normal stories.

Pros:

  • The team can choose among the stories in the normal Scrum way
  • Communication issues are surfaced
  • Bonding with outside parties is preserved in the team

Cons:

  • The product owner needs to have a forward vision
  • Team members might need training in communication skills
  • Implementation might suffer if extensive travel is required

My personal preference is option #3, but I have also seen option #1 working in Scrum teams. The bottom line is that you need to think this through, or you might end up building software that does not deliver the intended business value.

How to catch up on test automation


(This entry is now available as an article on the Scrum Alliance site as well.)

The test automation problem

Many companies with existing legacy code bases bump into a huge impediment when they want to get agile: lack of test automation.

Without test automation it is very hard to make changes in the system, because things break without anybody noticing. When the new release goes live, the defects are discovered by the real users, causing embarrassment and expensive hotfixing. Or even worse, a chain of hotfixes because each hotfix introduces new unanticipated defects.

This makes the team terribly afraid to change code, and therefore reluctant to improve the design, which leads to a downward spiral of worse and worse code as the system grows.

What to do about it

Your main options in this case are:

  1. Ignore the problem. Let the system decline into entropy death, and hope that nobody needs it by then.
  2. Rebuild the system from scratch using test-driven development (TDD) to ensure good test coverage.
  3. Start a separate test automation project where a dedicated team improves the test coverage for the system until it is adequate.
  4. Let the team improve test coverage a little bit each sprint.

Guess which approach usually works best? Yep, the last one – improve test coverage a little bit each sprint. At least in my experience.

The third option may sound tempting, but it is risky. Who’s going to do the test automation? A separate team? If so, does that mean the other developers don’t need to learn how to automate tests? That’s a problem. Or is the whole team doing the test automation project? In that case their velocity (from a business perspective) is 0 until they are done. So when are they done? When does test automation “end”?

No, let’s get back to the fourth option. Improve test coverage a little bit each sprint. So, how to do that in practice?

How to improve test coverage a little bit each sprint

Here’s an approach that I like. In summary:

  1. List your test cases
  2. Classify each test by risk, how expensive it is to do manually, and how expensive it is to automate
  3. Sort the list in priority order
  4. Automate a few tests each sprint, starting from the highest priority.

Step 1: List your test cases

Think about how you test your system today. Brainstorm a list of your most important test cases. The ones that you already execute manually today, or wish you had time to execute. Here’s an example from a hypothetical online banking system:

Change skin
Security alert
See transaction history
Block account
Add new user
Sort query results
Deposit cash
Validate transfer

Step 2: Classify each test

First classify your test cases by risk. Look at your list of tests. Ignore the cost of manual testing for the moment. Now what if you could throw away half of the tests, and never execute them? Which tests would you keep? This factor is a combination of the probability of failure and cost of failure.

Highlight the risky tests, the ones that keep you awake at night.

Test case                  Risk
Change skin
Security alert             x
See transaction history
Block account              x
Add new user
Sort query results
Deposit cash               x
Validate transfer          x

Now think about how long each test takes to execute manually. Which half of the tests take the longest? Highlight those.

Test case                  Risk   Manual test cost
Change skin
Security alert             x
See transaction history           x
Block account              x      x
Add new user
Sort query results                x
Deposit cash               x
Validate transfer          x      x

Finally, think about how much work it is to write an automation script for each test. Highlight the most expensive half.

Test case                  Risk   Manual test cost   Automation cost
Change skin                                          x
Security alert             x                         x
See transaction history           x
Block account              x      x
Add new user
Sort query results                x                  x
Deposit cash               x
Validate transfer          x      x                  x

Step 3: Sort the list in priority order

So, which test do you think we should automate first? Should we automate “Change skin” which is low-risk, easy to test manually, and difficult to automate? Or should we automate “Block account” which is high risk, difficult to test manually, and easy to automate? That’s a fairly easy decision.

But here’s a more difficult decision. Should we automate “Validate transfer” which is high-risk, hard to test manually, and hard to automate? Or should we automate “Deposit cash” which also is high-risk, but easy to test manually and easy to automate? That decision is context dependent.

You basically need to make three decisions:

  • Which do you automate first? The high risk test that is easy to test manually, or the low risk test that is difficult to test manually?
  • Which do you automate first? The test that is easy to do manually and easy to automate, or the test that is hard to do manually and hard to automate?
  • Which do you automate first? The high risk test that is hard to automate, or the low risk test that is easy to automate?

Those decisions will give you a prioritization of your categories, which in turn lets you sort your list of test cases by priority. In my example I decided to prioritize manual cost first, then risk, then automation cost.

Test case                  Risk   Manual test cost   Automation cost
Block account              x      x
Validate transfer          x      x                  x
See transaction history           x
Sort query results                x                  x
Deposit cash               x
Security alert             x                         x
Add new user
Change skin                                          x

So that’s it! A prioritized backlog of test automation stories.

You could of course also invent some kind of calculation algorithm. A simple such algorithm is that each highlighted cell = 1 point. Then you just add up each row and sort. Or just sort the list manually using gut feel.
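As a purely illustrative sketch, here is that one-point-per-highlighted-cell scoring written out in Python, using the x-marks from the tables above. Note that it only gives a rough ordering; the hand-sorted list above also weighted the categories against each other.

```python
# One point per highlighted cell (risk, manual test cost, automation cost),
# highest total first. Marks taken from the example tables above.
tests = {
    "Change skin":             {"risk": 0, "manual": 0, "automation": 1},
    "Security alert":          {"risk": 1, "manual": 0, "automation": 1},
    "See transaction history": {"risk": 0, "manual": 1, "automation": 0},
    "Block account":           {"risk": 1, "manual": 1, "automation": 0},
    "Add new user":            {"risk": 0, "manual": 0, "automation": 0},
    "Sort query results":      {"risk": 0, "manual": 1, "automation": 1},
    "Deposit cash":            {"risk": 1, "manual": 0, "automation": 0},
    "Validate transfer":       {"risk": 1, "manual": 1, "automation": 1},
}

for name, marks in sorted(tests.items(), key=lambda kv: sum(kv[1].values()), reverse=True):
    print(sum(marks.values()), name)
```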

You could also use a more specific unit for each category, if my simple binary scheme isn’t sufficient.

Test case                  Risk     Manual test cost   Automation cost
                                    (man-hours)        (story points)
Block account              high     5 hrs              0.5 sp
Validate transfer          high     3 hrs              5 sp
See transaction history    medium   3 hrs              1 sp
Sort query results         medium   2 hrs              8 sp
Deposit cash               high     1.5 hr             1 sp
Security alert             high     1 hr               13 sp
Add new user               low      0.5 hr             3 sp
Change skin                low      0.5 hr             20 sp

Remember though that our goal for the moment is just to prioritize the list. If you can do that with a simple and crude categorization scheme then there’s no need to complicate things right? Analysis is useful but over-analysis is just a waste of time.

Step 4: Automate a few tests each sprint

Regardless of the stuff above, each new product backlog story should include test automation at the story level. That’s the XP practice known as “customer acceptance tests”. Not doing that is what got your system into this mess in the first place.

But in addition to implementing new stories, we want to spend some time automating old test cases for other previously existing stories. So how much time do we spend? The team needs to negotiate that with the product owner. The agreement will typically take on one of the following forms:

  • “Each sprint we will implement one test automation story”
  • “Each sprint we will implement up to 10 story points of test automation stories”
  • “Each sprint we will spend about 10% of our time implementing test automation stories”
  • “Each sprint we will finish the product backlog stories first, and then spend the remainder of the time (if any) implementing test automation stories”
  • “The product owner will merge the test automation stories into the overall product backlog, and the team will treat them just like any other story.”

The exact form of the agreement doesn’t matter. You can change it every sprint if you like. The important thing is that the test automation debt is being gradually repaid, step by step.

After finishing half the stories on your test automation backlog you might decide that “hey, we’ve paid back enough debt now! Let’s just skip the rest of the old test cases, they’re not worth automating anyway”, and dump the rest. Congratulations!

So this solves the problem?

Wishful thinking. No, this pattern doesn’t magically solve your test automation problem. But it makes the problem easier to approach :o)

Index card generator – version 2!


Many people use a spreadsheet to house their Scrum Product Backlog. That works just fine. However, during sprint planning meetings it is usually much more effective to use physical index cards. See my book Scrum and XP from the Trenches for the reasoning behind this.

Here’s a simple tool that generates printable index cards in A5 format directly from your Excel-based product backlog. Thanks Stefan Nijenhuis for making this available!

There is nothing to install. This is simply an Excel document containing a product backlog and a “generate index cards” button.


For more info, see the readme inside the document.

This version requires only Microsoft Excel. The previous version required both Excel and Access.

Update (2008-01-04)

Here’s another Excel-based index card generator, from Claudio Gambetti. It contains a few extra features such as tracks (a.k.a. themes) and components, as mentioned in my book. You choose your flavor. Thanks for contributing this, Claudio!

Update (2011-06-23)

Here’s another version from Nathalie Beauguerlange. He says:

“I have made a little modification on it, because the cards were a bit too large for our scrum board, so I’ve resized the template and changed the cards number per page, allowing me to print 4 cards per A4 format page, so that we can use less paper.”

Update (2011-06-28)

Here’s a Google Docs version of this tool (and instruction video), for those who use Google Spreadsheets to house their backlog. I usually prefer Google Spreadsheets over Excel, since it is multiuser and in the cloud. And requires no installation. And no payment :o)

Thanks David Vujic for making this available.

Planning Poker


I’ve written up a page with a pretty graphical summary of what Planning Poker is.

http://www.crisp.se/planningpoker/

Planning poker


Prediction Markets and Scrum Sprint Planning


I have been vaguely familiar with the concept of Prediction Markets, where you aggregate many individuals’ predictions about stock prices, about which technologies will succeed or fail, and about other hard questions where a single individual’s knowledge is often not sufficient, or not something you generally trust. Who trusts individual stock analysts, for example?

I recently came across an article on the subject, summarizing an event in Silicon Valley about exactly this topic, organized by Yahoo. It turns out quite a lot is going on; among other things there is an open source framework written in Java called Zocalo. A guy from Microsoft told how they had used the technique to forecast test planning, and it turned out that this way they had predicted that the plan would not hold its time frame. And then it struck me that you could use anonymous forms in this way to ask the members of a Scrum team whether they believe in the time frames for a sprint. Maybe you don’t believe in the schedule, but don’t want to say anything about it so as not to appear negative or slow. If people get to answer this anonymously, maybe the collective answer will be better than what comes out during the meeting where the schedule is created?