The top 3 biggest forecasting and planning errors

In my consulting and training engagements I see the impact of missed planned delivery dates. It’s never because people aren’t trying or working hard enough. This post gives you my top 3 real reasons traditional Agile plans, and the dates they produce, fail.

Number One Reason: The Assumed Start Date is Missed

Sounds obvious, right? To give an estimated delivery date you add the estimated duration to a starting date. Yet rarely do I see anyone track or adjust for the eventual start date of any initiative. Often the definition of “started” isn’t even clear.

To be really started, the following conditions NEED to hold:

  1. Start dates need to be triggered by the delivery of prior work. Be wary of start dates at nice, even calendar boundaries – often the first day of a quarter, month, or sprint is chosen. This, of course, ignores that the team may not neatly complete its prior commitments by the day before that date.
  2. The team is present and dedicated to the work of this initiative – often the prior project or production issues absorb key staff members. An easy warning sign is how many “experts” you have on the team. Experts are more often than not called upon to help with issues and train other teams.
  3. Team dependencies also need to “start” on this date. One available team in the delivery chain isn’t good enough; often work is immediately blocked on a dependency. All dependencies need to be ready to accept and complete work.

Key point: Get better at forecasting START date
and worry less about forecasting the END date

A technique I use in training and workshopping planning sessions is to brainstorm reasons that will inhibit starting, or inhibit making progress once started. Once you have a good list of these, set about understanding ways to eliminate or reduce their impact. Facilitate a retrospective on a prior initiative and it often becomes clear that the assumed start date was many months before the actual start date, meaning initiatives are in trouble before work begins.

The number one inhibitor for starting on-time is missing or inadequate staff skills for the new work. The root cause of this is often not managing the skill and training pipeline to match upcoming work. Teams are well skilled for the “prior” work but not ready for the “new” work.

Number Two Reason: Team to Team and External Dependencies

The larger the organization delivering the product, the more teams that need to collaborate to deliver an initiative. Estimating and forecasting the time for any single team in isolation is fairly “easy.” Anticipating how work flows onto other teams is hugely more complex. 

Every step of building something (each dependency) can be on-time or delayed. If each step is equally likely to be on-time or delayed, the chance of every dependency being on-time is 1 chance in 2^n, where n = number of dependencies. Commonly I see dependency chains of 7 steps (7 different teams or approval steps) between the first design or dev team and production. This means that the chance of delivering on-time is 1 chance in 2^7 = 128! It is 127 times more likely to be delayed by one or more teams.
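The arithmetic is easy to sketch yourself. A minimal example, assuming (as above) that each dependency independently has a 50/50 chance of landing on-time:

```python
def on_time_probability(n_dependencies: int, p_step: float = 0.5) -> float:
    """Chance that every one of n independent dependencies lands on-time."""
    return p_step ** n_dependencies

# Longer dependency chains shrink the on-time odds exponentially.
for n in (1, 3, 5, 7):
    odds = round(1 / on_time_probability(n))
    print(f"{n} dependencies: 1 chance in {odds} of delivering on-time")
# The 7-step chain from the text comes out as 1 chance in 128.
```

Raising `p_step` above 0.5 helps, but the multiplication still punishes long chains: even at 90% per step, 7 steps give only about a 48% chance of an on-time delivery.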

To help visualize and teach this, have a play with the online calculator, which explains the math and lets you enter your own dependency numbers.

I see dependencies being poorly managed (or not managed at all) in almost all cases. If we could get slightly better at just one thing, it should be how we anticipate and manage dependencies. The solution isn’t to eliminate all dependencies by skilling up every team with every skillset; that would be prohibitively expensive. The solution is to minimize dependencies where it economically makes sense, and to better anticipate and plan the hand-offs where it doesn’t.

Key point: Get better at anticipating and planning the hand-offs between teams

Often the biggest gap causing dependency delays is that the priority of work in each team’s queue isn’t universally agreed. The most “important” work at an organizational level sits queued behind something locally assumed “more important.” Making priority obvious based on the organizational cost of delay, and encouraging teams to pull work in that order, will dramatically reduce the waiting time for items that have the highest cost of delay at an organizational level.
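One common way to make that priority obvious is to divide each item’s cost of delay by its duration (often called CD3) and have every team pull the highest score first. A sketch with hypothetical items and dollar figures:

```python
# Hypothetical backlog items: names and numbers are illustrative only.
items = [
    {"name": "Feature A", "cost_of_delay_per_week": 50_000, "weeks": 4},
    {"name": "Feature B", "cost_of_delay_per_week": 20_000, "weeks": 1},
    {"name": "Compliance fix", "cost_of_delay_per_week": 80_000, "weeks": 8},
]

# CD3 = cost of delay divided by duration: short, expensive-to-delay
# work jumps the queue ahead of long work with a bigger headline cost.
for item in items:
    item["cd3"] = item["cost_of_delay_per_week"] / item["weeks"]

queue = sorted(items, key=lambda i: i["cd3"], reverse=True)
for item in queue:
    print(f'{item["name"]}: CD3 = {item["cd3"]:,.0f} per week of duration')
```

Note that the highest raw cost of delay (the compliance fix) ends up last here because it also takes the longest; that counterintuitive ordering is exactly the economic discussion worth having.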

The other obvious way to improve your chances is to hold a meeting between teams specifically to co-ordinate work hand-offs.

Number Three Reason:  High Utilization

It seems obvious that the more utilized a process is, the longer new items might wait before being processed. We intuitively know not to run our production servers at 100% and expect continued good response times, yet we knowingly plan our teams’ portfolios to operate at or above 100% (overtime) and expect no impact.

This over-planning, given the uncertainty of the work we deliver, contributes hugely to wait times that are impossible to estimate in advance. At a certain point of utilization, our delivery process flow collapses, causing delay times to grow exponentially. So we work harder and push more, causing the process to falter even further. I see evidence of this collapse in almost every significant process I observe.

Key point: Excessive utilization is hugely expensive. The cost of adding resources is MUCH LESS than forcing work into a constrained team.

As a thought experiment, consider which type of traffic in the picture below is better able to predict its travel time: the bicycle or the motorized traffic? Even though it travels slower, the only factors impacting the bicycle’s travel time are its speed and the distance (effort and size in our world). The cars and motorcycles are in such a congested system that it is impossible for them to predict in advance how long their trip will take. In congested systems (like the ones we manage), it stops being about the work itself (story points and size) and starts being about the system delays.

To help visualize and teach (especially managers and executives) the wisdom of avoiding high states of utilization for uncertain work, I give them this online calculator to play with using their own system process specifics.

This graph shows queued time (time spent waiting for someone or something to do the work) on the y-axis against percentage utilization on the x-axis. Although it will vary depending on the variability of your work (how innovative and new to the team it is), a huge escalation in waiting time happens around 80% utilization. It isn’t possible to forecast accurately in any system spending significant time in this region, where a very small change in workload can cause a 10x increase in queue time.
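The calculator itself isn’t reproduced here, but the shape of that curve can be sketched with the classic single-server (M/M/1) waiting-time formula. This is a simplifying assumption; highly variable knowledge work is usually worse than this idealized model predicts:

```python
def queue_time(utilization: float) -> float:
    """Average waiting time, in multiples of the average work time,
    for an idealized M/M/1 queue: Wq = u / (1 - u)."""
    return utilization / (1.0 - utilization)

# Waiting time stays tame at moderate load, then explodes near 100%.
for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{u:.0%} utilized -> waiting {queue_time(u):5.1f}x the work itself")
```

At 50% utilization an item waits about as long as it takes to do; at 95% it waits roughly 19 times longer. That nonlinearity is why a “small” extra commitment can wreck every date in the plan.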

And if “queue time” doesn’t scare them, computing delay cost into a significant dollar amount using this calculator often starts the right economic discussion.
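A back-of-the-envelope version of that dollar conversion, using the same simple queueing sketch as above and entirely hypothetical numbers:

```python
# Hypothetical inputs: substitute your own system's figures.
cost_of_delay_per_week = 30_000   # assumed organizational cost of delay ($/week)
avg_work_weeks = 2.0              # average hands-on time per item
utilization = 0.90                # how loaded the team is

# Waiting time from the idealized M/M/1 sketch: work * u / (1 - u)
wait_weeks = avg_work_weeks * utilization / (1.0 - utilization)
delay_cost = wait_weeks * cost_of_delay_per_week
print(f"~{wait_weeks:.0f} weeks queued, roughly ${delay_cost:,.0f} of delay cost")
```

Even with these modest made-up figures, an item worth $30,000 a week sitting 18 weeks in queues costs over half a million dollars, which tends to start the right economic discussion.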

Ignorance of the cost of queues is a major contributor to not solving the overcommitment problem that is almost universal in the software development industry. We have to get better at communicating the economics so it becomes obvious that the more uncertain the work, the more capacity headroom we need to reserve to avoid these hefty economic impacts. It’s a huge problem, and also a key contributor to the prior dependency impact: higher queue time due to utilization means more chance of delays for dependent teams.


The top three errors in forecasting and planning have NOTHING to do with the size or effort of the work itself. These three system-level factors cause more error than any estimation accuracy we might ever achieve. I encourage organizations to solve these three issues before having teams argue whether a story is 3 points or 5 points!

When we begin to forecast using data, as taught in the upcoming workshop in Stockholm, we practice observing and solving these errors. We practice taking a systems view of work planning rather than an item-by-item view. This means we get achievable plans and happier customers.

Want to learn more? Attend my workshop on November 6th in Stockholm! You can also find me on Twitter: @t_magennis

2 responses on “The top 3 biggest forecasting and planning errors”

  1. In my world the fundamental problem is that we just don’t know how long things will take. We have those other problems also, of course. But in software development everything is new and unknown. Even if we’re highly confident that we know what’s going to happen, we’re usually wrong.

    I did a Twitter thread about a similar challenge – predicting who’s going to win the World Series (USA major league baseball). We have a huge amount of data about the teams and players, yet we still can’t predict who’s going to be *in* the series at this point (four weeks before it starts) much less who’s going to win.

  2. Great point, Nils.

    We are in a very uncertain world when estimating or forecasting how long software ideas might take, but we are still expected to give an answer.

    I turn the forecasting paradigm around a little. If we don’t have an expectation of how long something might take, how can we know we are off target? I like to make a rough forecast, compare to an intermediate reality, see why my forecasting model went wrong, fix it a little and then compare again in a short while. I feel like I don’t know where I am unless I have even a personal expectation of where I thought I should be.

    Perhaps that’s how we need to approach our forecasts: the goal is “to know earlier that we are off-target” so that we can adapt expectations before they grow into immovable politics.

    I loved your thread on baseball. A field with lots of data, and we still fall victim to uncertainty, I think for similar reasons. A team is a system, and trying to forecast using individual metrics won’t get you good forecasting results!

    And – good and bad luck just happens!

