Some software teams struggle in the doldrums. Output is low, and so is the quality of the produced software; releases are erratic and infrequent, and users or customers are unhappy with, or apathetic to, the developers’ efforts. Managers responsible for hiring and sourcing have a few things to ponder:
- Does the team need an agile coach to help it with “agile practices”?
- Does the team need to learn “test-driven development” to solve its quality issues?
- Does the team need a DevOps person to “introduce CI/CD” or help the team “move to Kubernetes”?
- Does the team need a test automation engineer to help it “automate testing” so that its release cadence becomes more predictable?
The good news is that there is a simple way out of the above situation: a way that doesn’t require deciding what kind of expert the team needs to be augmented with, and a way that is organic – A Silver Bullet.
* Drum roll *
Deliver software to production every sprint. Whether sprints are two or three weeks long (preferably two), deliver every sprint. I think there is a word for it… hm… Scrum! That’s it!
Intricacies and technicalities of Scrum aside, committing to delivering to production every sprint, regardless of circumstances and possible explanations (aka excuses), totally recenters the team. By what mechanics? Here’s how.
Teams that commit to a strict delivery cadence will suddenly be able to use their retrospectives in a very direct manner. From one day to the next, the retro can be made very simple by focusing on pretty much a single question: What’s the number one pain point that prevents us from delivering every sprint?
Obviously the pain point will vary depending on the industry, technology stack, team maturity, and the alignment of Saturn in relation to Jupiter’s third moon, but generally speaking it will be one of these:
- Deployment is scary because it’s manual, which makes it time-consuming and error-prone
- Code cannot be changed within a short time frame because it’s an untestable, undocumented mess
- Related to the above: quality cannot be ensured, because the team’s manual testing cannot uncover unforeseen and surprising regressions
- The team doesn’t get anything done because it spends its time in unproductive meetings
- There’s no clear list of priorities and the team works on “everything at the same time.”
- At the end of the day, the team doesn’t know exactly what to deliver. User requests aren’t captured properly.
- There’s conflict in the team and it prevents work from getting done
Regarding the last one: it’s a topic of its own, but my short take on it in this context is that teams that deliver spend their time doing exactly that, not fighting interpersonal wars.
Now, suppose the conclusion of the retro is that “we can’t deliver 3-4 stories every third week because Derek (the team’s ninja) deploys the system manually.” A reasonable improvement would be to either write down all the steps Derek performs and have everyone on the team learn them, or to automate parts of the process with a simple script… or have a peek at what a Jenkins pipeline does. A not so reasonable and not so helpful action would be: “implement continuous deployment to the Kubernetes cluster in the cloud.” Simply put: baby steps, realistic steps, steps taken from where the team is, not where it wants to be in an ideal future.
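To make that first baby step concrete, here is a minimal sketch of what “automate parts of it with a simple script” could look like. The host, artifact path, and commands are hypothetical stand-ins for whatever Derek actually does; the point is just to get his steps out of his head and into something the whole team can read, review, and run – defaulting to a dry run so it’s safe to try:

```python
# Sketch: Derek's manual deployment steps captured as a list of commands.
# Host, paths, and service name are made-up placeholders.
import subprocess

STEPS = [
    ["scp", "build/app.tar.gz", "deploy@app.example.com:/opt/app/releases/"],
    ["ssh", "deploy@app.example.com",
     "tar -xzf /opt/app/releases/app.tar.gz -C /opt/app/current"],
    ["ssh", "deploy@app.example.com", "sudo systemctl restart app"],
]

def deploy(dry_run=True, runner=subprocess.check_call):
    """Run each deployment step; with dry_run=True, only print what would run."""
    executed = []
    for step in STEPS:
        if dry_run:
            print("WOULD RUN:", " ".join(step))
        else:
            runner(step)  # executes the real command
        executed.append(step)
    return executed

if __name__ == "__main__":
    deploy(dry_run=True)
```

Even this much is an improvement: the steps are documented, they run in a fixed order, and nobody needs Derek in the room to read them.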
The strength of this approach is that the team is unlikely to get stuck on Derek’s deployment routines again. At the next retro, deployment, while far from perfect, may no longer be the biggest pain point; instead, lack of confidence and fear of regressions in critical functionality surfaces. Again, the solution isn’t “test automation”; the solution is more likely something that executes the critical scenarios of the application. WebDriver might be an option, or a JavaScript unit testing framework, or something as simple as wget. Take it from there.
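In the wget spirit, “executing the critical scenarios” can start as a smoke test that simply fetches the application’s most important pages after each deployment. A minimal sketch, assuming hypothetical page paths and plain HTTP; the `fetch` parameter exists so the check logic can be exercised without a live server:

```python
# Sketch: post-deployment smoke test for a team's critical pages.
# The page list is a made-up placeholder for this team's critical scenarios.
import urllib.request

CRITICAL_PAGES = ["/login", "/orders", "/checkout"]

def smoke_test(base_url, fetch=None):
    """Return the list of critical pages that failed; empty means all good."""
    if fetch is None:
        def fetch(url):
            # Real check: fetch the page and report its HTTP status.
            return urllib.request.urlopen(url, timeout=10).status
    failures = []
    for page in CRITICAL_PAGES:
        try:
            status = fetch(base_url + page)
        except Exception:
            status = None  # connection refused, timeout, etc.
        if status != 200:
            failures.append(page)
    return failures
```

A script like this, run after every deployment, is crude – but it catches the “the login page is down and nobody noticed” class of regression, which is exactly the kind of fear that blocks a delivery cadence.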
If code quality is an issue, how about establishing coding conventions, running a linter, or something that detects dead code? Again, take a small step towards a better state, evaluate, and attack the next pain point.
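To show that even “something that detects dead code” is an approachable first step, here is a deliberately naive sketch: it flags functions that are defined in a module but never referenced anywhere else in it. Real tools (flake8, vulture, ESLint, and friends) do this far better; the point is that the team can start small:

```python
# Sketch: a naive dead-code check. Flags functions that are defined
# but never referenced elsewhere in the same module's source.
import ast

def unused_functions(source):
    """Return names of functions defined in `source` but never referenced."""
    tree = ast.parse(source)
    defined = {node.name for node in ast.walk(tree)
               if isinstance(node, ast.FunctionDef)}
    # Collect every plain-name and attribute reference in the module.
    used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    used |= {node.attr for node in ast.walk(tree)
             if isinstance(node, ast.Attribute)}
    return sorted(defined - used)
```

Wiring something like this (or, more realistically, an off-the-shelf linter) into the build is a one-sprint improvement, which is exactly the right size.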
A similar approach can be taken in the areas of user stories, architecture, or any other components of software delivery. As long as the steps are small. Whether the improvement can be implemented within a single sprint is a good sanity check.
After a quarter or six months of work like this, the major pain points will no longer be that major. And as time flies, who says that only a single pain point should be addressed per retro? Why not two? Also, if an extra push is needed to make sure that people outside the team expect a strict delivery cadence, invite them to the sprint review’s demo. The more, the merrier. This will help the team focus.
To wrap things up: do proper vanilla Scrum and don’t be ashamed of it. Frame the retrospectives to address obstacles to frequent delivery. Do the soul-searching and team-dynamics retros later.