How to catch up on test automation

(this entry is now available as an article on the Scrum Alliance site as well)

The test automation problem

Many companies with existing legacy code bases bump into a huge impediment when they want to get agile: lack of test automation.

Without test automation it is very hard to make changes in the system, because things break without anybody noticing. When the new release goes live, the defects are discovered by the real users, causing embarrassment and expensive hotfixing. Or even worse, a chain of hotfixes because each hotfix introduces new unanticipated defects.

This makes the team terribly afraid to change code, and therefore reluctant to improve the design of the code, which leads to a downward spiral of worse and worse code as the system grows.

What to do about it

Your main options in this case are:

  1. Ignore the problem. Let the system decline into entropy death, and hope that nobody needs it by then.
  2. Rebuild the system from scratch using test-driven development (TDD) to ensure good test coverage.
  3. Start a separate test automation project where a dedicated team improves the test coverage for the system until it is adequate.
  4. Let the team improve test coverage a little bit each sprint.

Guess which approach usually works best? Yep, the last one – improve test coverage a little bit each sprint. At least in my experience.

The third option may sound tempting, but it is risky. Who’s going to do the test automation? A separate team? If so, does that mean the other developers don’t need to learn how to automate tests? That’s a problem. Or is the whole team doing the test automation project? In that case their velocity (from a business perspective) is 0 until they are done. So when are they done? When does test automation “end”?

No, let’s get back to the fourth option. Improve test coverage a little bit each sprint. So, how to do that in practice?

How to improve test coverage a little bit each sprint

Here’s an approach that I like. In summary:

  1. List your test cases
  2. Classify each test by risk, how expensive it is to do manually, and how expensive it is to automate
  3. Sort the list in priority order
  4. Automate a few tests each sprint, starting from the highest priority.

Step 1: List your test cases

Think about how you test your system today. Brainstorm a list of your most important test cases. The ones that you already execute manually today, or wish you had time to execute. Here’s an example from a hypothetical online banking system:

Change skin
Security alert
See transaction history
Block account
Add new user
Sort query results
Deposit cash
Validate transfer

Step 2: Classify each test

First classify your test cases by risk. Look at your list of tests and ignore the cost of manual testing for the moment. Now, what if you could throw away half of the tests and never execute them? Which tests would you keep? Risk here is a combination of the probability of failure and the cost of failure.

Highlight the risky tests, the ones that keep you awake at night.

Test case                 Risk
Change skin
Security alert            x
See transaction history
Block account             x
Add new user
Sort query results
Deposit cash              x
Validate transfer         x

Now think about how long each test takes to execute manually. Which half of the tests take the longest? Highlight those.

Test case                 Risk   Manual test cost
Change skin
Security alert            x
See transaction history          x
Block account             x      x
Add new user
Sort query results               x
Deposit cash              x
Validate transfer         x      x

Finally, think about how much work it is to write an automation script for each test. Highlight the most expensive half.

Test case                 Risk   Manual test cost   Automation cost
Change skin                                         x
Security alert            x                         x
See transaction history          x
Block account             x      x
Add new user
Sort query results               x                  x
Deposit cash              x
Validate transfer         x      x                  x
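
A spreadsheet or whiteboard is fine for this, but if you prefer to keep the classification in a simple script, here is a minimal Python sketch of the table above. The field names are just my illustration, not part of the method.

    # Each test case from the example, with the three binary classifications.
    # True means the test is in the "highlighted" (risky or expensive) half.
    test_cases = [
        {"name": "Change skin",             "risk": False, "manual_cost": False, "automation_cost": True},
        {"name": "Security alert",          "risk": True,  "manual_cost": False, "automation_cost": True},
        {"name": "See transaction history", "risk": False, "manual_cost": True,  "automation_cost": False},
        {"name": "Block account",           "risk": True,  "manual_cost": True,  "automation_cost": False},
        {"name": "Add new user",            "risk": False, "manual_cost": False, "automation_cost": False},
        {"name": "Sort query results",      "risk": False, "manual_cost": True,  "automation_cost": True},
        {"name": "Deposit cash",            "risk": True,  "manual_cost": False, "automation_cost": False},
        {"name": "Validate transfer",       "risk": True,  "manual_cost": True,  "automation_cost": True},
    ]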

Step 3: Sort the list in priority order

So, which test do you think we should automate first? Should we automate “Change skin” which is low-risk, easy to test manually, and difficult to automate? Or should we automate “Block account” which is high risk, difficult to test manually, and easy to automate? That’s a fairly easy decision.

But here’s a more difficult decision. Should we automate “Validate transfer”, which is high-risk, hard to test manually, and hard to automate? Or should we automate “Deposit cash”, which is also high-risk but easy to test manually and easy to automate? That decision is context dependent.

You basically need to make three decisions:

  • Which do you automate first? The high risk test that is easy to test manually, or the low risk test that is difficult to test manually?
  • Which do you automate first? The test that is easy to do manually and easy to automate, or the test that is hard to do manually and hard to automate?
  • Which do you automate first? The high risk test that is hard to automate, or the low risk test that is easy to automate?

Those decisions will give you a prioritization of your categories, which in turn lets you sort your list of test cases by priority. In my example I decided to prioritize manual cost first, then risk, then automation cost.

Test case                 Risk   Manual test cost   Automation cost
Block account             x      x
Validate transfer         x      x                  x
See transaction history          x
Sort query results               x                  x
Deposit cash              x
Security alert            x                         x
Add new user
Change skin                                         x

So that’s it! A prioritized backlog of test automation stories.
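
If you kept the classification in a script like the sketch above, that prioritization (manual cost first, then risk, then automation cost) is a one-line sort. Again, just an illustration:

    # Sort the classified tests: expensive-to-test-manually first,
    # then high risk, then cheap to automate.
    # Reuses the test_cases list from the sketch in step 2.
    prioritized = sorted(
        test_cases,
        key=lambda t: (not t["manual_cost"], not t["risk"], t["automation_cost"]),
    )

    for t in prioritized:
        print(t["name"])  # prints the same order as the table above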

You could of course also invent some kind of calculation algorithm. A simple one: each highlighted cell = 1 point. Then you just add up each row and sort. Or just sort the list manually using gut feel.
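
The one-point-per-cell scoring would look something like this, again reusing the test_cases sketch from step 2 and purely as an illustration:

    # Crude scoring: each highlighted cell counts as 1 point, highest total first.
    def score(test):
        return int(test["risk"]) + int(test["manual_cost"]) + int(test["automation_cost"])

    for test in sorted(test_cases, key=score, reverse=True):
        print(score(test), test["name"])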

You could also use a more specific unit for each category, if my simple binary scheme isn’t sufficient.

Test case                 Risk     Manual test cost   Automation cost
                                   (man-hours)        (story points)
Block account             high     5 hrs              0.5 sp
Validate transfer         high     3 hrs              5 sp
See transaction history   medium   3 hrs              1 sp
Sort query results        medium   2 hrs              8 sp
Deposit cash              high     1.5 hrs            1 sp
Security alert            high     1 hr               13 sp
Add new user              low      0.5 hr             3 sp
Change skin               low      0.5 hr             20 sp

Remember though that our goal for the moment is just to prioritize the list. If you can do that with a simple and crude categorization scheme, then there’s no need to complicate things, right? Analysis is useful, but over-analysis is just a waste of time.

Step 4: Automate a few tests each sprint

Irregardless of the stuff above, each new product backlog story should include test automation at the story level. That’s the XP practice known as “customer acceptance tests”. Not doing that is what got your system into this mess in the first place.

But in addition to implementing new stories, we want to spend some time automating old test cases for other previously existing stories. So how much time do we spend? The team needs to negotiate that with the product owner. The agreement will typically take on one of the following forms:

  • “Each sprint we will implement one test automation story”
  • “Each sprint we will implement up to 10 story points of test automation stories”
  • “Each sprint we will spend about 10% of our time implementing test automation stories”
  • “Each sprint we will finish the product backlog stories first, and then spend the remainder of the time (if any) implementing test automation stories”
  • “The product owner will merge the test automation stories into the overall product backlog, and the team will treat them just like any other story.”

The exact form of the agreement doesn’t matter. You can change it every sprint if you like. The important thing is that the test automation debt is being gradually repaid, step by step.
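
To give a feel for what implementing one of those test automation stories might look like, here is a minimal sketch of an automated version of the “Block account” test case, assuming a hypothetical Python test harness. The bank_client module and its functions are made up for illustration; use whatever your real system offers.

    # Hypothetical automated "Block account" test (e.g. run with pytest).
    # bank_client is a made-up stand-in for your real test harness or API client.
    import bank_client

    def test_blocked_account_rejects_withdrawals():
        account = bank_client.create_account(balance=100)
        bank_client.block_account(account.id)

        result = bank_client.withdraw(account.id, amount=50)

        assert result.rejected
        assert result.reason == "ACCOUNT_BLOCKED"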

After finishing half the stories on your test automation backlog you might decide "hey, we’ve paid back enough debt now! The rest of the old test cases aren’t worth automating anyway" and dump the remainder. Congratulations!

So this solves the problem?

Wishful thinking. No, this pattern doesn’t magically solve your test automation problem. But it makes the problem easier to approach :o)

13 responses on “How to catch up on test automation”

  1. I’m sorry for pointing this out, but "irregardless" isn’t a word, you meant to say "regardless" 🙂   An excellent and very useful post otherwise!

  2. Thanks for pointing that out. Live and learn.

    I would like to fix the spelling error, but now I can’t because that would invalidate your comment…. :o)

  3. Great article.

    I read ALL your stuff after discovering SCRUM and XP from the Trenches.

    Thanks a lot for your effort. I want you to know that your words are helping a Brazilian development team take its first steps into Scrum and XP, and we are really liking it :).

    Thanks again,

    F.

  4. Henrik,

    Nice article – enough information to implement in practice without being overly complex. Just like your book!

    I’m going to forward this to several teams that I have coached, who all face this problem of test automation debt.

    Regards,

    Peter

  5. Great article, Henrik! I totally agree that point 4 is the best way to go. I’ve found quite few tools to help our team with that one. It would of course be nice if the tool used the same language and IDE as the actual application is developed with. When I googled around the other day I actually found something promising for all .NET developers out there. A small UI test automation tool which is integrated into Visual Studio, called TestAutomationFX. You can find more information here, http://www.testautomationfx.com
    You can find a nice movie at the site to see how it works. There is also a chance to sign up for their beta program, which of course I did. So far it looks good.

  6. I think that the Ranorex test automation framework might be interesting for you. It is a .NET based automation framework with a set of powerful tools. You can also test many different application types including Web 2.0, Winforms, Flash/Flex, AIR, WPF, Silverlight, Qt and much more.

  7. I have a different story. Our automation suite is robust, but the pass percentage is low, and my management is very disappointed with this.
    Say I run 10 scripts: 4 or 5 of them fail, mainly due to GUI problems (script problems), but the expected number is just 2 failures.
    We already have 200+ test cases.
    Can you give me some points to check so I can make my automation suite usable again? (I use a QTP-QC framework, a driver-child framework.)
    Thanks for the help in advance.

  8. > Can you give me some points to check
    > so I can make my automation
    > suite usable again?

    Hmmm. So you have tests failing. Is that because code that is supposed to work isn’t working? Well then fix the code so it stops failing.

    And if the code isn’t supposed to work the way the test expects it to, fix the test or remove it.

    Just my simplistic view :o)

  9. Hi!

    We are interested in a review of our test automation tool, RoutineBot. Please let me know if you can do this review and what the price would be.

    Thank you!

  10. Hi

    Can you suggest an automation tool that is easy to use and script, but not very expensive?

    Thanks
