I recently spoke at a conference for testers arranged by SAST (Swedish Association of Software Testers) on the topic of Agile software development. Over 150 testers turned up, breaking all previous records for that association! I’m glad to see that you testers are interested in this stuff!
Anyway, while trying to figure out a good opening statement for this conference I found the following angle, which I’m pretty happy with in retrospect, in a smug sort of way :o)
Don’t remember exactly what I said, but it was something like this:
"Is there anybody here who works with test?"
(Most hands raised. Mock surprised look on my face. "Wow, that many?" Sorry, couldn’t resist)
"Well I have news for you. In a truly agile project there is no role called Tester. There is no team called Test Team. And there is no project phase called Testing"
(A few seconds pause to take in the deathly silence and the look of 150 worried faces)
"Why is this? Is testing not important in agile projects? Have we magically done away with the need for testing?"
"Certainly not. Quite the contrary. In agile projects, testing is considered to be far too important to be confined to a single role, a single team, or a single project phase. Having a role called Tester implies that others don’t need to test. In an agile project almost everyone tests. Having a project phase called Testing implies that testing isn’t in the other phases. In an agile project testing is done almost all the time."
"Now I bet you are thinking but programmers suck at testing! Are there any programmers here?"
(a couple of hands up)
"Are you lousy at testing?"
(He nods and giggles a bit. Sorry, couldn’t resist again.)
"Well, I wouldn’t say programmers are bad at testing. It’s just that they test in a different way. In agile projects they pair program, for one. Yes, contrary to what most people think, pair programming is a testing technique (why would anyone think pair programming is a programming technique? :o). It is code review in real-time, the most efficient way of testing. A bug found while writing code can usually be fixed in a few man-seconds. Compare this to the man-hours or man-days of time needed to fix bugs discovered later on in the project, when you have no idea which part of the code caused the bug."
"The customer tests as well. He tests by trying out the system early and often, giving feedback on things such as usability and business value. Product owners test by helping the team define acceptance test criteria. Developers test by writing automated test code. Code whose only purpose is to verify the correctness of other code. Not only that, they write the test code first, before they write the code that is to be tested! This is known as TDD, or Test-Driven Development, a tool whose value is almost impossible to overstate."
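As a minimal sketch of that test-first cycle, here is what it might look like in Python. The `is_leap_year` function and its rules are made-up examples for illustration, not something from the talk:

```python
# Step 1: write the test first. Running it at this point fails,
# because is_leap_year does not exist yet.
def test_leap_year():
    assert is_leap_year(2024)       # plain years divisible by 4
    assert not is_leap_year(1900)   # centuries are not leap years...
    assert is_leap_year(2000)       # ...unless divisible by 400

# Step 2: write just enough code to make the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the test, see it pass, then refactor if needed.
test_leap_year()
```

The test doubles as documentation: anyone reading it knows exactly what behavior the code promises.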
"And then, of course, there’s you – the one they used to call The Tester. You are no longer The Tester. You are a fully integrated team member, and will share all of its successes and failures. You will do whatever is most important to your project at any given time – whether it is related to test or not. Welcome in!"
"Testing is your expertise, but on an agile team all of your talents will be put to use, even talents nobody knew you had. You might help ensure that requirements (called User Stories) are testable from the very beginning. You might pair-program with developers to help them write better test code. You might define acceptance criteria together with customers. You might serve coffee. You might code. You might set up test environments. And last, but not least, you will test, just like everybody else."
"Since the programmers have automated all the boring, repetitive tests, you are free to focus on the hard stuff. The stuff that is difficult to automate. Exploratory testing. Usability testing. Performance test scripts. Figuring out those tricky test cases that only a veteran like you could come up with. And when you find bugs, you will help developers write code to demonstrate the bug and ensure it never happens again. You will coach the developers, teach them to think about things like boundary value analysis and equivalence partitioning and model-based testing and whatever other tricks you have up your sleeve. Your job is not to find bugs, your job is to help the team prevent them! The team is your team."
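Boundary value analysis, mentioned above, translates directly into test code once a tester has identified the boundaries. A short sketch in Python; the `discount` function and its tiers are invented for illustration:

```python
# Hypothetical rule: orders of 100 units or more get a 10% discount,
# 1000 or more get 20%. Quantities below 1 are invalid.
def discount(quantity):
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    if quantity >= 1000:
        return 0.20
    if quantity >= 100:
        return 0.10
    return 0.0

# Boundary value analysis: test on and around each boundary,
# not just somewhere in the middle of each partition.
assert discount(1) == 0.0      # lowest valid value
assert discount(99) == 0.0     # just below a boundary
assert discount(100) == 0.10   # exactly on the boundary
assert discount(999) == 0.10   # just below the next one
assert discount(1000) == 0.20  # on it
try:
    discount(0)                # just outside the valid range
except ValueError:
    pass
```

Picking the boundaries is the tester's expertise; once picked, the checks run for free on every build.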
"Agile teams are cross-functional; they are made up of generalizing specialists: people who are experts in certain areas, but have basic knowledge of a whole bunch of other areas as well. A team of generalizing specialists is extremely fast at learning and adapting."
"If you are a generalizing specialist already, then I congratulate you. If not, I hope you like to learn!"
"All in all, this Agile stuff should be a Good Thing for you guys! Testing is no longer just an afterthought, it is a critical part of the project and everybody is involved with it all the time. Your expertise is finally being given the credit it deserves!"
Then I went on to talk about the history and principles of agile software development, and show how Scrum and XP works in practice. Here are the slides if you are interested.
8 responses on “Manifest for the Agile Tester”
Hi, Henrik… I have a question for you. I am a software quality engineer and I am trying to implement Scrum in the project that I’m working on. We are adding testing activities as tasks in the sprints. That means that the tester is actually a member of the team. For instance, one task is to update test cases in order to meet new functionality, another task is to test the functionality, and another task might be to regression-test the fixed bugs. Don’t you think this is possible? Or does that mean we are not using Scrum? I think that it doesn’t matter if his task is testing or programming; what matters is that we are getting an agile process.
Your setup sounds great, just the way a Scrum team should work. As long as anybody on the team (not only you) may take test tasks, i.e. the test tasks shouldn’t be only "your" tasks. I hope the tasks include writing test code, and not just specifications for manual testing.
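For instance, the "regression-test the fixed bugs" task above becomes a one-off cost if each fixed bug leaves behind an automated check. A small sketch in Python; the parsing bug and the bug number are invented for illustration:

```python
# Hypothetical bug report: parse_price("1 234,50") crashed on prices
# that use a space as thousands separator. The fix ships together
# with a test that reproduces the original report, so the regression
# check now runs on every build instead of being re-done by hand.
def parse_price(text):
    return float(text.replace(" ", "").replace(",", "."))

def test_bug_4711_space_thousands_separator():
    assert parse_price("1 234,50") == 1234.50

def test_plain_price_still_works():
    assert parse_price("99,90") == 99.90

test_bug_4711_space_thousands_separator()
test_plain_price_still_works()
```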
I understand the point you are making, but are you not just strengthening the "Basic test" that is supposed to be done by the developer anyway?
The current idea of testing methodology is that when the "basic test" is strengthened, for example by using a better test design, the tester’s role aims to do what it originally was supposed to do: static testing, verifying that the developer has understood the requirements (oh sorry) "user story" correctly, and measuring the quality of the product.
> are you not just strengthening the ‘Basic test’
> that is supposed to be done by the developer anyway?
That wasn’t my intention. The team is responsible for doing whatever is necessary to ensure customer satisfaction. Assuming that the customer wants something that works well and is maintainable, this means high quality. Which in turn means testing – all types of testing needed to reach sufficient quality (but no more than that!).
Unit testing requires one type of skill set, exploratory testing requires another type of skillset, etc. The team as a group is collectively responsible for ensuring that sufficient testing is done, and that it is done continuously rather than saved until the end. They bring their different skills together – some are good at coding, some are good at test design, some are good at exploratory testing, etc.
The main difference between agile software development and waterfall-style development is that in agile projects we try to avoid fixed roles and handovers. The same group of people is responsible for the project from inception to release.
> the testers role aims to do
> what it originally is supposed to do: ….
Yes, but the difference is:
The following questions must be asked:
In an agile team the approach to these is typically:
Not sure if that answers your question or just classifies as general rambling from someone who should have gone to sleep hours ago… :o)
I am a project manager pretty new to agile methods, but very interested and curious about it.
We develop a complex MIS and I wonder (among other things) how relevant prioritization of the product backlog can be performed. We use JIRA and we have over 1000 living issues. I guess we could call JIRA our product backlog?
Every single day I guess we get like 10 new issues coming in. We have at least 20 stakeholders (customers, investors, partners etc.) and I would say it is mission impossible to rate all these issues in a proper way (because when someone is done after a couple of weeks or so, the list is no longer up to date).
Besides this, one single issue can demand 1 year of development, so I guess we need to divide them as well, which leaves us in an even worse situation.
My intention is to start using Scrum (maybe not all the way from start but…) but we have some obstacles we need to get through first!
Another impediment is the testing. Every time we implement a new feature we need to test the entire system, even if no one can see any connections whatsoever to the other modules not involved in the new functionality. If we implement a new feature in the customer registry we are forced to test the purchase and invoice functionality as well as the production planning and stock transactions. This is a huge task that requires a lot of time and it is a little bit hard to translate this into a Scrum approach…
I am very aware that our situation is fucked up in many ways but I can’t see how to deal with the problems from a Scrum point of view.
Johan, it is still possible to have a Scrum approach even in your situation, but it needs a bit of "tweaking"…
The Product Owner should be a person who understands the business value of doing one of these 1000 items before another and has the authority (from the CEO?) to make these decisions.
The flow would be that after a sprint (which includes unit tests, functional tests and other tests that are normally executed within a Scrum team) there is a demo of what the team has done during the sprint, and the product would be ready to be delivered to the acceptance team. The acceptance team does all the complex and time-consuming regression tests that you mentioned. That team delivers a successfully accepted release to production.
Of course this contradicts what Henrik wrote (note – I’m not the same Henrik), but extremely large and/or complex products need these kinds of specializations of Scrum.
And as a result it adds complexity that makes the Scrum process harder to manage (= more work for the Scrum Master). One aspect that has to be addressed is how to handle a release that doesn’t get accepted by the acceptance team. Do you interrupt the current sprint to take care of the previous sprint’s failed release? Do you put it back in the product backlog and bring it up during the next sprint planning? Do you just add the needed tasks to fix it to the current sprint? I guess it would be up to the Product Owner and the team together to decide how to deal with it…
What I’m trying to say is that it can be done, but it’s certainly not easy, and I would say that it would help if the Scrum Master is an experienced one and the entire organization understands and commits to the agile values and principles.
Mike Cohn also touches on the issue of testing large systems in this blog entry:
Worth a read, and there are also good discussions in the comments…
All the tasks in the agile team should be subdivided so there is an equal division of work.
Also, the team gives its 100% effort.