Here are some common misconceptions about TDD. I call them “myths” here, for short.
If this feels like talking to the dentist about your teeth, you are not alone. When I talk about tests, people sometimes get embarrassed about their habits: “I know you’re right, but …”.
Myth: Simple code needs no tests
“This code is so simple, just by looking at it, I can tell it works. Therefore, no tests are needed.”
That may be true, but I don’t want to have to look at your code, I want to trust it blindly. Things change fast, and suddenly that simple code stops working anyway.
But more importantly, the tests are not just there to check that the implementation works. Tests are also the specification of what the code should do, while the implementation is how the problem is solved.
The day that the simple code needs updating, I will stand a better chance of understanding it when there are tests.
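As an illustration, here is a minimal sketch in Python (pytest-style asserts assumed; the function and test names are hypothetical). Even a one-line “obviously correct” function benefits from tests that double as its specification:

```python
# clamp.py -- the "simple" production code
def clamp(value, low, high):
    """Limit value to the inclusive range [low, high]."""
    return max(low, min(value, high))


# test_clamp.py -- the tests spell out what the code is supposed to do
def test_clamp_returns_value_inside_range():
    assert clamp(5, 0, 10) == 5

def test_clamp_limits_value_below_range_to_low():
    assert clamp(-3, 0, 10) == 0

def test_clamp_limits_value_above_range_to_high():
    assert clamp(42, 0, 10) == 10
```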
Myth: There will be bugs anyway, so tests are pointless
With tongue in cheek, I would say no: given that the definition of a bug is code not following its specification, and since the tests are the specification, there are no bugs in code that passes the tests.
So if your program does not behave as you intended, the specification is wrong, not the code. The code works according to its specification, the tests. Therefore, you should fix the specification (the tests), not give up on the idea.
With the safety net of tests, fixing one problem does not create new ones as easily.
As a side note, I have another definition of bugs: they destroy information. See this post for more on that. Your definition might be different, but as I said, do not give up on tests.
Myth: TDD means write all tests first
I prefer to say that you write the tests along with the implementation. It is a misconception that you write all the tests first and then start on the implementation. You should write only as much test code as it takes for the test to fail (a compilation error counts), then only as much implementation as it takes for the test to pass.
This is the red-green-refactor cycle, and it is important to understand it fully.
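Here is a sketch of one turn of the cycle in Python (pytest assumed; the FizzBuzz-style names are made up for illustration, and the snapshots of the cycle are concatenated into one block):

```python
# Red: write only enough test code to get a failure.
# In the first pass of the cycle, even the missing fizzbuzz function counts as "red".
def test_fizzbuzz_returns_fizz_for_multiples_of_three():
    assert fizzbuzz(3) == "Fizz"

# Green: write only enough implementation to make that test pass.
def fizzbuzz(n):
    return "Fizz"  # deliberately naive; later tests will force the real logic

# Refactor, then repeat: the next small test fails and drives the next step.
def test_fizzbuzz_returns_the_number_as_a_string_otherwise():
    assert fizzbuzz(4) == "4"  # red again with the naive implementation above
```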
Myth: It is easier to write tests after the implementation is finished
You might think that it is easier to write tests later, when you know how the implementation works. There are problems with this approach. If you write a test afterwards, it will pass. But a test that is wrong may also pass, so you cannot easily tell whether your test actually checks anything. Also, unless you are skilled or lucky, your design may not be as testable as it should be, making the tests harder to write. If you write the tests together with the implementation, testability will not be where you fail.
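A small hypothetical Python sketch (pytest assumed) of a wrong test that still passes:

```python
def discount(price, percent):
    # Bug: the discount is added instead of subtracted.
    return price + price * percent / 100

# Written after the fact, this test passes even though the code is wrong:
# the assertion was forgotten, so the test can never fail.
def test_discount_after_the_fact():
    discount(100, 20)

# A test that actually states the expected result fails immediately
# and exposes the bug.
def test_discount_subtracts_the_percentage():
    assert discount(100, 20) == 80
```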
Myth: Tests increase the cost of maintenance
That might be true. Tests can be a nightmare to maintain if they are written badly or tightly coupled with the implementation.
Bad code is expensive to maintain, and your test code should be as clean as your implementation code. Keep it readable: every test case should have a proper name, no magic numbers in the test data, and no duplication – factor duplicated code out into utility methods. A short example follows below.
Tests that are tightly coupled to the implementation may be a sign of a less than perfect design. That is why advocates of TDD say that the design gets better with TDD: the lack of testability stares you in the face when you try to write the tests.
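For instance, a small Python sketch (pytest assumed; the Order class and all names are hypothetical stand-ins):

```python
class Order:
    """Minimal stand-in so the example is self-contained."""
    def __init__(self, items):
        self.items = items  # list of (name, quantity, unit_price)

    def total(self):
        return sum(quantity * unit_price for _, quantity, unit_price in self.items)


# Hard to maintain: meaningless name, magic numbers, nothing explains the data.
def test_1():
    assert Order([("widget", 3, 10)]).total() == 30


# Easier to maintain: descriptive name, named constant, setup factored out.
WIDGET_UNIT_PRICE = 10

def make_order_with_widgets(quantity):
    return Order([("widget", quantity, WIDGET_UNIT_PRICE)])

def test_total_is_quantity_times_unit_price():
    order = make_order_with_widgets(quantity=3)
    assert order.total() == 3 * WIDGET_UNIT_PRICE
```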
Myth: Every class should have a unit test
No, there are classes that are simple data structures, used to pass data across some serial interface or to be stored in a database of some kind. But they are always used somewhere, so they should be covered by your tests anyway.
Look at your test coverage: if those simple little classes are not fully covered, it is a smell. Either your tests are low on coverage, or you have discovered some dead code, i.e. nobody cares about part of your little data structure.
Myth: With good integration tests, no unit tests are needed
You need both. Unit tests are fast, so running them while you code means fast feedback. The shortest feedback cycle, next to pair programming, is unit tests. They can also cover more of the code and more test cases than integration tests. Still, you need integration tests to see that the units play well together. But the main coverage should come from your unit tests.
That’s it for today, folks. Here is a list of the myths:
- Simple code needs no tests
- There will be bugs anyway, so tests are pointless
- TDD means write all tests first
- It is easier to write tests after the implementation is finished
- Tests increase the cost of maintenance
- Every class should have a unit test
- With good integration tests, no unit tests are needed
“TDD means test first” seems confusing. Suggest changing it to “TDD means writing all the tests first”
Changed it. Thanks!
Myth 8: TDD is just a different way of doing the technical implementation.
TDD is not only a change in your coding habits but also a change in your way of thinking, shifting from the developer perspective more towards the user perspective. This correlates with the whole development framework. TDD can hardly be done without good backlog refinement.
I agree that it is a different way of thinking, and I often see people having a hard time changing their habits. However, the backlog does not carry that much importance in my mind. A well-refined backlog is of course helpful, but even without one I wouldn’t hesitate to use TDD. 🙂
I agree that TDD can be done even though the backlog is not refined. However, developers then have to make up their own minds and imagine what exactly the feature should look and behave like, which can be far away from the user needs. The PO will not approve those features, and then developers have to redo them again and again because there are no clear acceptance criteria (AC) stated. With TDD, all the test cases should be based on the feature, not the implementation, so if the feature (from the developers’ perspective) changes very often, they have to change their test cases again and again (without any change in the user needs). This is going to kill the habit of TDD.
I understand more of what you are saying now. It is an interesting reflection on how a lack of structure on the PO’s side impacts the craftsmanship of the developers.