Tests that pass are good, right?
That is such a given that it may seem strange to even question it! So here goes:
Tests only add value when they fail
There it is. Failing tests. A good thing. OK, I’ve said it.
I’m already pulling out the arrows, ow! So let me explain:
The idea here is that tests that never ever fail indicate a problem. Tests are supposed to break – when you break the application. After all, that’s why they are there – as safety rails so you can develop in comfort, changing the application as needed and letting you know when your changes break existing functionality.
So what happens when they do break? Well, here’s where the quality of the test itself also comes into play. It’s not too hard to write tests that, when they do break (yeah!), give meaningful information.
Unfortunately it’s also fairly easy to come up with tests that break and tell you something like this:
“Expected true to be true but was false instead”
Sound good? Can you fix that now? Of course not.
So write a test whose failure looks like this:
“Expected the child to be a member of the customer’s family but they are not listed in the family plan” – now that’s something you can work with!
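To make the contrast concrete, here’s a minimal sketch in Python. The `FamilyPlan` class and its `includes` method are hypothetical names invented for illustration, not from any real library; the point is only the difference between a bare boolean assertion and one that carries a domain-level message.

```python
class FamilyPlan:
    """Hypothetical domain object, used only to illustrate failure messages."""

    def __init__(self, members=None):
        self.members = set(members or [])

    def includes(self, person):
        return person in self.members


plan = FamilyPlan(members={"parent"})

# Opaque check: when this fails, the runner shows little more than
# "AssertionError" – true was expected, false was found, and that's it.
try:
    assert plan.includes("child") is True
except AssertionError:
    print("opaque failure: no clue what actually went wrong")

# Meaningful check: the failure message states the domain problem,
# so the person reading the red test knows where to start.
try:
    assert plan.includes("child"), (
        "Expected the child to be a member of the customer's family, "
        "but they are not listed in the family plan"
    )
except AssertionError as err:
    print(f"meaningful failure: {err}")
```

Same condition, same result; only the second failure tells you what to fix.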
This also supports the practice of making sure your test fails first, i.e. Red, Green, Refactor. Seeing it fail first gives you an opportunity to hone and refine the failure message.
The main caveat to this idea is that tests, if well written, act as documentation. So even if they never fail they can still fulfill that function. Indeed, I have learned about more than one system just from reading its test suite!
A downside to such ‘wordy’ tests is that their descriptions can fail to match, or drift out of sync with, the code being called. This is the same reason we avoid program comments whenever possible, in favor of meaningfully named code objects and functions. However, I believe the upside of good, meaningful failure messages outweighs this downside for these tests.