85% Unit Test Coverage

For decades we had it hammered into us: create unit tests! We (and our clients) focussed a lot on getting a certain code coverage for our unit tests. And we watch the unit test codebase grow, tests marked as ignored, test cases changed every time code is checked in - and the software still doesn't do what it should.

Isn't it about time to review the practice of writing unit tests?

Why again are we doing unit testing?

  • Check the software continues to work as intended
  • Refactor code safely
  • Pass some code quality metric

Experienced developers may also add

  • Faster development speed (code blocks can be executed standalone, rather than starting up the whole application)
  • Document the usage of an API (see the sketch after this list)
  • A test-first approach aids the design (helps to form a good interface)
  • Verify a tricky piece of code works in all expected cases
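
To make the documentation point concrete, here is a minimal sketch - Python's standard Decimal stands in for whatever API you are documenting - of a test a newcomer can read, and run, as executable usage documentation:

    from decimal import Decimal

    # A test that doubles as documentation: it shows, executably,
    # how the API under test is meant to be used. Decimal stands in
    # here for your own module.
    def test_money_arithmetic_is_exact():
        price = Decimal("19.99")
        total = price * 3
        assert total == Decimal("59.97")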

Anything else? Perhaps

  • Increase progress visibility (assuming tests are created upfront, progress can be visualised as they turn "green")

Hmm.

This sounds good. How come we still have quality issues? How come the tests are always "green", but the application doesn't work as expected?

Unit tests aren't a quality metric

Most unit tests I see don't actually test much - and some don't even test anything (worth testing). Before you jump to conclusions: this is not because the software developer doesn't know how to write software. No. Well, sometimes yes. The main reason is that most of the units within any typical software application don't actually do much. They get input, perhaps validate it, and store it. And then they read it. Reading, writing, transforming - that's it.
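
As a hypothetical illustration (the class and the test below are invented for this article, not taken from a real codebase):

    # A typical "unit" of the kind most applications are full of:
    # it takes input, validates it, and stores it.
    class CustomerStore:
        def __init__(self):
            self._names = {}

        def add(self, customer_id: str, name: str) -> None:
            if not name:
                raise ValueError("name must not be empty")
            self._names[customer_id] = name

        def get(self, customer_id: str) -> str:
            return self._names[customer_id]

    # And the matching unit test: it passes, it counts towards the
    # 85%, and it says almost nothing about the application's function.
    def test_add_and_get_customer():
        store = CustomerStore()
        store.add("42", "Alice")
        assert store.get("42") == "Alice"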

What is there to test? And where is the quality?

Why does the software exist in the first place? Hopefully, the answer is that it has a function that is of some business value.

The application has a function - not the unit we are testing. A unit test passing has little to do with the software (correctly) performing a function. And often even less with how fast new functions can be added.

Quality then is how well an application performs its function, and how maintainable it is.

Are unit tests useless?

Let's go back to some of the initial reasons why we unit test:

  • Check the software continues to work as intended
  • Refactor code safely

There are some good developers out there who can write perfectly modular code - and for well-behaved code, refactoring means very isolated changes. Unfortunately, most of us are not that good. Refactoring means ripping out half of the code and changing the design (perhaps to get to that perfectly modular code). The inevitable result? The unit tests don't just fail - they don't even compile, let alone remain useful. That's because unit tests are tightly coupled to the design. And there goes our proof that we can refactor safely, or that our software continues to operate as intended.
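
Here is a sketch of what that coupling looks like (OrderService and its test are hypothetical):

    from unittest.mock import Mock

    # Hypothetical production code - the design the test will couple to.
    class OrderService:
        def __init__(self, repository):
            self._repository = repository

        def place_order(self, order_id: str, quantity: int) -> None:
            self._repository.save(order_id, quantity)

    # The test asserts *how* OrderService talks to its collaborator,
    # not *what* the application does for the user.
    def test_place_order_saves_via_repository():
        repository = Mock()
        service = OrderService(repository)
        service.place_order("42", quantity=3)
        repository.save.assert_called_once_with("42", 3)

    # Refactor OrderService to take a unit of work instead of a
    # repository, or to batch its saves, and this test breaks - even
    # though the application still places orders correctly.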

We can even go one step further: Who cares if a unit test fails - as long as the application still performs its function?

So: We don't know if our code does what it should, as the test is decoupled from the user. And we don't have any safeguards either, as everything falls over as soon as we make changes other than renaming a few variables.

So don't write unit tests then?

I wouldn't go that far. I'd still subscribe to

  • Faster development speed
  • Document the usage of an API
  • A test-first approach aids the design
  • Verify a tricky piece of code works in all expected cases

And of these, only the first two may warrant keeping the unit test after the implementation has been completed. And when I say "may": subsequent developers will probably want to create their own tests anyway, and as for the documentation aspect: if an API needs to be documented, maybe the API needs some more thinking instead?

What about the last point, verifying a tricky piece of code? Fine, keep that one - if you really need that tricky code.
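
For example (a hypothetical rounding rule, tested with pytest's parametrize):

    import pytest

    # A hypothetical tricky unit: round an amount in cents to the
    # nearest 5 cents - easy to get wrong at the boundaries.
    def round_to_nearest_5_cents(cents: int) -> int:
        return 5 * round(cents / 5)

    # This is the kind of test worth keeping: every boundary case
    # spelled out, including negative amounts.
    @pytest.mark.parametrize("cents, expected", [
        (0, 0),
        (2, 0),
        (3, 5),
        (7, 5),
        (8, 10),
        (98, 100),
        (-3, -5),
    ])
    def test_rounding_to_5_cents(cents, expected):
        assert round_to_nearest_5_cents(cents) == expected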

Write unit tests, but throw most away afterwards?

Yeah, much better.