I guess we are all familiar with the tried and tested (pun intended) approach to testing:
At some point in the not-too-distant past an agile preacher came along and said: You need to be agile! Do Scrum! Smaller teams, short iterations, present your work regularly, get instant feedback!
So that's what we did:
Can we only have a single "done"?
That's not too hard (well, the idea isn't, the implementation mostly is): Create a cross-functional team (another agile term), and a feature is done when it is tested. Easy as.
Now testers are idle before development is complete, and developers are idle afterwards. And because we kept the sprint length the same, we are now squeezing both development and testing, which had previously been spread across 6 weeks, into 2 weeks. Rework is to be expected. And if we are honest, we haven't actually solved the "done" issue.
Let's extend the sprint duration. And, so that nobody sits around idle, or spends too much time on the oh-so-non-productive backlog grooming:
Let's create interleaved sprints
But how do we keep code isolated within a sprint while still getting the changes from the previous one? Don't worry, it's only a tiny step further: sprint branches.
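In git terms, that setup might look something like the following sketch. The branch names and repository are hypothetical, and this illustrates the anti-pattern, not a recommendation:

```shell
# Illustrative sketch of the sprint-branch pattern (hypothetical names).
set -e
git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "baseline"
git branch -M main

# Sprint 41 is developed in isolation on its own branch...
git checkout -q -b sprint-41 main
git commit -q --allow-empty -m "sprint 41: feature A"

# ...while sprint 42 starts in parallel from main and must merge
# sprint 41's work to stay current:
git checkout -q -b sprint-42 main
git merge -q --no-edit sprint-41
```

Every interleaved sprint now needs this merge dance, and nothing reaches main until a whole sprint branch is declared tested.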
Really perfect now.
But we can do even better: Re-test everything before going into production? Urgent features that can't wait until the next sprint? How about releasing features independently?
Did you know there is a law against this?
The scenario just outlined is unfortunately not a made-up one: I have seen it implemented this way, and I have to admit I had a hand in it. It is the logical way forward if you are in that hole and unable to break existing patterns.
The root causes lie in the separation of the development and testing roles, and in the linearity of the process. So how can we break the role separation and make testing and development parallel processes?
Testers don't test
Separation of duties is a well-established business principle to prevent fraud and error. It is only natural to apply the same to software engineering - developers can't be trusted with testing the same piece of code they just wrote, right?
Separation of duties is about control and responsibility. What if testers control and take responsibility, but don't execute? What if testers write and improve specifications, and developers write and execute the tests? And ideally, test automation executes the tests?
The process of testing now becomes iterative itself: Test specifications are created from the feature requirements and example cases (stories). These specifications are connected to coded tests. Exploratory testing reveals further testable scenarios - based on an increased understanding of the domain, or on misunderstandings on the developer's side. These are again turned into specifications with connected coded tests.
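As a minimal sketch of what "specifications connected with coded tests" could look like: the table below plays the role of the tester-authored specification, and the test code is the developer's job. `calculate_discount` and the rows are invented for illustration; in practice the specification would live in a tool like Cucumber or FitNesse rather than inside the test file:

```python
def calculate_discount(order_total: float, is_member: bool) -> float:
    """Toy implementation under test (hypothetical business rule)."""
    if order_total >= 100:
        return 0.15 if is_member else 0.10
    return 0.05 if is_member else 0.0

# Specification: written and maintained by the test analysts.
# Each row: (order_total, is_member, expected_discount)
SPEC = [
    (150.0, False, 0.10),  # large order
    (150.0, True,  0.15),  # large order, member
    (50.0,  True,  0.05),  # small order, member
    (50.0,  False, 0.0),   # small order - the "unhappy" path
]

def run_spec():
    """Execute every specification row; return the failing rows."""
    failures = []
    for total, member, expected in SPEC:
        actual = calculate_discount(total, member)
        if actual != expected:
            failures.append((total, member, expected, actual))
    return failures

if __name__ == "__main__":
    assert run_spec() == [], run_spec()
    print("all specification rows pass")
```

When exploratory testing turns up a new scenario, the analyst adds a row; the feature is not "done" until `run_spec()` comes back empty again.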
A feature is complete when the tests pass - the original idea of test-driven development.
And all of this even comes with some bonus features:
- Tests remain executable, and with test automation, are executed even after the sprint is completed
- The adoption of automated testing improves, as developers are more inclined to write automated tests than test analysts (who often lack a development background) are
- Test analysts can focus on the essentials of testing: understanding the domain and its edge cases, and exploring
- Tests are the only acceptance criteria, and the definition of "done". Therefore:
  - Tests automatically become more comprehensive
  - Tests also cover "unhappy" paths
- Test analysts are no longer paid to find bugs; they are paid to support developers in writing a working solution.
This time without any sarcasm: Much better.
I have a few posts about test specifications in practice in the pipeline. Stay tuned!