I’ve written before about the importance of good testing in Agile development, but a few things that happened during our current sprint brought this issue to the fore again. Here are a few brief thoughts on the topic.
To set the context for these thoughts, let me tell you a little about our development team and how we work. Much of the time we work in loose pairs, with developers testing each other’s code and even writing tests in parallel with new code. We also work to enhance our automated regression test suite to cover the new features we develop; hundreds of these tests run nightly to verify that new development across our code base has not broken existing functionality or caused any unforeseen results. On top of this we do regular manual testing and user acceptance testing to hit new development from multiple perspectives. We don’t always get as much of this testing in as we want – particularly in terms of developing additional automated tests for our nightly regression runs – but we are getting better.
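To make the idea concrete, here is a minimal sketch of the kind of test a nightly regression run might execute. The feature and names (`apply_discount` and its rules) are hypothetical, not our actual code; the point is simply that the suite pins down behavior the team depends on, so a change anywhere that alters it fails the run.

```python
import unittest

# Hypothetical feature whose existing behavior the nightly regression
# run is meant to protect.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscountRegression(unittest.TestCase):
    """Pins down current behavior so new code paths can't silently change it."""

    def test_known_good_values(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False so a scheduler wrapping this script can inspect results
    unittest.main(argv=["nightly_regression"], exit=False)
```

A nightly job just runs every module like this and reports any failures the next morning, which is exactly the signal described below.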
I like failing tests because they show quickly that we have a problem. When tests from our nightly regression run fail, they highlight that one piece of code accidentally broke or unexpectedly impacted code in another area. We saw some of these ‘happy failures’ in the first couple of days of our current sprint, pointing out that some of our new development was not producing the results we expected. We resolved these items and then saw several days of clean regression runs before again encountering problems on the last day of the sprint; these too we addressed to ensure a clean set of coded features would go out with our latest deployment. Finding failing tests early in the sprint allowed us to address those issues in time to wrap up a key feature. And finding more of them on the last day gave us a final pass at fixing all identified issues – which means we will release the new feature with confidence.
As an added benefit, having a thorough regression suite gives us the confidence to keep developing new code right up to the end of a sprint because we know that we will catch any unexpected issues in time to fix them before the release or remove them from the deployable code before it goes out.
On the flip side, not having broad regression coverage in another area meant finding a host of unexpected problems as we developed a new feature this sprint. This area of functionality did not have much automated test coverage in place to identify the core use cases that had to work on the new code path. Because of this, coding items we estimated as relatively small became much larger than expected when our ‘small’ updates produced unexpected results. We ended up having to rework initial solutions – adding more coding time – and also taking time to add more test coverage, so that next time we touch this feature we will have greater confidence that our enhancements aren’t breaking anything that currently works. If we had put better tests in place the first time, they would have failed as soon as we changed things and we could have caught and fixed the issues right away.
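One common way to backfill coverage before touching an under-tested area – sketched here with entirely hypothetical names, not our actual feature – is a characterization test: record what the code does today, so that a later ‘small’ change that alters its output fails loudly instead of slipping through.

```python
# Characterization test: before changing under-covered legacy code,
# capture its current output. These expectations document today's
# behavior, not a spec. All names here are hypothetical.

def format_invoice_line(item, qty, unit_price):
    """Legacy function with little coverage; behavior pinned below."""
    total = qty * unit_price
    return f"{item} x{qty} @ {unit_price:.2f} = {total:.2f}"

def test_format_invoice_line_current_behavior():
    # If an enhancement changes either line, the test fails and forces
    # a deliberate decision about whether the change was intended.
    assert format_invoice_line("widget", 3, 1.5) == "widget x3 @ 1.50 = 4.50"
    assert format_invoice_line("gadget", 1, 10) == "gadget x1 @ 10.00 = 10.00"

test_format_invoice_line_current_behavior()
```

With a handful of tests like this in place first, the ‘small’ updates described above would have produced failing tests immediately rather than surprises later.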
Having broad regression coverage for our code base means that failing tests are extremely useful; they quickly identify potential issues and allow us to fix problems early rather than catching them late in a sprint – or, worse still, not catching them until after deploying broken features. Pair programming and a heavy emphasis on test development might seem to slow down coding on highly anticipated new features, but it means that everything we build is developed with confidence and our code base becomes an increasingly stable foundation for new functionality. I’m happy when our tests fail because that gives us the chance to address issues quickly; and I really like it when our thorough test suite doesn’t fail at the end of a sprint, because that gives me confidence in what we release. There’s still lots of room for us to grow our test coverage – as some of our experience this sprint made clear – so we will continue working to balance new feature development with expanding our test suite. With a hot market eagerly awaiting some of the functionality we are planning to release this year, it might seem tempting to tip the scale toward developing new things. But if we want a solid, reliable product and a foundation we can build on with confidence, we have to acknowledge that in truth it’s not that simple.