Back in 2012, when I joined Vistaprint, dedicated QA teams did all testing manually or semi-manually. Developers coded features, wrote unit tests plus lightweight integration tests, and then passed the features on to the Quality Assurance engineers. Whenever there were bugs, the QAs returned the features to the developers for fixes. That wasn’t scalable, given the number of new features being developed and deployed. QAs had to retest systems for regressions and verify their integrity. The feedback cycle between developers, QAs and the business was slow. It was a heavy, painful and time-consuming process.
TDD: test-driven development
To shorten the time spent retesting systems and to ensure customers get the best experience while shopping on the website, we realised we needed to break down the silos between developers and QAs and to build automated tests from the very beginning of our products. We believed this would let us verify features in a more automated fashion and get feedback faster. Test-driven development (TDD) fits this description: it lowers the silo walls between developers and QA and gives full control over testing the expected system behaviour, in synergy with the business.
TDD has two distinct styles of approaching the tests and designing the system: inside-out (bottom-up) and outside-in (top-down, also known as mockist TDD). This article focuses on the latter.
How do we approach outside-in testing and designing of new systems at Vistaprint?
Developers start by collaborating with the business to specify the expected behaviour of the system and agree upon the business requirements, translating those into user stories. For some projects SpecFlow is used; for others, a Domain-Specific Language (DSL) is created to express the user stories in code.
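Such a DSL can be as small as a fluent builder. A minimal Python sketch of the idea (the `Scenario` class, step names and the example story are hypothetical, not Vistaprint's actual DSL):

```python
class Scenario:
    """Tiny fluent DSL for expressing a user story in code."""

    def __init__(self, name):
        self.name = name
        self._steps = []

    def given(self, description, step):
        self._steps.append(("Given", description, step))
        return self

    def when(self, description, step):
        self._steps.append(("When", description, step))
        return self

    def then(self, description, check):
        self._steps.append(("Then", description, check))
        return self

    def run(self):
        context = {}
        for keyword, description, step in self._steps:
            outcome = step(context)
            # 'Then' steps are assertions over the accumulated context.
            if keyword == "Then":
                assert outcome, f"Then {description} failed"

# Hypothetical user story: a returning customer keeps their saved basket.
Scenario("Returning customer keeps basket") \
    .given("a saved basket with 2 mugs", lambda c: c.update(basket={"MUG": 2})) \
    .when("the customer logs back in", lambda c: c.update(logged_in=True)) \
    .then("the basket still holds 2 mugs", lambda c: c["basket"]["MUG"] == 2) \
    .run()
```

The point of expressing stories this way is that they read like the business conversation yet execute as code.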
Acceptance and unit tests
The first tests written are the acceptance tests (ATs), focusing on the business rules (the domain) independently of the infrastructure. The ATs use the real implementations of the domain classes, whereas the external world, i.e. dependencies such as HTTP or database calls, is mocked. The entry point of an AT is the upper-most public-facing class (or classes), whether for a console application or a microservice. Once you have a failing AT, failing for the right reason, begin implementing each of the dependencies of that public-facing class until the AT passes. This process is called the double loop, and it is how the system is designed outside-in. Implementing the dependencies of the public-facing class means unit testing and implementing each of the dependent classes and their respective dependencies in turn. While specifying and implementing the behaviour, it’s important to remain open to discussions with your peers and the business about changes to the initial user-story definitions. TDD is a tool to facilitate those discussions; the ‘how’ of implementing things depends entirely on you.
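As a sketch of what such an AT can look like, here is a Python example. The `CheckoutService` and `pricing_gateway` names are hypothetical, and `unittest.mock` stands in for whatever mocking library a team actually uses; the key point is real domain logic, mocked external world:

```python
from unittest.mock import Mock

class CheckoutService:
    """Upper-most public-facing class: the entry point of the AT."""

    def __init__(self, pricing_gateway):
        # The gateway is the boundary to the external world
        # (e.g. an HTTP pricing API behind this interface).
        self._pricing = pricing_gateway

    def total(self, items):
        # Real domain logic under test: sum price * quantity per SKU.
        return sum(self._pricing.price_of(sku) * qty for sku, qty in items)

# Acceptance test: the domain class is real, the external world is mocked.
gateway = Mock()
gateway.price_of.side_effect = {"MUG": 5.0, "TEE": 12.0}.get
service = CheckoutService(gateway)
assert service.total([("MUG", 2), ("TEE", 1)]) == 22.0
```

Written before `CheckoutService` exists, this test fails for the right reason (the class is missing), and the inner loop of unit tests then drives out each dependency until it passes.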
After the ATs pass, you should be confident that the majority of the required business logic is implemented. A common question at this point is: how do I assess the quality of the test suite so that I can trust it? Mutation testing is a useful tool here. Make a change to your code and run the tests. If all tests remain green, the mutant hasn’t been caught. If, on the other hand, at least one test fails after the change, the suite has demonstrated resilience to changes. This approach won’t guarantee bug-free code and success in production, but it will give you visibility into which features have automated tests and where you may need to invest more to make the tests more robust.
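To make the idea concrete, here is a hand-rolled mutant in Python (tools such as mutmut or Stryker automate this); the shipping rule and its boundary value are purely illustrative:

```python
def free_shipping(order_total):
    # Business rule: orders of 50 or more ship free.
    return order_total >= 50

# A mutation tool would flip the operator, e.g. `>=` becomes `>`:
def free_shipping_mutant(order_total):
    return order_total > 50

# A suite that only checks values far from the boundary lets the mutant
# survive: both versions agree, so no test would fail.
assert free_shipping(100) == free_shipping_mutant(100)

# A boundary test kills the mutant: the real rule and the mutant disagree
# at exactly 50, so the suite would go red against the mutated code.
assert free_shipping(50) is True
assert free_shipping_mutant(50) is False
```

A surviving mutant points at exactly the spot where the suite needs a sharper assertion.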
So far so good, but you still don’t know if the system works in harmony with the external world. Up until now we’ve invested quite a lot to build the ATs. Can we reuse them to run Integration Tests (ITs)?
Yes, with some slight changes. Instead of mocking the external world, you can use the real implementations of the HTTP calls, databases and third-party dependencies. An important remark: sometimes, while writing the ITs, you won’t be able to use the real implementations of all external dependencies. You can keep some of them mocked while using the real implementations of others. The integration tests will definitely run slower than the ATs, but they should give you even more confidence.
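One way to achieve that reuse is dependency injection: the scenario is written once against an interface, and the AT or IT simply passes in a different implementation. A hedged Python sketch (all names are hypothetical, and the "real" gateway below is an in-memory stand-in for what would be an actual HTTP client in an IT):

```python
from unittest.mock import Mock

class CheckoutService:
    def __init__(self, pricing_gateway):
        self._pricing = pricing_gateway

    def total(self, items):
        return sum(self._pricing.price_of(sku) * qty for sku, qty in items)

# The scenario is written once; which gateway it gets decides AT vs IT.
def run_bulk_order_scenario(gateway):
    service = CheckoutService(gateway)
    return service.total([("MUG", 2)])

# As an AT: mocked external world.
mock_gateway = Mock()
mock_gateway.price_of.return_value = 5.0
assert run_bulk_order_scenario(mock_gateway) == 10.0

# As an IT: swap in the real implementation. In a genuine IT this class
# would call the pricing service over HTTP; here it is an in-memory
# stand-in that only illustrates the substitution.
class RealPricingGateway:
    def price_of(self, sku):
        return {"MUG": 5.0}[sku]

assert run_bulk_order_scenario(RealPricingGateway()) == 10.0
```

Keeping the scenario independent of the concrete dependency is what lets the same investment pay off twice.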
A common question that may arise is whether the ATs and ITs will duplicate the user stories. The ATs cover all of the business logic, while the ITs focus on making sure the system works when integrated with the external world. Depending on the team’s confidence level in releasing the feature, judge whether or not to repeat some of the test scenarios from the ATs. Where duplication occurs, it is usually for the happy path(s); some specific negative cases simulating failures from the external world may not be worth covering, given the difficulty of making real dependencies fail on demand. The highest value of the ITs is in uncovering roadblocks as early as possible while developing the system, so you can deliver what the customer needs.
After a system has been launched in production and has been working for a while, it is a living organism: it changes due to business needs or bugs. Think carefully about where to add or alter functionality in the test suite. Once you have reflected and know what change to make, write a failing test first and then implement the fix or the feature in the same way described above: outside-in, with the double loop.
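The same discipline applies to bug fixes: reproduce the defect with a failing test, then make it pass. A minimal Python sketch around a hypothetical pricing bug (the function and the bug are invented for illustration):

```python
# Hypothetical bug report: a discount rate above 100% produced
# negative order totals in production.

def apply_discount(total, rate):
    # The fix: clamp the rate to [0, 1]. The buggy version used
    # `rate` unclamped, so rate=1.5 yielded a negative total.
    rate = min(max(rate, 0.0), 1.0)
    return total * (1 - rate)

# Step 1: a test reproducing the bug. This assertion failed against
# the unclamped version, which returned -50.0.
assert apply_discount(100.0, 1.5) == 0.0

# Step 2: the existing behaviour is preserved.
assert apply_discount(100.0, 0.5) == 50.0
```

Because the reproduction test stays in the suite, the regression can never silently return.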
Try it out
If you still haven’t tried outside-in TDD with the double loop, consider adding it to your developer toolkit. It will help you design a testable system from the very beginning, one where the business rules are clearly specified and you can re-run the tests to verify the system’s behaviour. One important caveat: don’t rely fully on the automated tests without any manual verification. TDD is not a universal solution guaranteeing bug-free systems; use your critical thinking and judgement, as tests only do what you tell them to do! The longer you work on a given project, the better you’ll understand its scenarios and the more effective you’ll make the test suite.