This is the presentation I gave at the .Net System Event by bbv Software Services AG in Lucerne in June 2012:
When we start a new project, everything is small, nice and easy.
New features are added rapidly, the system is easy to understand and change.
But far too often and far too quickly, systems become something like this:
The team isn’t happy about how the code is structured, everything is hard to change and nobody has the overview anymore. The system rots because nobody on the team dares to clean things up, fearing that something might break somewhere.
Furthermore, nobody knows exactly anymore which features are really in the software and how they behave in detail.
So what can we do to prevent this from happening?
Let’s take a look at a typical system:
There is a user who interacts with the system (top), there are external services that the system is connected with (right) and there probably is a database persisting the system state (bottom).
To keep this system changeable, we use…
…Unit tests. Unit tests give us the possibility to refactor the code, clean things up and make them simple without breaking existing functionality.
A unit test checks a single class in the system – isolated from the rest. The unit test gives us a guarantee that when changing the internals of the class under test, the rest of the system is not compromised as long as the tests are passing. Therefore, we can keep the individual parts of the system from rotting.
With unit tests it’s easy to get lost in details. Therefore, we need something that gives us the big picture about our software system. Here acceptance tests come into play.
An acceptance test gives us feedback on whether a feature is present and working in our system. The sum of all acceptance tests gives us a list of all the features our system provides.
An acceptance test walks a single path through the system. It guarantees that the needed functionality is in the system and that the individual parts work together.
The red line is dotted at the system boundary because we replace all externals with fakes: the user interface, external systems, the file system and the database. This allows us to test special cases easily and to run these tests in a couple of seconds, thus providing quick feedback during development on whether I have broken an existing feature.
Of course, these two kinds of tests are not enough to deliver software with guaranteed quality.
But they are the foundation for further quality assurance measures and the main driver of development, telling the developer whether the software works correctly (unit tests) and does the correct thing (acceptance tests).
There are additional automated tests – system and constraint tests – that we use for regression testing so we know that our software works at the end of each Scrum Sprint, every second week.
Manual tests are performed both to find gaps in the existing test suite and to test things that cannot be reasonably automated. But compared to unit and acceptance tests, the number of manual tests is very small.
But where do these acceptance tests come from?
We work mainly with user stories to gather requirements. These user stories come with acceptance criteria that were discussed with the product owner and other business representatives. These acceptance criteria are translated into acceptance tests.
The acceptance tests form the frame for the real coding done with Test Driven Development.
Finally, testing takes all these artifacts and extends them with test cases, test protocols and probably defects.
Acceptance Test Driven Development consists of the following steps.
First, we translate the acceptance criteria into an executable acceptance test. This test should fail because the functionality is not yet implemented. Then we implement this functionality with Test Driven Development: Write a failing unit test, make it pass, clean things up and start over.
At some point, all the code needed to fulfill the acceptance test is in place and the acceptance test passes.
We extended this process a little by…
…adding steps to make the messages of the failing tests as clear as possible. This is important because once an existing test fails in the future, we want to find the cause quickly without the need to use the debugger.
When writing an acceptance test, we specify how the functionality is invoked, for example by using a stub instead of the real service interface that we can call from the test code. And we define the effect that should be observable once the functionality has executed, for example by using a mock instead of a real database or external system.
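To make the stub/mock distinction concrete, here is a minimal hand-written sketch. The interfaces and names (`IPriceService`, `IOrderStore`) are hypothetical, not from the demo; they only illustrate the roles the two kinds of test doubles play.

```csharp
using System;

// Hypothetical externals of a system under test.
public interface IPriceService { decimal GetPrice(string article); }
public interface IOrderStore { void Save(string article, decimal price); }

// Stub: provides canned input so the test can invoke the functionality.
public class PriceServiceStub : IPriceService
{
    public decimal PriceToReturn { get; set; }

    public decimal GetPrice(string article)
    {
        return PriceToReturn;
    }
}

// Mock: records the call so the test can check the observable effect.
public class OrderStoreMock : IOrderStore
{
    public string SavedArticle { get; private set; }
    public decimal SavedPrice { get; private set; }

    public void Save(string article, decimal price)
    {
        SavedArticle = article;
        SavedPrice = price;
    }
}
```

The stub feeds the system, the mock lets the test verify what came out the other side.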
This gives us the frame inside which we can develop the functionality.
Normally, things are a bit more complicated so that we have to introduce several stubs and mocks to provide the system with all external information that it needs and to check whether all operations were executed correctly.
Now it’s time for a demo. Our job is to implement a service method that takes a semicolon-separated list of strings and returns the alphabetically smallest element. Furthermore, each call to the method should be logged.
I’ve prepared a sample solution containing three projects: one for the production code, one for the acceptance tests, also known as specifications, and one for the unit tests.
I’ll use Machine.Specifications to write the acceptance tests/specifications.
Video: Writing specification
First, I translate the acceptance criteria into an executable specification or acceptance test. The Subject is used to group specifications belonging together.
These specifications can be run and are shown as ignored, meaning that they are not yet implemented.
I define that the requested functionality should be provided by a method on a Facade. The Establish block is used to setup the acceptance test. The Because block executes the action to test. And the It blocks are used to check whether the functionality was executed correctly.
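A specification in this style might look like the following sketch. The class and member names (`Facade`, `GetSmallestElement`) are assumptions derived from the acceptance criteria, not the exact demo code; the Machine.Specifications elements (`Subject`, `Establish`, `Because`, `It`) are as described above.

```csharp
using Machine.Specifications;

[Subject("Finding the smallest element")]
public class When_the_smallest_element_is_requested
{
    static Facade facade;
    static string result;

    // Establish: set up the object under test.
    Establish context = () =>
        facade = new Facade();

    // Because: execute the action to test.
    Because of = () =>
        result = facade.GetSmallestElement("banana;apple;cherry");

    // It: check that the functionality behaved correctly.
    It should_return_the_alphabetically_smallest_element = () =>
        result.ShouldEqual("apple");

    // An unassigned It field is reported as "not implemented".
    It should_log_the_call;
}
```

The unassigned `It` field is what the runner reports as ignored/not yet implemented, which matches the state shown in the previous slide.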
Video: Make specification runnable
In order to run the specification, I have to fix the compile errors by introducing a dummy Facade implementation.
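The dummy implementation can be as simple as the sketch below; the method name is an assumption based on the acceptance criteria.

```csharp
using System;

// Minimal dummy so the specification compiles and can run (and fail).
public class Facade
{
    public string GetSmallestElement(string semicolonSeparatedValues)
    {
        throw new NotImplementedException();
    }
}
```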
Video: Let specification fail
When I run the specification, the acceptance criterion that the smallest element should be returned fails; the logging acceptance criterion is still shown as not yet implemented.
Now I can start implementing the required functionality using Test Driven Development.
While implementing the functionality with Test Driven Development, this design emerged. The Facade uses the Tokenizer to split the list of values and passes the tokens to the SmallestElementFinder, which returns the smallest element; the Facade then returns the result to the caller.
The solution has grown by these types and their interfaces.
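A sketch of that design could look as follows; the exact interface shapes are assumptions, but the collaboration is the one described above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface ITokenizer
{
    IEnumerable<string> Tokenize(string semicolonSeparatedValues);
}

public interface ISmallestElementFinder
{
    string FindSmallest(IEnumerable<string> values);
}

// Splits the semicolon-separated input into individual values.
public class Tokenizer : ITokenizer
{
    public IEnumerable<string> Tokenize(string semicolonSeparatedValues)
    {
        return semicolonSeparatedValues.Split(';');
    }
}

// Picks the alphabetically smallest value.
public class SmallestElementFinder : ISmallestElementFinder
{
    public string FindSmallest(IEnumerable<string> values)
    {
        return values.OrderBy(v => v, StringComparer.Ordinal).First();
    }
}

// Coordinates the collaborators and returns the result to the caller.
public class Facade
{
    private readonly ITokenizer tokenizer;
    private readonly ISmallestElementFinder finder;

    public Facade(ITokenizer tokenizer, ISmallestElementFinder finder)
    {
        this.tokenizer = tokenizer;
        this.finder = finder;
    }

    public string GetSmallestElement(string values)
    {
        return finder.FindSmallest(tokenizer.Tokenize(values));
    }
}
```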
Video: Run unit tests
I run the unit tests of the three classes that were implemented.
After updating the specification because the constructor of the Facade has changed, the specification for the result value passes. Note that the system internals are not mocked; the real implementations (Tokenizer, SmallestElementFinder) are used.
Now it’s time to get the logging specification running.
First, I introduce the interface ILogger in order to abstract away system externals like log files.
Then, I implement a simple stub that is used to simulate the logger in the specification.
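The abstraction and its test double might look like this sketch; the `Log` method signature is an assumption.

```csharp
using System;
using System.Collections.Generic;

// Abstracts away the system external (e.g. a log file).
public interface ILogger
{
    void Log(string message);
}

// Simple test double used in the specification: records messages
// in memory instead of writing to a real log.
public class LoggerStub : ILogger
{
    public readonly List<string> Messages = new List<string>();

    public void Log(string message)
    {
        Messages.Add(message);
    }
}
```

Because the stub records every message, the specification can later check that a log entry was written without touching the file system.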
Video: Write logging specification
Now, the specification can be written. It states that there should be a log message containing the input and output values. It is not an exact match because I’m not interested in the details, only in whether a log message is written at all. The unit tests deal with the details.
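Such a loose check might be written as in the sketch below; the collaborator names and the constructor shape are assumptions, and the assertion deliberately matches substrings rather than the exact message.

```csharp
using System.Linq;
using Machine.Specifications;

[Subject("Finding the smallest element")]
public class When_the_smallest_element_is_requested_with_logging
{
    static Facade facade;
    static LoggerStub logger;

    Establish context = () =>
    {
        logger = new LoggerStub();
        facade = new Facade(new Tokenizer(), new SmallestElementFinder(), logger);
    };

    Because of = () =>
        facade.GetSmallestElement("banana;apple;cherry");

    // Not an exact match: only check that input and output appear
    // somewhere in a logged message.
    It should_log_a_message_containing_input_and_output = () =>
        logger.Messages
            .Any(m => m.Contains("banana;apple;cherry") && m.Contains("apple"))
            .ShouldBeTrue();
}
```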
Video: Let specification fail
The logging specification fails because logging is not yet implemented.
Video: Extend Facade with ILogger
I extend the constructor of the Facade with the logger. Dependency injection is a simple way to make classes and whole systems easy to test.
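The extended Facade might look like the sketch below; the interface shapes and the log message format are assumptions. The point is that all collaborators arrive via the constructor, so tests can inject stubs and mocks.

```csharp
using System;
using System.Collections.Generic;

public interface ITokenizer { IEnumerable<string> Tokenize(string values); }
public interface ISmallestElementFinder { string FindSmallest(IEnumerable<string> values); }
public interface ILogger { void Log(string message); }

public class Facade
{
    private readonly ITokenizer tokenizer;
    private readonly ISmallestElementFinder finder;
    private readonly ILogger logger;

    // All collaborators are injected through the constructor.
    public Facade(ITokenizer tokenizer, ISmallestElementFinder finder, ILogger logger)
    {
        this.tokenizer = tokenizer;
        this.finder = finder;
        this.logger = logger;
    }

    public string GetSmallestElement(string values)
    {
        string smallest = finder.FindSmallest(tokenizer.Tokenize(values));

        // Log every call with input and output, as the acceptance
        // criteria demand.
        logger.Log(string.Format(
            "GetSmallestElement('{0}') returned '{1}'", values, smallest));

        return smallest;
    }
}
```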
The functionality is implemented using Test Driven Development. The design remains as specified in the acceptance test.
Video: Let acceptance test succeed
Finally, the acceptance test passes and I’m finished with the implementation.
Machine.Specifications allows me to create a report containing all specifications. The report for this small example looks like this:
We use the report to discuss whether we implemented the correct functionality with our product owner and as a reference when we are not sure how the system behaves in certain situations.
The result of using Acceptance Test Driven Development and Test Driven Development is software that remains changeable.
While unit tests enable the team to keep the code clean and simple by using refactoring, acceptance tests both describe and check the functionality of the system. The description helps us to make decisions consistent with the existing software. Using the acceptance tests as regression tests helps us to keep the software functional.
Therefore, the software can be pushed forward without the risk of breaking existing functionality.
So that in the end, our software looks more like this…
…than the ruin I’ve shown in the beginning.