Introduction
Test-Driven Development is best described in the source work of the same name by Kent Beck. Our purpose here is to provide a practical exercise which illustrates the highlights of TDD using a Testing Framework we've developed in C#. There are already some excellent frameworks out there for TDD and C#, the most obvious being NUnit, mbUnit, and csUnit. The Testing Framework we use here differs from these alternatives in several ways.
First, our Testing Framework is a Visual Studio solution, which can be opened and run, just like any other application. You work with the solution by adding the application project to be tested and a test project (where the tests reside). Development then proceeds normally, as you move back and forth between the running application and Visual Studio. As a result, this approach feels more natural than launching an external EXE to load and manage your tests.
Another difference is that the Testing Framework introduces the concepts of Master and Monitor. A Master is a persistent representation of test results which can be used to validate future results from the same test. This differs from hard-coding the expected results in the test itself, a technique which breaks down when "correct" is represented by multiple values in multiple objects across the system.
A Monitor is any kind of user-interface that helps us inspect the values in the Master. A typical testing sequence would be: run the test, view the results in the Monitor to make sure the test ran correctly, save the Master, and finally verify the Master to make sure it was saved properly and that it matches the live test results after deserialization.
None of these enhancements calls the utility of stock TDD techniques into question. Those techniques are tried and true, and have already proven themselves solid and workable. Rather, we developed these enhancements within the context of UI testing, which initially lends itself well to traditional TDD, but moves a world away once tests involve multiple user actions.
Dreaming Up Tests
For a moment, it may be a bit difficult to see the forest for the trees here. Rest assured that our goal is production quality code, not tests for tests' sake. If the initial exercise seems excessive, just remember that in normal development, you only set up a testing solution once every few thousand tests.
We will be developing a simple class: WidgetCollection. WidgetCollection will descend from System.Collections.CollectionBase, enforcing uniqueness in the list. This class is necessarily trivial, so that we can focus on technique.
So let's think about some tests. First, easy tests. What should absolutely work?
- Add, should result in one object in the collection.
- Add then Remove, should result in no objects in the collection.
- Add the same object twice, should result in an exception stating the second object is not unique.
- Add then remove the same object twice, should result in an exception stating the object is not in the collection.
These are bread and butter tests, guaranteed to occur under normal use. It's also good to think of some edge cases. These are tests which you may not consider very likely, but are at least possible.
- Add a null object, should result in an exception.
- Remove a null object, should result in an exception.
For a while, we considered the adage "There's no such thing as a bad test". I think what we really meant was, "Any test is better than no test". Clearly, the smallest set of tests which exercise the greatest amount of code would be the best. Instead of worrying too deeply about this, we recommend falling back on the "Any test" adage, since the purpose of TDD is to move forward steadily in ways that make sense to you. Leave beating the Rubik's cube out of the equation; the experience gained from sheer volume will always make a better tester of you.
Readying the Solution
The next step is to configure the solution. We start by copying the stubbed-out Testing Framework solution. We call this solution the CaseBase. CaseBase is organized like this:
- The Main project is defined as a Windows application.
- The Testing Framework project (an assembly written in C#) supports the authoring of tests and contains the UI invoked by Main.
- The MyTests project builds as an assembly and is the repository for your tests. This assembly would usually be named after the tests it contains.
- The MyLibrary project represents the portion of the application being developed in conjunction with the tests. It also builds as an assembly. We can either add code files to this project as we develop, or remove the project entirely and replace it with one from our application. If we keep this project, we usually give it an application-specific name.
- The Testing.Core project contains types which can be shared across multiple testing assemblies. This core is where you put types that you do not want to declare again and again in the test assemblies, but that do not belong in MyLibrary (the application itself) either.
Let's move forward and see how this works out.
First, we unzip CaseBase to some directory. Here we'll use "C:\CaseBase". Next, we open "CaseBase.sln" in Visual Studio. We rename myLibrary to "WidgetLibrary" (an application specific name), and we rename myTests to "WidgetCollectionTests".
Note: If you are experienced in Visual Studio, you know that renaming projects does not rename the underlying operating system folders in which those projects reside. We usually rename these folders in the Windows Explorer and then manually edit the ".sln" file to reflect the changes. You can also exclude the projects from within Visual Studio, shut down Visual Studio, rename the folders in the Explorer, launch Visual Studio, and then add the projects back into the solution using the Add | Existing Project... menu option. For our purposes here, it's okay to let the folder names and project names go out of sync. In a production environment, though, you normally don't want to do this, as it adds considerable confusion to the source.
After configuring CaseBase, we should be able to run the solution. Doing so displays some empty forms. On the left, the Cases form shows us all the cases currently under consideration. On the right, the Monitor shows us the results from the currently selected case. That's all there is to CaseBase setup. Now we're ready to code some tests.
Coding the Tests
All of our tests have something in common: they operate on a WidgetCollection object. In TDD, tests which are similar enough to rely on the same setup routines are usually coded as a Fixture. We use the Fixture pattern to reduce the amount of code in our tests. Here, we implement the Fixture pattern by changing the name of the stub case class in WidgetCollectionTests from myCase to WidgetCollectionCase and implementing the setup routine as follows:
protected override void Setup()
{
    _Widgets = new WidgetCollection();
    TestObject = _Widgets;
}
TestObject is a special property declared in the Case class. This object is used to extract values and build the master.
When the case is complete, we want the WidgetCollection to be garbage collected. To ensure this, we code a teardown:
protected override void TearDown()
{
    _Widgets = null;
}
Now our tests can descend from WidgetCollectionCase, and need only contain the actual test code. For example, the first test, AddWidget, would be coded as:
public class Case0001AddWidget: WidgetCollectionCase
{
    protected override void Test()
    {
        Widgets.Add(new Widget());
    }
}
Widgets is a protected property declared in WidgetCollectionCase, giving all descendant cases access to the WidgetCollection object being tested.
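Pulling these pieces together, the assembled fixture might be shaped like this (a sketch; the Case base class, with its virtual Setup and TearDown methods and its TestObject property, is supplied by the Testing Framework, so only the widget-specific members are ours):
// Sketch of the assembled fixture. The Case base class comes from the
// Testing Framework; _Widgets and Widgets are the widget-specific members.
public class WidgetCollectionCase: Case
{
    private WidgetCollection _Widgets;

    // Gives descendant cases access to the collection under test.
    protected WidgetCollection Widgets
    {
        get { return _Widgets; }
    }

    protected override void Setup()
    {
        _Widgets = new WidgetCollection();
        TestObject = _Widgets;
    }

    protected override void TearDown()
    {
        _Widgets = null;
    }
}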
At this point, the solution will not build. We have not coded the WidgetCollection or Widget. Let's do that next.
Coding Widgets
We code just enough of the WidgetCollection to pass the first test:
using System.Collections;

public class WidgetCollection: CollectionBase
{
    public void Add(Widget aWidget)
    {
        InnerList.Add(aWidget);
    }
}
And just enough of the Widget class to get a clean compile:
public class Widget
{
}
Next, we build the solution and run it.
The AddWidget case does indeed appear in the Cases list. We select the case to run it, and the following appears in the Monitor:
Notice that there are no values displayed, only starting and ending entries indicating that a WidgetCollection object was created. To verify these results as correct, we need to know more. A count of the widgets in the collection would be sufficient. So let's code the classes necessary to show this value in the Monitor.
Defining the State
The state of a case is a set of values which represent the test results. A state must descend from CaseState and can contain anything you define as relevant for verifying the correctness of a given case type. The CaseState is used to show case result values in the Monitor and to save those values as a master. Case and CaseState are modeled like this:
To associate the WidgetCollectionCase with a case state, we code:
public class WidgetCollectionCase: Case
{
    public override Type StateType
    {
        get
        {
            return typeof(WidgetCollectionState);
        }
    }
}
The WidgetCollectionState would be defined as:
public class WidgetCollectionState: CaseState
{
    // Holds the extracted widget count for later display and comparison.
    private int _Count;

    public override void ExtractTestValues()
    {
        WidgetCollection widgets = (WidgetCollection) TestObject;
        _Count = widgets.Count;
    }
}
The state must also be serializable. We'll use the standard .NET framework serialization capabilities to handle this. To verify correctness, we need to compare the state to another state object of the same type. We also need to display the values in the state in the Monitor, so we can make sure the test results are correct. Both of these requirements are handled by the MatchStick infrastructure.
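In the simplest case, making the state serializable amounts to marking it with the standard .NET SerializableAttribute (a sketch, assuming the CaseState base class itself supports serialization):
[Serializable]
public class WidgetCollectionState: CaseState
{
    // _Count and ExtractTestValues as shown above. The attribute lets the
    // framework persist the state as a master using standard .NET serialization.
}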
Coding a MatchStick
To pull the count from WidgetCollectionState, we define two types: IWidgetCollectionCount (which defines a read-only integer property) and WidgetCollectionCountMatchStick. Once we've coded these two types, we register the MatchStick as follows:
MatchboxRegistry.Register(typeof(IWidgetCollectionCount),
                          typeof(WidgetCollectionCountMatchStick));
Here we are associating the interface with the matchstick. After a given test completes, the interface will be pulled from the state object, and the matchstick will be created and given the interface. The matchstick will then create an entry containing the value of the count. If the matchstick is given two states, it will create an entry indicating the equality of the two counts. This entry is then displayed in the Monitor. Entries which do not match are displayed in red in the Monitor. Entries which match are displayed in green.
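For reference, the interface half of the pair might look like the following (a sketch; the property name Count is our assumption, and WidgetCollectionState would implement this interface so the framework can pull it from the state after a run; the WidgetCollectionCountMatchStick itself descends from the framework's MatchStick base class, so its members are not shown):
// Read-only view of the count that the matchstick pulls from the state.
// (The property name "Count" is illustrative.)
public interface IWidgetCollectionCount
{
    int Count { get; }
}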
The test verification process is diagrammed here:
With our matchstick in place, we can now build and run the application again. This time, the Monitor shows the count when we select the case:
Finishing the Cases
We have a few more cases to code, and some of them cannot be mastered with a simple Count property value. The cases that throw exceptions present two problems. We can't allow them to blow through the top of our application, and we need to verify that the exception thrown is the correct one. To handle these issues, we can code exception cases like this:
public class Case0003AddSameObjectTwice: WidgetCollectionCase
{
    protected override void Test()
    {
        Widget widget = new Widget();
        this.Widgets.Add(widget);
        try
        {
            // The second Add of the same object should throw.
            this.Widgets.Add(widget);
        }
        catch (NonUniqueValueException e)
        {
            this.State.ExceptionMessage = e.Message;
        }
    }
}
Here, we catch the expected exception and save its message in a new property of WidgetCollectionState. To display the message in the Monitor, we would also code an interface and matchstick that pull the exception string. The results of this case then display in the Monitor:
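On the WidgetLibrary side, this case only goes green once Add actually enforces uniqueness. A minimal sketch of that check, assuming NonUniqueValueException is defined in WidgetLibrary with a message constructor, might look like this:
public void Add(Widget aWidget)
{
    // Null objects are rejected outright (one of the edge cases listed earlier).
    if (aWidget == null)
    {
        throw new ArgumentNullException("aWidget");
    }
    // Adding the same object twice throws the exception Case0003 expects.
    // (The message text shown here is illustrative.)
    if (InnerList.Contains(aWidget))
    {
        throw new NonUniqueValueException("The object is not unique.");
    }
    InnerList.Add(aWidget);
}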
After inspecting the results of each case, we can generate masters for all cases and verify them. We do this by right-clicking on the Cases list and selecting "Generate All Masters". The framework then runs each case, saving the results. The icon next to each case changes to yellow, indicating that masters exist but have not yet been verified. We right-click again and select "Verify All". The framework runs each case again, comparing the current results with the saved results from the master. If the results match exactly, the case icon is set to green. If they do not match, the case icon is set to red, and the differing values are shown in red in the Monitor.
At this point, all of our cases are green, and we are ready to move forward with further development.
In the next article, we use CaseBase to begin the development of a UI platform written entirely in C#.