1. Introduction
Unit testing is certainly one of the most overlooked steps in the software development cycle. More often than not, developers create a very simple set of tests (like a console application) that ensures the implemented functionality works properly before handing their applications over to others to test. But the coding of unit tests usually doesn't get as much attention as the coding of the application itself, and the average developer certainly will not "waste time" maintaining test code. This is obviously not true for those who have embraced Test Driven Development (TDD), one of the offshoots of Extreme Programming. Let's enumerate just a few of the TDD principles:
- Do Simple Things: programmers should do the simplest thing that could possibly work.
- Code Unit Test First: programmers should always code automated tests before coding the functionality that will be tested.
- Once and Only Once: programmers should systematically refactor their code to avoid duplication and allow reusability.
That being said, it is no surprise that TDD enthusiasts long ago created a set of programming tools and frameworks to simplify the coding of automated unit tests. Some benefits of automated unit tests:
- Maintainability: Just like the application code, automated unit tests are written as a set of classes, usually maintained in test projects and mapped to the classes under test on a one-to-one basis.
- Reusability: Automated unit tests can be extended and executed as many times as the development team wishes. Besides, the successful execution of the unit tests can be a requirement for the pre-build phase.
- Visibility: Unit tests can be used by a code coverage tool so that developers can see how much of the application code has been tested by the unit tests.
- Developer's Confidence: If the code is properly tested and the automated tests can be easily executed, the developer can refactor or implement new code with much more confidence.
- Documentation: Unit tests can help in extending the class documentation, and in providing practical examples of its correct usage.
Unfortunately, there are many pitfalls along the path, and coding unit tests can be a very hard (if not impossible) task when bad design choices are made by merely applying the same paradigms of legacy projects to new ones. This is why software architects should add testability as a requirement for their development cycles.
The rest of this article tries to demonstrate how the use of design patterns can allow your future applications to benefit from (and to be enabled for) unit testing, even if you don't choose a Test Driven Development approach from the beginning.
Since the sample application was written in C#, I will use the NUnit framework, which is a port of JUnit to .NET. There are other interesting unit testing frameworks, like MbUnit and Visual Studio Team System, but I'm going to focus only on NUnit, which is a member of the popular xUnit family of frameworks.
I want to close this introduction by thanking the people who have inspired me to write this article, especially Martin Fowler, Hamilton Verissimo, Jean-Paul Boodhoo, Billy McCafferty, and Ayende Rahien.
2. Background
It might be said that an application is easy to test if its components (assemblies and classes) are easy to test.
What do you mean by "easy to test"?
A class is easy to test if it can be easily isolated from the other classes it interacts with. That is, we should be able to test a class individually, without having to be concerned with other classes' implementations. Thus, if a unit test fails, it is much easier to find the source of the bug. In unit testing, we isolate the class being tested by creating mocks of the classes it depends on. Mocks are fake instances of a class/interface that stand in for concrete objects, and they are critical tools for unit testing isolation. But before going on with mocks, let's say you have created an Invoice application with two classes, InvoiceServices and TaxServices:
public class InvoiceServices
{
    TaxServices taxServices = new TaxServices();

    public InvoiceServices() { }

    public float CalculateInvoice(params int[] productCodes)
    {
        float invoiceAmount = 0;
        // ... calculate the invoice amount from the product codes ...
        return invoiceAmount + taxServices.CalculateTax(invoiceAmount);
    }
}
public class TaxServices
{
    public TaxServices() { }

    public float CalculateTax(float invoiceAmount)
    {
        float tax = 0;
        // ... calculate the tax for the given invoice amount ...
        return tax;
    }
}
You also have another class, named InvoiceServicesTest, which tests the InvoiceServices class:
[Test]
public void CalculateInvoiceTest()
{
    InvoiceServices invoiceServices = new InvoiceServices();
    float totalAmount = invoiceServices.CalculateInvoice(10001, 10002, 10003);
    float expectedTotalAmount = 105.35F;
    Assert.AreEqual(expectedTotalAmount, totalAmount,
        string.Format("Total amount should be {0}",
        expectedTotalAmount.ToString()));
}
Now, let's say you discover that the CalculateInvoice() function is returning an incorrect value. How can you tell which of the two classes (InvoiceServices and TaxServices) is not doing its job correctly? The answer is: you should refactor your test code and introduce a mock for the TaxServices class, so that your InvoiceServices class can be isolated. Unfortunately, you can't do that easily. The problem lies in the fact that InvoiceServices directly instantiates the TaxServices object it uses. This is called "tight coupling", and it is a bad design that hinders the testability of your application, because you will not be able to easily isolate the two classes. Most mock frameworks can't work with tightly coupled classes. Certainly, there are tools (like TypeMock) which can do that job. But since we are trying to apply best practices in our application, and to avoid relying on a specific mock framework, we should refactor our code to use loose coupling through the Dependency Injection pattern, also known as Inversion of Control (IoC).
The Dependency Injection Pattern (a.k.a. Inversion of Control)
The Dependency Injection pattern, or Inversion of Control (IoC), is a pattern which enables loose coupling in our applications. To enable loose coupling, we should refactor our InvoiceServices class: first, by removing the construction of a concrete TaxServices instance inside InvoiceServices; second, by creating the TaxServices instance outside of InvoiceServices and passing it in via the constructor or via a property setter (setter injection). Let's see what the refactoring looks like:
public class InvoiceServices
{
    ITaxServices taxServices;

    public InvoiceServices(ITaxServices taxServices)
    {
        this.taxServices = taxServices;
    }

    // ...
}
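The refactored class now depends on an ITaxServices interface rather than on the concrete TaxServices class. That interface lives in the article's source code; a minimal sketch of what it would have to look like, based solely on the CalculateTax signature shown earlier, is:

public interface ITaxServices
{
    float CalculateTax(float invoiceAmount);
}

// The concrete class then simply implements the interface:
public class TaxServices : ITaxServices
{
    public float CalculateTax(float invoiceAmount)
    {
        float tax = 0;
        // ... calculate the tax for the given invoice amount ...
        return tax;
    }
}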
Now that we don't create a TaxServices instance inside InvoiceServices anymore, we have to create it outside and pass it in via the constructor. So, let's refactor our InvoiceServicesTest class to create a "mocked" TaxServices object with the NMock framework:
using NMock2;
using NUnit.Framework;
using System;
using System.Collections.Generic;
using System.Text;

namespace InvoiceServices.Test
{
    [TestFixture]
    public class InvoiceServicesTest
    {
        [Test]
        public void CalculateInvoiceTest()
        {
            Mockery mockery = new Mockery();
            ITaxServices mockTaxServices = (ITaxServices)
                mockery.NewMock<ITaxServices>();
            InvoiceServices invoiceServices =
                new InvoiceServices(mockTaxServices);

            Expect.Once.On(mockTaxServices).
                Method("CalculateTax").With(100.00F).
                Will(Return.Value(5.35F));

            float totalAmount =
                invoiceServices.CalculateInvoice(10001, 10002, 10003);
            float expectedTotalAmount = 105.35F;
            Assert.AreEqual(expectedTotalAmount,
                totalAmount,
                string.Format("Total amount should be {0}",
                expectedTotalAmount.ToString()));
        }
    }
}
So, what's new in our refactoring? First, we added a reference to the NMock framework (in the line "using NMock2;"). Then, we instantiated a Mockery object. Mockery is the NMock class which acts as a mock factory. Next, we instantiated the mock object mockTaxServices from the ITaxServices interface, using the Mockery object. The other new line is an "expectation", which tells the mocked object mockTaxServices to wait for a call to its CalculateTax function. Since the mock object doesn't have any code of its own, the function would just return a default value, and our test would be useless. Fortunately, the Expect line we've just added also tells the NMock framework to return the value $5.35 if the invoice amount equals $100.00. This shows how Dependency Injection enables you to create mocks for unit testing isolation. Please note that even though I used NMock, it must be said that there are better options, like Rhino Mocks and TypeMock, which implement expectations in a strongly typed manner and use the so-called record-replay model, allowing IntelliSense and refactoring tools to rename the mock members. The reader is certainly welcome to experiment with these other mock frameworks.
3. The Northwind Solution: Putting Things Together
I decided to create an easy-to-test Windows Forms application based on the Northwind sample database for SQL Server 2000.
The sample application deals only with two Northwind tables: Customers and Orders. Please note that the application only allows you to retrieve, create, update, and delete rows of the Customers table, while the Orders table is used as a lookup table for each customer.
The application architecture looks like this:
In the above diagram, you can see that I not only made a clear distinction between the UI and the presentation logic, but also put them in separate layers. Why is that? To answer this question, let's look at another design pattern.
The Model-View-Presenter pattern
It is a well-known problem that user interfaces are difficult to test. If you keep both the user interface and the presentation logic in a view object (e.g., a Windows form) and you need to apply automated unit tests to that view object, you'll be in trouble. To solve this problem, there is a set of design patterns such as Model-View-Controller and Model-View-Presenter. I prefer using the MVP approach because it enhances testability.
The participants of the MVP pattern are:
- Model: The domain model objects (e.g., the Customer class).
- View: A lightweight user interface (e.g., a Windows form).
- Presenter: A layer coordinating the interaction between the view and the model.
Martin Fowler describes variations of MVP called Passive View and Supervising Controller. In the Passive View approach, instead of containing presentation logic itself, the view hands off its events to the presenter. The view becomes passive, because the presenter is now responsible for coordinating all presentation logic. Making the view as simple as possible enables us to create automated unit tests on the presentation layer. In addition, there will be very little risk of untested view functionality.
Usually, in MVP implementations, the presenter is responsible for updating the model. But in this Northwind sample, I used a services layer as a mediator for every access to the model. I'll discuss the Services layer soon in this article.
Separated Interface pattern
Taking a closer look at the interaction between the view and the presenter in the package diagram below, we'll see that the Northwind sample uses the Separated Interface pattern. The Windows form named CustomerView implements an ICustomerView interface, which is placed in another assembly (the presenter package). The CustomerPresenter object knows it will have to control a view, but since it holds a reference to the ICustomerView interface, it doesn't really know what the concrete view is going to look like. This pattern is another good design for enabling unit testing, because we'll be able to easily inject a "mocked" view into the presenter, thus performing isolated unit tests on the presentation logic.
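To give an idea of what such an isolated test could look like, here is a small sketch using the same NMock approach as before. It is only illustrative; the actual interfaces and the presenter's constructor are shown in the sections that follow, and depending on what the constructor touches, extra expectations on the mocked view may be required:

[Test]
public void PresenterCanBeDrivenByAMockedView()
{
    Mockery mockery = new Mockery();

    // The presenter only knows the view through its interface,
    // so no Windows Forms code is involved in this test.
    IViewCustomerView mockView = mockery.NewMock<IViewCustomerView>();

    ViewCustomerPresenter presenter = new ViewCustomerPresenter(mockView);

    // From here on, presentation logic can be exercised and verified
    // against expectations set on the mocked view.
    Assert.IsNotNull(presenter);
}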
Widgets
Since our CustomerView Windows form is a passive view, it should be stripped of any presentation logic, and this includes not only the form-level logic, but also the presentation logic needed for its constituent controls (a.k.a. "widgets"). This is done by exposing the widgets as properties to be accessed from the CustomerPresenter object. These properties return UI controls (see the UI Controls package) that act as wrappers for real Windows Forms controls. The wrapper classes implement a separated interface located in the TransferObjects package. To clarify this process, let's imagine how the CustomerPresenter controls the behavior of the btnSaveButton of the CustomerView Windows form (a small example of the presenter-side usage follows the walkthrough below):
- In the TransferObjects package, there is an IActionButton interface, with a Click event and an Enabled property.
public interface IActionButton
{
    event EventHandler Click;
    bool Enabled { get; set; }
}
- In the UI Controls package, there is a WinActionButton class that implements the IActionButton interface.
public class WinActionButton : IActionButton
{
    // The real Windows Forms button being wrapped.
    Button underlyingButton;

    public event EventHandler Click;

    // ...

    public bool Enabled
    {
        get { return underlyingButton.Enabled; }
        set { underlyingButton.Enabled = value; }
    }
}
- In the Presentation package, there is an IViewCustomerView interface, with a property named SaveActionButton, which returns an IActionButton object.
public interface IViewCustomerView
{
    // ...
    IActionButton SaveActionButton { get; }
    // ...
}
- In the Presentation package, there is a CustomerPresenter class, with a public method SaveCustomer, which is responsible for calling the save functionality in the Services layer.
public class ViewCustomerPresenter
{
    // ...

    public void SaveCustomer()
    {
        // ...
    }
}
- In the UI package, the CustomerView Windows form implements the IViewCustomerView interface.
public partial class ViewCustomers : Form, IViewCustomerView
- Inside the ViewCustomers constructor, the form creates a CustomerPresenter instance, using Dependency Injection via CustomerPresenter's constructor. The view injects itself into the presenter, and then explicitly wires the Click event of its btnSaveButton to the SaveCustomer public method of CustomerPresenter.
public ViewCustomers()
{
    InitializeComponent();
    presenter = new ViewCustomerPresenter(this);
    // ...
    this.SaveActionButton.Click += delegate { presenter.SaveCustomer(); };
    // ...
}
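Here is the promised example of the presenter side of this arrangement. It is only a sketch of hypothetical presentation logic, but it shows why the wrappers matter: because the presenter sees only IActionButton, it can enable or disable the save button without referencing any Windows Forms type.

public class ViewCustomerPresenter
{
    private readonly IViewCustomerView view;

    public ViewCustomerPresenter(IViewCustomerView view)
    {
        this.view = view;
    }

    // Hypothetical presentation logic: only allow saving once something has changed.
    public void CustomerChanged()
    {
        view.SaveActionButton.Enabled = true;
    }
}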
Presentation Layer Test Results
The Services Layer
A few sections above, I said that in the Northwind application, the presenter doesn't modify the model directly. Instead, it calls methods in the Services layer. The Services layer is a good design option because it exposes well-defined processes of the application. Besides, it is a good place to define transaction scope; and, if you want to implement transactional updates on domain objects, you can implement the Command pattern in the Services layer (the Command pattern would enable you to support undoable operations on domain model objects).
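The Command pattern is not implemented in the Northwind sample, but as a hint of what it could look like, a service-level command would typically expose an Execute/Undo pair:

// Not part of the sample: a minimal command abstraction a services
// layer could use to make domain updates undoable.
public interface ICommand
{
    void Execute();
    void Undo();
}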
In fact, the presenter doesn't even know the domain model. It only knows the transfer objects (see the TransferObjects package). The transfer objects in the sample application are read-only copies of the actual domain model. Every interaction between the Presenter and the Services layer is made through transfer objects. It should be emphasized that only the Services layer knows how to manipulate the domain model objects in a transactional way (even though I don't use transactional object manipulation in the Northwind sample). The Services layer also protects the domain model from being improperly updated by the Presentation layer.
The CustomerServices class implements the ICustomerServices interface, which exposes the CRUD methods (Create, Retrieve, Update, and Delete), plus one more method for customer listing (a sketch of the full interface follows the list below):
NewCustomer(string customerId)
GetDetailsForCustomer(string customerId)
UpdateCustomer(CustomerTO customerTO)
DeleteCustomer(string customerId)
GetCustomerList()
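Written out as an interface, ICustomerServices might look like the sketch below. The return types are my assumption, inferred from the transfer objects described later in this article; the actual signatures are in the article's source code:

public interface ICustomerServices
{
    // Return types are assumed here; CustomerTO is the customer transfer object.
    CustomerTO NewCustomer(string customerId);
    CustomerTO GetDetailsForCustomer(string customerId);
    void UpdateCustomer(CustomerTO customerTO);
    void DeleteCustomer(string customerId);
    CustomerTO[] GetCustomerList();
}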
The CustomerServices class has a reference to the IRepository interface. The CustomerServices client (in this case, the presenter object) can create the CustomerServices object with one of its two repository "modes": Database and InMemory. The former makes CustomerServices instantiate a repository for a live relational database (SQL Server 2000, in our sample application), and the latter mode tells CustomerServices to work with an in-memory repository. The repository objects will be discussed later in this article.
Services Layer Test Results
The Transfer Objects layer
The transfer objects are read-only POCOs (Plain Old C# Objects) that carry domain model object data between the layers of the application. In the sample Northwind solution, they are used only between the Presentation layer and the Services layer. Transfer objects are pretty dumb by definition, and should never carry any kind of validation or business logic.
The TOHelper class is a static class that offers two functionalities:
- It creates a Transfer Object from an equivalent Domain Model object; and
- It updates a Domain Model object from the Transfer Object data.
Both functions use Reflection techniques to investigate object properties and copy data from one instance to another. This increases reusability and extensibility of the data transfer process.
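TOHelper's actual implementation ships with the article's source code; the following is only a rough sketch of the reflection-based copy idea it describes, matching properties by name between the source and target objects:

using System.Reflection;

// A rough sketch of reflection-based property copying (not the actual TOHelper code).
public static class PropertyCopier
{
    public static void CopyMatchingProperties(object source, object target)
    {
        foreach (PropertyInfo sourceProperty in source.GetType().GetProperties())
        {
            PropertyInfo targetProperty = target.GetType().GetProperty(sourceProperty.Name);
            if (targetProperty != null && sourceProperty.CanRead && targetProperty.CanWrite)
            {
                targetProperty.SetValue(target, sourceProperty.GetValue(source, null), null);
            }
        }
    }
}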
Transfer Objects Layer Test Results
The Domain Model Layer
I started the design of my domain model with a generic class, DomainBase<T>, where the type parameter T is the type used in the virtual ID property. Every domain class should override this base ID property with a proper ID. The Customer ID is a string, while the Order ID is an integer.
The Customer class is pretty simple, and the only validation it performs is for non-null/non-empty values in the setters of the CompanyName and Phone properties.
Notice that in the Northwind sample, the domain object classes are decorated with two attributes: ActiveRecord and MappedTO.
[ActiveRecord("Customers")]
[MappedTO("Northwind.TransferObjects.Model.CustomerTO")]
public class Customer : DomainBase<string>
- The ActiveRecord attribute provides the name of the table the domain object is mapped to. This attribute is used by the ActiveRecord framework, an Object-Relational Mapping framework, and will be discussed later on in this article.
- The MappedTO attribute provides the full name of the Transfer Object, and is used by the TOHelper class (in the TransferObjects package) in the Domain Object-to-Transfer Object copy operations (a sketch of such an attribute is shown below).
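MappedTO is a custom attribute defined in the sample solution. Its exact implementation is in the source download; a minimal sketch of what such an attribute could look like is:

// A minimal sketch of a custom mapping attribute (the real one ships with the sample source).
[AttributeUsage(AttributeTargets.Class)]
public class MappedTOAttribute : Attribute
{
    private readonly string transferObjectTypeName;

    public MappedTOAttribute(string transferObjectTypeName)
    {
        this.transferObjectTypeName = transferObjectTypeName;
    }

    // TOHelper can read this full type name and create the transfer object via reflection.
    public string TransferObjectTypeName
    {
        get { return transferObjectTypeName; }
    }
}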
The Unit of Work pattern
The DomainBase class also stores the values IsNew and IsDirty, indicating whether the domain model instance is a new one and whether it has been modified since the last time it was persisted to the repository.
The Unit of Work pattern helps you keep the round trips to the database to a minimum. If you ask the repository to persist a collection of objects, it will only make trips to the database for those instances marked as new or dirty. Objects that are neither new nor dirty will be ignored in the process.
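Putting the pieces mentioned so far together, a minimal sketch of DomainBase<T> could look like the following. The real class is in the sample source; I am only showing the members the article relies on, and the defaults are my assumptions:

public abstract class DomainBase<T>
{
    private T id;
    private bool isNew = true;   // assumed default for a freshly created object
    private bool isDirty;

    // Each domain class overrides this with its own ID type
    // (string for Customer, int for Order).
    public virtual T ID
    {
        get { return id; }
        set { id = value; }
    }

    // Unit of Work flags used by the repositories.
    public bool IsNew
    {
        get { return isNew; }
        set { isNew = value; }
    }

    public bool IsDirty
    {
        get { return isDirty; }
        set { isDirty = value; }
    }
}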
Domain Model Layer Test Results
The Active Record Pattern
Martin Fowler describes Active Record as:
An object that wraps a row in a database table or view, encapsulates the database access, and adds domain logic on that data.
The Northwind sample uses the Castle Project ActiveRecord, which is a very good implementation of the Active Record pattern. Castle ActiveRecord is built on top of NHibernate, and provides a powerful yet simple set of object-relational mapping features which allow you to persist your domain objects very easily. With Castle ActiveRecord, you can focus on designing your domain model, while saving the time you would otherwise spend maintaining data access code.
As I explained before, the domain objects act as Active Record objects. This is done by decorating each domain object class with an ActiveRecord attribute. In addition, each property in a domain object has to be decorated with a PrimaryKey or a Property attribute, indicating which table column the property is mapped to. If the table column name is not provided, the ActiveRecord framework will assume it has the same name as the class property.
[PrimaryKey(Column = "CustomerID", Generator = PrimaryKeyType.Assigned)]
public override string ID
{
    // ...
}

[Property()]
public string CompanyName
{
    // ...
}
The Repository Pattern
The last pattern in this article is the Repository pattern. Edward Hieatt and Rob Mee state that the Repository pattern:
Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.
In the Northwind sample, the repositories form the core of the Data Access layer. I've implemented two repositories based on the generic, collection-like IRepository interface:
public interface IRepository<T, TId> where T : DomainBase<TId>
{
    T Load(TId id);
    void AddOrUpdate(ISession session, T domainModel);
    void Remove(ISession session, T domainModel);
    T[] FetchByExample(T example);
}
- The RepositoryBase works with the NHibernate framework, and is responsible for persisting/retrieving data to/from a live database (SQL Server 2000, in this case). Let's see how it implements the AddOrUpdate method:
public void AddOrUpdate(ISession session, T domainModel)
{
    if (domainModel.IsNew)
    {
        session.Save(domainModel);
        domainModel.IsNew = false;
        domainModel.IsDirty = false;
    }
    else
    {
        if (domainModel.IsDirty)
        {
            session.Update(domainModel);
            domainModel.IsDirty = false;
        }
    }
}
- The InMemoryRepositoryBase works with a Dictionary variable, and uses this collection to provide in-memory persistence functionality. This is how it implements the AddOrUpdate method:
public void AddOrUpdate(ISession session, T domainObject)
{
    if (domainObject.IsNew)
    {
        domainObject.IsNew = false;
        domainObject.IsDirty = false;
        inMemoryDictionary.Add(domainObject.ID, domainObject);
    }
    else
    {
        if (domainObject.IsDirty)
        {
            domainObject.IsDirty = false;
            inMemoryDictionary[domainObject.ID] = domainObject;
        }
    }
}
As I described before in the Services layer section, the presenter asks the user to choose the kind of repository he/she wants to start working with. The presenter then creates an instance of CustomerServices, passing the RepositoryMode to its constructor, as sketched below.
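The exact constructor is in the sample source; the following is only a sketch of how such a mode switch could be wired, assuming a RepositoryMode enum and concrete repository classes named CustomerRepository and InMemoryCustomerRepository (those names are my assumptions):

public enum RepositoryMode
{
    Database,
    InMemory
}

public class CustomerServices : ICustomerServices
{
    private readonly IRepository<Customer, string> repository;

    // Sketch only: the repository class names here are assumptions.
    public CustomerServices(RepositoryMode mode)
    {
        if (mode == RepositoryMode.Database)
        {
            repository = new CustomerRepository();        // NHibernate-backed
        }
        else
        {
            repository = new InMemoryCustomerRepository(); // Dictionary-backed
        }
    }

    // ... ICustomerServices members elided ...
}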
Data Access Layer Test Results
4. Conclusion
With this article, I hope to provide developers and students with insights about how important it is to add testability to their applications. The sample application certainly is not perfect; perhaps it needs more decoupling and factories. I'm willing to improve it as much as I can, so feel free to post your opinions about it.
Article History
- 2007/05/23 - Initial posting.
- 2007/05/24 - Layout changes.
- 2007/05/30 - Unit testing images added, source code updated.
- 2007/06/18 - Bug fix: new orders were not being saved.