Introduction
I'm currently building a suite of automated acceptance tests for a web application. In the past, I've used Watir (pronounced "water," not "waiter"), a Ruby framework that drives an instance of Internet Explorer. I've since moved to WatiN, a .NET port of Watir, since we are a Microsoft shop. One shortcoming of these frameworks is that they don't supply a mechanism for executing and reporting on the tests; instead, they instruct you to use a unit test framework and runner to perform test execution. Unfortunately, the needs of an acceptance test are different from the needs of a unit test.
Current xUnit test runners fail to meet the needs of acceptance testing. To bridge this gap, I've built the Logging Test Runner. It is written to be NUnit syntax compliant, so your current investment in test assets can be leveraged and you will be working with a familiar syntax. If there is enough interest, I'll turn it into a full open source project.
The Chasm Between xUnit Needs and Acceptance Test Needs
Unit test automation is well understood. It has been around for almost two decades, although wide adoption didn't occur until the rise of eXtreme Programming and the development of JUnit by Kent Beck and Erich Gamma. You can read Kent's original article on testing if you are curious. I also came across another article, A Brief History of Test Frameworks by Andrew Shebanow, which claims some unit test automation work actually pre-dates SUnit. By and large, however, most people I interact with consider JUnit to be our starting point.
Well-designed xUnit tests should adhere to the following principles:
- Tests should isolate what they are testing
- Tests should be atomic: fully fail or fully succeed
- Tests should be independent: each test controls all its pre-conditions and can be run independently
- Tests should be repeatable: a second or third run yields the same result as the first run
- Tests should clean up after themselves: leave the system how you found it
Acceptance test automation (and integration testing) is not as mature, especially for web applications. The frameworks, like FIT, FitNesse, Watir, and Selenium, have only been around for a few years. There are a multitude of recommendations on how to perform acceptance testing, but there is no consensus. Witness the recommendations for Watir: the examples variously write output to Excel, to the console, or through a unit test runner.
By their nature, acceptance tests have lots of pre-conditions and post-conditions. Often, that is exactly what you are testing. They are expensive to set up and can be long-running. Also, what constitutes acceptance may not always be clear. This is my list of principles that an acceptance test should adhere to:
- Tests should focus on the feature/story whose acceptance they are trying to prove
- Tests should be resilient: a failure of one aspect of the test should still allow the full test to execute
- Tests should be independent: each test controls all its pre-conditions and can be run independently
- Tests should be repeatable: a second or third run yields the same result as the first run
- Tests should clean up after themselves: leave the system how you found it
As you can see, the needs are similar, but not the same. The last three bullets are identical for both unit and acceptance tests, but the first two vary. One of the intents of a unit test is to indicate exactly where your code fails. To that end, any failure in a test is a complete failure. Acceptance tests, on the other hand, are testing major functionality and interaction points. You might also be checking many, many minor items along the way to testing your major points. Just because a minor item fails doesn't mean the major feature is failing to function correctly.
Example: You have an event registration form and want to write a test to confirm that when a user registers, they are in the system. The heart of the test enters data in a screen and then navigates to a registrant-listing page to confirm that the user exists. This screen might be four or five screens into your application past a login screen. Each time the test runs, it must perform these prior steps, so it is expensive to set up the test.
After initially building the test, you realize that you also want to test your field validation. In a unit test, you would write a new test. For an acceptance test, however, this is more something that you would want to test "along the way." Since you are already in the editing screen and have gone to the expense of logging in and navigating to the proper screen, this is a good time to perform these tests. As you test each of 20 fields, you discover that half of the fields are not properly validating. To discover this with a unit test runner, it would take at least 10 runs. Therein lies the purpose of the Logging Test Runner (aka: LogRunner).
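To make that concrete, here is a sketch of "testing along the way" with a logging assert. The form, field names, and the login helper are hypothetical, not from the sample code; the point is that every Assert.IsTrue call below logs its failure and keeps going, so one run reports all of the broken validators:

[Test]
public void RegisterNewUser()
{
    // Expensive one-time work: log in and walk several screens
    // to reach the registration form (hypothetical helper).
    LoginAndNavigateToRegistrationForm();

    // Hypothetical fields; each failed check is logged and the
    // test continues, so ten broken validators surface in one run.
    Browser.TextField(Find.ByName("email")).Value = "not-an-email";
    Browser.Button(Find.ByValue("Register")).Click();
    Assert.IsTrue(Browser.ContainsText("Invalid email address"),
        "Email validation missing");

    Browser.TextField(Find.ByName("zip")).Value = "abcde";
    Browser.Button(Find.ByValue("Register")).Click();
    Assert.IsTrue(Browser.ContainsText("Invalid zip code"),
        "Zip validation missing");

    // ...repeat for the remaining fields, then finish the main
    // scenario: register a valid user and confirm they appear
    // on the registrant-listing page.
}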
Done with the boring stuff. On to the code.
Sample Test Suite
We will build up some trivial samples to show LogRunner in action. Our sample will perform two kinds of tests: searching for a result in Google and navigating the Entertainment Weekly site.
Step 1: Install Frameworks
Download and install NUnit and WatiN if you don't already have them; the next step references assemblies from each installation folder.
Step 2: Set Up the Project
Create a new C# class library project in Visual Studio 2005 and call it MyWatinSample. Add references to nunit.framework.dll and WatiN.Core.dll; these are located in the bin directory of their respective installation folders. You can also remove the System.Data and System.Xml references that Visual Studio adds for you, if you are fastidious about your references.
Step 3: Create Test Fixture Classes
Add class files named GoogleTest.cs and EntertainmentWeeklyTest.cs to your project. I started with a test of the MSDN site, but it is horrifyingly slow and does something to keep IE from completing page load, so I did a sample with EW instead. These are your two test fixtures. Below is the code for each file. The tests could be refactored further, but the intent is a little clearer without base classes. I'll let you examine it at your leisure.
GoogleTest Source Listing
using System;
using NUnit.Framework;
using WatiN.Core;

namespace MyWatinSample
{
    [TestFixture]
    public class GoogleTest
    {
        #region Property to web browser instance
        private IE _browser = null;

        public IE Browser
        {
            get
            {
                if (_browser == null)
                {
                    _browser = new IE();
                }
                return _browser;
            }
            set { _browser = value; }
        }
        #endregion

        #region Fixture setup and teardown
        [TestFixtureSetUp]
        public void SetupTestFixture()
        {
        }

        [TestFixtureTearDown]
        public void TearDownTestFixture()
        {
            Browser.Close();
            Browser = null;
        }
        #endregion

        [Test]
        public void SearchForWatinHome()
        {
            string watinSearch = "WatiN Test";
            string watinHome = "WatiN Home";
            string watinURL = "http://watin.sourceforge.net/";

            Browser.GoTo("http://www.google.com");
            Assert.IsTrue(Browser.ContainsText("Google"),
                "Google page does not contain Google name");

            Browser.TextField(Find.ByName("q")).Value = watinSearch;
            Browser.Button(Find.ByValue("Google Search")).Click();
            Assert.IsTrue(Browser.ContainsText(watinHome),
                "Search result did not find " + watinHome);

            Browser.Link(Find.ByText(watinHome)).Click();
            Assert.AreEqual(watinURL, Browser.Url,
                "WatiN Home not at sourceforge URL");
        }
    }
}
EntertainmentWeeklyTest Source Listing
using System;
using WatiN.Core;
using NUnit.Framework;

namespace MyWatinSample
{
    [TestFixture]
    public class EntertainmentWeeklyTest
    {
        const string ewHomeUrl = "http://www.ew.com/ew";

        #region Property to web browser instance
        private IE _browser = null;

        public IE Browser
        {
            get
            {
                if (_browser == null)
                {
                    _browser = new IE();
                }
                return _browser;
            }
            set { _browser = value; }
        }
        #endregion

        #region Fixture setup and teardown
        [TestFixtureSetUp]
        public void SetupTestFixture()
        {
        }

        [TestFixtureTearDown]
        public void TearDownTestFixture()
        {
            Browser.Close();
            Browser = null;
        }
        #endregion

        #region Test setup and teardown
        [SetUp]
        public void TestSetup()
        {
            if (Browser.Url != ewHomeUrl)
            {
                Browser.GoTo(ewHomeUrl);
            }
        }

        [TearDown]
        public void TestTeardown()
        {
        }
        #endregion

        [Test]
        public void EWMoviesTest()
        {
            Assert.IsTrue(Browser.ContainsText("Entertainment Weekly"),
                "Entertainment Weekly does not state its name on home page.");
            Assert.IsTrue(Browser.ContainsText("Movies"),
                "Movies text not found on home page");
            Browser.Link(Find.ByText("Movies")).Click();
            Assert.IsTrue(Browser.ContainsText("Upcoming Movies"),
                "EW Movie page does not announce upcoming movies.");
        }

        [Test]
        public void ChrisColeMoviesTest()
        {
            Assert.IsTrue(Browser.ContainsText("Entertainment Weekly"),
                "Entertainment Weekly does not state its name on home page.");
            Assert.IsTrue(Browser.ContainsText("Movies"),
                "Movies text not found on home page");
            Browser.Link(Find.ByText("Movies")).Click();
            Assert.IsTrue(Browser.ContainsText("Chris Cole"),
                "Entertainment Weekly does not have Chris Cole's new " +
                "indie film listed on their movie site.");
        }

        [Test]
        public void BritneySpearsTest()
        {
            Assert.IsTrue(Browser.ContainsText("Britney Spears"),
                "Entertainment Weekly does not have anything about " +
                "Britney Spears on its home page.");
            Browser.TextField("searchbox").Value = "Britney Spears";
            Browser.Button("btn_search").Click();
            Assert.IsTrue(Browser.ContainsText("All about"),
                "EW does not talk all about Britney Spears");
            Browser.Link(Find.ByText("Britney Spears")).Click();
            Assert.IsTrue((Browser.Title.IndexOf("Britney Spears") > -1),
                "About Britney page mis-titled");
        }
    }
}
Now we have our samples that will run in an NUnit runner. One searches Google for the WatiN homepage and the other navigates some pages on the Entertainment Weekly site. The supplied source code includes an NUnit project to execute the tests. Because WatiN automates IE through COM, the tests must run on a single-threaded apartment (STA) thread; the sample code includes a proper config file to do this, shown below.
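If you need to recreate that config file yourself, the NUnit 2.4 mechanism is an ApartmentState setting in the test project's config. The version in the sample download is authoritative; something along these lines is what's needed:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <sectionGroup name="NUnit">
      <section name="TestRunner"
               type="System.Configuration.NameValueSectionHandler" />
    </sectionGroup>
  </configSections>
  <NUnit>
    <TestRunner>
      <!-- WatiN drives IE over COM, which requires an STA thread -->
      <add key="ApartmentState" value="STA" />
    </TestRunner>
  </NUnit>
</configuration>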
LogRunner Code
LogRunner is designed to be syntax-compatible with NUnit. This design decision was made so that tests could be developed and run with either the NUnit runner (console or GUI) or LogRunner. My first application of this was for a smoke test that could be compiled into an acceptance test suite or as a standalone console application for smoke testing deployed applications. Compilation mode is set via a compilation switch that replaces the using XXX reference depending on what your target is. LogRunner is a refinement of that original smoke test runner.
NUnit (and most other .NET xUnit frameworks) makes use of custom attributes to define test fixtures and aspects of the test suite. Our logging runner will need to define and read those same custom attributes. We will additionally have to implement methods from the Assert object in a way that logs results and continues the current test run, as opposed to halting the test on an assert failure.
We'll start with some basic parts. We need an implementation of the Assert static class. We'll also need an implementation of all the custom attribute classes used by NUnit. These classes will be put in a namespace that can be controlled via a compiler directive. In the case of LogRunner, this will be the LogRunner.Framework namespace.
Assert
Assert provides static methods to perform a variety of tests. IsTrue happens to be the most useful, since every question can be phrased as True/False. Everything else is just for convenience and readability. This implementation also contains properties to indicate the current TestFixture and Method, as well as a collection (List) of failed assert calls.
The failure log makes use of a FailedAssert class. This is only a convenience class to bundle meta information about the assert for the log collection. I leave it to you to investigate the FailedAssert class in the sample code; a sketch of its likely shape follows.
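Inferring from its call sites in the listings (it is constructed once with three string arguments and once with four), a minimal FailedAssert might look like this. The sample download remains the authoritative version:

// A minimal sketch of FailedAssert, inferred from its call sites
// in this article; the class in the sample download is authoritative.
public class FailedAssert
{
    private string _assemblyName;
    private string _className;
    private string _methodName;
    private string _message;

    public FailedAssert(string className, string methodName, string message)
        : this(null, className, methodName, message) { }

    public FailedAssert(string assemblyName, string className,
        string methodName, string message)
    {
        _assemblyName = assemblyName;
        _className = className;
        _methodName = methodName;
        _message = message;
    }

    public string AssemblyName { get { return _assemblyName; } }
    public string ClassName { get { return _className; } }
    public string MethodName { get { return _methodName; } }
    public string Message { get { return _message; } }
}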
Assert Source Listing (Brief)
using System;
using System.Collections.Generic;
using System.Text;

namespace LogRunner.Framework
{
    // Public so that test assemblies referencing LogRunner.exe can
    // bind to it in place of NUnit's Assert.
    public static class Assert
    {
        #region Properties about assert
        private static string _classUnderTest = null;
        private static string _methodUnderTest = null;
        private static List<FailedAssert> _failedAsserts = null;

        public static string ClassUnderTest
        {
            get { return _classUnderTest; }
            set { _classUnderTest = value; }
        }

        public static string MethodUnderTest
        {
            get { return _methodUnderTest; }
            set { _methodUnderTest = value; }
        }

        public static List<FailedAssert> FailedAsserts
        {
            get
            {
                if (_failedAsserts == null)
                {
                    _failedAsserts = new List<FailedAssert>();
                }
                return _failedAsserts;
            }
            set { _failedAsserts = value; }
        }
        #endregion

        #region Assert methods
        public static void IsTrue(bool test)
        {
            IsTrue(test, "[No message for error]");
        }

        // Unlike NUnit's Assert.IsTrue, a failure is logged and the
        // test continues to run instead of throwing an exception.
        public static void IsTrue(bool test, string message)
        {
            if (test == false)
            {
                AddTestFailureMessage(message);
            }
        }
        #endregion

        private static void AddTestFailureMessage(string message)
        {
            FailedAsserts.Add(new FailedAssert(ClassUnderTest,
                MethodUnderTest, message));
        }

        public static bool RunWasSuccessful()
        {
            return (FailedAsserts.Count == 0);
        }
    }
}
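GoogleTest above also calls Assert.AreEqual, which the brief listing omits. Following the observation that every question can be phrased as True/False, the convenience methods can be layered on top of IsTrue. This is a sketch of the pattern, not the listing from the download:

// Sketch: an AreEqual convenience method expressed in terms of IsTrue.
public static void AreEqual(object expected, object actual, string message)
{
    bool equal = (expected == null)
        ? (actual == null)
        : expected.Equals(actual);
    IsTrue(equal, String.Format("{0} (expected: <{1}>, actual: <{2}>)",
        message, expected, actual));
}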
Custom Attributes
We also have to replicate all the custom attributes used by NUnit. These are really only markers for reflection.
Attribute Classes Source
using System;

namespace LogRunner.Framework
{
    [AttributeUsage(AttributeTargets.Class)]
    public class TestFixtureAttribute : System.Attribute { }

    [AttributeUsage(AttributeTargets.Method)]
    public class TestFixtureSetUpAttribute : System.Attribute { }

    [AttributeUsage(AttributeTargets.Method)]
    public class TestFixtureTearDownAttribute : System.Attribute { }

    [AttributeUsage(AttributeTargets.Method)]
    public class SetUpAttribute : System.Attribute { }

    [AttributeUsage(AttributeTargets.Method)]
    public class TearDownAttribute : System.Attribute { }

    [AttributeUsage(AttributeTargets.Method)]
    public class TestAttribute : System.Attribute { }

    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
    public class CategoryAttribute : System.Attribute
    {
        public CategoryAttribute() { }
        public CategoryAttribute(string name) { this.Name = name; }

        private string _name;
        public string Name
        {
            get { return _name; }
            set { _name = value; }
        }
    }

    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
    public class IgnoreAttribute : System.Attribute
    {
        public IgnoreAttribute() { }
        public IgnoreAttribute(string reason) { this.Reason = reason; }

        private string _reason;
        public string Reason
        {
            get { return _reason; }
            set { _reason = value; }
        }
    }
}
We now have a replication of the required NUnit parts. The next step is to build out the test runner with proper reflection to execute the tests. An xUnit test fixture's lifecycle looks something like this:
- Load TestFixture class instance
- Run Fixture Setup
- Run Test Setup
- Execute Test 1
- Run Test Teardown
- ...
- Run Test Setup
- Execute Test N
- Run Test Teardown
- Run Fixture Teardown
TestClassInfo
In order to aid the test runner, LogRunner makes use of a TestClassInfo class to hold the meta information about a fixture. This contains MethodInfo properties to represent the various setup and teardown methods. By making use of this class, the TestRunner can handle all tests in a more generic manner. This class also encapsulates the reflection logic for discovering what methods to test.
In order to build out the meta information, the class invokes LoadClassToTest with an instance of the class to be tested. This method gets all public instance methods with the below call:
Type testDef = objectToTest.GetType();
MethodInfo[] methods = testDef.GetMethods(
    BindingFlags.Public | BindingFlags.Instance);
It then loops over each MethodInfo object, checks its custom attributes, and adds it to the proper property, as below:
foreach (MethodInfo currentMethod in methods)
{
    object[] attributes = currentMethod.GetCustomAttributes(true);
    foreach (object attr in attributes)
    {
        // Attributes are matched by simple type name, so a matching
        // attribute from any namespace is recognized.
        string typeName = attr.GetType().Name;
        if ("TestAttribute" == typeName)
        {
            _testMethods.Add(currentMethod);
            break;
        }
        else if ("TestFixtureSetUpAttribute" == typeName)
        {
            FixtureSetupMethod = currentMethod;
            break;
        }
        else if ("TestFixtureTearDownAttribute" == typeName)
        {
            FixtureTeardownMethod = currentMethod;
            break;
        }
        else if ("SetUpAttribute" == typeName)
        {
            TestSetupMethod = currentMethod;
            break;
        }
        else if ("TearDownAttribute" == typeName)
        {
            TestTeardownMethod = currentMethod;
            break;
        }
    }
}
After loading all the reflective information and grabbing the proper method hooks, the fixture is ready to be tested. I leave it to you to explore the entire class listing in the included source code; the skeleton below shows its overall shape.
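For orientation, here is a skeleton of TestClassInfo inferred from how it is used in this article. Member bodies are elided, and the sample download remains the authoritative listing:

using System.Collections.Generic;
using System.Reflection;

// Skeleton of TestClassInfo, inferred from its usage in TestRunner below.
public class TestClassInfo
{
    private object _classToTest;
    private List<MethodInfo> _testMethods = new List<MethodInfo>();
    private MethodInfo _fixtureSetupMethod;
    // ...backing fields for the other hooks elided...

    public TestClassInfo(object objectToTest)
    {
        _classToTest = objectToTest;
        LoadClassToTest(objectToTest);
    }

    // The fixture instance whose test methods will be invoked.
    public object ClassToTest
    {
        get { return _classToTest; }
    }

    public List<MethodInfo> TestMethods
    {
        get { return _testMethods; }
    }

    public MethodInfo FixtureSetupMethod
    {
        get { return _fixtureSetupMethod; }
        set { _fixtureSetupMethod = value; }
    }
    // ...FixtureTeardownMethod, TestSetupMethod, and TestTeardownMethod
    // follow the same pattern...

    private void LoadClassToTest(object objectToTest)
    {
        // The reflection scan shown in the snippets above populates
        // _testMethods and the setup/teardown hooks.
    }
}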
TestRunner
TestRunner handles loading any number of assemblies and discovering all classes within, as well as executing tests on those classes. In the future, this may be made into an abstract class or an interface, but for now there is only one runner that the console may invoke. For a listing of all current shortcomings, see the Release Notes section below.
Communication between the runner and the hosting app is handled via events. A predictable set of events is defined (TestFixtureStarted, TestComplete, etc.). TestComplete includes a log of all failed Asserts in its argument, so this is currently where we discover whether we had a successful run. You can view LogRunner.Program to see what is being done with the events raised by TestRunner; a sketch of a typical subscription follows.
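The event and type names below are taken from this article, but the delegate signature and the event-args property names (FixtureName, TestName, FailedAsserts) are assumptions, so treat this as a sketch of the hosting pattern rather than the actual LogRunner.Program:

// Sketch: a console host wiring up TestRunner's events (C# 2.0 style).
// The delegate shape and event-args properties are assumptions.
TestRunner runner = new TestRunner();
runner.TestComplete += delegate(object sender, TestExecutionEventArgs e)
{
    if (e.FailedAsserts.Count == 0)
    {
        Console.WriteLine("PASS: {0}.{1}", e.FixtureName, e.TestName);
    }
    else
    {
        foreach (FailedAssert failure in e.FailedAsserts)
        {
            Console.WriteLine("FAIL: {0}.{1} - {2}",
                e.FixtureName, e.TestName, failure.Message);
        }
    }
};
// Assumes the assembly has already been registered with the runner;
// the loading API is not shown in this article.
runner.ExecuteAssemblyTests("MyWatinSample.dll");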
TestRunner has two main methods for test execution: ExecuteAssemblyTests and ExecuteObjectTests. These methods use a little reflection, plus our TestClassInfo object, to actually execute the tests. ExecuteAssemblyTests loops through all types defined in an assembly; any type flagged as a TestFixture will be run. Actual test execution by ExecuteObjectTests requires an instance of the test fixture class, which we obtain with System.Activator.CreateInstance(). This instance is then passed to ExecuteObjectTests, which creates a TestClassInfo object containing all of the method pointers for test setup, execution, and teardown. Below is what these methods look like.
public void ExecuteAssemblyTests(string assemblyName)
{
    if (!TestAssemblies.ContainsKey(assemblyName))
        throw new ArgumentException("Assembly not defined for testing");

    Assembly testAsm = this.TestAssemblies[assemblyName];
    if (testAsm == null) return;

    OnAssemblyTestsStarted(new TestExecutionEventArgs(assemblyName));
    Type[] testTypes = testAsm.GetTypes();
    foreach (Type testClassType in testTypes)
    {
        object[] fixtureAttribs = testClassType.GetCustomAttributes(
            typeof(TestFixtureAttribute), false);
        object[] ignoreAttribs = testClassType.GetCustomAttributes(
            typeof(IgnoreAttribute), false);
        if (fixtureAttribs.Length == 0) continue;
        if (ignoreAttribs.Length > 0) continue;

        object testInstance = Activator.CreateInstance(testClassType);
        ExecuteObjectTests(testInstance);
    }
    OnAssemblyTestsComplete(new TestExecutionEventArgs(assemblyName));
}

public void ExecuteObjectTests(object objectToTest)
{
    string assemblyName = objectToTest.GetType().Assembly.FullName;
    string fixtureName = objectToTest.GetType().FullName;
    TestClassInfo testClass = new TestClassInfo(objectToTest);

    Framework.Assert.ClassUnderTest = fixtureName;
    OnTestFixtureStarted(new TestExecutionEventArgs(assemblyName,
        fixtureName));
    InvokeMethod(testClass.FixtureSetupMethod, testClass.ClassToTest);

    foreach (MethodInfo test in testClass.TestMethods)
    {
        try
        {
            System.Diagnostics.Debug.WriteLine(String.Format(
                " -Testing: {0}", test.Name));
            Assert.MethodUnderTest = test.Name;
            Assert.FailedAsserts = new List<FailedAssert>();
            OnTestStarted(new TestExecutionEventArgs(
                assemblyName, fixtureName, test.Name));
            InvokeMethod(testClass.TestSetupMethod, testClass.ClassToTest);
            InvokeMethod(test, testClass.ClassToTest);
            InvokeMethod(testClass.TestTeardownMethod,
                testClass.ClassToTest);
        }
        catch (Exception ex)
        {
            // An unexpected exception fails the current test, but the
            // runner logs it and moves on to the next test.
            Assert.FailedAsserts.Add(new FailedAssert(assemblyName,
                fixtureName, test.Name, "Exception: " + ex.Message));
            OnTestExecutionException(new TestExecutionExceptionEventArgs(
                ex, assemblyName, fixtureName, test.Name));
        }
        finally
        {
            OnTestComplete(new TestExecutionEventArgs(assemblyName,
                fixtureName, test.Name, Assert.FailedAsserts));
        }
    }

    InvokeMethod(testClass.FixtureTeardownMethod, testClass.ClassToTest);
    OnTestFixtureComplete(new TestExecutionEventArgs(assemblyName,
        fixtureName));
}
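The InvokeMethod helper used above isn't listed in the article. A minimal version that tolerates fixtures without setup or teardown methods might look like this; the sample download is authoritative:

// Sketch: invoke a discovered hook if the fixture defines one.
// Setup/teardown hooks are optional, so a null MethodInfo is skipped.
private void InvokeMethod(MethodInfo method, object target)
{
    if (method != null)
    {
        method.Invoke(target, null);
    }
}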
Now Where's the Magic?
Glad you asked. As I said before, LogRunner is NUnit syntax compliant, not binary compliant. This means that your NUnit test fixtures will not run out of the box. They need to be recompiled to bind to the object set defined within LogRunner. You shouldn't need to rewrite your tests. You will, however, do a little magic with compilation flags and pre-processor statements.
First, add a reference to LogRunner.exe from any projects you wish to run under LogRunner. Next, in each test fixture class, change your current using NUnit.Framework; statement to the following:
#if LOGRUNNER
using LogRunner.Framework;
#else
using NUnit.Framework;
#endif
Finally, add the conditional compilation symbol LOGRUNNER to your project in Visual Studio (Project > [Project_Name] Properties > Build > Conditional compilation symbols) or define it at the command line (e.g., csc's /define:LOGRUNNER switch). Your code will now be ready for LogRunner. To change back to NUnit, simply remove the symbol.
Conclusion
Acceptance testing, particularly web application acceptance testing, has different needs from unit testing. While unit test runners are fairly mature, acceptance test runners still have a long way to go. In addition to being a little bit of fun with reflection, WatiN and conditional compilation, this is an attempt to move us closer to the kind of functionality we need from acceptance test runners.
Release Notes
- Currently, all command line switches are ignored; only assembly paths are honored
- [Ignore] attributes at the test level are ignored
- Results are only sent to stdout as plain text; there's no XML support
- In the next version, I'll likely try to support the NUnit XML output file, TestResult.xml
- Only a small subset of Assert methods is supported
- Single fixture and single test execution is not supported
- LogRunner is distributed under the Apache License version 2.0
History
- 14th November, 2007: Original version posted