Introduction
When I'm starting a new project, I often forget how to begin it in a test-driven way, so here are some general reminders, aimed at C# development. Developing with tests is much better than trying to add them retrospectively: you will have fewer bugs and more confidence in your code.
What is Test-driven?
Test-driven development is an approach to building software. There is no excuse any more for not writing tests as you develop: plenty of test frameworks are free to use (for example, MS Test, which is built into Visual Studio). Best practice dictates that every time you develop some code, you follow these steps:
- Resolve your requirement. Remove any ambiguity and stipulate concisely what you are trying to achieve. Consider error conditions and edge cases, the "what-ifs".
- Develop a test. Write a test that proves the function you are writing works exactly as you want it to. Then add some tests to make sure that if it fails, you handle the failure gracefully; nobody wants to use software that bombs out periodically.
- Develop your function. Use comments for any non-obvious code. Use sensible variable and function names.
- Add logging. I use different levels of logging. For example, I add an Info log entry for every function, e.g. "FunctionA called with these params {0}", and caught exceptions print an Error. Where possible, make sure your error messages give the user enough information to sort the problem out themselves, cutting down on support calls, bug reports, etc.
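As a minimal sketch of the cycle above: the `AgeParser` class, its requirement, and the log format are all invented for illustration, and the logging goes to the console rather than a real logging framework.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Step 1: the requirement - parse a non-negative age from a string,
// throwing ArgumentException on anything else.

// Step 2: the tests, written before the implementation.
[TestClass]
public class AgeParserTests
{
    [TestMethod]
    public void ParseAge_ValidInput_ReturnsValue()
    {
        Assert.AreEqual(42, AgeParser.ParseAge("42"));
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentException))]
    public void ParseAge_Negative_Throws()
    {
        AgeParser.ParseAge("-1");
    }
}

// Steps 3 and 4: the implementation, with logging at Info and Error levels.
public static class AgeParser
{
    public static int ParseAge(string text)
    {
        Console.WriteLine("Info: ParseAge called with these params {0}", text);
        if (!int.TryParse(text, out int age) || age < 0)
        {
            // Error message tells the user exactly what was wrong with the input.
            Console.WriteLine("Error: '{0}' is not a non-negative integer age", text);
            throw new ArgumentException(
                string.Format("'{0}' is not a non-negative integer age.", text));
        }
        return age;
    }
}
```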
Of course, if you have an automated build process, make it execute the tests after you check your code in to the repository (if you can automatically build and test all your configurations, i.e. Release/Debug, that's even better). This ensures that, however unfinished the project is, the code is at least still runnable!
Test Project
Always start your development with a test project. To set one up, simply create one: File->New Project->Templates->Visual C#->Test->Unit Test Project.
Link the code you want to test into your test project's references.
Creating Test Classes and Tests
All test classes that contain tests should be public. Here are some reminders of the MS Test annotations:
[TestClass()]
- Makes the class a test class
[TestMethod]
- Marks the function as a test function
[Description("Test the set up of command parameters")]
- Add a description about the intention of the test
[Ignore]
- Don't run the test by default. Ignore a test if you want to keep the test code but only run it occasionally. This is especially useful if the test leaves a device in an undesired state, e.g. rebooting an iOS device, or if it takes ages to run.
[ExpectedException]
- Use this annotation if you expect your test to throw an exception. Note that if your test can throw more than one exception in different places, see the tip below about using try/catch with Fail instead.
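Put together, a test class using these annotations might look like the sketch below. `CommandParser` is a hypothetical class under test, invented for the example.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CommandParameterTests
{
    [TestMethod]
    [Description("Test the set up of command parameters")]
    public void SetUpParameters()
    {
        // ... arrange, act, assert ...
    }

    [TestMethod]
    [Ignore] // Slow, and leaves the device rebooted - run manually only.
    public void RebootDevice()
    {
        // ...
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void NullParameterThrows()
    {
        // Passes only if Parse throws ArgumentNullException.
        CommandParser.Parse(null);
    }
}
```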
Test Conditions
When testing, use the Assert class. Remember you are making sure something is true, e.g. Assert.IsTrue(somethingThatShouldBeTrue).
Note that you write an assertion with what you expect as the first parameter and what you actually got as the second. The arguments are ordered; if you swap them around, the failure message gives false information: "Expected null, got a valid reference" is not the same as "Expected a valid reference, got null"!
Here are some Assert functions:
IsTrue
- Is the statement true
IsFalse
- Is the statement not true
AreEqual
- Are the two values equal
IsNull
- Is the reference null
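A quick sketch of those four in use; note the expected-first, actual-second ordering of AreEqual:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AssertExamples
{
    [TestMethod]
    public void AssertBasics()
    {
        string greeting = "hello";
        string missing = null;

        Assert.IsTrue(greeting.Length == 5);
        Assert.IsFalse(greeting == string.Empty);
        Assert.AreEqual("hello", greeting);  // expected first, actual second
        Assert.IsNull(missing);
    }
}
```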
Fail
- Fail the test unconditionally, with a message
When testing, try not to be only optimistic: have tests that pass in nulls, files that don't exist, etc. If you hit a Fail assert, then chances are the code you are testing hasn't thrown an exception or caused the effect you expected. If the test should throw one expected exception, use the [ExpectedException] annotation; your test will pass only if the exception is thrown. If, however, you expect several exceptions, consider this approach:
[TestMethod]
[Description("Test the pairing of the device")]
public void DeviceTargetConnect()
{
    // Plain test methods take no parameters; the device id comes from a field.
    IOSDevice targetDevice = new IOSDevice(deviceId);
    Assert.IsFalse(targetDevice.SupportCB);
    Assert.IsTrue(targetDevice.SupportConnection);
    try
    {
        targetDevice.Connect();
        Assert.IsTrue(targetDevice.IsConnected());
    }
    catch (Exception exc)
    {
        Assert.Fail("Connecting to the device failed. Exception: " + exc.ToString());
    }
}
Setting Up and Tearing Down
You can have functions that are called at certain times to set things up and then tear them down again.
Note that these signatures are sacrosanct: deviate, and your tests will compile fine but will then either bomb out before any code is called or fail silently because the function is never called at all.
Check the Test dialog in Visual Studio and you will see the exception being raised when a signature isn't exactly right; you will not be able to debug anything to discover what has gone wrong if, for example, your assembly functions' signatures are not as expected.
Here are the annotations:
[AssemblyInitialize]
- Gets called once, before any tests are run (before ClassInitialize).
Do not use with ASP.NET:
public static void AssemblyInit(TestContext context)
[AssemblyCleanup]
- Gets called once, when the whole test run has finished (after ClassCleanup):
public static void AssemblyCleanup()
[ClassInitialize]
- Gets called once, before the first test in the class runs:
public static void IOSTestSettings(TestContext context)
[ClassCleanup]
- Gets called once, after all the tests in the class have finished:
public static void CleanIOSTestSettings()
[TestInitialize]
- Gets called before each individual test function:
public void StartTest()
[TestCleanup]
- Gets called after each individual test function:
public void EndTest()
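Wired into one class, the hooks look like the sketch below; the Console output just traces the call order (AssemblyInitialize/AssemblyCleanup live at assembly scope and are omitted here).

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LifecycleDemo
{
    [ClassInitialize]
    public static void IOSTestSettings(TestContext context)
    {
        Console.WriteLine("Once, before the first test in this class");
    }

    [TestInitialize]
    public void StartTest()
    {
        Console.WriteLine("Before every test");
    }

    [TestMethod]
    public void TestA() { }

    [TestMethod]
    public void TestB() { }

    [TestCleanup]
    public void EndTest()
    {
        Console.WriteLine("After every test");
    }

    [ClassCleanup]
    public static void CleanIOSTestSettings()
    {
        Console.WriteLine("Once, after the last test in this class");
    }
}
```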
Note that if you do not declare them as public, they will not get called. Sometimes people put function brackets after the annotation, i.e. [TestCleanup()], but this makes no difference.
Print Output TTY
When you want to output text while debugging in Visual Studio, use:
System.Diagnostics.Debug.WriteLine("Hello Dave ");
This will print out your text in the Immediate Window, not the Output window.
Temporary Dialogs
Sometimes, you want to run a test and make sure other threads are behaving as expected. There are two approaches to setting up the code so you can examine what is going on.
Sleep
If you need an operation to finish before the next line executes and you cannot get a notification from the system you are calling, then, although not perfect, a sleep is the best option.
System.Threading.Thread.Sleep(TimeToWaitForSomething);
Make sure you don't use magic numbers, e.g. (5000): the number means nothing on its own, and if the behaviour of the system changes, you want to change the sleep period in a single place in the code rather than grepping for 5000!
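A sketch of pulling the period out into one named constant; the name and the 5000 ms value are invented, so measure what your own system actually needs.

```csharp
using System.Threading;

public static class TestTimings
{
    // One named place to tune if the system under test gets slower or faster.
    // 5000 ms is an assumed value for illustration.
    public const int MsToWaitForReboot = 5000;
}

public class RebootTestSketch
{
    public void WaitForDevice()
    {
        // Reads as intent, and there is no bare 5000 to grep for later.
        Thread.Sleep(TestTimings.MsToWaitForReboot);
    }
}
```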
Dialog Box
"No", they cried, "What?", they gasped. Yes, sometimes you want to set some hardware up while you are in the middle of a test, for this I use a dialog. I've never checked in code to a repository that does this, every system I have worked on has had a fully automated test suite (i.e., no manual intervention). But, sometimes, for debugging it can be useful, i.e. pair an iphone before downloading an application requires someone to click on a dialog on the iOS device, therefore I need a dialog in the test code.
DialogResult d = MessageBox.Show("Hello",
"Press the button when you want to run the app", MessageBoxButtons.OK);
Conclusion
Try to keep your tests atomic (i.e. test one thing); chaining functionality together gives results that always need further investigation. For example, a test installs an app on a phone and fails. OK, this is a single piece of functionality, so investigate why.
Another test attempts to install an app, run it, close it, and then uninstall it. The test fails; now we have to debug to find out where (or at least read the log). If this test is testing the uninstall, then this may be the only way to set the device up to test the uninstall, so it's unavoidable. If that isn't the case, split the test up into individual atomic parts and test them independently.
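One way to sketch that split: each test covers one action, with shared set-up in TestInitialize, and the uninstall test does only the installing it cannot avoid. AppInstaller and its methods are hypothetical; substitute your own class under test.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AppLifecycleTests
{
    private AppInstaller installer;

    [TestInitialize]
    public void StartTest()
    {
        // Shared set-up: one connection per test, not one chained mega-test.
        installer = new AppInstaller("device-01");
    }

    [TestMethod]
    public void InstallApp()
    {
        // Atomic: if this goes red, only install is suspect.
        Assert.IsTrue(installer.Install("MyApp.ipa"));
    }

    [TestMethod]
    public void UninstallApp()
    {
        // Installing first is unavoidable set-up for testing uninstall.
        installer.Install("MyApp.ipa");
        Assert.IsTrue(installer.Uninstall("MyApp"));
    }
}
```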
Ideally, your test results should indicate exactly which piece of functionality is broken simply by which test fails (goes red).
I have caught 80% more bugs earlier in the development lifecycle just from using a test-driven approach. It took me years to finally agree to doing it, and now I can't imagine developing any other way. Take the plunge!