
NetCase - Automated test case API using metadata programming in .NET

29 May 2005
NetCase is an API for automation of test cases in .NET.

Introduction

This article introduces newcomers to metadata programming in Microsoft .NET through an automated test API named NetCase. Although many similar (and more advanced) test automation frameworks exist, NetCase is a very simple test API that I developed to try my hand at metadata programming in .NET.

Metadata Programming

Metadata is data about data: a description of the data rather than the data itself. In modern platforms such as .NET, metadata describes programming-language entities (classes, methods, attributes, and so on), and a program can retrieve this information at run time and act on it. The Microsoft .NET Framework SDK provides the System.Reflection namespace, which contains the classes for working with metadata.
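The idea can be illustrated with a few lines of reflection code. The MarkerAttribute and Sample types below are hypothetical, written only to show how System.Reflection exposes method-level metadata:

```csharp
using System;
using System.Reflection;

// Hypothetical marker attribute, used only for this illustration.
[AttributeUsage(AttributeTargets.Method)]
public class MarkerAttribute : Attribute { }

public class Sample
{
    [Marker]
    public void Tagged() { }

    public void Untagged() { }
}

public class Program
{
    public static void Main()
    {
        // Walk the metadata of Sample: list each declared public method
        // and report whether it carries the MarkerAttribute.
        foreach (MethodInfo info in typeof(Sample).GetMethods(
                     BindingFlags.Public | BindingFlags.Instance |
                     BindingFlags.DeclaredOnly))
        {
            bool marked = Attribute.IsDefined(info, typeof(MarkerAttribute));
            Console.WriteLine("{0}: marked = {1}", info.Name, marked);
        }
    }
}
```

This is exactly the pattern NetCase relies on: attach an attribute to a method, then discover it later through reflection.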

Introduction To NetCase

NetCase is the name I coined for the test automation API that I wrote in .NET to test my own code. It is similar to JUnit in Java and NUnit in .NET, and I created it purely to try my hand at metadata programming. With it, one can write test cases for a piece of software, use NetCase to execute them, and view the results.

Writing A NetCase Test Case

To write a test case in NetCase, derive a class from the NetCase.TestCase class. The TestCase class has the following methods:

  • Initialize

    This method is called before executing each test case.

  • Finalize

    This method is called after execution of each test case.

  • Assert

    Asserts a boolean condition.

As can be seen, Initialize can be overridden to supply initialization code that runs before each test case, and Finalize can be overridden to provide the cleanup code. Assert is called within a test case to determine its result; if the assertion fails, an exception of type TestFailedException is thrown. Following is a simple example of a test case method that tests the addition of two numbers:

[TestCase]
public void Addition()
{
   A = 2;
   B = 3;
   Assert((A + B) == 5, true);
}

As can be seen, the test case method is marked with the [TestCase] attribute. This lets NetCase distinguish test case methods from ordinary methods, and it is a simple example of metadata in action. Test cases can also be negative, such as:

[TestCase(false)]
public void Division()
{
   A = 10;
   B = 1;
   Assert(float.IsInfinity(A / B), true);
}
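The [TestCase] marker used above is an ordinary .NET custom attribute. Its actual declaration is not shown in this article; a plausible sketch, matching the positive-by-default convention and the IsNegativeCase flag that the runner reads, might look like this:

```csharp
using System;

// Hypothetical declaration; the real NetCase attribute may differ.
[AttributeUsage(AttributeTargets.Method)]
public class TestCaseAttribute : Attribute
{
    public readonly bool IsNegativeCase;

    // [TestCase] with no argument marks a positive case.
    public TestCaseAttribute() : this(true) { }

    // [TestCase(false)] marks a negative case.
    public TestCaseAttribute(bool isPositiveCase)
    {
        IsNegativeCase = !isPositiveCase;
    }
}
```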

Negative test cases pass false to the [TestCase] attribute (by default, a test case is positive); they are expected to fail, as the assertion above does (10/1 is not infinity). Once a class is derived from NetCase.TestCase, it can be passed to the RunTestCase method of the NetCase.TestCaseSuite class for execution. RunTestCase is a static method that executes the test cases contained in a TestCase-derived class and returns an object of type ClassTestResult. Following is the definition of the ClassTestResult structure:

public struct ClassTestResult
{
    public TestResult[] SuccessfulTests;
    public TestResult[] UnSuccessfulTests;
}

Hence the caller can obtain information about the test results. Note that the failure of a negative test case is counted as a success. Currently I am in the process of adding a Microsoft Visual Studio .NET plug-in to display test results, and I will update the project file as soon as it is done.
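To see how the pieces fit together end to end, here is a compressed, self-contained sketch. All of the types and the simplified runner below are stand-ins written for this illustration, not the NetCase library's own code:

```csharp
using System;
using System.Reflection;

// Simplified stand-ins for the NetCase types described in this article.
[AttributeUsage(AttributeTargets.Method)]
public class TestCaseAttribute : Attribute
{
    public readonly bool IsNegativeCase;
    public TestCaseAttribute() : this(true) { }
    public TestCaseAttribute(bool isPositiveCase) { IsNegativeCase = !isPositiveCase; }
}

public class TestFailedException : Exception { }

public abstract class TestCase
{
    // Throws TestFailedException when the condition does not match the expectation.
    protected void Assert(bool condition, bool expected)
    {
        if (condition != expected) throw new TestFailedException();
    }
}

public class MathTests : TestCase
{
    [TestCase]
    public void Addition() { Assert(2 + 3 == 5, true); }

    [TestCase(false)]
    public void ExpectedFailure() { Assert(1 == 2, true); }
}

public class Program
{
    // Simplified runner: invokes each [TestCase] method and counts results.
    public static int Run(TestCase suite, out int failed)
    {
        int passed = 0;
        failed = 0;
        foreach (MethodInfo info in suite.GetType().GetMethods())
        {
            TestCaseAttribute attr = (TestCaseAttribute)
                Attribute.GetCustomAttribute(info, typeof(TestCaseAttribute));
            if (attr == null) continue;

            bool ok;
            try
            {
                info.Invoke(suite, null);
                // No exception: a positive case passed; a negative case did not fail.
                ok = !attr.IsNegativeCase;
            }
            catch (TargetInvocationException e)
            {
                // A negative case succeeds only by throwing TestFailedException.
                ok = attr.IsNegativeCase && e.InnerException is TestFailedException;
            }
            if (ok) passed++; else failed++;
        }
        return passed;
    }

    public static void Main()
    {
        int failed;
        int passed = Run(new MathTests(), out failed);
        Console.WriteLine("passed={0} failed={1}", passed, failed);
        // passed=2 failed=0
    }
}
```

Note how the negative case counts as a pass precisely because its assertion failed, mirroring the rule stated above.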

So Where Is Metadata Programming?

The RunTestCase method of the NetCase.TestCaseSuite class contains the metadata-processing code. It checks that each test method has the proper signature (public, no parameters, void return type) and then executes it.

public static ClassTestResult RunTestCase(TestCase Case)
{
    // Sanity check the argument

    if (Case == null)
    {
        throw new ArgumentNullException("Case");
    }

    // Initialize The Case

    Case.Initialize();
    Type CaseType = Case.GetType();
    ClassTestResult Result = new ClassTestResult();
    ArrayList SuccessfulTests = new ArrayList();
    ArrayList UnSuccessfulTests = new ArrayList();

    // Execute All Methods Marked As Test Cases

    foreach (MethodInfo Info in CaseType.GetMethods())
    {
        // Check If The Method Has TestCaseAttribute

        // If So Execute It

        foreach (Attribute Attr in Attribute.GetCustomAttributes(Info))
        {
            if (Attr.GetType() == typeof(TestCaseAttribute))
            {   
                try
                {
                    if ((Info.GetParameters().Length != 0)|| 
                       (Info.ReturnType != typeof(void))||
                       (!Info.IsPublic))
                    {
                        break;
                    }
                    // Positive Test Case

                    if (!((TestCaseAttribute)Attr).IsNegativeCase)
                    {
                        Info.Invoke(Case, null);
                        SuccessfulTests.Add(new TestResult(Info,true,false,null));
                    }
                    else
                    {
                        // Negative Test Case

                        try
                        {
                            Info.Invoke(Case, null);
                        }
                        catch (TargetInvocationException e)
                        {
                            // Test Case Threw Any Other Exception

                            // Than TestFailedException

                            if (e.InnerException.GetType() != 
                                       typeof(TestFailedException))
                            {
                                UnSuccessfulTests.Add(new TestResult(Info, 
                                            false,true,e.InnerException));
                            }
                            else
                            {
                                SuccessfulTests.Add(new TestResult(Info, 
                                           true,true,e.InnerException));
                            }
                            // Test Failed As Expected

                            continue;
                        }
                        // Negative Test Case Passed !

                        UnSuccessfulTests.Add(new TestResult(Info, false, 
                                       true, new TestFailedException()));
                    }
                }
                catch (System.Reflection.TargetInvocationException e)
                {
                    if (e.InnerException != null)
                    {
                        UnSuccessfulTests.Add(new TestResult(Info,false, 
                                               false,e.InnerException));
                    }
                    else
                    {
                        UnSuccessfulTests.Add(new TestResult(Info,false,false,e));
                    }
                }
            }
        }
    }

    // Assemble the result arrays once, after all test methods have run

    if (SuccessfulTests.Count > 0)
    {
        Result.SuccessfulTests =
            (TestResult[])SuccessfulTests.ToArray(typeof(TestResult));
    }
    if (UnSuccessfulTests.Count > 0)
    {
        Result.UnSuccessfulTests =
            (TestResult[])UnSuccessfulTests.ToArray(typeof(TestResult));
    }
    return Result;
}

That's all for now! In an upcoming update, I will add a Visual Studio .NET add-in with a toolbar showing test progress and results.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
