
Integrating Performance Profiling into the Build Process

12 Apr 2011
A look at how performance profiling can be integrated into the build process to form part of the automated test suite.

This article is in the Product Showcase section for our sponsors at CodeProject. These articles are intended to provide you with information on products and services that we consider useful and of value to developers.

Introduction

Many developers understand the value of using a performance profiler as they approach release, but finding a problem only a few days before launch can be a headache. You may have to unravel work done over several weeks in order to change the method responsible for the slowdown, and once the problem is solved you might not have time to test the fix as carefully as you normally would. All the while, you are working under pressure, which increases the risk of making a mistake. It would undoubtedly be better to profile your program throughout its development, but few teams have the luxury of spare time to do this regularly.

If you have adopted continuous integration, you probably already have automated tests at least partly integrated into your build system. Wouldn't it be great if you could use your existing test harness to check your application's performance automatically, every time the tests run? Problem code would be spotted soon after it is checked in, helping you to find problems, and their solutions, more quickly. It might even help you to spot the problem before your boss does!

In this walkthrough, I demonstrate how to use Red Gate's ANTS Performance Profiler, inside an NUnit test, to compare the number of CPU ticks that each method takes against a known baseline. If a method's tick count differs from the baseline by more than 33%, the NUnit test fails.

Automated Performance Profiling

Background

ANTS Performance Profiler 6 introduced a command-line interface, which allows profiling sessions to be run without the graphical user interface. Results can then be exported to an XML file. The procedure described here relies on comparing values in two XML results files: one for a known baseline, and the other for the version you have just built.

For the sake of simplicity, I have assumed that your application can be run entirely, and without interaction, from the command line; if this isn’t the case, you may need to test different parts of your application individually.

Note that this walkthrough uses parameterized NUnit tests, which require NUnit 2.5 or later.
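
If you haven't written one before, a parameterized test makes NUnit generate one test case for each value supplied by a data source. The minimal sketch below (with made-up names, and nothing to do with the profiler yet) illustrates the [ValueSource] mechanism that the tests in Step 3 rely on:

using System.Collections.Generic;
using NUnit.Framework;

namespace NUnitExamples
{
    [TestFixture]
    public class ParameterizedExample
    {
        // Each value returned by this method becomes a separate test case (NUnit 2.5 or later)
        static IEnumerable<int> Sizes()
        {
            return new[] { 1, 10, 100 };
        }

        [Test]
        public void SizeIsPositive([ValueSource("Sizes")] int size)
        {
            Assert.Greater(size, 0);
        }
    }
}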

Step 1: Record a Baseline Set of Results

The first step is naturally to record a set of results from an existing build of your application, which is known to perform well. Start ANTS Performance Profiler at the command line, and save the results to an XML file:

[Screenshot: ANTS Performance Profiler 6 being launched from the command line, saving the profiling results to an XML file]

Ensure that the baseline results are saved somewhere that your test harness will be able to read them.

Step 2: Record Results for New Builds

Add the same command to a batch file that is run when the build server finishes creating a new build. Again, ensure that the results are saved somewhere that your test harness will be able to read them.

Note that the command also saves a copy of the results in the APP6 results format. If a performance problem is encountered, you can open the results inside ANTS Performance Profiler without needing to profile the application again.

Step 3: Write a Program to Read Data from Both Results Files and Provide It as a Parameterized NUnit Test

The solution comprises two class library projects: one to read the XML results files created by ANTS Performance Profiler, and the other to perform the tests.

Reading Data from the Results Files

using System;
using System.Collections.Generic;
using System.Xml;
 
namespace RedGate.NUnitProfilingSample.ReadXml
{
    public class ReadXml
    {
        public Dictionary<string, long> XmlRead(string filename)
        {
            // Tracks where we are in the call tree as the file is read
            List<string> hierarchy = new List<string>();

            // Maps each method's position in the call tree to its CPU tick count
            Dictionary<string, long> readResults = new Dictionary<string, long>();

            using (XmlTextReader textReader = new XmlTextReader(filename))
            {
                while (textReader.Read())
                {
                    if (textReader.Name == "Method")
                    {
                        if (textReader.HasAttributes)
                        {
                            // The current element is the start of a method element
                            textReader.MoveToNextAttribute();

                            if (textReader.Name == "class")
                            {
                                string className = textReader.Value;
                                textReader.MoveToNextAttribute();
                                hierarchy.Add(className + "." + textReader.Value);
                            }
                            else
                            {
                                // If the first attribute isn't a class name, this is normally
                                // because the method is unmanaged
                                hierarchy.Add(textReader.Value);
                            }
                        }
                        else
                        {
                            // It's an end tag, so step back up the call tree
                            hierarchy.RemoveAt(hierarchy.Count - 1);
                        }
                    }
                    else if (textReader.Name == "CPU")
                    {
                        textReader.MoveToNextAttribute();
                        if (textReader.Name == "ticks")
                        {
                            long cpuTicks = Int64.Parse(textReader.Value);
                            string hierarchyName = String.Join(":", hierarchy);
                            readResults[hierarchyName] = cpuTicks;
                        }
                    }
                }
            }

            return readResults;
        }
    }
}
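
For reference, XmlRead() expects nested Method elements, each carrying a class attribute followed by a second attribute holding the method name, with CPU child elements that carry a ticks attribute. The real schema of the exported file is not reproduced in this article, so the fragment below is only an inferred illustration of the shape the code consumes (the element and attribute names other than Method, class, CPU, and ticks are guesses), not an authoritative sample of the ANTS Performance Profiler output:

<Method class="ReportGenerator" name="GenerateReport">
  <CPU ticks="1234567" />
  <Method class="PermutationGenerator" name="GeneratePermutations">
    <CPU ticks="456789" />
  </Method>
</Method>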

Create the NUnit Tests

This project comprises two separate C# files.

The first simply uses the XML reader that we just created to read the results file:

using System.Collections.Generic;

namespace RedGate.NUnitProfilingSample
{
    class DataSource
    {
        public static Dictionary<string, long> Data()
        {
            ReadXml.ReadXml c = new ReadXml.ReadXml();
            return c.XmlRead(@"..\..\..\ProfilerResults\testresults.xml");
        }
    }
}

The other C# file sets up and runs the NUnit tests. First, the baseline results are read, and then a parameterized test is run over the dictionary returned from the results file. The parameterized test checks whether each method is within the permitted tolerance:

using System;
using System.Collections.Generic;
using NUnit.Framework;

namespace RedGate.NUnitProfilingSample
{

    [TestFixture]
    public class ComparisonTests
    {
        Dictionary<string, long> m_expectedResults;

        [TestFixtureSetUp]
        public void LoadExpectedResults()
        {
            // Reads the expected (baseline) results into a dictionary
            ReadXml.ReadXml c = new ReadXml.ReadXml();
            m_expectedResults = c.XmlRead(
                @"..\..\..\ProfilerResults\baselineResults.xml");
        }

        [Test]
        public void TestPerformance([ValueSource(typeof(DataSource),
            "Data")] KeyValuePair<string, long> data)
        {
            // Set this to the % tolerance permitted
            const int tolerance = 33;
     
            // Ensures the test doesn't fail if the baseline does not contain a
            // method that is in the code being tested
            if (!m_expectedResults.ContainsKey(data.Key))
            {
                return;
            }
 
            long expectedValue = m_expectedResults[data.Key];
            long result = data.Value;
 
            Assert.True(IsWithinPercentage(expectedValue, result, tolerance), 
                "Value from test ({0}) is not within {1}% of expected value ({2})",
                result, tolerance, expectedValue);
        }

        // Checks that the value (y) is within +/- tolerance% (percentage) of the
        // baseline value (x)
        private static bool IsWithinPercentage(long x, long y, int percentage)
        {
            double percentageAsFraction = (double) percentage/100;
            return y <= x*(1.0 + percentageAsFraction) && y >= x*(1.0 - percentageAsFraction);
        }
    }
}

Step 4: Add the Test to Your Existing NUnit Tests

After you have built the two projects, add the DLL containing the NUnit tests to the set of tests that run at build time.
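
Exactly how you do this depends on your build scripts. If, for example, your build already invokes the NUnit console runner, it can be as simple as appending the new test assembly to the command line (the assembly names here are illustrative):

nunit-console.exe ExistingTests.dll RedGate.NUnitProfilingSample.Tests.dll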

When the test runs, NUnit checks that no method takes more than 33% longer to run than it did in the baseline results (or more than 33% less time!). We have used 33% because trial and error showed this tolerance to give the most useful results. Naturally, other tasks running on the machine that hosts the test framework can influence the results, so you would not expect two separate runs to produce identical timings even if no code had changed. We therefore recommend that you determine the best tolerance for your application through your own experimentation.
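
To make the tolerance concrete, here is how the IsWithinPercentage() helper from Step 3 behaves for some illustrative tick counts (the numbers themselves are invented for the example):

// A baseline of 120,000 ticks with a 33% tolerance accepts results between
// 80,400 and 159,600 ticks (120,000 * 0.67 and 120,000 * 1.33)
bool ok      = IsWithinPercentage(120000, 150000, 33);   // true  - within tolerance
bool tooSlow = IsWithinPercentage(120000, 165000, 33);   // false - more than 33% slower
bool tooFast = IsWithinPercentage(120000, 70000, 33);    // false - more than 33% faster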

In the example below, a developer has added a SpinWait() call to the GenerateReport() method, which makes the processor spin uselessly for 90,000 iterations on every pass through the loop:

static void GenerateReport(PermutationGenerator p)
{
    string reportText = String.Empty;

    foreach (var permutation in p.Permutations)
    {
        reportText += permutation + Environment.NewLine;
        // Waste time here
        Thread.SpinWait(90000);
        if (reportText.Length > 500000)
        {
            Console.WriteLine(reportText);
            reportText = String.Empty;
        }
    }

    Console.WriteLine(reportText);
}

When this method is compared with the baseline (which did not include SpinWait()), the NUnit test for GenerateReport() fails. This test failure indicates that the change causes the method to execute at least 33% more slowly than in the previous version:

[Screenshots: NUnit test run showing the TestPerformance case for GenerateReport() failing, together with the failure message produced by the assertion]

You can download the files used in this example to try it out for yourself.

Conclusion

Automated testing has long been used in continuous integration to ensure that bugs are found as quickly as possible. In this article, we have shown that you can use ANTS Performance Profiler to extend your existing NUnit tests to include performance testing.

To try ANTS Performance Profiler, download a 14-day free trial.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
