Introduction
This article presents NPerf, a flexible performance benchmark framework. The framework provides custom attributes that the user uses to tag benchmark classes and methods. If you are familiar with NUnit [1], this is similar to the custom attributes it provides.
The framework uses reflection to gather the benchmark testers and the tested types, runs the tests, and outputs the results. The user just has to write the benchmark methods.
At the end of the article, I illustrate NPerf with some metaphysical .NET questions: interface vs. delegates, the string concatenation race, and the fastest dictionary.
QuickStart: Benchmarking IDictionary
Let's start with a small introductory example: benchmarking the [] assignment for the different implementations of IDictionary. To do so, we would like to test the assignment with a growing number of assignment calls.
All the custom attributes are located in the NPerf.Framework namespace, in the NPerf.Framework.dll assembly.
PerfTester attribute: defining testers
First, you need to create a tester class that will contain the benchmark methods. This tester class has to be decorated with the PerfTester attribute.
using System.Collections;
using NPerf.Framework;

[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    ...
}
The PerfTesterAttribute constructor takes two arguments:
- the Type of the tested class, interface, or struct,
- the number of test runs. The framework will use this value to call the test methods multiple times (explained below).
PerfTest attribute: adding benchmark tests
The PerfTest attribute marks a method, inside a class already tagged with the PerfTester attribute, as a performance test method. The method should take the tested type (IDictionary here) as its parameter, and its return type should be void:
[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    private int count;
    private Random rnd = new Random();

    [PerfTest]
    public void ItemAssign(IDictionary dic)
    {
        // assign 'count' random keys through the indexer
        for (int i = 0; i < this.count; ++i)
            dic[rnd.Next()] = null;
    }
}
PerfSetUp and PerfTearDown Attributes
Often, you will need to set up your tester and the tested class before actually starting the benchmark test. In our example, we want to update the number of insertions depending on the test repetition index. The PerfSetUp attribute can be used to tag a method that will be called before each test repetition. In our test case, we use this method to update the DictionaryTester.count member:
[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    private int count;
    private Random rnd = new Random();

    [PerfSetUp]
    public void SetUp(int index, IDictionary dic)
    {
        // scale the number of assignments with the run index
        this.count = index * 1000;
    }
}
The set-up method must return void and take two arguments:
- index, the current test repetition index. This value can be used to modify the number of elements tested, the collection size, etc.,
- dic, the tested class instance.
If you need to clean up resources after the tests are run, you can use the PerfTearDown attribute to tag a clean-up method:
[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    ...
    [PerfTearDown]
    public void TearDown(IDictionary dic)
    {
        ...
    }
}
PerfRunDescriptor attribute: giving some information to the framework
In our example, we test the IDictionary object with an increasing number of elements. It would be nice to store this number in the results rather than just the test index: we would like to store 1000, 2000, ..., and not 1, 2, ...
The PerfRunDescriptor attribute can be used to tag a method that computes a double from the test index. This double is typically used as the x coordinate when charting the results.
[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    [PerfRunDescriptor]
    public double Count(int index)
    {
        // x coordinate for run 'index': 0, 1000, 2000, ...
        return index * 1000;
    }
}
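Putting the attributes together: here is a minimal sketch of the sequence NPerf conceptually executes for each tested type. The RunTester name and the direct method calls are illustrative only; the real framework drives everything through reflection.
// Hedged sketch of NPerf's per-type run loop; illustrative, not the real internals.
static void RunTester(DictionaryTester tester, int runCount)
{
    for (int index = 0; index < runCount; ++index)   // runCount = 10 above
    {
        double x = tester.Count(index);              // [PerfRunDescriptor]: x coordinate
        IDictionary dic = new Hashtable();           // one IDictionary implementation
        tester.SetUp(index, dic);                    // [PerfSetUp]
        // only the [PerfTest] call itself is timed
        tester.ItemAssign(dic);                      // [PerfTest]
        // the framework records (x, elapsed time) for charting
    }
}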
Full example source
The full source of the example is as follows:
using System;
using System.Collections;
using NPerf.Framework;

[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    private int count = 0;
    private Random rnd = new Random();

    [PerfRunDescriptor]
    public double Count(int index)
    {
        return index * 1000;
    }

    [PerfSetUp]
    public void SetUp(int index, IDictionary dic)
    {
        this.count = (int)Math.Floor(Count(index));
    }

    [PerfTest]
    public void ItemAssign(IDictionary dic)
    {
        for (int i = 0; i < this.count; ++i)
            dic[rnd.Next()] = null;
    }
}
Compiling and Running
Compile this class into an assembly and copy the NPerf binaries (NPerf.Cons.exe, NPerf.Core.dll, NPerf.Framework.dll, NPerf.Report.dll, ScPl.dll) into the output folder.
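For example, with the command-line C# compiler (the file names here are only a suggestion):
csc /target:library /out:MyPerf.dll /r:NPerf.Framework.dll DictionaryTester.cs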
NPerf.Cons.exe is a console application that dynamically loads the tester assemblies and the assemblies that contain the tested types (you need to specify both), runs the tests, and outputs charts using ScPl [2] (ScPl is a chart library released under the GPL).
The call to NPerf.Cons.exe looks like this:
NPerf.Cons -ta=MyPerf.dll -tdfap=System -tdfap=mscorlib
where
- ta defines an assembly that contains tester classes (DictionaryTester),
- tdfap defines an assembly that contains tested types.
The assembly names are given as partial names and will be loaded with Assembly.LoadWithPartialName.
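For instance, -tdfap=System conceptually boils down to the following call (a sketch of what happens inside the framework):
// partial-name loading, as used for the -tdfap arguments
System.Reflection.Assembly systemAssembly =
    System.Reflection.Assembly.LoadWithPartialName("System");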
There are a number of other options that you can list by typing NPerf.Cons -h. Running the command line above will produce the following chart:
In the chart, you can see that some types failed the tests (PropertyDescriptorCollection). It is possible to tell NPerf to skip those types by excluding them on the command line:
NPerf.Cons -ta=MyPerf.dll -tdfap=System -tdfap=mscorlib -it=PropertyDescriptorCollection
Saving to XML
You can also output the results to XML by adding the -x parameter. Internally, .NET XML serialization is used to render the results to XML.
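The result classes are NPerf's own, but the mechanism is the standard XmlSerializer. As a sketch (TestResults and results are hypothetical placeholders for the framework's actual result type and instance):
using System.IO;
using System.Xml.Serialization;

// serialize a (hypothetical) results object to results.xml
XmlSerializer serializer = new XmlSerializer(typeof(TestResults));
using (StreamWriter writer = new StreamWriter("results.xml"))
{
    serializer.Serialize(writer, results);
}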
A few remarks
- You can add as many test methods (PerfTest) as you want in a PerfTester class (see the sketch below),
- You can define as many tester classes as you want,
- You can load tester/tested types from multiple assemblies.
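For instance, a second benchmark method could be added to the DictionaryTester class above; this lookup test is my own illustration, not part of the framework's sample:
// a second [PerfTest] method in the same tester class (sketch)
[PerfTest]
public void ItemLookup(IDictionary dic)
{
    // read back 'count' random keys through the indexer
    for (int i = 0; i < this.count; ++i)
    {
        object value = dic[rnd.Next()];
    }
}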
Overview of the Core
The NPerf.Core namespace contains the classes that do the job in the background. I do not plan to explain them in detail, but I'll discuss some problems I ran into while writing the framework.
Getting the machine properties
Getting the physical properties of the machine was a surprisingly difficult task; it took me a bunch of Google searches to land on the right pages. Anyway, here is the (mostly self-explanatory) WMI code that gets the machine properties:
using System.Management; // requires a reference to System.Management.dll

// total physical memory, from WMI
long ram = 0;
ManagementObjectSearcher query =
    new ManagementObjectSearcher("SELECT * FROM Win32_ComputerSystem");
foreach (ManagementObject obj in query.Get())
{
    ram = long.Parse(obj["TotalPhysicalMemory"].ToString());
    break;
}

// processor name and current clock speed, from WMI
string cpu = null;
long cpuFrequency = 0;
query = new ManagementObjectSearcher("SELECT * FROM Win32_Processor");
foreach (ManagementObject obj in query.Get())
{
    cpu = (string)obj["Name"];
    cpuFrequency = long.Parse(obj["CurrentClockSpeed"].ToString());
    break;
}
TypeHelper, easier CustomAttribute support
A static type helper class was added to automate tedious tasks like checking for a custom attribute, getting a custom attribute, etc. The TypeHelper class declaration is as follows:
public sealed class TypeHelper
{
    public static bool HasCustomAttribute(
        Type t,
        Type customAttributeType);
    public static bool HasCustomAttribute(
        MethodInfo mi,
        Type customAttributeType);
    public static Object GetFirstCustomAttribute(
        Type t,
        Type customAttributeType);
    public static Object GetFirstCustomAttribute(
        MethodInfo mi,
        Type customAttributeType);
    public static MethodInfo GetAttributedMethod(
        Type t,
        Type customAttributeType);
    public static AttributedMethodCollection GetAttributedMethods(
        Type t,
        Type customAttributeType);
    public static void CheckSignature(
        MethodInfo mi,
        Type returnType,
        params Type[] argumentTypes);
    public static void CheckArguments(
        MethodInfo mi,
        params Type[] argumentTypes);
}
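For example, the framework can discover the tester classes in a loaded assembly along these lines (a sketch; the exact NPerf internals may differ):
// scan a loaded Assembly for types tagged with [PerfTester]
foreach (Type t in assembly.GetTypes())
{
    if (!TypeHelper.HasCustomAttribute(t, typeof(PerfTesterAttribute)))
        continue;
    PerfTesterAttribute tester = (PerfTesterAttribute)
        TypeHelper.GetFirstCustomAttribute(t, typeof(PerfTesterAttribute));
    // ... instantiate t and invoke its [PerfTest] methods
}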
Benchmark Bonus
In order to illustrate the framework, I have written a few benchmark testers for classic performance questions about .NET. All these benchmarks are provided in the System.Perf project:
- IDictionary benchmark
- String concatenation benchmark
- Interface vs. Delegate
History
- 26-01-2004, v1.0, Initial release.
References
- [1] NUnit, unit test framework for .NET
- [2] ScPl, free chart library