
MeTem: A C++ measuring framework

27 May 2014, CPOL, 7 min read
A C++ framework to measure things

Introduction

This framework helps you measure different kinds of parameters in your code. It is very useful when you need to compare, at the same time, different ways of implementing a feature in terms of performance, memory usage, and so on. With MeTem you will be able to measure all those parameters easily.

Background

Sometimes you need to measure parameters of a very different nature in your code: performance, memory usage, and many others. Because of that variety, I have often been forced to develop ad-hoc measuring applications. To solve this problem, I have designed a framework that compares different implementations of a feature and helps me choose the best one for a specific target: sometimes I need the fastest implementation, and other times the one that uses the least memory (thinking of a high-concurrency scenario). I have also taken care to make the tests easy to parameterize.
Someone may point out that profiling the code is the best way to perform these kinds of tasks. I agree, but many times we only need to measure the time a piece of code takes to run, possibly on different computers with different processor architectures, and using a profiler is not always an option.

Using the code

First of all, an architecture guideline is always welcome :-) The main goal of this framework is to compare the performance or efficiency of different pieces of code, containers, DB servers, etc. When using MeTem to compare them, you have to write each test inside a function with the following signature:

C++
typedef void (* tpTest)(int iSetSize, void *pData, std::vector<int> &results); 

I will explain the meaning of the parameters later.

The next step is to create a test fixture class to group the tests. This class has to inherit from the MeTem::CTestFixture base class. For example:

C++
class CSample1PerformanceFixture : public MeTem::CTestFixture
{
public:
    CSample1PerformanceFixture();
}; 

Inside the fixture constructor you have to add each test to the test collection in order to allow the library to run them:

C++
CSample1PerformanceFixture::CSample1PerformanceFixture()
{ 
    // Measurements
    std::list<std::tstring> lstMeasurements;

    lstMeasurements.push_back(_T("Total (s)"));

    setMeasurements(lstMeasurements);

    // Set sizes
    std::list<int> lstSetSizes;

#ifdef _DEBUG
    lstSetSizes.push_back(50000);
#else
    lstSetSizes.push_back(    50000);
    lstSetSizes.push_back(   100000);
    lstSetSizes.push_back(   200000);
#endif

    setSetSizes(lstSetSizes);

    // Tests
    AddTest(_T("Accu_Seq"), Sample1Performance::Test_Accu_Seq, NULL);
    AddTest(_T("Accu_PPL"), Sample1Performance::Test_Accu_PPL, NULL);
}
As you can see, the constructor performs three different steps:
  1. Identify the measures we are going to collect from the tests. You are responsible for giving a meaning to these measures. The size of this measure list will be the same as that of the "results" vector each test receives as a parameter, and the tests have to put each measure into that vector of integers in the right order. In this case we are measuring time, so the tests will return the result in milliseconds. We will also need a results adapter to format milliseconds as seconds, in order to show the results in seconds on screen (see the FormatMeasure method below).
  2. Establish the collection of set sizes we want to run our tests with. In this case, in the release build of the fixture, we want to run each test with 50,000, 100,000 and 200,000 "elements". These are the values each test will receive as its first parameter ("iSetSize"), so each test has to take this value into account to parameterize the algorithm under test.
  3. Add each test to the internal test collection, giving it a name and a generic pointer that it will receive as a parameter ("pData").

To isolate the development of the tests as much as possible from the test runner, I have decided that the test fixtures have to live inside a DLL (you can include as many fixtures as you want in the same DLL), so I have also implemented an object factory pattern that allows the runner to enumerate and instantiate the fixtures dynamically. All you have to do is register each fixture class with the factory by calling a register macro provided by the library (METEM_REGISTER_CLASS) anywhere in your module (just before the class constructor is a good location). Example:

C++
METEM_REGISTER_CLASS(CSample1PerformanceFixture)

And you will also need to declare the factory in any place of your DLL (only once) using the METEM_FACTORY_OBJ macro:

C++
// MeTem fixtures factory
METEM_FACTORY_OBJ
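
Under the hood, this kind of self-registration usually relies on file-scope objects whose constructors run when the DLL is loaded. The following is only a hypothetical sketch of the idea; FixtureFactory, IFixture and REGISTER_FIXTURE are illustrative names, not the library's real ones:

C++
#include <functional>
#include <map>
#include <memory>
#include <string>

struct IFixture { virtual ~IFixture() {} };   // stand-in for MeTem::CTestFixture

class FixtureFactory
{
public:
    typedef std::function<std::unique_ptr<IFixture>()> Creator;

    // Single factory instance; this is the role METEM_FACTORY_OBJ plays.
    static FixtureFactory &Instance()
    {
        static FixtureFactory instance;
        return instance;
    }

    void Register(const std::string &name, Creator create)
    {
        m_creators[name] = create;
    }

    // Creates a fixture by name, or returns an empty pointer if the name is unknown.
    std::unique_ptr<IFixture> Create(const std::string &name) const
    {
        std::map<std::string, Creator>::const_iterator it = m_creators.find(name);
        return it != m_creators.end() ? it->second() : std::unique_ptr<IFixture>();
    }

private:
    std::map<std::string, Creator> m_creators;
};

// The register macro can expand to a file-scope object whose constructor
// registers the fixture class by name at DLL load time.
#define REGISTER_FIXTURE(cls)                                             \
    static const struct cls##_Registrar                                   \
    {                                                                      \
        cls##_Registrar()                                                  \
        {                                                                  \
            FixtureFactory::Instance().Register(#cls,                      \
                [] { return std::unique_ptr<IFixture>(new cls); });        \
        }                                                                  \
    } s_##cls##_Registrar;

With a scheme like this, a single REGISTER_FIXTURE(CSomeFixture) statement in the DLL is enough for the runner to create the fixture by its class name.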

Finally, these are the sample tests:

C++
namespace Sample1Performance
{

void Test_Accu_Seq(int iSetSize, void * /*pData*/, std::vector<int> &results)
{
    std::vector<int> vData(iSetSize, 0);

    TIME_START

    for (int i= 0; i < iSetSize; i++)
    {
        int iAccu= 0;

        for (int j= 0; j <= i; j++)
        {
            iAccu+= j;
        }

        vData[i]= iAccu;
    }

    TIME_STOP(results[0])
}

void Test_Accu_PPL(int iSetSize, void * /*pData*/, std::vector<int> &results)
{
    std::vector<int> vData(iSetSize, 0);

    TIME_START

    Concurrency::parallel_for(0, iSetSize, [&vData](int i)
    {
        int iAccu= 0;

        for (int j= 0; j <= i; j++)
        {
            iAccu+= j;
        }

        vData[i]= iAccu;
    });

    TIME_STOP(results[0])
}

} // namespace Sample1Performance

As you can see, when you use this kind of framework it is very easy to focus on the important things: most of the code you have to write is the code you want to measure, and the framework does the rest of the work. The library also provides a couple of macro pairs to help you measure running time and used memory:

  1. TIME_START and TIME_STOP. These macros measure the elapsed time between them and store it, in milliseconds, in the integer variable passed as a parameter.
  2. MEMORY_START and MEMORY_STOP. These calculate the used memory and store it, in bytes, in the integer variable passed as a parameter. I haven't found a better way to measure this than calling the Windows "GetProcessMemoryInfo" function. Your system has to be in a "stable" state to get a reliable result; if the system is under stress you can actually get a negative result, because this function is very sensitive to memory paging events. If anyone knows a better way to measure used memory, please don't hesitate to report it in the comments section ;-) A sketch of how such macro pairs can be built is shown right after this list.
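
For reference, macro pairs like these can be built on top of the Windows GetTickCount and GetProcessMemoryInfo APIs. The sketch below only illustrates the idea and is not the library's actual implementation; the CurrentWorkingSet helper and the variable names are made up:

C++
#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link with psapi.lib

// Elapsed time pair: TIME_START opens a scope and takes a start timestamp,
// TIME_STOP closes it and stores the difference (milliseconds) in "r".
#define TIME_START     { DWORD dwTimeStart_ = ::GetTickCount();
#define TIME_STOP(r)   (r) = static_cast<int>(::GetTickCount() - dwTimeStart_); }

// Current working set of the process, in bytes (signed so that a shrinking
// working set yields a negative difference, as mentioned above).
inline long long CurrentWorkingSet()
{
    PROCESS_MEMORY_COUNTERS pmc = {0};
    ::GetProcessMemoryInfo(::GetCurrentProcess(), &pmc, static_cast<DWORD>(sizeof(pmc)));
    return static_cast<long long>(pmc.WorkingSetSize);
}

// Used memory pair: stores the working set growth (bytes) in "r".
#define MEMORY_START   { long long llMemStart_ = CurrentWorkingSet();
#define MEMORY_STOP(r) (r) = static_cast<int>(CurrentWorkingSet() - llMemStart_); }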

Once you have written the tests and the fixture, you only have to instantiate the fixture and call its "PerformTests()" method. I also provide a test runner GUI that performs this task using the fixture factory.
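
As a minimal sketch of such a manual run (the header name below is hypothetical, and how the results are retrieved afterwards depends on the library):

C++
#include "Sample1PerformanceFixture.h"   // hypothetical header declaring the fixture

int main()
{
    CSample1PerformanceFixture fixture;

    // Runs every registered test against every configured set size.
    fixture.PerformTests();

    return 0;
}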

Due to the use of the PPL library, the second test (Test_Accu_PPL) won't compile with Visual Studio versions prior to VS2010 (in fact, it will compile as an empty test that returns no results).
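
One way to keep the fixture buildable with older toolsets is to guard the PPL-based body with a compiler version check (VS2010 corresponds to _MSC_VER 1600). This is only a sketch of the idea, not necessarily how the sample project handles it:

C++
void Test_Accu_PPL(int iSetSize, void * /*pData*/, std::vector<int> &results)
{
#if defined(_MSC_VER) && _MSC_VER >= 1600   // PPL requires VS2010 or later
    std::vector<int> vData(iSetSize, 0);

    TIME_START

    Concurrency::parallel_for(0, iSetSize, [&vData](int i)
    {
        int iAccu= 0;

        for (int j= 0; j <= i; j++)
        {
            iAccu+= j;
        }

        vData[i]= iAccu;
    });

    TIME_STOP(results[0])
#else
    // Older compilers: an empty test that returns no results.
    (void)iSetSize;
    (void)results;
#endif
}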

The GUI

The provided GUI application is in its very first release. It can load the tests DLL, run the tests of the selected fixture, and even export the results to MS Excel. It is just a prototype, so it doesn't check whether MS Excel is present, and the interface is not responsive while the tests are running. I will provide a more advanced version as soon as possible.

This is how it currently looks:

[Screenshot: the MeTem test runner GUI showing the sample results]

As you can see, these sample tests only return a time measurement. As you probably remember, the helper macros return the elapsed time in milliseconds, but the GUI shows it in seconds. This is achieved by overriding the virtual "FormatMeasure" fixture method. Example:

C++
class CSample1PerformanceFixture : public MeTem::CTestFixture
{
public:
    CSample1PerformanceFixture();
    ~CSample1PerformanceFixture();

    std::tstring FormatMeasure(int iMeasure, int iValue) const;
};

And the implementation:

C++
std::tstring CSample1PerformanceFixture::FormatMeasure(int /*iMeasure*/, int iValue) const
{
    std::tostringstream ss;

    //ss.imbue(std::locale(""));

    ss << std::fixed << std::setprecision(3) << iValue / 1000.0;

    return ss.str();
}

As you can see, the method receives "iMeasure" as a parameter, so you can format each measurement separately.
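
For instance, a fixture that collects both a time and a memory measurement could format each one differently. The class name and index assignments below are purely illustrative:

C++
std::tstring CTimeAndMemoryFixture::FormatMeasure(int iMeasure, int iValue) const
{
    std::tostringstream ss;

    switch (iMeasure)
    {
    case 0:     // elapsed time: milliseconds -> seconds
        ss << std::fixed << std::setprecision(3) << iValue / 1000.0;
        break;

    case 1:     // used memory: bytes -> kilobytes
        ss << iValue / 1024;
        break;

    default:
        ss << iValue;
        break;
    }

    return ss.str();
}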

I've avoided formatting the results with the user's locale because, if you want to dump the data into MS Excel, it always expects numbers formatted with the US English locale.

Due to C++ binary incompatibility issues (mixing multiple Visual Studio versions in one program is Evil), you have to build both modules (the MeTem GUI and the fixture tests DLLs) with the same Visual Studio version. That is why I provide two MeTem GUI versions: a 2005 one and a 2013 one. If you need a different Visual Studio version, you shouldn't have any problem porting either of them to yours.

Next Steps

There are a lot of improvements for the framework in my head. These are a few of them:

  1. A responsive test runner GUI that allows us to cancel or even pause the test runs
  2. A concurrency simulator to measure the tests in a concurrent scenario
  3. A test runner console, to allow unattended runs in an automated scenario
  4. The ability to specify the set sizes at run time (from the GUI)
  5. ...

Please, don't hesitate to suggest new features you'd like to see in MeTem.

Points of Interest

Writing this code I've learned how to implement an object factory pattern and how to format numbers with the user's locale using the C++ standard library :-)

Acknowledgements

I'd like to thank Rafael Cabezas and Joaquín Sánchez-Valiente for reviewing my writing. Thank you guys!

Donations

If you find this framework useful in the development of your commercial product, please consider making a donation. It will help me support the framework. Thank you very much.


History

  • 1.0 - 27 Apr 2014. Initial release

Conclusions

I've used standard C++ code as much as possible, except in the GUI modules (2005 and 2013 versions) and the MFC one, so it shouldn't be too difficult to port this framework to another compiler or platform. I will release a standard C++ test runner console as soon as possible to make this task easier.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)