
RaptorDB - the Key Value Store

22 Jan 2012
Smallest, fastest embedded nosql persisted dictionary using b+tree or MurMur hash indexing. (Now with Hybrid WAH bitmap indexes)

(Image: RaptorDB logo - The Key Value Store)

(Image: 20 million item insert test screenshot)

Preface

For the new version 2.0 of the engine, please go to http://www.codeproject.com/Articles/316816/RaptorDB-The-Key-Value-Store-V2, which supersedes this version.

The code for RaptorDB is now on http://raptordbkv.codeplex.com/. I will do my best to keep this article and the source code there in sync.

Introduction

RaptorDB is an extremely small and fast embedded, NoSQL, persisted dictionary database using b+tree or MurMur hash indexing. It was primarily designed to store JSON data (see my fastJSON implementation), but it can store any type of data you give it. RaptorDB was inspired by the work of Aaron Watters found at http://sourceforge.net/projects/bplusdotnet/. I originally used that code, but found it too slow, its memory consumption excessive, and the code very hard to follow, so I rewrote the b+tree algorithm from scratch from what I could gather from the internet. Bear in mind that I'm not from a computer science background, so I had to go through a crash course in indexing techniques.

Interestingly, there are very few good resources on basic algorithms on the internet, and most are written from an academic perspective, which does little for you when you want to build a practical application; but that's a different matter, and it seems these are "dark arts" reserved for RDBMS vendors.

The name RaptorDB follows the seeming tradition of naming databases after animals (HamsterDB, RavenDB, ...), but with a sexier-sounding name reflecting its performance characteristics.

What Is It?

Here is a brief overview of all the terms used to describe RaptorDB:

  • Embedded: You can use RaptorDB inside your application as you would any other DLL, and you don't need to install services or run external programs.
  • NoSQL: A grassroots movement to replace relational databases with storage systems that are more relevant and specialized to the application in question. These systems are usually designed for performance.
  • Persisted: Any changes made are stored on hard disk, so you never lose data on power outages or crashes.
  • Dictionary: A key/value storage system much like the implementation in .NET.
  • B+Tree: The cornerstone of all database indexing techniques; see http://en.wikipedia.org/wiki/B%2B_tree for an in-depth description of the topic, and the How It Works section for my implementation. It has a big-O order of O(log N).
  • MurMurHash: A non-cryptographic hash function created by Austin Appleby in 2008 (http://en.wikipedia.org/wiki/MurmurHash). It has a big-O order of O(1).

Some Possible Uses...

  • High performance persisted message queue: A message queue with no dependencies and minimal overhead.
  • Document database: Store and retrieve document files.
  • Web site asset storage: Store graphic images and web content in one file.
  • High volume web site data storage: Store user information instantly without storage limitations and the added RDBMS overhead / installation / running costs.
  • JSON object storage: Basis for a mongodb or couchdb style database.
  • Web site session state storage: Store your web session info in a fast storage system instead of SQL Server.
  • Single-file file system storage: If you have a lot of small files, RaptorDB can store them in one file with less file system waste and faster access.

Why Another Database?

The main reason for RaptorDB was that there was no pure .NET implementation of a persisted dictionary that met my performance requirements, and regular databases like Microsoft SQL Server, MySQL, etc. are dismally slow and bloated for the job of storing JSON data. I even tried DBF files, but most .NET implementations lacked full memo support. Others like SQLite, while having fast insert times, lack multi-threaded support and lock the database on inserts.

RaptorDB was envisioned as a replacement for the RDBMS storage in my own framework, which has had a document-centric design since 2003 and was built on relational storage at the time. Relational storage systems were extremely slow at storing BLOB data.

Features

RaptorDB has been built with the following features in mind:

  • Blazing fast insert speed ~ 80% hard disk speed (see performance test section)
  • Blazing fast retrieval speed
  • Store huge amounts of data
  • Minimum index files size
  • Background indexing of data
  • Embeddable in larger applications
  • No installation required
  • Works on .NET 2.0 and up
  • Multi-threaded insert support
  • Code base as small as possible, ~40KB DLL file
  • No dependencies
  • Crash resistant, fail safe and recoverable
  • No shutdown required
  • Append-only, so data integrity is maintained and historical/duplicate values are supported
  • Immediate flush of data to disk to ensure data integrity (it is possible to get more throughput if you are willing to sacrifice integrity with buffered output)
  • Count() function implemented for index files.
  • InMemoryIndex property and SaveIndex() function implementation for in memory indexes and ability to save indexes to disk on demand.
  • EnumerateStorageFile() using yield to traverse the storage file contents into a KeyValuePair<byte[], byte[]> (key and contents).
  • Support for duplicate keys via GetDuplicates() and FetchDuplicate().
  • FreeCacheOnCommit property to tweak memory usage on internal commit of the index contents.
  • Read operations use a separate Stream and are now multi-threaded.
  • As of v1.5 index file formats are not backward compatible.
  • Internal storage is based on record numbers of int instead of long file offsets, so you get ~50% memory and index file size reduction.
  • Hybrid WAH compressed bitmap indexes are now used for duplicate indexing.
  • Unlimited key size now supported with the RaptorDBString engine (now you can store file names in the DB).

Limitations

The following limitations are in this release:

  • Key size is limited to 255 bytes or equivalent in string length (ASCII vs. Unicode sizes)
  • Session Commit / Rollback: This has been deferred to later because of multi thread issues.
  • Compression: Support is built into the record headers but has not been implemented.
  • Removing items is not supported and the database is append only (much like couchdb).
  • Log file count is hard coded to 1,000,000: which equates to 20 billion inserts in a continuous run (restarting will reset count to 0 for another 20 billion).
  • Recovery from some failures requires an external recovery program (see the failure scenarios below).

The Competition

There are a number of competing storage systems; a few that I have looked at are below:

  • HamsterDB: A delightful engine written in C++, which impressed me a lot with its speed while I was still using Aaron Watters' code for indexing. (RaptorDB eats it alive now... ahem!) It's quite large at 600KB for the 64-bit edition.
  • Esent PersistentDictionary: A project on CodePlex which is part of another project that implements a managed wrapper over the built-in Windows esent data storage engine. The dictionary performance goes down exponentially after 40,000 items indexed, and the index file just grows on GUID keys. After talks with the project owners, it is apparently a known issue at the moment.
  • Tokyo/Kyoto Cabinet: A C++ implementation of a key store which is very fast. Tokyo Cabinet is a b+tree indexer, while Kyoto Cabinet is a MurMur2 hash indexer.
  • 4aTech Dictionary: Another article on CodeProject which does the same thing; the commercial version at the web site is huge (450KB) and performs dismally on GUID keys after 50,000 items indexed.
  • BerkeleyDB: The granddaddy of all databases, owned by Oracle, which comes in 3 flavours: a C++ key store, a Java key store, and an XML database.

Performance Tests

(Image: hard disk data transfer benchmark screenshot)

Tests were performed on my notebook computer with the following specifications: AMD K625 1.5GHz, 4GB DDR2 RAM, Windows 7 Home Premium 64-bit, Windows Experience Index 3.9 (the picture above is a screenshot of the data transfer timings for my hard disk, a WD 5400 RPM drive):

Tests                    | Insert Time | Index Time (B+tree) | Index Time (MurMur hash)
First 1 million inserts  | 21 sec      | 61 sec              | 81 sec
Next 1 million inserts   | 21 sec      | 159 sec             | 142 sec

As a general rule of thumb (on my test machine at least), b+tree indexing time increases linearly by 1 second per 600,000 keys indexed. MurMur hashing is consistent for every 1 million items and maxes out at 4 sec mostly because of writing all the buckets to disk which is around 60MB for the default values.

Performance Test v1.1

With the use of in memory indexes you can achieve the following performance statistics at the expense of memory usage for caching the indexes in RAM:
Tests                    | Insert Time | Index Time (B+tree)            | Index Time (MurMur hash)
First 1 million inserts  | 21 sec      | 8.1 sec, SaveIndex() = 0.9 sec | 4.3 sec, SaveIndex() = 1.1 sec
Next 1 million inserts   | 21 sec      | 8.2 sec, SaveIndex() = 1.5 sec | 6.8 sec, SaveIndex() = 1.4 sec

As you can see, the in-memory index is very fast in comparison to the disk-based index, as everything is done in memory and there is no flushing to disk for every MaxItemsBeforeIndexing items indexed. You still have the ability to save the index to disk, and this is also very fast as the whole index is written at once. The downside is the memory overhead of keeping the entire index in RAM.

Performance Test v1.7.5

In version 1.7.5, a test inserting 20,000,000 Guid items was added to stress test the system. Again, this test was done on the machine mentioned above; here are the results:

Total Insert time = 552 seconds 

Total Insert + Indexing time = 578 seconds

Peak Memory Usage = 1.6 Gb

Fetch all values time = 408 seconds

Using the Code

Using the code is straightforward, as in the following example:

// open or create the store: files under "docs", 16-byte keys (a Guid),
// duplicates allowed, MurMur hash index
RaptorDB.RaptorDB rap = RaptorDB.RaptorDB.Open("docs\\data.ext", 16, true, INDEXTYPE.HASH);

Guid g = Guid.NewGuid();
rap.Set(g, new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 });   // store the value under the Guid key

byte[] bytes = null;
if (rap.Get(g, out bytes))
{
    // data found and returned in the bytes array
}

The Open method takes the path for the data and index files (it will use the filename minus the extension for the data, index and log file names, and create the directory if needed), the maximum key size (16 in the example, for Guid keys), a boolean indicating whether duplicate keys are allowed, and the indexing method, which is either BTREE or HASH.

Using the Code v1.1

InMemoryIndex and SaveIndex

RaptorDB.RaptorDB rap = RaptorDB.RaptorDB.Open("docs\\data.ext", 16, true, INDEXTYPE.BTREE);
rap.InMemoryIndex = true;   // keep the index in RAM instead of flushing it while indexing

// do some work

rap.SaveIndex();            // save the in-memory index to disk on demand

How It Works

(Image: RaptorDB block diagram)

RaptorDB has two main parts: the in-memory burst insert handler and the background indexer. When you start inserting values into RaptorDB, the system instantly writes the data to the storage file and to a numbered log file. This ensures that the data is consistent and safe from crashes. The log is also kept in memory, essentially as a map of keys to file pointers into the storage file, giving fast in-memory access to data even when the indexer has not run yet. After MaxItemsBeforeIndexing items have been written to the log file, a new log file is created and the old one is handed to the background indexer queue.

The background indexer will index the log file contents using the method you specified and, when done, will free the log file contents from memory. This design ensures maximum performance and sidesteps the concurrency issues around index updates that are the bane of b+tree indexers (there is only one thread updating the index).

Searches for keys will hit the current log file in memory first, then the queued log files in memory, and lastly the disk-based index file.
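
To make the flow above concrete, here is a heavily simplified, in-memory sketch of the same pattern. The class, field and helper names are invented for illustration; this is not RaptorDB's actual source, which persists the storage and log files to disk and uses a b+tree or hash index rather than plain dictionaries.

using System;
using System.Collections.Generic;
using System.Collections.Concurrent;
using System.Threading;

// Sketch of the design described above: values are appended to a storage
// list (standing in for the storage file), the current "log" maps keys to
// record numbers, and full logs are queued for a background indexer that
// merges them into the main index.
class MiniStore
{
    const int MaxItemsBeforeIndexing = 20000;

    readonly List<byte[]> _storage = new List<byte[]>();                     // append-only data "file"
    readonly ConcurrentQueue<Dictionary<Guid, int>> _logQueue =
        new ConcurrentQueue<Dictionary<Guid, int>>();                        // logs waiting to be indexed
    readonly Dictionary<Guid, int> _mainIndex = new Dictionary<Guid, int>(); // stands in for the b+tree / hash index
    Dictionary<Guid, int> _currentLog = new Dictionary<Guid, int>();

    public MiniStore()
    {
        new Thread(IndexerLoop) { IsBackground = true }.Start();             // the single background indexer thread
    }

    public void Set(Guid key, byte[] value)
    {
        lock (_storage)
        {
            _storage.Add(value);                                             // write the data immediately
            _currentLog[key] = _storage.Count - 1;                           // key -> record number
            if (_currentLog.Count >= MaxItemsBeforeIndexing)
            {
                _logQueue.Enqueue(_currentLog);                              // hand the full log to the indexer
                _currentLog = new Dictionary<Guid, int>();                   // start a new log
            }
        }
    }

    public bool Get(Guid key, out byte[] value)
    {
        int rec;
        lock (_storage)
        {
            if (_currentLog.TryGetValue(key, out rec) ||                     // 1. current log
                FindInQueuedLogs(key, out rec))                              // 2. queued logs
            {
                value = _storage[rec];
                return true;
            }
        }
        lock (_mainIndex)
        {
            if (_mainIndex.TryGetValue(key, out rec))                        // 3. main index
            {
                lock (_storage) value = _storage[rec];
                return true;
            }
        }
        value = null;
        return false;
    }

    bool FindInQueuedLogs(Guid key, out int rec)
    {
        foreach (Dictionary<Guid, int> log in _logQueue)
            if (log.TryGetValue(key, out rec))
                return true;
        rec = -1;
        return false;
    }

    void IndexerLoop()
    {
        while (true)
        {
            Dictionary<Guid, int> log;
            if (_logQueue.TryPeek(out log))
            {
                lock (_mainIndex)
                    foreach (KeyValuePair<Guid, int> kv in log)
                        _mainIndex[kv.Key] = kv.Value;                       // index the log contents
                _logQueue.TryDequeue(out log);                               // release the log only once indexed
            }
            else
                Thread.Sleep(50);
        }
    }
}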

Tweaking

There are a few parameters which can be tweaked to your individual requirements; these include the following:

  • DEFAULTNODESIZE: This is the number of items in each index page; the default is 200.
  • BUCKETCOUNT: This is a prime number which controls the number of buckets for MurMur hashing, the default is 10007.
  • MaxItemsBeforeIndexing: The number of items in each log file and the number of items in memory before swapping to new log file and indexing starts. The default is 20,000.

As a general rule of thumb, if you have less than 1 million items (with the above default values), then go for the b+tree index as it is faster. If you have more than 2 million items, then MurMur hashing is faster and indexing time is constant within every 1 million records indexed.
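
For example, a hedged illustration of applying this rule of thumb when opening a store; Open's signature is the one shown in the Using the Code section, and the expected item count is a made-up figure standing in for whatever you know about your own workload:

// choose the index type from the expected data volume (illustrative only)
long expectedItems = 5000000;                                      // hypothetical estimate for your data set
INDEXTYPE indexType = expectedItems < 1000000 ? INDEXTYPE.BTREE : INDEXTYPE.HASH;
RaptorDB.RaptorDB rap = RaptorDB.RaptorDB.Open("docs\\data.ext", 16, true, indexType);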

B+tree

For b+tree indexes, the only parameter to tweak is DEFAULTNODESIZE; the larger the value, the lower the overall height of the tree from root to leaves, which requires fewer disk seeks and page fetches, but it also increases the page size on disk and the memory usage.

MurMur Hash

For MurMur indexes, you have two parameters to tweak: DEFAULTNODESIZE and BUCKETCOUNT. The product of these values gives you the maximum data count supported with direct indexing (additional data will overflow into new buckets and require another disk seek). Again, DEFAULTNODESIZE determines the page size on disk.

Index File Sizes

Index files are proportional to the number of pages in the index. You can compute the page size from the following formula:

Page size = 23 + ( (max key size + 17) * items in page) ~ 23+(33*200) = 6,623 bytes [ for 16 byte keys ]

File header = 19 + ( 8 * bucketcount) ~ 19+(8*10007) = 80,075 bytes

For b+tree indexes, leaf nodes/pages are statistically 70-80% full, and inner leaf nodes are about 30-50% full which averages out to about 75% full overall. So you would require about 25% more nodes/pages than your data count.

For MurMur hashing, the index file is allocated for the max number of data items anyway and overflows into additional buckets when you go over the limit. (e.g. 200*10007 ~ 2million data items limit which equates to 10007 * 6.6kb ~ 67Mb index file).
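
As a quick back-of-the-envelope check (plain arithmetic, not engine code), plugging the defaults into the formulas above reproduces the figures quoted in this section:

// defaults: 16-byte keys, DEFAULTNODESIZE = 200, BUCKETCOUNT = 10007
int maxKeySize = 16, itemsPerPage = 200, bucketCount = 10007;

int pageSize   = 23 + ((maxKeySize + 17) * itemsPerPage);   // 23 + 33 * 200  = 6,623 bytes
int fileHeader = 19 + (8 * bucketCount);                    // 19 + 8 * 10007 = 80,075 bytes

long murmurCapacity = (long)itemsPerPage * bucketCount;     // ~2 million items before overflow buckets
long murmurFileSize = (long)bucketCount * pageSize;         // 10007 * 6,623 bytes ~ 66 MB index file

Console.WriteLine("{0:N0} B/page, {1:N0} B header, {2:N0} items, ~{3:N0} MB index",
    pageSize, fileHeader, murmurCapacity, murmurFileSize / 1000000);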

Memory Usage

Burst inserting 1 million records takes about 80MB of memory for the internal log file cache; that is 1 million * (16-byte key + 8-byte offset) plus .NET Dictionary overhead.

MurMur hash indexing will take about MaxItemsBeforeIndexing * [page size] + .NET overhead + [overflow pages] * [page size], which for the default values is around 120MB if the pages are full.

B+tree indexing memory usage grows as the size of the data increases and is hard to quantify, but as a worst case it is about MaxItemsBeforeIndexing * [page size] * (1 + 0.25 + 0.1) + .NET overhead.

What About Failures?

There are 4 types of failures:

  1. Non-clean shutdown in the middle of indexing (power failure / crash): In the case of b+tree indexing, the system will recover automatically if the failure happened before writing to disk; otherwise the external recovery program is required. For MurMur hashing, system crashes have no effect on indexing and the system recovers automatically.
  2. Index file corruption/deletion: You lose access to your data, but the data still exists. Index files can be rebuilt from the data file via an external recovery program which scans the data file for rows and creates log files.
  3. Log file corruption/deletion: You lose access to recently added data which has not been indexed yet. You can extract the log files again via an external recovery program.
  4. Data file corruption: You should have backed up your data! This scenario is extremely rare, as data is only appended and never overwritten, so at most you might lose the last record inserted; the external recovery program can scan the data file and restore what is valid.

Data is automatically flushed on inserts, so the above scenarios are rare, but it does give peace of mind to know that RaptorDB is recoverable.

My Rant

Ever since I started writing articles for CodeProject, I have been amazed at how much of the code we write is bloated and disproportionately large for the task at hand; cases in point are this article, my previous fastJSON article, and even the mini log4net replacement. I would push this further and take a swipe at RDBMS vendors and ask why, when I can perform at 80% of my hard disk speed in a managed runtime, their products have such dismal performance in comparison on native code. It could be argued that they do more, and they certainly do, but the point remains that they could do better, with less code and smaller executables. I understand the position of developers: they have to get the job done as fast as possible with managers and customers breathing down their necks, and they take the easiest path to ship things out in the least amount of time, to the detriment of quality, speed and size.

I propose that we, as developers, take a little pride in what we do and create / write better and more optimized programs, and in the short and long term better everyone in the process.

vNext, Future Work and Call to Arms

I'm planning to implement the following features, in the general direction of a full-fledged document database in the realm of mongodb and couchdb:

  • Query-able content via Map/Reduce or similar functionality
  • Column/bitmap orientated indexes for fast queries on huge amounts of data

So if anyone is interested in the above and is up to the challenge of writing non-bloated and highly optimized code, drop me a line.

Closing Words

Most times, we have to see what is possible to believe, so show your love and vote 5.

Appendix 1 - Version 1.7 Changes

A lot of changes were made as of version 1.7, the most prominent being the use of WAH compressed bitmap indexes for encoding duplicate key values. This new bitmap index is very efficient in both storage and query performance and paves the way for future versions of RaptorDB with full query capabilities.
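
For readers unfamiliar with the technique, below is a minimal, generic sketch of the word-aligned hybrid idea, not RaptorDB's implementation: the bitmap is cut into 31-bit groups, runs of all-zero or all-one groups collapse into single fill words, and everything else is kept as literal words.

using System.Collections.Generic;

// Illustration of WAH compression only; the input is assumed to be already
// packed into 31-bit groups, and run lengths are stored in the low 30 bits.
static class WahSketch
{
    const uint FillFlag = 0x80000000;   // bit 31 set: this is a fill word
    const uint FillBit  = 0x40000000;   // bit 30: the value of the filled bits
    const uint AllOnes  = 0x7FFFFFFF;   // a 31-bit group with every bit set

    public static List<uint> Compress(IEnumerable<uint> groups31)
    {
        var result = new List<uint>();
        uint runBit = 0, runLength = 0;                     // pending fill run

        foreach (uint g in groups31)
        {
            bool fill = (g == 0 || g == AllOnes);
            uint bit = (g == AllOnes) ? 1u : 0u;
            if (fill && runLength > 0 && bit == runBit)
            {
                runLength++;                                // extend the current run
                continue;
            }
            Flush(result, runBit, ref runLength);
            if (fill) { runBit = bit; runLength = 1; }      // start a new run
            else result.Add(g);                             // literal word (bit 31 clear)
        }
        Flush(result, runBit, ref runLength);
        return result;
    }

    static void Flush(List<uint> result, uint runBit, ref uint runLength)
    {
        if (runLength == 0) return;
        if (runLength == 1)                                 // a single group compresses no better than a literal
            result.Add(runBit == 1 ? AllOnes : 0);
        else
            result.Add(FillFlag | (runBit == 1 ? FillBit : 0) | runLength);
        runLength = 0;
    }
}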

Read speeds are now on par with write performance; on my test machine, 1 million reads are done in 16 seconds.

A unit test project has been added to RaptorDB to verify that everything works correctly.

Main Methods

  • Set(Guid, byte[]) : set a Guid key and byte array value, returns void
  • Set(string, string) : set a string key and string value, returns void
  • Set(string, byte[]) : set a string key and byte array value, returns void
  • Set(byte[], byte[]) : set a byte array key and byte array value, returns void
  • Get(Guid, out byte[]) : get the value for the Guid key into the byte array output parameter, returns true if the key was found
  • Get(string, out string) : get the value for the string key into the string output parameter, returns true if the key was found
  • Get(string, out byte[]) : get the value for the string key into the byte array output parameter, returns true if the key was found
  • Get(byte[], out byte[]) : get the value for the byte array key into the byte array output parameter, returns true if the key was found
  • EnumerateStorageFile() : returns all the contents of the main storage file as an IEnumerable<KeyValuePair<byte[], byte[]>>
  • GetDuplicates(byte[]) : returns the main storage file record numbers of the duplicates of the specified key as an IEnumerable<int>
  • FetchDuplicate(int) : returns the value from the main storage file as byte[], used with GetDuplicates
  • Count() : returns the number of items in the database
  • SaveIndex() : if InMemoryIndex is true, saves the in-memory indexes to disk and clears the processed log files
  • SaveIndex(bool) : same as above, but if the parameter is set, all in-memory log files are flushed to disk and the log count restarts at 0
  • Shutdown() : closes all files and stops the engine
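
The sketch below exercises several of these methods together; the paths and values are placeholders, and it assumes the store was opened with duplicates allowed so that GetDuplicates has something to return:

var db = RaptorDB.RaptorDB.Open("docs\\data.ext", 16, true, INDEXTYPE.BTREE);

byte[] key = Guid.NewGuid().ToByteArray();
db.Set(key, new byte[] { 1, 2, 3 });
db.Set(key, new byte[] { 4, 5, 6 });                  // same key again -> kept as a duplicate/historical value

byte[] current;
if (db.Get(key, out current))
{
    // current holds the value the index currently points at for this key
}

foreach (int recno in db.GetDuplicates(key))          // record numbers of the stored values for this key
{
    byte[] older = db.FetchDuplicate(recno);          // fetch a duplicate straight from the storage file
}

foreach (KeyValuePair<byte[], byte[]> row in db.EnumerateStorageFile())
{
    // row.Key / row.Value are read directly from the main storage file
}

Console.WriteLine(db.Count());                        // number of items in the database
db.Shutdown();                                        // close all files and stop the engine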

Appendix 2 - File Formats

The following file formats are used in RaptorDB :
  • *.mgdat : holds the actual values for the keys and is the main file
  • *.mgrec : holds record number to mgdat file offset mapping for information retrieval
  • *.mgidx : holds the indexing information for b+tree or murmur hash
  • *.mglog : holds the preindexing information
  • *.mgbmp : holds the bitmap encoded record numbers for key duplicates
  • *.mgbmr : holds the record number to mgbmp file offset mapping

File Format : *.mgdat

Documents are stored in JSON format in the following structure on disk:
(Image: *.mgdat record structure)

File Format : *.mgbmp

Bitmap indexes are stored in the following format on disk :
(Image: *.mgbmp record structure)
The bitmap row is variable in length and will be reused if the new data fits in the record size on disk; if not, another record will be created. For this reason, a periodic index compaction might be needed to remove unused records left over from previous updates.

File Format : *.mgidx

The B+Tree and the MurMurHash indexes are stored in the following format:
(Image: *.mgidx page structure)

File Format : *.mgbmr , *.mgrec

Both files are a series of long values written to disk with no special formatting; they map record numbers to offsets in the bitmap index file (*.mgbmr) and the main storage file (*.mgrec) respectively.

File Format : *.mglog

The preindexing log information is stored in the following format :
(Image: *.mglog record structure)

Appendix 3 - v1.8

A lot of changes were made in this version, the biggest of which was the transition of the internal structures to generics. Also, unlimited key sizes are now supported with the RaptorDBString engine.

Input throttling has been removed, as the log files reveal it is not needed (see below).

Shutdown will now flush the in-memory logs to the index when Globals.SaveMemoryIndexOnShutdown is set to true. This will obviously add time to the shutdown process.

Using RaptorDBString and RaptorDBGuid

Two new engines have been added: RaptorDBString for unlimited key sizes and RaptorDBGuid for fast Guid key handling. These engines take the input data and use the MurMur hash to generate an int32 which is used as the key for the b+tree.

This offers speed improvements and, most importantly, reduced memory usage. The downside is that because the keys are hashed, you lose sort order in the b+tree. For Guid keys this makes no difference, but you must be aware of it for string keys.

Because hashing is used, there is a chance of hash conflicts. The engine will take care of this and give you back the correct data when needed (internally a conflict is encoded as a duplicate key, so be aware of this when using GetDuplicates).

The RaptorDBString constructor takes a parameter controlling whether string keys are case sensitive.

var db = new RaptorDBGuid(@"c:\RaptorDbTest\RawFileOne");               // Guid key engine

var rap = new RaptorDBString(@"c:\raptordbtest\longstringkey", false);  // case-insensitive string keys
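
The article only shows the constructors, so the following usage sketch assumes both engines expose the same Set/Get pattern as the main RaptorDB class; treat the method shapes as an assumption rather than documented API:

// hypothetical usage of the two engines created above
Guid id = Guid.NewGuid();
db.Set(id, new byte[] { 1, 2, 3 });                   // RaptorDBGuid: Guid keys, hashed to int32 internally

string longKey = @"c:\some\very\long\path\that\would\not\fit\in\a\255\byte\key.dat";
rap.Set(longKey, new byte[] { 4, 5, 6 });             // RaptorDBString: unlimited key length

byte[] data;
if (rap.Get(longKey, out data))
{
    // key found; remember that hashed keys lose b+tree sort order, and hash
    // collisions surface through the duplicate-key mechanism
}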

Indexing performance

A look at the log files generated in this version shows that the indexer thread is on average 3.8 logs behind the insert threads during 10,000,000 continuous inserts (3.8 * 20,000 = 76,000 items) [on my test machine]. This is quite respectable, and is why input throttling was removed.

New internal data types rdbInt, rdbLong, rdbByteArray

The conversion to generics required the creation of new data types to wrap the int32, long and byte[] types, in order to implement the interfaces needed by the generic engine. With these 3 types you can store any other value type key, as they will map directly to one of them. A sample of rdbInt is below, with the implementation of the IRDBDataType<> interface.

    public class rdbInt : IRDBDataType<rdbInt>
    {
        public rdbInt()
        {
        }
        public rdbInt(int i)
        {
            _i = i;
        }
        public rdbInt(uint i)
        {
            _i = (int)i;
        }
        private int _i;

        // ordering for the b+tree
        public int CompareTo(rdbInt other)
        {
            return _i.CompareTo(other._i);
        }

        // serialization to the index file
        public byte[] GetBytes()
        {
            return Helper.GetBytes(_i, false);
        }

        // equality and hashing for the in-memory dictionaries
        public bool Equals(rdbInt x, rdbInt y)
        {
            return x._i == y._i;
        }

        public int GetHashCode(rdbInt obj)
        {
            return obj._i;
        }

        public bool Equals(rdbInt other)
        {
            return this._i == other._i;
        }

        // deserialization from the index file
        public rdbInt GetObject(byte[] buffer, int offset, int count)
        {
            return new rdbInt(Helper.ToInt32(buffer, offset));
        }

        public override bool Equals(object obj)
        {
            return this._i == ((rdbInt)obj)._i;
        }

        public override int GetHashCode()
        {
            return _i;
        }
    }

History

  • Initial release v1.0: 3rd May, 2011
  • Update v1.1 : 23rd May, 2011
    • Bug fix data row header read
    • Count() function
    • InMemoryIndex and SaveIndex()
    • Restructured project files and folders
    • SafeDictionary bug fix on byte[] keys in logfile.cs -> .net Dictionary can't handle byte[] keys, problem with GetHashCode()
  • Update v1.2 : 29th May, 2011
    • EnumerateStorageFile() : keyvaluepair -> using yield
    • GetDuplicates(key)
    • FetchDuplicate(offset)
    • FreeCacheOnCommit property
  • Update v1.3 : 1st June 2011
    • Thanks to cmschick for testing the string,string version
    • bug fix CheckHeader() in StorageFile.cs
    • bug fix Get() in LogFile.cs
  • Update v1.4 : 13th June 2011
    • added reverse to short, int, long bytes (big endian) so they sort properly in byte compare (in ascending order)
    • read operation on data file uses another stream
    • bug fix duplicate page not set in keypointer
  • Update v1.4.1 : 17th June 2011
    • bug fix a silly typo in helper.compare thanks to eagle eyed Nicolas Santini
  • Update v1.5 : 22nd June 2011
    • changed long to int for smaller index size and lower memory usage, you should get 50% saving in both
    • all internal operations are on record numbers instead of file offsets
    • added a new file (mgrec extension) for mapping record numbers (int) to file offset (long)
    • ** index files are not backward compatible with pre v1.5 **
    • bug in unsafe bitconvert routines, using ms versions until workaround found, thanks to Dave Dolan for testing
  • Update v1.5.1 : 28th June 2011
    • unsafe bitconvert fixed
    • speed optimized storage write to disk, removed slow file.seek and file.length (amazingly, moving from long to int took a ~50% performance hit; now back to within ~5% of the original code speed)
  • Update v1.5.2 : 10th July 2011
    • bug fix int seek overflow in index file
    • changed to flush(true) for all files thanks to atlaste
    • bug fix hash duplicate code
  • Update v1.6 : 31st July 2011
    • IndexerThread will do indexing continuously until the queue is empty
    • resolved a concurrency issue with Get() and IndexerThread
    • replaced flush(true) with flush() for .net2 compatibility
  • Update v1.7 : 15th October 2011
    • Implemented Shutdown() to close all files and shutdown the engine
    • Duplicates are now encoded as WAH bitarrays (hybrid b+tree/bitmap and hash/bitmap indexes)
    • Special case WAH bitmap for optimized memory usage
    • ** index files are not backward compatible with pre v1.7 because of duplicate encoding and compare changes **
    • Now using IEnumerable<> instead of List<> for lower memory usage
    • Code restructuring and cleanup
    • SaveIndex(true) will now flush all in-memory structures to disk indexes
    • Added unit test project
    • Optimized compare() routine ~18% faster, thanks to Dave Dolan
    • Read speed optimized 10x faster
    • Separate read/write streams for bitmap indexes
    • Added Appendix sections 1,2 to article
  • Update v1.7.5 : 21st October 2011
    • added ThrottleInputWhenLogCount = 400 this kicks in after about 8 million items in the log queue
    • btree cache clearing now only clears leaf nodes to save memory and keep performance
    • optimized the indexer thread
    • optimized Dave's compare()
    • added Twenty_Million_Set_Get_BTREE() test to unit tests
    • bug fix shutdown() thanks to chprogramer for testing
    • moved from SortedList to Dictionary for internal caches
    • renamed the article title to make room for the next version
    • add the raptordb logo
  • Update v1.8 : 13th November 2011
    • hash caching activated, read speed is on par with btree
    • zero length files are deleted on shutdown
    • bug fix hash shutdown
    • new tests to test shutdown and flushing logic
    • Added RaptorDBString with unlimited key size
    • Count() now uses the storage file contents instead of the index (faster for huge row count >10million)
    • Changed internals to generics
    • Implemented IDisposable for clean shutdown
    • optimized bitmap index read speed
    • added minilogger for log progress files
    • bug fix WAHBitArray
    • Shutdown can flush memory logs to index on a flag
    • added SaveMemoryIndexOnShutdown to Global
    • Added Appendix 3 for v1.8 changes
  • Added reference to the new article : 19th January 2012
  • 23rd January 2012 : Fixed the link to v2

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
