
ODBC Database Access – A Templatised Approach

22 Nov 2004
A library of template classes that enables the rapid production of client-side database code.

Introduction

This article is the first in a short series on a templatised approach to writing ODBC database access code in C++. This first installment presents a set of classes that allow generic access to, and updating of, ODBC data sources. The code provides an API that allows a programmer to read tables, traverse links between tables, and update tables, including cascaded updates. Furthermore, a transaction framework has been implemented to give the programmer full commit or rollback functionality regardless of the database platform. There is also considerable scope for extending the framework to deal with more complex issues such as tree-walking.

Background

Whenever you write database access code in C++ on Windows using ODBC, it seems that you spend most of your time writing boiler-plate code. Microsoft has tried to reduce the burden of this by producing wizards to help you write customized versions of CRecordset, but I still find the process frustrating at best.

CRecordset and CDatabase from MFC are a useful pair of classes; unfortunately, in many complex applications, they are fundamentally flawed. CDatabase provides a basic transaction framework; however, as far as I am aware, this simply delegates to the database, which is fine for a database that supports transaction rollback, but leaves us without that functionality on a simple database such as MS Access.

CRecordset has more fundamental problems. By coupling the concept of iterating over a dataset with the concept of writing data into it, we get a simple-to-use class, but one with little control over exactly when data will be written back to the database. This is particularly problematic in more complex applications such as tree-walkers, where a database table contains a foreign key back to itself; ensuring that the tree maintains consistency in a multi-user environment is virtually impossible. CRecordset also contains no real transaction framework support, so it can be difficult to add concepts such as pre- and post-insert actions.

In a recent project at work, I used a custom built framework that solved many of these problems; however, because the system used customized classes instead of CRecordset, a considerable period of time was spent generating boiler-plate code. I hate writing boiler-plate code, so I decided to have a go at writing my own database access library using templates.

How it works

The system relies on a number of libraries, including MFC, Boost, and Loki. Before you can use this system, you must have all of these libraries installed and set up in your include and link paths as appropriate.

In essence, the whole of this system centers on four class hierarchies. These are DatabaseDef, TableDef, ColumnDef, and PersistentObject.

A simple class diagram of the key classes

DatabaseDef is the key entry point into the class hierarchy. It is templatised on a string, which may appear a little odd at first glance, but there is a good reason for it. Glancing at the public API, we see that DatabaseDef is actually remarkably simple. It allows us to set and get the DSN, start or end a transaction, execute SQL, or add items to the change queue. It supports two of the key concepts behind the transaction framework. The first is the concept of a transaction itself: BeginTransaction, CommitTransaction and RollbackTransaction are the key functions supporting this concept. The second is the concept of pre- and post-commit actions. When you call CommitTransaction, three things happen: all the pre-commit actions are completed in the order that you added them; the change queue is actioned in a customized order; and the post-commit actions are completed in the order that you added them.

The reason for making DatabaseDef templatised is simply so that I can make it into a singleton. This eases access from tables or persistent objects back to the database. It also allows other core classes to be strictly typed from the database class, thereby increasing database type safety. If you examine the class closely, you will note that DatabaseDef aggregates the MFC CDatabase, via a wrapper class designed to support customized parameter binding during UPDATE and CREATE queries.

template <char * DBNAME>
class DatabaseDef
{
public:
    // Access the singleton through DatabaseDef::MySingleton::Instance()

    typedef Loki::SingletonHolder< DatabaseDef<DBNAME> > MySingleton;
    friend struct Loki::CreateUsingNew< DatabaseDef<DBNAME> >;

public:
    // Public API

    // Set and get the DSN string

    void SetDSN( const std::string &dsn );
    const std::string &GetDSN() const;
    // Access the transaction framework

    void BeginTransaction();
    void CommitTransaction();
    void RollbackTransaction();

    // Get access to the CDatabase object

    CDatabaseWrapper *GetDatabase();

    // Add an entry to the change queue

    void AddToChangeQueue( ChangeQueueEntryPtr entry );
    // Add an entry to the pre-commit list

    void AddToPreCommitList( PreCommitActionPtr entry );
    // Add an entry to the post-commit list

    void AddToPostCommitList( PostCommitActionPtr entry );

    // Execute sql

    void ExecuteSQL( const std::string &sql );

};
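
As an illustration of how these hooks might be used, the sketch below registers a hypothetical pre-commit action and then commits a transaction. The member function name Execute, and the way PreCommitActionPtr is constructed, are assumptions made for this example; the exact signatures are in the downloadable source. The dbString setup mirrors the code shown later in the article.

// A minimal sketch of the transaction framework in use. The virtual
// function name Execute() is an assumption - see the PreCommitAction
// declaration in the source for the real signature.

class AuditAction : public PreCommitAction
{
public:
    // Called by CommitTransaction before the change queue is actioned
    virtual void Execute()
    {
        // e.g. validate pending changes or write an audit record
    }
};

// dbString is a string with external linkage, defined in a configuration cpp file
extern char dbString[];
typedef DatabaseDef< dbString > DBDef;

void UpdateWithAudit()
{
    DBDef &db = DBDef::MySingleton::Instance();
    db.SetDSN( "Test" );

    db.BeginTransaction();
    db.AddToPreCommitList( PreCommitActionPtr( new AuditAction ) );
    // ... create or modify persistent objects here ...
    db.CommitTransaction(); // pre-commit actions, change queue, post-commit actions
}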

TableDef is essentially the equivalent of the part of CRecordset that gets data from the database. It consists of two layers of inheritance: the top level is a pure virtual interface class that eases the usage of the class, and the second is the actual templatised class. There are three template parameters: the DatabaseDef class, a string giving the table name, and a type list of column definitions. The type list is the clever part of the system. It encapsulates the structure of the table at compile time, and means that you avoid writing boiler-plate code to get column values from the database and to give them back to the user. It was necessary to do some template meta-programming in order to get this concept working, but generally I have relied on the Loki library to do this for me. The only custom meta-programming I did was a FindPrimaryKey class that is used to extract the primary key from a type list of column definitions.

The public interface for the TableDef class again describes quite neatly what the class can be used for. There is a set of navigation functions, remarkably similar to CRecordset's. We can also force a requery of the data set. Both field values and column definitions can be extracted by index, and we can automatically extract the primary key. Finally, there is an API to allow persistent objects to be created from a particular row, or to allow a new row to be generated. Again, the fundamental engine behind this class is the MFC class CRecordset, primarily because CRecordset is actually quite good at extracting data from databases.

template < class DBDEF, char *TABLENAME, class COLDEFS >
class TableDef : public TableDefInterface
{
public:
    typedef typename boost::shared_ptr< TableDef< DBDEF, 
                            TABLENAME, COLDEFS > > _ptr;
    static _ptr GetTableDef();
    static _ptr GetTableDef( const WhereClause &where );

public:
    virtual ~TableDef();

    // Navigation functions

    virtual bool IsBOF() const;
    virtual bool IsEOF() const;

    virtual void First();
    virtual void Last();
    virtual void Next();
    virtual void Prev();

    // Force a requery of the dataset

    virtual void Requery();

    // Access the data

    virtual const SQLValue &GetFieldValue( unsigned int index ) const;
    virtual const ColumnDefInterface &GetColumnDef( unsigned int index ) const;

    // Return the number of columns in this class

    static unsigned int GetNumberOfColumns();
    // Return the first column def with the primary key set

    static ColumnDefInterface *GetPrimaryKey();
    static unsigned int GetPrimaryKeyColumnIndex();

public:
    // API for create persistent objects on this table

    typedef typename PersistentObject<DBDEF, TABLENAME, 
                    COLDEFS>::_ptr persistentObjectPtr;
    persistentObjectPtr NewRow();
    persistentObjectPtr NewRow( const PersistentObjectPtr &dependentEntry );
    persistentObjectPtr AccessRow();

};

PersistentObject is the key class involved in writing data into the database. It is factory generated by a TableDef. The PersistentObject is actually made up of three layers of inheritance; the top level interface is a pure virtual interface to a number of the functions. The next two levels define various templatised functionality, with the middle level providing functionality that is solely a concern of the database, and does not need the table name or column definitions to operate.

PersistentObject is an interesting class because it is designed to be extensible in a number of ways. To fully understand the persistent object, you need to look at the four different states it can exist in. When a persistent object is first created, it will often be in the transient state, meaning that it will do nothing when the transaction is committed to the database. The remaining states are insert, delete and update. The insert state is automatically set through the appropriate factory functions in TableDef, and the update state is automatically set when a column is updated. The delete state is the only one that must be explicitly set by the programmer. As soon as a persistent object moves away from the transient state, it automatically adds itself to the change queue. By using Boost smart pointers, it is possible to ensure that the persistent object's lifetime extends until the change queue is processed.

The persistent object implements the final tier of the transaction processing system: the pre and post actions for insert, delete, and update. These can be implemented on a particular persistent object by simple inheritance, and provide a powerful way to implement such features as tree-walkers. For example, a tree object might want to store everything under the tree when the head object has been updated. Rather than being based on the flawed CRecordset interface, this class generates and sends the necessary SQL directly to the database. The SQL required is automatically generated from the column definitions, and the system also automatically binds parameters for binary objects, where they cannot be included directly in the SQL statement.

template< class DBDEF, char *TABLENAME, class COLDEFS >
class PersistentObject : public PersistentObjectDB<DBDEF>
{
public:
    typedef typename boost::shared_ptr< 
            PersistentObject<DBDEF, TABLENAME, 
            COLDEFS> > _ptr;
public:
    virtual ~PersistentObject();

    // Override these functions to perform

    // specific actions before and after database access

    virtual void PerformPreInsertActions() {}
    virtual void PerformPostInsertActions() {}
    virtual void PerformPreDeleteActions() {}
    virtual void PerformPostDeleteActions() {}
    virtual void PerformPreUpdateActions() {}
    virtual void PerformPostUpdateActions() {}

    // Accessors and mutators for the columns and field values

    virtual const SQLValue &GetFieldValue( unsigned int index ) const;
    virtual const ColumnDefInterface &GetColumnDef( unsigned int index ) const;
    virtual void SetFieldValue( unsigned int index, const SQLValue &value );

    // Return the number of columns in this class

    static int GetNumberOfColumns();
};
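
To give a flavour of how the hooks are used, the sketch below derives from the persistent object for the Categories table (the typedefs are defined in the Using the code section later on) and overrides one of them. Exactly how such a derived class is wired into the TableDef factory is not shown here, and is an application-level decision.

// A minimal sketch: customise a persistent object by overriding a hook.
// DBDef, Categories and CategoriesColumns are the typedefs/strings
// defined in the "Using the code" section below.

class AuditedCategory :
    public PersistentObject< DBDef, Categories, CategoriesColumns >
{
public:
    // Runs after the UPDATE statement has been sent to the database;
    // a tree-walker could, for example, queue updates for child rows here.
    virtual void PerformPostUpdateActions()
    {
        // e.g. write a log entry or add further objects to the change queue
    }
};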

The final class of the core four is the ColumnDef class. Again, this class is implemented with an inheritance hierarchy of three classes; however, in contrast to the persistent object hierarchy, where only the bottom class is concrete, two classes in this hierarchy are concrete. Again, there is an interface class; below that is the most commonly used column definition, which maps a column onto a value and also identifies whether the column is the primary key for the table. Below that class is the ColumnJoinDef class. This class defines a foreign key within a table, and provides a basis for the meta-programming in TableDef to automatically traverse joins and return data from the other side of the join. It also provides functionality to support cascading inserts, updates and deletes within the persistent object framework.

class ColumnDefInterface
{
public:
    template <class TABLEDEF>
    typename TABLEDEF::_ptr GetTableDef() const;

    template <class TABLEDEF>
    typename TABLEDEF::persistentObjectPtr
        GetPersistentObject( const PersistentObjectPtr &dependentEntry ) const;

protected:
    virtual WhereClause GetWhereClause() const;
};

template < char *COLNAME, class COLTYPE, bool PRIMARYKEY = false >
class ColumnDef : public ColumnDefInterface
{
public:
    virtual const COLTYPE &GetValue() const;
    void SetValue( const COLTYPE &value );
};

template < char *COLNAME, class COLTYPE, class TABLEDEF >
class ColumnJoinDef : public ColumnDef<COLNAME, COLTYPE>
{
protected:
    virtual WhereClause GetWhereClause() const;
};

The remainder of the class structure within the framework is relatively self-explanatory. I will run through these classes in a semi-logical order, explaining what each is for.

  • DBException is a very simple exception class that deals with all errors within the framework. I have left it simple in the knowledge that most people wanting to use this framework will probably customize it to work more appropriately within their application.
  • ExpressionNode is the start of a simple expression tree. At present, it is only used to describe simple where clauses; eventually, it will be used to support custom creation of complex SQL queries on the database.
  • QueryNode is the start of a query tree, again designed to be expanded into support for complex SQL queries. At present, the only concrete node in this tree is the WhereClause.
  • ExpressionSQLGenVisitor and QuerySQLGenVisitor are a pair of visitors that collapse the trees into SQL. They are again designed to be extended when I write the querying part of the framework, but at present are used to generate the select SQL used within TableDef. The expression tree and visitors are based on code produced for a previous series of articles of mine, as well as the Loki visitor.
  • ChangeQueueEntry, PreCommitAction and PostCommitAction support transactions. The latter two are abstract base classes, designed for a programmer to inherit from in order to generate pre- or post-commit actions. ChangeQueueEntry provides customized sorting and a simple container for persistent objects.
  • SQLParameterBinder, SQLBinaryParameterBinder and CDatabaseWrapper carry out late binding of parameters.
  • Finally, a number of classes inheriting from SQLValue give null-value support to basic types in C++.

A detailed Class Diagram

Using the code

Defining your database structure using the templates

The test code included within the source is based on the Microsoft Northwind application that comes as an example with MS Access XP. It is assumed that a DSN called Test has already been configured. In order to produce classes to access a table within that application, the following code would be required:

// Assume that dbString, CategoriesCol1 ... and Categories are all

// strings with external linkage


// Set up the database, and initialise the DSN

typedef DatabaseDef< dbString > DBDef;
DBDef::MySingleton::Instance().SetDSN( "Test" );

// Define the categories table

typedef ColumnDef< CategoriesCol1, SQLLong, true > ColCategories1;
typedef ColumnDef< CategoriesCol2, SQLString > ColCategories2;
typedef ColumnDef< CategoriesCol3, SQLString > ColCategories3;
typedef ColumnDef< CategoriesCol4, SQLBinary > ColCategories4;
typedef TYPELIST_4( ColCategories1, ColCategories2,
    ColCategories3, ColCategories4 ) CategoriesColumns;
typedef TableDef< DBDef, Categories, CategoriesColumns > CategoriesTable;
typedef PersistentObject< DBDef, Categories, CategoriesColumns >
    CategoriesPersistentObject;

// Carry out a transaction

DBDef::MySingleton::Instance().BeginTransaction();
// TODO: Perform the transaction

DBDef::MySingleton::Instance().CommitTransaction();

A few notes about this code:

  • dbString is nothing more than a unique identifier for the database.
  • The singleton implementation uses the Loki library.
  • ColumnDef is used to define all normal columns in the table.
  • The primary key for the table must have true as the final template parameter.
  • ColumnJoinDef is used to define foreign keys; it has an additional template parameter giving the dependent table (see the sketch after this list).
  • The type list implementation used is again from Loki; most of the template meta-programming comes from that library as well.
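
As mentioned in the list above, ColumnJoinDef takes an extra template parameter naming the dependent table. The following sketch shows how the Products table used in the later examples might be declared on top of the Categories definitions above; the exact column layout, and the SuppliersTable typedef it refers to, are assumptions made for illustration and follow the same pattern as the Categories definitions.

// Assume Products, ProductsCol1 ... ProductsCol4 are strings with external
// linkage, and that SuppliersTable has been defined in the same way as
// CategoriesTable above.

typedef ColumnDef< ProductsCol1, SQLLong, true > ColProducts1;   // primary key
typedef ColumnDef< ProductsCol2, SQLString > ColProducts2;
// Foreign keys: the extra parameter names the dependent table
typedef ColumnJoinDef< ProductsCol3, SQLLong, SuppliersTable > ColProducts3;
typedef ColumnJoinDef< ProductsCol4, SQLLong, CategoriesTable > ColProducts4;

typedef TYPELIST_4( ColProducts1, ColProducts2,
    ColProducts3, ColProducts4 ) ProductsColumns;
typedef TableDef< DBDef, Products, ProductsColumns > ProductsTable;
typedef PersistentObject< DBDef, Products, ProductsColumns >
    ProductsPersistentObject;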

Accessing a table

Accessing a table is simply a case of getting the table def, and using standard functions to iterate over the records reading the fields. This does not need to take place in a transaction since no updates are performed.

// Query the products table

ProductsTable::_ptr products( ProductsTable::GetTableDef() );
// Iterate over all records

while ( !products->IsEOF() )
{
    // Output some fields

    SQLLong idx = products->GetFieldValue( 0 );
    SQLString name = products->GetFieldValue( 1 );
    cout << (long) idx << ", " << (string) name << endl;
    // Move to the next record

    products->Next();
}

Cascaded access of a table

Cascaded access involves generating a table def off the column def that corresponds to the correct foreign key. The system automatically generates a select statement, with the foreign key used as a where clause on the primary key of the dependent table.

// Query the products table, and prepare

// to query the suppliers and categories tables

ProductsTable::_ptr products( ProductsTable::GetTableDef() );
SuppliersTable::_ptr suppliers;
CategoriesTable::_ptr categories;
// Iterate over the products

while ( !products->IsEOF() )
{
    // Output some fields

    SQLLong idx = products->GetFieldValue( 0 );
    SQLString name = products->GetFieldValue( 1 );
    cout << (long) idx << ", " << (string) name << endl;
    // Get the cascaded entries for suppliers and categories

    suppliers = products->GetColumnDef( 2 ).GetTableDef<SuppliersTable>();
    categories = products->GetColumnDef( 3 ).GetTableDef<CategoriesTable>();
    // Output some data about suppliers

    while ( !suppliers->IsEOF() )
    {
        SQLLong idxS = suppliers->GetFieldValue( 0 );
        SQLString nameS = suppliers->GetFieldValue( 1 );
        cout << "     Suppliers: " << (long) idxS 
             << ", " << (string) nameS << endl;
        suppliers->Next();
    }
    // Output some data about categories

    while ( !categories->IsEOF() )
    {
        SQLLong idxC = categories->GetFieldValue( 0 );
        SQLString nameC = categories->GetFieldValue( 1 );
        cout << "     Categories: " << (long) idxC 
             << ", " << (string) nameC << endl;
        categories->Next();
    }
    // Move to the next record

    products->Next();
}

Updating a table

Updating a table is almost as trivial as reading it. When you get to the row you want to update, create a persistent object from the table and edit the values. When the transaction is committed, the data will be updated.

ProductsTable::_ptr products( ProductsTable::GetTableDef() );
while ( !products->IsEOF() )
{
    SQLLong idx = products->GetFieldValue( 0 );
    SQLString name = products->GetFieldValue( 1 );
    cout << (long) idx << ", " << (string) name << endl;
    ProductsPersistentObject::_ptr prods = products->AccessRow();
    SQLString nameNew( (string) name + "_temp" );
    prods->SetFieldValue( 1, nameNew );
    products->Next();
}

Inserting rows into a table

Again this is trivial. Create a table def, create a new persistent object from the table, and set its values.

// Get the categories table

CategoriesTable::_ptr categories( CategoriesTable::GetTableDef() );
// Generate a new persistent object

CategoriesPersistentObject::_ptr cats = categories->NewRow();
// Insert some new data

cats->SetFieldValue( 0, SQLLong( 10001 ) );
cats->SetFieldValue( 1, SQLString( "temp" ) );
cats->SetFieldValue( 2, SQLString( "temp_temp" ) );

Deleting rows from a table

Deleting rows is just as easy. Iterate to the correct row on the table def, create a persistent object, and call Remove on it.

CategoriesTable::_ptr categories( CategoriesTable::GetTableDef(
    WhereClause( ColumnDefNode( ColCategories1( SQLLong( 10001 ) ), "=" ) ) ) );
while ( !categories->IsEOF() )
{
    CategoriesPersistentObject::_ptr cats = categories->AccessRow();
    cats->Remove();
    categories->Next();
}

Cascading Updates

When we cascade updates using the GetDependentPersistentObject template function, the framework ensures that the change queue entries are actioned in the correct order. Other than that, the syntax is the same as that used for non-cascaded updates.

// Query the products table, and prepare to query the suppliers and categories tables

ProductsTable::_ptr products( ProductsTable::GetTableDef() );
SuppliersTable::_ptr suppliers;
CategoriesTable::_ptr categories;
// Iterate over the products

while ( !products->IsEOF() )
{
    // Output some fields

    SQLLong idx = products->GetFieldValue( 0 );
    SQLString name = products->GetFieldValue( 1 );
    cout << (long) idx << ", " << (string) name << endl;
    ProductsPersistentObject::_ptr prods = products->AccessRow();
    // Get the cascaded entries for suppliers and categories

    suppliers = products->GetColumnDef( 2 ).GetTableDef<SuppliersTable>();
    categories = products->GetColumnDef( 3 ).GetTableDef<CategoriesTable>();
    // Output some data about suppliers

    while ( !suppliers->IsEOF() )
    {
        SQLLong idxS = suppliers->GetFieldValue( 0 );
        SQLString nameS = suppliers->GetFieldValue( 1 );
        cout << "     Suppliers: " << (long) idxS 
             << ", " << (string) nameS << endl;
        // Perform a cascaded update if the product id is 4

        if ( (long) idx == 4 )
        {
             SuppliersPersistentObject::_ptr supps =
                prods->GetDependentPersistentObject<SuppliersTable>( 2 );
            SQLString nameNew( (string) nameS + "_new" );
            supps->SetFieldValue( 1, nameNew );
        }
        suppliers->Next();
    }
    // Output some data about categories

    while ( !categories->IsEOF() )
    {
        SQLLong idxC = categories->GetFieldValue( 0 );
        SQLString nameC = categories->GetFieldValue( 1 );
        cout << "     Categories: " << (long) idxC 
             << ", " << (string) nameC << endl;
        // Perform a cascaded update if the product id is 4

        if ( (long) idx == 4 )
        {
            CategoriesPersistentObject::_ptr cats =
                prods->GetDependentPersistentObject<CategoriesTable>( 3 );
            SQLString nameNew( (string) nameC + "_new" );
            cats->SetFieldValue( 1, nameNew );
        }
        categories->Next();
    }
    // Move to the next record

    products->Next();
}

Cascading inserts

Cascading inserts are similar to the cascading updates.

// Get the products table

ProductsTable::_ptr products( ProductsTable::GetTableDef() );
// Generate a new persistent object

ProductsPersistentObject::_ptr prods = products->NewRow();
// Insert some new data

prods->SetFieldValue( 0, SQLLong( 10001 ) );
prods->SetFieldValue( 1, SQLString( "temp" ) );
prods->SetFieldValue( 2, SQLLong( 10002 ) );
prods->SetFieldValue( 3, SQLLong( 10003 ) );
// Get the dependent persistent objects and insert data into them

SuppliersPersistentObject::_ptr supps =
    prods->GetDependentPersistentObject<SuppliersTable>( 2 );
supps->SetFieldValue( 1, SQLString( "temp_supp" ) );
CategoriesPersistentObject::_ptr cats =
    prods->GetDependentPersistentObject<CategoriesTable>( 3 );
cats->SetFieldValue( 1, SQLString( "temp_cats" ) );

Cascading deletes

Cascading deletes are similar to the cascading updates.

// Get the correct record

ProductsTable::_ptr products( ProductsTable::GetTableDef(
    WhereClause( ColumnDefNode( ColProducts1( SQLLong( 10001 ) ), "=" ) ) ) );
while ( !products->IsEOF() )
{
    // Remove the record and its dependent entries

    // Note that no check has been made of whether these dependent entries are

    // used elsewhere - however, a database exception will be thrown if they

    // are

    ProductsPersistentObject::_ptr prods = products->AccessRow();
    prods->Remove();
     SuppliersPersistentObject::_ptr supps =
        prods->GetDependentPersistentObject<SuppliersTable>( 2 );
    supps->Remove();
     CategoriesPersistentObject::_ptr cats =
        prods->GetDependentPersistentObject<CategoriesTable>( 3 );
    cats->Remove();
    products->Next();
}

Exception handling

When using the framework, you need to be aware of two different exceptions that can be thrown. If the TemplateDB framework fails, it will throw a DBException. If something fails in the Microsoft framework, a CDBException will be thrown. At present, either would need to be caught in the calling application, and would result in the change queue being invalidated and the current transaction failing. If the programmer intends database exceptions to be handled as part of normal program flow, a short exception handler would need to be included in the DatabaseDef::CommitTransaction function.
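
For reference, a minimal sketch of such a handler in the calling application might look like the following. It assumes DBException is thrown by value; CDBException, in common with other MFC exceptions, is thrown by pointer and must be cleaned up with Delete(). Whether RollbackTransaction needs to be called explicitly after a failed commit depends on how you customize the framework, so treat this as an outline rather than a prescription.

try
{
    DBDef::MySingleton::Instance().BeginTransaction();
    // ... create, update or delete persistent objects here ...
    DBDef::MySingleton::Instance().CommitTransaction();
}
catch ( DBException &e )
{
    // A failure inside the TemplateDB framework
    DBDef::MySingleton::Instance().RollbackTransaction();
    // report the error as appropriate
}
catch ( CDBException *e )
{
    // A failure in the MFC ODBC layer; MFC throws CDBException by
    // pointer, so it must be cleaned up with Delete()
    DBDef::MySingleton::Instance().RollbackTransaction();
    e->Delete();
}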

Points of interest

The export keyword

Writing a library of code using templates can raise some very obscure problems. One that I had to deal with was circular include dependencies. In a conventional library, circular includes are easily avoided by liberal use of forward declarations, and by moving most of the code into the CPP files. Unfortunately, the fact that the export keyword has not yet been implemented in many compilers means that you cannot move templatised code into the CPP file. This results in more than enough possibility for circular include dependencies. Let me give an example: TableDef contains a list of ColumnDef classes describing the columns in the table. When you want to traverse a foreign key to get another table, the ColumnDef must factory generate a TableDef. All in all, it caused me more than enough headaches. As far as I know, none of Microsoft, Borland or GCC yet supports the export keyword; I hope that they re-think this in the future and look to support this very useful concept.

An auto-cast pattern

The SQLValue class hierarchy demonstrates an interesting pattern that I have called the Auto-Cast pattern. Peculiar to C++, as far as I can tell, it is an invention of one of my programmers at work. The Auto-Cast pattern involves implementing virtual cast operators in the base SQLValue class that default to an error, then overriding them in the derived classes to actually convert the data to the requisite underlying type. It allows pointers or references to the base class to be used polymorphically almost as if they were the underlying types themselves. It is a very neat way of avoiding dynamic casts every time you use a SQLValue, especially when you are absolutely certain of its concrete type. Unfortunately, the pattern does have an Achilles heel. You must be very careful of the use of SQLValue as a parameter to polymorphic functions, or in cases where further implicit casting can create discrepancies. For example, the CLongBinary pointer type could be implicitly cast to a Boolean, and therefore be confused with the SQLBool class. In those cases, explicitly casting the SQLValue using a simple static cast will clear up any confusion.
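
The sketch below illustrates the idea with a cut-down hierarchy rather than the real SQLValue classes: the base class supplies virtual conversion operators that fail by default, and each derived class overrides the conversion it can genuinely satisfy.

#include <stdexcept>
#include <string>

// A stripped-down illustration of the Auto-Cast pattern, not the real
// SQLValue hierarchy.
class Value
{
public:
    virtual ~Value() {}
    // Default conversions fail; derived classes override the valid ones
    virtual operator long() const        { throw std::logic_error( "not a long" ); }
    virtual operator std::string() const { throw std::logic_error( "not a string" ); }
};

class LongValue : public Value
{
public:
    explicit LongValue( long v ) : m_value( v ) {}
    virtual operator long() const { return m_value; }
private:
    long m_value;
};

class StringValue : public Value
{
public:
    explicit StringValue( const std::string &v ) : m_value( v ) {}
    virtual operator std::string() const { return m_value; }
private:
    std::string m_value;
};

// A base-class reference converts as if it were the underlying type,
// with no dynamic_cast at the call site
void UseValue( const Value &v )
{
    long asLong = v;   // fine for LongValue, throws for StringValue
}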

String parameters for templates

ANSI Standard C++ allows many different kinds of parameters to be used on templates. One of these is constant integral values that can be evaluated at compile time. These are rarely seen, but can be very useful. Most people new to templates, when they first see such code, tend to ask why anyone would use an integral template parameter when the value could be stored as a const member variable. In fact, in almost all circumstances, it is probably better to use a const member variable. The key exception to this maxim is when we want to be able to distinguish between class types as a result of this parameter. Often, this becomes important in template meta-programming, but it can be important at other times as well. The difference between the two approaches can be seen by comparing the two classes below:

// A templatised version

template < unsigned int I >
class A
{
public:
    unsigned int GetParam() { return I; }
};

// Using a const member variable

class B
{
public:
    B( unsigned int i ) : m_i( i ) {}
    unsigned int GetParam() { return m_i; }
private:
    const unsigned int m_i;
};

// Functionally A and B are remarkably similar, the

// key difference is that A<1> and A<2> are different types

// whereas B(1) and B(2) are the same type

A pointer to a string constant can also be used as a non-type template parameter; however, the Standard has an additional requirement that makes this non-trivial: the string must be an object with external linkage. It is not immediately obvious what the syntax for this would be, but once we have penetrated it, template parameters of this type can be very useful. In this framework, I used string template parameters to identify the database, the tables, and the individual column names.

// Tucked away in a configuration cpp file,

// we would have the following global variable defined

char dbString[] = "TestDB";
// Then when we wanted to instantiate the template we

// would need the following code

extern char dbString[];
typedef DatabaseDef< dbString > DBDef;

Portability

The framework I have created still relies heavily on the CDatabase and CRecordset classes from MFC. Any port to another platform, or any intention to use the framework without MFC, would require the replacement of these classes. It is worth noting, though, that because CRecordset is used only for reading records, such a port would be considerably easier.

More importantly, by using ODBC, this framework should be portable across a number of database platforms with the minimum of difficulty. Having said that, from past experience, there are a number of issues that can arise between ports: for example, the inconsistent use of narrow and wide strings, and different formats for Boolean parameters. I have tested the framework using MS Access, and I suspect that the system could be made to work on SQL Server with little or no effort. Other platforms may cause more significant problems.

Using template meta-programming to access the primary key in a table

It is interesting to examine the method by which the primary key can be extracted from a type list of column defs using meta-programming techniques. We can identify whether an individual column def is the primary key using the static const bool IsPrimaryKey, which is initialized from the template parameter. The FindPrimaryKey template can then be used to extract the primary key column. Its usage is trivial: FindPrimaryKey<COLDEFS>::Result will return the type of the primary key column from a type list of column defs.

Examining the FindPrimaryKey template in detail, we can see that its core is a compile-time IfThenElse construct. The main IfThenElse template has a single typedef that corresponds to the second template parameter, but there is a partial specialization for when the first template parameter is false, with a single typedef that corresponds to the third template parameter. In use, we pass in three template parameters, and the template will return either the second or the third depending on whether the first is true or false. We then write a template that uses the IsPrimaryKey value of the head of the type list either to return the head, or to apply the same template recursively to the tail of the type list. In ANSI Standard C++, it is not strictly necessary to terminate this recursion if we only ever use the template with type lists that contain a primary key; however, the Microsoft compiler does not accept this, so we include a specialization of the template to terminate the recursion.

template <class TList>
struct FindPrimaryKey
{
    typedef typename TList::Head Head;
    typedef typename TList::Tail Tail;
private:
    // Main template IfThenElse handles the default - ie true

    template<bool C, class T, class F>
    struct IfThenElse
    {
        // True variant

        typedef T Result;
    };
    // Partial specialisation for the alternate case - false

    template<class T, class F>
    struct IfThenElse<false, T, F>
    {
        // False variant

        typedef F Result;
    };
    // Main template returns the Head if it is the PrimaryKey,

    // or recurses onto the Tail

    template<class TList1>
    struct In
    {
        typedef typename TList1::Head Head;
        typedef typename TList1::Tail Tail;
        typedef typename IfThenElse<Head::IsPrimaryKey,
            Head, typename In<Tail>::Result>::Result Result;
    };
    // Partial specialisation to end recursion -

    // returns the Head of the list by default

    template<>
    struct In< ::Loki::NullType >
    {
        typedef Head Result;
    };
public:
    typedef typename In<TList>::Result Result;
};

Conclusions

To conclude, writing database applications for Windows in C++ is not trivial. The MFC classes CDatabase and CRecordset lack a number of key features to do with transaction frameworks. If you decide to write your own framework, it is generally necessary to write a lot of boiler-plate code; even using the MFC classes this is necessary, although Visual Studio provides a wizard to ease the process. In an attempt to avoid writing boiler-plate code, whilst still providing a transaction processing framework, I have produced a library of templatised code to rapidly produce database access code over ODBC. There were a number of difficulties in writing this code, not least of which was the absence of the export keyword, which resulted in a number of problems with circular include dependencies. Nevertheless, the result is a set of powerful and extensible classes.

I intend to expand this framework in the near future to support the generation of complex SQL queries programmatically, by expanding work in the expression and query tree area. Furthermore, it should be possible to generalize the framework to cope with the idiosyncrasies of different database platforms. At present, the framework has only been tested on MS Access, although based on previous experience, it should work on SQL Server with the minimum of effort.

History

  • 22 Nov 04: Version 1 released.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
