
NHibernate Best Practices with ASP.NET, 1.2nd Ed.

11 Jun 2008
This article describes best practices for leveraging the benefits of NHibernate 1.2, ASP.NET, generics and unit testing together.

Author's note added June 11, 2008 - Announcement of S#arp Architecture

Thankfully, technologies evolve over the years. Accordingly, Microsoft has introduced ASP.NET MVC as an alternative to classic ASP.NET. I have developed a new architecture which uses many of the design principles of this article for this newer platform called S#arp Architecture. Although this article is still the recommended background reading material for S#arp Architecture, you'll find the new architecture to be simpler and more maintainable while still leveraging the best of what NHibernate has to offer.

Preface to the 1.2nd Edition

In March of 2006 I published my initial thoughts on NHibernate best practices with ASP.NET, generics and unit tests. I've been delighted to learn that these ideas have been implemented in a number of real-world scenarios with strong success. Since then, I've worked with many people to refine these ideas, learn from mistakes and leverage a more powerful version of NHibernate. Accordingly, although only a modest, yet important, amount of modification has been made to the underlying architecture, some other important factors have been updated and addressed in this article:

  • Quite simply, NHibernate is awesome. In the previous edition of this article, I assumed you already knew this...but I now try to convince the dissenters as well.
  • NHibernate 1.2 natively supports generics.
  • NHibernate 1.2 natively supports nullable types.
  • NHibernate 1.2 natively supports mapping attributes.
  • NHibernate 1.2 can communicate with stored procedures.
  • Using CallContext for ISession storage in ASP.NET was susceptible to failure under load.
  • Exposing a publicly settable ID property created a point-of-susceptibility.
  • Providing automatic parent/child wiring, via Ayende's very helpful NHibernate.Generics, was more headache than help.
  • Have you used Rhino Mocks 3.0, NUnitAsp, or Fit? Well, these are all discussed with an expanded emphasis on test-driven development.
  • As an alternative to my recommendations, also consider Castle Project's offerings such as MonoRail and/or ActiveRecord for a simple yet powerful out-of-the-box framework for ASP.NET. In fact, if it's technically feasible and you can generate buy-in on your team for these off-the-shelf tools, I would recommend using them over a ground-up solution. But even if you do use Castle Project facilities, this article should still have a lot of useful information for you!
  • In addition to those listed above, there are other important refactorings and fixes throughout the article and the code. This edition is by no means just a light touch-up of the original article.
  • In addition to an overhaul of the original sample code, an expanded "enterprise" sample has been included demonstrating:
    • NHibernate with web services
    • NHibernate with multiple databases
    • Integration with Castle Windsor
    • A reusable data layer for the data access components.
    • A simple example of Model-View-Presenter

A quick thanks goes out to those who have implemented my ideas in their own work and have given plenty of ideas for improvement and consideration! Now onto the 1.2nd edition...

Introduction

Why Use an ORM?

[Author's Note: The following is an excerpt taken from a book I tinker with in my "spare" time.]

"As we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity."

These prophetic words were put forth by Frederick Brooks in his now legendary paper, No Silver Bullet, in 1986. Heeding Brooks' words to a tee, it wasn't until more than a decade later, in 1997, that the software world was presented with a hint of an upcoming silver bullet in the form of NeXT's Enterprise Objects Framework...one of the first object-relational mappers (ORMs). Although not often conspicuously stated – frequently to avoid being seen as a heretic of software dogma – it is commonly accepted by many that ORM technologies, when used correctly, are, in fact, a silver bullet for software development. With the maturation of NHibernate, the ORM silver bullet has been formally introduced to the world of .NET.

The most common dissenters of ORM technologies, in general, are developers using Microsoft technologies. (As I've placed myself squarely into this realm for the past decade or so, I feel quite comfortable bringing us up first!) There seems to be an unwritten rule that "if it wasn't invented by Microsoft, then wait until Microsoft puts out the right way to do it." Stepping up to the plate, Microsoft intends to do just that with "LINQ to Entities" in the upcoming C# 3.0. (Yes, officially discard the use of "ObjectSpaces" and "DLINQ.") Developers may continue to wait for this technology or, alternatively, start realizing the benefits of ORM immediately. To be fair, the not-invented-by-Microsoft toolset for .NET developers used to be sparse but has been steadily growing since the advent of .NET. Circa 2004, the "not created by the mothership" toolset of open source tools finally began to reach a respectable level of maturity and should be seriously considered for any .NET endeavor. (And since, statistically, the majority of software projects fail, it sounds like the consideration of an expanded toolset is certainly warranted.) The impending introduction of LINQ certainly brings great benefits to flexible querying. Fortunately, LINQ is not exclusive to Microsoft data-access layers and LINQ for NHibernate is already well underway by Ayende Rahien.

Other dissenters of these technologies suggest that ORMs kill application performance and only provide a significant improvement to productivity in the initial stages of development. The argument continues that the use of an ORM becomes a serious detriment to project success later in the project, when issues of performance and maintainability begin to have a more noticeable effect. Three obvious retorts come to mind in response to these protests.

First and foremost, in support of ORMs, using a framework such as NHibernate increases your performance as a developer. If you can spend 90% less time (yes, I said "90% less time") on developing data access code, then quality time can be spent improving the domain model and tuning performance – assuming it becomes necessary. Furthermore, leveraging a simple profiling tool goes a long way towards implicating the 5% of code that's causing 95% of the performance bottleneck. And in the cases in which the data access layer is the bottleneck, simple adjustments can usually be made to reap substantial improvements. Incidentally, this is no different than when not using an ORM. (Be sure to check out Peter Weissbrod's introductory article to profiling NHibernate applications.) And in the very few situations in which the ORM framework is the bottleneck and can't be adjusted for improvement, it's trivially simple to bypass the ORM altogether if the data access layer has been properly abstracted.

Secondly, NHibernate, specifically, provides an incredible amount of control over all aspects of data-loading behavior. This has positive effects on developer productivity, application scalability, and application stability. Caching is certainly available – but this is readily available in non-ORM solutions as well. Other out-of-the-box features include lazy loading, support for inheritance, declaration of immutable classes, loading of read-only properties, support for generics, stored procedures, blah blah blah. The point is that all these powerful features have been proven in real-world scenarios and, therefore, have removed many hours of developing, debugging and tweaking a custom data access layer. (And if you really feel the need to get into the code, that's just fine since NHibernate's an open source project.)
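Many of these features are switched on declaratively in a mapping file rather than in code. As a rough sketch only – the class, table, and property names below are illustrative and not taken from the sample download – a single NHibernate 1.2 mapping might enable a lazy class, a read-only property, and a lazily loaded child collection all at once:

```xml
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <!-- lazy="true" defers loading of the entity until it's actually needed -->
  <class name="BasicSample.Core.Domain.Customer, BasicSample.Core"
         table="Customers" lazy="true">
    <id name="ID" column="CustomerID" type="String" length="5">
      <generator class="assigned" />
    </id>
    <property name="CompanyName" />
    <!-- read-only property: populated on load, never written back -->
    <property name="ContactName" update="false" insert="false" />
    <!-- lazy one-to-many: the Orders collection is fetched on first access -->
    <bag name="Orders" lazy="true" inverse="true">
      <key column="CustomerID" />
      <one-to-many class="BasicSample.Core.Domain.Order, BasicSample.Core" />
    </bag>
  </class>
</hibernate-mapping>
```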

Finally, I would argue that those who feel that ORMs like NHibernate become maintenance headaches later in a project are working with a coding architecture that would inhibit the maintenance of any data access layer. Just because I've suggested that NHibernate is a silver bullet doesn't imply that it eliminates the need for proper application design. With the proper amount of judicious, architectural forethought, NHibernate is quite possibly the most maintainable solution for tying a .NET application to a database.

Needless to say, NHibernate, like other ORM tools, has alleviated the maintenance of thousands of lines of code and stored procedures, thus allowing developers to focus more attention on the core of a project: its domain model and business logic. Even if you automatically generate your ADO.NET data-access layer using a tool such as CodeSmith or LLBLGen Pro, NHibernate provides the flexibility in decoupling your domain model from your relational model. Your database should be an "implementation detail" that is defined to support your domain model, not the other way around. The remainder of this article focuses on describing best practices for integrating NHibernate into ASP.NET using well established design patterns and "lessons learned from the field".

Goals and Overview of Article

This article assumes a good understanding of C# and NHibernate, knowledge of the Data Access Object / Repository pattern, and at least a basic familiarity with Generics. Note that this article does not focus on using NHibernate but, instead, on integrating NHibernate into ASP.NET applications. If you're just getting acquainted with NHibernate, I'd recommend reading these two great introductions at TheServerSide.net: Part 1 and Part 2. (Also keep an eye out for Pierre Kuate's forthcoming NHibernate in Action.) For an extensive overview of the Data Access Object pattern, which is leveraged heavily within the samples, go to J2EE's BluePrints catalog. Although I use the phrase "Data Access Object" (or "DAO") throughout, it is interchangeable with Eric Evans' "Repository" in Domain-Driven Design. I just find "DAO" more convenient to type!

In building solid data integration within an ASP.NET 2.0 application, we aim to achieve the following objectives:

  • Presentation and domain layers should be in relative ignorance of how they communicate with the database. You should be able to modify your means of data communication with minimal modification to these layers.
  • Business logic should be easily testable without depending on a live database.
  • NHibernate features, such as lazy-loading, should be available throughout the entire ASPX page life-cycle.
  • .NET 2.0 Generics should be leveraged to alleviate duplicated code.

Two sample applications have been included, demonstrating the use of NHibernate with ASP.NET while meeting the above objectives:

  • Basic NHibernate Sample: This sample demonstrates the fundamentals of using NHibernate with ASP.NET and unit tests with a simple-to-understand, but not completely reusable, architecture.
  • "Enterprise" NHibernate Sample: This sample is provided with an architecturally sound grounding using proven design patterns which should allow you to quickly reuse the framework in almost any sized ASP.NET project. This sample also serves to demonstrate NHibernate with ASP.NET and "a whole bunch of other stuff" including communicating with multiple databases, using the pattern Model-View-Presenter, setting up a simple web-service which uses NHibernate, and integrating with Castle Windsor. Although code is included for communicating with multiple databases, a detailed explanation is beyond the scope of this article and may be found at the CodeProject.com article, Using NHibernate with Multiple Databases. (Note that that article's sample application is compatible with NHibernate 1.0x; albeit, the general approach is still applicable.)

What follows now is a description of how each of the aforementioned design objectives has been tackled in the sample applications. But before getting into the implementation details, let's skip right to the chase and get the sample up and running.

Running the Basic NHibernate Sample

The sample application, at the risk of being terribly cliché, utilizes the Northwind database within SQL Server 2005 to view and update a listing of Northwind customers. To demonstrate the use of lazy-loading, the application also displays the orders that each customer has made. All you need to run the samples locally is IIS with the .NET 2.0 Framework installed, and SQL Server 2005 (or 2000) containing the Northwind database. (Since SQL Server 2005 doesn't come with Northwind by default, you can simply back up the Northwind DB from 2000 and restore it into 2005.) The samples also port to SQL Server 2000...simply modify the NHibernate configuration settings, accordingly.

To get the basic NHibernate sample application up and running:

  1. Unzip the sample application to the folder of your choice.
  2. Create a new virtual directory within IIS. The alias should be BasicNHibernateSample, and the directory should point to the BasicSample.Web folder that was created after unzipping the application.
  3. Open BasicSample.Web/web.config and BasicSample.Tests/App.config to modify the database connection strings to connect to a Northwind database on Microsoft SQL Server.
  4. If, and only if, you're running IIS 7, modify web.config by commenting out the "compatible with IIS 6" section and uncomment the "compatible with IIS 7" section.
  5. Open your web browser to http://localhost/BasicNHibernateSample/Default.aspx, and you're off and running!
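The connection-string edit in step 3 happens inside NHibernate's configuration section in web.config. As a hedged sketch of what that section typically looks like for NHibernate 1.2 – your download's exact property values and server name will likely differ – it resembles the following:

```xml
<!-- inside <configSections>: register the NHibernate 1.2 handler -->
<section name="hibernate-configuration"
         type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate" />

<!-- the section itself; the connection string is the piece step 3 asks you to edit -->
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="hibernate.connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="hibernate.connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
    <property name="hibernate.dialect">NHibernate.Dialect.MsSql2005Dialect</property>
    <property name="hibernate.connection.connection_string">Server=localhost;Database=Northwind;Integrated Security=true;</property>
  </session-factory>
</hibernate-configuration>
```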

Steps for getting the "enterprise" sample up and running are discussed in Extending the Basics to an "Enterprise" Solution. But before that, now that you're able to follow along with the basic sample in front of you, we'll examine how the application was developed to meet our design objectives...

NHibernate Integration Basics

When developing an application, my primary goals are to:

  • Write code once and only once.
  • Focus on simplicity and readability.
  • Keep coupling and dependencies to a minimum.
  • Maintain a clean separation of concerns.
  • Do all of the above using test-driven development.

This section discusses using these design principles for the integration of NHibernate into ASP.NET applications. We'll do this by dissecting the internals of the Basic NHibernate Sample.

Architectural Notes

The basic sample should not, necessarily, be seen as a reusable framework for your own ASP.NET application. The focus within this example application is on presenting NHibernate integration in a simple and concise manner. If you are looking for a "ready for the real-world" architecture, be sure to take a look at Extending the Basics to an "Enterprise" Solution after reviewing this section. With that said, the basic sample does present a number of best practice techniques and design patterns:

Separated Interface

Separated Interface, closely related to the Dependency Inversion Principle, is a technique for establishing a clean separation of concerns between application tiers. This technique is described by Martin Fowler, by Object Mentor, and in further detail in Robert Martin's Agile Software Development. The technique is often used for cleanly separating a data access layer from a domain layer. For example, assume a Customer object - in the domain layer - needs to communicate with a data access object (DAO) to retrieve a number of past Orders. (Whether or not the domain layer should ever communicate directly with a DAO is left for another discussion.) Consequently, the Customer object has a dependency on the Order DAO - in the data layer. But for the Order DAO to return orders, the DAO requires a dependency back to the domain layer.

The simplest solution is to put the data layer, containing the DAOs, into the same physical assembly as the domain layer. To maintain a "virtual" separation of concerns, the containing project could include two folders, one named Domain and the other named Data. (I've actually used this approach on a good-sized project, in a former life, with regrettable consequences.) This approach brings with it a number of ill effects:

  • The domain and data layers share a bi-directional dependency with each other.
  • There's nothing to prevent the data layer from bleeding into the domain layer and vice-versa. If it can't be structurally prevented then it will occur.
  • It's difficult to unit test the domain layer without using a live database to support the data layer. Among other drawbacks, this brings unit testing performance to a crawl; therefore, developers stop unit testing.

Alternatively, the domain and data layers could be placed into physically separate assemblies; e.g., Project.Core and Project.Data, respectively. The domain layer (the Project.Core assembly) would contain domain objects and DAO interfaces. The data layer (the Project.Data assembly) would contain concrete implementations of the DAO interfaces defined in the domain layer. This is shown in the package diagram below. Note that the arrow signifies a uni-directional dependency from the data layer to the domain layer.

Screenshot - SeparatedInterface.jpg

This clean separation brings with it a number of benefits:

  • Developers are structurally unable to put concrete data-access code into the domain layer without adding a number of easy-to-spot references, such as to NHibernate or System.Data.SqlClient.
  • The domain layer remains in ignorant bliss of how the data layer communicates or with whom it communicates. Therefore, it's easier to switch out the data-access implementation details (e.g., from ADO.NET to NHibernate) without having to modify the domain layer.
  • Having dependencies on interfaces makes it easy to provide a "mocked" data-access layer to the domain layer while unit testing. This keeps unit tests blazing fast and eliminates the need to maintain test data in the database.

An implementation example of Separated Interface is included in the sample applications and is discussed further below.
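To make the package split concrete, a minimal sketch of the pattern might look like the following. The interface and method names here are illustrative (the sample's actual DAO contracts are covered later); the essential point is which assembly each piece lives in:

```csharp
using System.Collections.Generic;

// BasicSample.Core (domain layer): the DAO *interface* lives alongside the
// domain objects, so the domain never references the data assembly.
namespace BasicSample.Core.DataInterfaces
{
    public interface IOrderDao
    {
        IList<Order> GetByCustomerId(string customerId);
    }
}

// BasicSample.Data (data layer): the concrete implementation references
// Core - never the other way around - and confines all NHibernate-specific
// code to this one assembly.
namespace BasicSample.Data
{
    public class OrderDao : BasicSample.Core.DataInterfaces.IOrderDao
    {
        public IList<Order> GetByCustomerId(string customerId)
        {
            // Hypothetical session lookup; the sample's session management
            // class is discussed later in the article.
            ISession session = NHibernateSessionManager.Instance.GetSession();
            return session.CreateCriteria(typeof(Order))
                .Add(Expression.Eq("CustomerId", customerId))
                .List<Order>();
        }
    }
}
```

With this arrangement, swapping NHibernate for raw ADO.NET means writing a new class in Project.Data; nothing in Project.Core changes.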

Dependency Injection

Separated Interface, as described above, introduces a dilemma: how are the concrete DAO implementations given to the domain layer which only "knows" about interfaces? Dependency Injection, also known as Inversion of Control (IoC), describes the technique for doing this. With Dependency Injection, it's possible to remove many direct instantiations of concrete objects along with the inflexibility that comes along with calling the new keyword directly. Dependency Injection may be performed manually or via an IoC Container.

The manual approach is performed by simply passing an object's "service dependencies" to it via its constructor or via property setters. I've written an introduction to this approach within the CodeProject article, Dependency Injection for Loose Coupling. (Note that a "service dependency" may be a DAO, an emailing utility, or anything that leverages external, expensive, or shouldn't-be-used-in-unit-test resources.) This manual approach is most useful within unit tests because the explicitness of passing concrete dependencies is helpful in describing functionality. (Martin Fowler has some wise words on the value of being explicit.)

Alternatively, dependency injection may be performed via an IoC Container driven by code or XML. (Fowler has also written a great introduction to IoC Containers, service locators and making an appropriate choice between the two.) Think of IoC Containers as providing decoupling on steroids. The benefits of an IoC Container include a flexible, loosely-coupled framework and an increased emphasis on coding-to-interface. The drawback is an added layer of complexity within the application. This is one of the many trade-offs between flexibility and complexity which needs to be considered when developing an application. Here are two great tools for putting IoC Containers into practice:

  • Castle Windsor: The Castle Windsor IoC Container provides great IoC support using a combination of XML configuration and strongly typed declarations. Some of the advantages Castle Windsor brings to the table are less XML and more compile-time error catching, when compared against other options. The Castle Project also has a number of other powerful modules, making it an attractive option if you're looking for more than just IoC. Wide support has been shown for this IoC Container which has generated a lot of buzz around the .NET development community. The "Enterprise" NHibernate Sample includes an example of using Castle Windsor.
  • Spring .NET: This framework provides IoC declared via XML configuration files. Like the Castle Project, Spring.NET also provides a wide assortment of additional development utilities. This option should be particularly attractive for developers coming from the Java world who are already familiar with the Spring Framework.

I've found that using an IoC Container is most useful outside of unit tests for enabling ASPX pages to be given dependencies, to avoid ever having to call the new keyword directly. This may be taken further to wire up dependencies within a presenters-layer or service layer, as well.
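A minimal sketch of the manual, setter-based approach follows, mirroring the sample's Customer/IOrderDao pairing. The method bodies here are illustrative rather than the sample's exact code:

```csharp
using System;
using System.Collections.Generic;

// The domain object receives its "service dependency" through a setter;
// it never calls "new OrderDao()" itself, so any IOrderDao will do.
public class Customer
{
    private IOrderDao orderDao;

    public IOrderDao OrderDao
    {
        set { orderDao = value; }
    }

    public IList<Order> GetOrdersOrderedOn(DateTime orderedDate)
    {
        // Delegates to whatever was injected: a live NHibernate DAO in
        // production, a stub or mock within unit tests.
        return orderDao.GetByExample(new Order(orderedDate));
    }
}
```

In production, an IoC Container (or the calling code) performs `customer.OrderDao = new OrderDao();`, while a unit test injects `new OrderDaoStub()` or a Rhino Mocks mock instead.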

Design by Contract

Design-by-Contract (DBC) is quite simply the best way to never have to use the debugger again. Although infrequently discussed, this technique should be given the same amount of praise that test-driven development receives. (Not that I'm saying that DBC gets jealous, but it should.) It increases quality, reduces checks for null, reduces debugging time, and greatly improves the overall stability of an application. To be more technically descriptive, DBC is a technique for contractually obligating users of your code (which is usually you, yourself) to use methods and properties in a particular way and for those methods and properties to promise to successfully carry out the given request. If the contract is not followed, an exception is thrown. It may seem a bit drastic at first, but it goes a long way to improving code quality. With DBC, "pre-conditions" assert what contractual obligations must be adhered to when invoking a method or property. "Post-conditions" assert what contractual obligations have been ensured. I highly recommend reading an introduction to Design-by-Contract; you'll be hooked on using it very quickly. The sample projects included with this article use a modified DBC class originally written by Kevin McFarlane. The original allows conditional compilation for turning contracts on for debug mode and off for release mode while the variation included in this article's samples maintains contractual obligations regardless of compilation mode. In my opinion, a contract is always a contract and the behavior should never vary.

In the code, you'll find that pre-conditions are declared with Check.Require while post-conditions are declared with Check.Ensure. As DBC is not domain-specific, this class is appropriately pulled out into a reusable, utilities library in the "enterprise" sample.
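To show the shape of the idea, here is a minimal sketch of such a Check class. It follows the spirit of the modified McFarlane class described above (contracts always enforced, regardless of compilation mode), but the bodies and exception names are illustrative, not the sample's exact code:

```csharp
using System;

// Minimal Design-by-Contract helper: Require asserts pre-conditions,
// Ensure asserts post-conditions; a broken contract throws immediately.
public static class Check
{
    public static void Require(bool assertion, string message)
    {
        if (!assertion)
            throw new PreconditionException(message);
    }

    public static void Ensure(bool assertion, string message)
    {
        if (!assertion)
            throw new PostconditionException(message);
    }
}

public class PreconditionException : ApplicationException
{
    public PreconditionException(string message) : base(message) { }
}

public class PostconditionException : ApplicationException
{
    public PostconditionException(string message) : base(message) { }
}
```

Typical usage inside a method looks like `Check.Require(!string.IsNullOrEmpty(id), "id may not be null or empty");` at the top, with a matching `Check.Ensure(...)` just before returning.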

The Basic Sample Application

So what are we working with here? The basic application fulfills the following user stories:

  • User may view listing of suppliers and their products.
  • User may view listing of existing customers.
  • User may view details of customer.
  • User may view a listing of past orders placed by a customer.
  • User may view a listing of order summaries, including product name and total quantity, placed by a customer.
  • User may edit customer details.
  • User may add a new customer.

Regardless of how valuable this application may or may not be, the above user stories are enough to demonstrate the primary aspects of NHibernate integration. To assist with keeping logical tiers loosely coupled, the included sample application is split into four projects: Tests, Web, Core and Data. As a policy, I use <ProjectName>.<LayerName> for naming projects; e.g., BasicSample.Data. Although this simple separation of concerns will work for now, a more realistic architecture will be discussed later. In peeling the layers of the onion, let's begin with the testing layer and work our way towards the simple presentation layer.

BasicSample.Tests with NUnit, NUnitAsp, Rhino Mocks and Fit

I'll assume you can probably guess what this project is for. In the first edition of this article, unit testing was only lightly discussed. As industry, and personal, experience has proven, test-driven development is a pivotal factor in producing high quality products which are more adaptable to change. Furthermore, taking a test-driven approach tends to produce better designs and, as a side effect, creates a lot of perfectly valid technical documentation. For a terrific beginner's introduction to a "day in the life" of test-driven development, take a look at Kent Beck's Test-Driven Development: By Example. Examining the unit tests of the sample application provides an overview of how the application is structured and what functionality is available. After taking a look at the unit tests, we'll delve further into the code they test.

A Note on Unit Test Performance

It's imperative for unit tests to be blazing fast. If a suite of unit tests takes too long to run, developers stop running them - and we want them running unit tests all the time! In fact, if a test takes more than 0.1 second to run, the test is probably too slow. Now, if you've done unit testing in the past, you know that any unit test requiring access to a live database takes much longer than this to run. With NUnit, you can put tests into categories, making it easy to run different groups of tests at a time, thus excluding the tests that connect to a database most of the time. But at the very least, these "slow" tests should be run every night within a Continuous Integration environment. Here's a quick example of categorizing an NUnit unit test:

[TestFixture]
[Category("Database Tests")]
public class SomeTests
{
    [Test]
    public void TestSomethingThatDependsOnDb() { ... }
}

Domain Layer Unit Tests

For simplicity, the domain layer of this application is light, to say the least. But even in the lightest of domain layers, it's important to have, at a minimum, every non-exception path covered by a unit test. The domain layer tests may be found in the namespace BasicSample.Tests.Domain. (As an aside, is it overkill that CustomerTests.CanCreateCustomer tests the getter/setter of almost every property? It's startling how many trivial bugs you catch in the process.) While reviewing the tests, you may notice the class DomainObjectIdSetter.cs; the motivations for this class will be discussed later.

To run the unit tests, open NUnit, go to File/Open Project and open BasicSample.Tests/bin/Debug/BasicSample.Tests.dll. To prevent the more time-consuming tests from running, go to the Categories tab within NUnit and double-click both "Database Tests" and "Web Smoke Tests." Additionally, click "Exclude these categories" at the bottom. Now, when you run the unit tests, only the domain logic tests will run and not be slowed down by HTTP and database access tests. For such a small application, the added overhead of the "slow" tests is almost negligible, but can add many minutes to running the unit tests of larger apps.

Using Test Doubles for the Data Access Layer

Before getting into simulating the database layer, it should be noted that a nomenclature exists for describing different types of simulated services. Dummies, fakes, stubs and mocks are all used to describe different types of simulated behavior. An overview of the differences highlights a few of these terms, which will be codified in Gerard Meszaros' upcoming XUnit Test Patterns. Meszaros offers "test double" to generically describe any of these behaviors. Stubs and mocks are two such test doubles demonstrated in the sample code.

Unless you're specifically testing DAO classes, you usually don't want to run unit tests that are dependent on a live database. They're slow and volatile by nature; i.e., if the data changes, the tests break. And when testing domain logic, unit tests shouldn't break if the database changes. But the major obstacle is that domain objects themselves may depend on DAOs. Using the abstract factory pattern that is in place in the sample (discussed later), and the associated DAO interfaces, we can inject DAO test doubles into the domain objects, thereby simulating communications with the database. An example is included in the test CustomerTests.CanGetOrdersOrderedOnDateUsingStubbedDao. The following snippet, from the unit test, creates the DAO stub and injects it into customer via a public setter. Since the setter only expects an implementation of the IOrderDao interface, the stub DAO easily replaces all of the live-database behavior.

Customer customer = new Customer("Acme Anvils");
customer.ID = "ACME";
customer.OrderDao = new OrderDaoStub();

As an alternative to writing DAO stubs, which are generally static by nature and often amount to quite a bit of "not implemented" code, it is also possible to mock the DAO using a tool such as Rhino Mocks or NMock. Either selection works perfectly well, but Rhino Mocks invokes methods in a strongly typed manner as opposed to using strings, as NMock does. This makes its usage compile-time checked and assists with renaming properties and methods. The test CustomerTests.CanGetOrdersOrderedOnDateUsingMockedDao demonstrates using Rhino Mocks 3.0 to create a mocked IOrderDao. Although setting up a mocked object appears more complicated than setting up a stub, the added flexibility and greatly reduced "not implemented" code are convincing benefits. The following code, found in the class MockOrderDaoFactory.cs, shows how IOrderDao is mocked with Rhino Mocks. It essentially creates a "static" mock, or stub, in that it doesn't matter what is passed in for arguments; it'll always return the same example orders created by TestOrdersFactory. But mocking with Rhino Mocks isn't limited to dumb reflexes, such as this, and can be as variably responsive as required.

public IOrderDao CreateMockOrderDao() {
    MockRepository mocks = new MockRepository();

    // Create the mock in record mode and declare its expected behavior:
    // GetByExample(), with any arguments, returns the canned test orders.
    IOrderDao mockedOrderDao = mocks.CreateMock<IOrderDao>();
    Expect.Call(mockedOrderDao.GetByExample(null)).IgnoreArguments()
        .Return(new TestOrdersFactory().CreateOrders());

    // Switch the mock from record mode to replay mode so it's ready for use
    mocks.Replay(mockedOrderDao);

    return mockedOrderDao;
}

Unfortunately, more often than not, you're maintaining legacy code that has no semblance of the ideal "code-to-interface" that allows for such test-double injection. Usually, there are many explicit dependencies to concrete objects, and it is difficult to replace data-access objects with test doubles to simulate a live database. In these situations, your options are to either refactor the legacy code to fit within a test harness, or to use an object mocking tool such as TypeMock. With TypeMock, it is even possible to mock sealed and singleton classes - an impressive feat to pull off without such a tool. Albeit powerful, TypeMock is best left alone unless absolutely needed; using TypeMock prematurely makes it tempting to not code to interface. The more appropriate course to take when working with legacy code - time and budget permitting - is to refactor the code for greater flexibility. Working Effectively with Legacy Code by Michael Feathers is full of great ideas for refactoring legacy code into a test harness.

Unit Testing NHibernate DAOs

In the previous edition of this article, NHibernate's ISession was stored and retrieved solely via System.Runtime.Remoting.Messaging.CallContext. Although perfectly fine for WinForms and NUnit tests, this is a very bad idea for ASP.NET as the ISession may be "lost" under load. (These two articles provide further explanation for why using CallContext in ASP.NET applications is a bad idea.) For proper integration with ASP.NET applications, the ISession should be stored in HttpContext.Current.Items; but doing so forces you to simulate an HTTP context when running unit tests. It also prevents the framework from easily porting to WinForms. A better approach is to use the proper repository when appropriate. So if a web context is available, then use HttpContext; otherwise, use CallContext. Implementation details of this combined approach are discussed later. (A thanks goes out to the many people in the article comments that raised this concern.) Take a look at BasicSample.Tests/Data/CustomerDaoTests.cs for unit tests that are HTTP-agnostic with respect to how the ISession is managed. As an aside, as you'll see in the unit test, unless you want data changes made within your tests to be committed to the database, it's a good idea to roll back the transaction.
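The core of the combined approach can be sketched in a few lines. The class and key names below are illustrative (the sample wraps this logic inside its session management class), but the decision logic is exactly what was just described: HttpContext.Current.Items within a web request, CallContext everywhere else:

```csharp
using System.Runtime.Remoting.Messaging;
using System.Web;
using NHibernate;

// Context-agnostic ISession storage: web requests get per-request storage
// via HttpContext.Current.Items; NUnit tests and WinForms fall back to
// CallContext, which is scoped to the current logical thread.
public class SessionStorage
{
    private const string SessionKey = "NHIBERNATE_SESSION";

    public static ISession GetSession()
    {
        if (IsInWebContext())
            return (ISession)HttpContext.Current.Items[SessionKey];
        return (ISession)CallContext.GetData(SessionKey);
    }

    public static void StoreSession(ISession session)
    {
        if (IsInWebContext())
            HttpContext.Current.Items[SessionKey] = session;
        else
            CallContext.SetData(SessionKey, session);
    }

    private static bool IsInWebContext()
    {
        // HttpContext.Current is null outside of an ASP.NET request
        return HttpContext.Current != null;
    }
}
```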

As demonstrated in the sample, it's possible to create a generic DAO that works for any persistent object. (Details will be discussed later.) This leads to the question of what should be tested and how. Should each concrete DAO be fully tested? How is the test data maintained? Personal experience suggests these guidelines:

  • Make sure a unit test exists to test each method of the generic DAO, once. For example, if you have 10 DAOs which implement the generic DAO, only one of them needs to be fully tested for each method. Unit testing the other nine implementations of the generic DAO will provide little added value.
  • Make sure a unit test exists to test each extension method to the generic DAO. For example, if you have a CustomerDao class which inherits from the generic DAO and then extend it with an additional method, such as GetActiveCustomers(), then a unit test should be written to test this extension method.
  • Make sure a unit test exists to fully test each "specialty" DAO. For instance, the DAO BasicSample.Data/HistoricalOrderSummaryDao.cs does not inherit from the generic DAO and would be considered a specialty DAO. Therefore, a unit test exists to test each of its methods.
  • Use a tool such as NDbUnit to put the test database into a known state both before and after DAO unit tests are run.
  • Always remember, unit tests should never depend on the actions of another unit test! They should be independent and able to be run in isolation. E.g., a delete-test should not depend on a previous insert-test having successfully completed. Note that TestFixtureSetUp and other setup/teardown methods may still run without compromising a test's isolation.
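Putting these guidelines together, a DAO test fixture might be shaped as follows. This is only a sketch: it assumes NUnit and the session manager described later in the article, and RollbackTransaction and the GetById signature are assumed names, not necessarily those of the actual sample code.

```csharp
using NUnit.Framework;

[TestFixture]
public class CustomerDaoTests
{
    [SetUp]
    public void SetUp() {
        // Wrap each test in a transaction...
        NHibernateSessionManager.Instance.BeginTransaction();
    }

    [TearDown]
    public void TearDown() {
        // ...and roll it back so no test ever commits data, keeping the
        // tests independent of one another and of test ordering.
        NHibernateSessionManager.Instance.RollbackTransaction();
    }

    [Test]
    public void CanGetCustomerById() {
        ICustomerDao customerDao = new NHibernateDaoFactory().GetCustomerDao();
        Customer customer = customerDao.GetById("ALFKI", false);
        Assert.IsNotNull(customer);
    }
}
```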

Using Fit Test Doubles for the Presentation Layer

When the domain layer was tested, test doubles were used to simulate NHibernate communications with the database. Similarly, it's handy at times to use this same approach when testing the presentation layer. Suppose you're working on an ASP.NET project with a dedicated "creative" team. The creative folks, with their black turtlenecks, are in charge of developing the look and feel of the application. While they're working on the graphical layout, you shouldn't be hindered from developing a presentation layer for viewing domain-logic and data-access results and for getting client feedback, while still being able to put off decisions such as master-page setup, security enforcement, and other presentation-specific decisions. In another scenario, suppose you're working on a number of complicated business rules which you'd like the client to be able to verify without having to write a few dozen unit tests to encapsulate each minor variation. FIT (Framework for Integrated Test) is a tool for developers to fake the presentation layer very quickly and provide for a more collaborative effort between developers and project stakeholders. As the Fit site states, this is done "to learn what the software should do and what it does do. It automatically compares customers' expectations to actual results." Arguably, a tool such as this isn't "basic" and isn't required for testing NHibernate; but the importance of test-driven development needs to be emphasized and a tool such as Fit, when used appropriately, is just as applicable to software quality as NUnit.

For viewing Fit test results, you can use WinFITRunnerLite, which runs Fit tests in a Windows client similar to NUnit, or FitNesse, which provides a web-based wiki for modifying test inputs and viewing Fit test results. Although it takes a little more setup, FitNesse provides a very flexible framework for allowing clients to participate in validating coding logic and application workflow. The following screenshot shows a simple example of the output you'd expect to see from running a calculator test with FitNesse:

Screenshot - FitNesse.jpg

Although implementation examples of Fit tests are beyond the scope of this article, my hope is that you'll get interested in learning more about this powerful framework. In addition to the websites listed previously, extensive information concerning the use of Fit and FitLibrary, an extension to Fit, may be found in Fit for Developing Software by Rick Mugridge and Ward Cunningham.

Running ASPX "Smoke Tests" with NUnitAsp

At this point, we've unit tested the domain layer and the data-access layer and learned about testing with a rough presentation layer, using Fit, for getting clients more involved. It's now time to test the ASPX pages themselves. NUnitAsp is a class library for performing these types of unit tests. Although you can get rather sophisticated using NUnitAsp with your WebForms testing, I find that NUnitAsp is best for running "smoke tests" from the continuous integration server to verify that no page is blatantly breaking. Taking NUnitAsp further than this tends to result in a lot of maintenance of the associated unit testing code. Since these HTTP unit tests are slow by nature, they're rarely run and, consequently, lightly maintained; therefore, they should be kept as simple as possible. BasicSample.Tests/Web/WebSmokeTests.cs demonstrates a sampling of these unit tests. Although trivially simple, these smoke tests go a long way towards verifying that your presentation layer is responsive, that database communications are working correctly, and that NHibernate HBMs are, for the most part, error-free. As an added bonus, if the smoke tests are directed at the production environment immediately after a deployment, they serve to pre-load all the ASPX pages for a more responsive experience for the very next visitor. You should include a smoke test for every URL-accessible web page in your application. To help organize them, create a separate test class for each grouping of smoke tests. For example, all the smoke tests for an admin section of the website would be found in a file called AdminSmokeTests.cs.

BasicSample.Core for Defining the Domain Layer

The BasicSample.Core project contains the domain model and NHibernate HBM files. This project also contains interfaces, describing the data access objects, in the BasicSample.Core.DataInterfaces namespace. (Arguably, the HBM files logically belong in the BasicSample.Data assembly, but the maintenance convenience of having the HBM files physically close to the domain objects they describe far outweighs this break in encapsulation.)

Separated Interface, Implemented

You'll notice that the BasicSample.Core project does not contain implementation details of data access objects, only interfaces describing the services it needs. The concrete DAO classes, which implement these interfaces, are found within BasicSample.Data. As described previously, this technique is called Separated Interface. If you consider BasicSample.Core to be an "upper-level layer" and BasicSample.Data to be a "lower-level layer", then, as Robert Martin describes, "each of the upper-level layers declares an abstract interface for the services that it needs. The lower-level layers are then realized from these abstract interfaces. ... Thus, the upper layers do not depend on the lower layers. Instead, the lower layers depend on abstract service interfaces declared in the upper layers".

To see this in action, the data interfaces are described in the namespace BasicSample.Core.DataInterfaces. IDao is a generic interface for providing typical data access functionality. IDaoFactory then acts as an interface for one or more DAO factory classes. Coding to the IDaoFactory interface allows you to create one concrete DAO factory for production code, and another concrete DAO factory for returning DAO test doubles for unit testing purposes. (This is an example of using the abstract factory pattern.) And as previously examined in BasicSample.Tests, leveraging mock objects in unit tests provides a means for testing a single responsibility at a time.
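The abstract-factory arrangement can be sketched in a few lines. The class bodies below are illustrative stand-ins, not the actual BasicSample source; in the real project, the production factory would hand back NHibernate-backed DAOs, while a test-double factory returns canned data.

```csharp
using System;

// Illustrative stand-ins for the sample's types.
public class Customer
{
    public string FirstName;
}

public interface ICustomerDao
{
    Customer GetById(string id);
}

// The abstract factory: domain and presentation code depend only on this.
public interface IDaoFactory
{
    ICustomerDao GetCustomerDao();
}

// One concrete factory for production would return NHibernate-backed DAOs:
// public class NHibernateDaoFactory : IDaoFactory { ... }

// ...and another returns test doubles for unit testing purposes.
public class MockDaoFactory : IDaoFactory
{
    public ICustomerDao GetCustomerDao() {
        return new MockCustomerDao();
    }

    private class MockCustomerDao : ICustomerDao
    {
        public Customer GetById(string id) {
            // Canned data instead of a round-trip to a live database.
            Customer customer = new Customer();
            customer.FirstName = "Alfred";
            return customer;
        }
    }
}
```

Because calling code only ever sees IDaoFactory, swapping the mock factory for the production factory requires no changes to the code under test.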

Collection Generics Examined

By far, one of the greatest benefits that C# 2.0 has brought to the table is the inclusion of generics. With generics, more code reuse can be effectively realized while still enforcing strongly typed coding "contracts". In the previous edition of this article, Ayende's very useful, but deprecated, NHibernate.Generics was used to integrate NHibernate with .NET generics. But now that NHibernate 1.2 natively supports generics, this class library is no longer necessary. If you've used Ayende's library in the past, you've got a bit of work ahead of you to completely migrate away from it, especially if you used the automatic wiring for managing parent/child relationships. But don't let this stop you: you can still upgrade to NHibernate 1.2 without having to immediately refactor out the automatic wiring, though you'll still want to do so sooner rather than later. More information about the steps required to refactor away from Ayende's NHibernate.Generics is found below in Migrating from NHibernate 1.0x to 1.2.

A drawback to Ayende's NHibernate.Generics was that internal collections needed to be exposed with both getters and setters. This broke encapsulation and allowed collections to be manipulated or modified in unintended ways - think of it as collection harassment. Now that NHibernate supports generics natively, better collection encapsulation techniques may be employed. The following code from Customer.cs and Customer.hbm.xml shows a better encapsulation of generic collections.

public IList<Order> Orders {
    get { return new List<Order>(orders).AsReadOnly(); }
    protected set { orders = value; }
}

public void AddOrder(Order order) {
    if (order != null && !orders.Contains(order)) {
        orders.Add(order);
    }
}

public void RemoveOrder(Order order) {
    if (orders.Contains(order)) {
        orders.Remove(order);
    }
}

private IList<Order> orders = new List<Order>();

<bag name="orders" table="Orders" inverse="true" cascade="all">
    <key column="CustomerID" />
    <one-to-many class="BasicSample.Core.Domain.Order, BasicSample.Core" />
</bag>

Setting the orders collection setter to protected allows NHibernate to populate the collection without having to depend on a private member directly and without having to expose the setter publicly. Alternatively, the NHibernate setting access="field" could be used to set a private member directly, but the use of this should be carefully considered and only used when warranted. In the sample code above, note what AddOrder and RemoveOrder have done to the Customer class. They've shamelessly polluted the class by adding collection management concerns to the containing class. Imagine the headache this would turn into if Customer ended up having a number of collections along with methods for managing each one. In instances like this, it's usually best to employ a custom collection.
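To see what the read-only wrapper actually buys, here is a self-contained version of the pattern shown earlier (Order is an empty stand-in for the sample's domain class). Any attempt to mutate the exposed collection throws, and since the getter returns a copy, even a successful mutation of the snapshot could never touch the internal list.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the sample's domain class.
public class Order { }

public class Customer
{
    public IList<Order> Orders
    {
        // A copy wrapped as read-only: callers can enumerate but not mutate.
        get { return new List<Order>(orders).AsReadOnly(); }
    }

    public void AddOrder(Order order) {
        if (order != null && !orders.Contains(order)) {
            orders.Add(order);
        }
    }

    private IList<Order> orders = new List<Order>();
}
```

Calling `customer.Orders.Add(...)` throws a NotSupportedException, so the only way to change the collection is through the intent-revealing AddOrder method.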

As a simple example of using a custom collection, the Supplier class may have a number of Product items associated with it. The caveat is that NHibernate needs to map to an IList collection; consequently, two collections are maintained. The first is the products collection itself, which NHibernate is aware of, and the second is a products collection wrapper which exposes the custom collection. The following code from Supplier.cs and Products.cs demonstrates this.

// Within Supplier.cs...

public Products Products {
    get {
        if (productsWrapper == null) {
            productsWrapper = new Products(products);
        }

        return productsWrapper;
    }
}

private Products productsWrapper;
// NHibernate binds directly to this member by the access="field" setting
private IList products = new List<Product>();


// Within Products.cs

public Products(IList products) {
    this.products = products;
}

public void Add(Product product) {
    if (product != null && !products.Contains(product)) {
        products.Add(product);
    }
}

private IList products;

Although simplistic, the example code should cover most of your custom collection needs. But in some cases, a full-blown generic, custom collection is required. One of the most common scenarios is creating a custom collection which derives from BindingList. In these types of situations, it's necessary to leverage NHibernate's IUserCollectionType. A good example of implementing this may be found here.

Generic IDs and Object Comparisons

In BasicSample.Core, each persistable domain object inherits from DomainObject. This class handles most of the work involved with comparing two domain objects for equality. (A detailed discussion of this object may be found in a devlicio.us blog post.) DomainObject is also a generic class which accepts a data type declaring the ID type of the domain object. This generic property provides the ability to have one domain object which uses a string as an ID, such as Customer, and another to have a long as an ID, such as Order. It should be duly noted that in the previous edition of this article, the ID property had both a public getter and setter. Although the getter is essential, the public setter opened Pandora's box for corrupting existing data. Assume you retrieved a customer from the database and accidentally set its ID to another customer's ID. When the customer gets saved back to the database, its data overwrites that of the other customer.

For a more subtle example, assume the Customer class is being used as a pseudo DTO being returned from an edit screen. With a public ID setter, the developer sets the ID and the applicable properties and returns it to another object which takes care of transferring the DTO information to the "real" customer pulled from the database. Mind you, both the DTO and the "real" customer are being passed around as instances of Customer. So if, during maintenance, a developer doesn't realize that the Customer being returned from the view should be treated as a DTO and not as a "real" customer, and proceeds to invoke save on it directly, then it's quite probable that a lot of live customer data will be lost, since it was overwritten with the sparse DTO data.

The primary weakness behind both scenarios is the fact that ID had a public setter. Therefore, it is best that a domain object's ID only be set by NHibernate when loading an object from the database, and that it remain hidden from public setting.
But there are situations wherein a domain object has an assigned ID. (Personally, I find no place for assigned IDs and avoid them whenever possible, but understand that an argument can be made for their use. You can find a further rant in BasicSample.Web/AddCustomer.aspx.) For situations when an assigned ID is required, an interface is included called IHasAssignedId. This interface defines a method for setting the ID. So although it provides a doorway for setting the ID, it requires a bit more thought concerning its use and provides a good spot for including ID-assignment business logic. The following snippet exemplifies this within the Customer class.

public class Customer : DomainObject<string>, IHasAssignedId<string>
{
    public void SetAssignedIdTo(string assignedId) {
        Check.Require(!string.IsNullOrEmpty(assignedId), 
            "assignedId may not be null or empty");
        // As an alternative to Check.Require, the Validation
        // Application Block could be used for the following check.
        Check.Require(assignedId.Trim().Length == 5, 
            "assignedId must be exactly 5 characters");

        ID = assignedId.Trim().ToUpper();
    }
    ...
}

An obvious drawback to not exposing a public ID setter is that this property becomes essentially unusable to the unit testing framework, unless NHibernate is used to load the object. To skirt this problem, the class BasicSample.Tests/Domain/DomainObjectIdSetter.cs enables you to set the ID of domain objects, even if they do not implement IHasAssignedId. This ability opens many testing possibilities without having to provide a public setter to the ID property. On the flipside, using reflection to set private members, for the benefit of unit tests, should be seen as a non-standard practice and used only when absolutely necessary. It adds complexity and is fragile since it is string based. But for setting the ID property of domain objects, it's a perfect fit for the unit testing layer.
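A reflection-based ID setter along these lines can be sketched as follows. The classes below are minimal stand-ins for the sample's types, written in C# 2.0 style, and the helper's shape is an assumption; the actual DomainObjectIdSetter in BasicSample.Tests may differ in detail.

```csharp
using System;
using System.Reflection;

// Minimal stand-in for the sample's DomainObject<IdT>: a public getter
// with a protected setter, so production code cannot reassign the ID.
public abstract class DomainObject<IdT>
{
    public IdT ID
    {
        get { return id; }
        protected set { id = value; }
    }

    private IdT id;
}

public class Customer : DomainObject<string> { }

// Test-layer helper: reach the protected setter via reflection so unit
// tests can establish an ID without NHibernate or IHasAssignedId.
public static class DomainObjectIdSetter
{
    public static void SetIdOf<IdT>(DomainObject<IdT> domainObject, IdT id) {
        PropertyInfo idProperty = typeof(DomainObject<IdT>)
            .GetProperty("ID", BindingFlags.Public | BindingFlags.Instance);

        // The getter is public but the setter is not; ask for it explicitly.
        MethodInfo setter = idProperty.GetSetMethod(true);
        setter.Invoke(domainObject, new object[] { id });
    }
}
```

As noted above, this string-based reflection is fragile (renaming the ID property breaks it silently), which is why it should stay confined to the unit-testing layer.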

Mapping the Domain to the Database

NHibernate provides two means of mapping domain objects to the database: HBMs using XML and mapping attributes. The primary advantage of HBM-XML files is that they are physically separated from the domain objects they describe. This enables the domain objects to remain as POCOs (plain old C# objects), relatively oblivious to how they are associated with the database. But keeping the mapping information separated from the domain objects may also be seen as the primary disadvantage of HBMs in that it requires additional maintenance effort to keep switching between HBMs and the classes they map. (Some people also loathe using XML.) Mapping attributes, on the other hand, are intimately connected to the domain objects and are generally less verbose than their HBM equivalents. Using mapping attributes makes the domain objects more akin to Active Records than POCOs. (For true Active Record support, consider using Castle Project's ActiveRecord.) Besides "dirtying" up the domain objects, mapping attributes require a reference to NHibernate.Mapping.Attributes which makes the domain layer a bit less data-access-provider agnostic. On the other hand, do you often find yourself switching out data-access layers completely, anyway? But as a general rule, the domain layer should remain as data-access-provider agnostic as is practical for the design goals of the application. When it comes right down to it, it's a matter of personal preference when deciding between HBMs or mapping attributes. Regardless, when starting a new project, it should be decided which technique will be used; mixing the techniques may lead to confusion as it may not be clear which objects are mapped and which are not. The sample application demonstrates using HBMs for the mapping solution. The following snippet - not found in the sample application - shows an example of using mapping attributes instead of HBMs. Additional information concerning mapping attributes may be found in the NHibernate docs.

[NHibernate.Mapping.Attributes.Class]
public class Customer
{
    [NHibernate.Mapping.Attributes.Property]
    public string FirstName { ... }
    ...
}

NHibernate Support for Nullables

A new capability that NHibernate 1.2 brings to the table is support for nullable types. Previously, a reference to Nullables.NHibernate supported nullable types, but it is no longer needed. There's no need to treat nullable property mappings, within the HBMs, any differently than other property mappings; simply map the nullable property as you would any other. NHibernate is smart enough to transfer null between the database and the nullable property it maps to. Take a look at BasicSample.Core/Domain/Order.cs for an example of using a nullable DateTime. Although support for nullables is now trivially simple to implement, it should only be used after careful consideration. My own experiences have led me to hardly ever create columns in the database which allow null. In my opinion, mapping null to primitive types has little meaning within either the database or the domain layer. A nullable DateTime property is about the only exception that comes to mind. Obviously, there are plenty of valid situations for using null with non-primitive relationships; but even then, use of the Null Object pattern is a great alternative to checks for null scattered about the code.
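As an illustration of the Null Object pattern mentioned above, the names below are invented for this example, not taken from the sample application. Instead of returning null for an unassigned relationship and forcing every caller to check for it, return a harmless do-nothing implementation.

```csharp
using System;

public interface ISalesContact
{
    string DisplayName { get; }
}

public class SalesContact : ISalesContact
{
    public SalesContact(string name) {
        this.name = name;
    }

    public string DisplayName
    {
        get { return name; }
    }

    private readonly string name;
}

// The Null Object: a safe stand-in so callers never see null.
public class NullSalesContact : ISalesContact
{
    public string DisplayName
    {
        get { return "(unassigned)"; }
    }
}

public class Customer
{
    public ISalesContact SalesContact
    {
        // Callers can use the result directly, without null checks.
        get { return salesContact ?? new NullSalesContact(); }
        set { salesContact = value; }
    }

    private ISalesContact salesContact;
}
```

The null check now lives in exactly one place, instead of being scattered across every consumer of the property.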

NHibernate Support for Stored Procedures

NHibernate 1.2's support for stored procedures is a critical addition to this framework's capabilities. But what's so great about integrating with stored procs if all the CRUD is already taken care of by NHibernate? Two reasons: working with legacy databases and performing reporting queries. Creating a stored procedure to return query-intensive reporting data is the optimal way for polling the database. To illustrate, the Northwind database maintains orders placed by each customer. Assume you need to return the quantity of each product ever ordered by a specific customer. The class BasicSample.Core/Domain/HistoricalOrderSummary.cs is a value object which encapsulates this summary information. Note that this class does not inherit from DomainObject and does not have a corresponding HBM file. This is considered an "unmanaged" class from NHibernate's perspective, not to be confused with unmanaged C#. The file HistoricalOrderSummary.hbm.xml declares a named query which makes a call to the stored procedure CustOrderHist for returning the needed reporting data. (Details of how this data is converted into domain objects will be discussed next, while dissecting the BasicSample.Data project.)
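A named query calling a stored procedure might be declared along the following lines. This is a sketch of what HistoricalOrderSummary.hbm.xml could contain, not the sample's actual file; the return-scalar columns shown reflect what Northwind's CustOrderHist procedure returns.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <sql-query name="GetCustomerOrderHistory">
    <return-scalar column="ProductName" type="String" />
    <return-scalar column="Total" type="Int32" />
    exec CustOrderHist :CustomerID
  </sql-query>
</hibernate-mapping>
```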

Providing capabilities for interacting with stored procedures brings up the issue of deciding what should be performed by stored procedures and what should be performed by the domain layer. As an alternative to the stored proc, to meet the previously stated requirements, the C# code could have retrieved all the orders placed by the given customer, looped through all of them and summed the quantity of each product. But it was clear that having SQL Server perform this summing was far more efficient; imagine if the customer had placed thousands of orders. If, on the other hand, performance analysis had shown little difference in efficiency between letting the domain layer do the work vs. a stored procedure, then go with the domain layer to do the work. You'll find that this will lend itself to a more complete domain-driven design and better reusability of the resulting code. As is frequently encountered during development, this is a balance between better domain-driven design and optimized performance ... always lean towards the domain until a bottleneck is discovered. (As mentioned in the introduction, see Peter Weissbrod's article for an introduction to identifying NHibernate related bottlenecks.) Incidentally, the first NHibernate project I was involved with, which was about 50,000 LOC, required about six stored procedures in all to help optimize the data-querying capabilities. This should serve to illustrate how much you can put into the domain layer without taking an appreciable hit to performance.

BasicSample.Data for Implementing NHibernate Communications

The BasicSample.Data project contains the concrete DAOs and implementation details for communicating with the database and managing NHibernate sessions.

The DAO Factory and Generic DAO

The DAO factory and generic DAO objects have been implemented as NHibernateDaoFactory and AbstractNHibernateDao, respectively. With a few key modifications, these are C# ports of the Java versions described in detail at Hibernate's website. I highly recommend reviewing this article in detail. (Note that a bug has been fixed from the previous edition of this article wherein calling CommitChanges(), when a transaction was not being used, was not flushing the session; see AbstractNHibernateDao.CommitChanges for the fixed code. Additionally, due to NHibernate 1.2's native support of generics, DAO list results may now be returned directly without first going through a convert-to-generic-list process.) The most impressive aspect of using a generic DAO is that it takes just a few lines of code to create a full-blown DAO ready for use:

  1. Add a new inline interface, and associated retrieval method, to BasicSample.Core/DataInterfaces/IDaoFactory.cs.
  2. Add a new inline implementation for the new DAO, and retrieval method, to BasicSample.Data/NHibernateDaoFactory.cs.

Looking at the ICustomerDao/CustomerDao example in the basic sample application, it took about five lines of code to define the interface and implement the concrete DAO ... not too bad.

Occasionally, it's necessary to extend the abstract, generic DAO with specialty methods. The interface BasicSample.Core/DataInterfaces/IOrderDao.cs defines one such example. In this case, the domain layer needs the orders-DAO to return all the orders found within a given date range. For manageability, any DAOs which extend the abstract DAO should be placed in their own file and not declared inline as ICustomerDao was. Likewise, this also applies to the concrete implementation demonstrated with BasicSample.Data/OrderDao.cs.
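Such an extension method might be implemented with NHibernate 1.2's criteria API along the lines below. This is a sketch: the method name, the NHibernateSession member inherited from the abstract DAO, and the OrderDate property name are assumptions based on the surrounding description, not the sample's exact code.

```csharp
using System;
using System.Collections.Generic;
using NHibernate;
using NHibernate.Expression;

public class OrderDao : AbstractNHibernateDao<Order, long>, IOrderDao
{
    // The specialty method extending the generic DAO's CRUD contract.
    public IList<Order> GetOrdersPlacedBetween(DateTime startDate, DateTime endDate) {
        ICriteria criteria = NHibernateSession.CreateCriteria(typeof(Order))
            .Add(Expression.Between("OrderDate", startDate, endDate));

        // NHibernate 1.2's generic List<T>() avoids a manual conversion step.
        return criteria.List<Order>();
    }
}
```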

Retrieving Reporting Data from Stored Procs

While examining the BasicSample.Core project, we encountered HistoricalOrderSummary.hbm.xml which declared a named query to communicate with a stored procedure for the purposes of retrieving reporting data. In the BasicSample.Data project we find the concrete DAO, HistoricalOrderSummaryDao.cs. This DAO includes the following code to invoke the named query, receive the results, and pass each returned row into the constructor of the value object, HistoricalOrderSummary. The key is the call to SetResultTransformer which informs NHibernate how to translate the results into a new instance of HistoricalOrderSummary. Review the NHibernate documentation to learn about other available options for this translation procedure.

IQuery query = NHibernateSession.GetNamedQuery("GetCustomerOrderHistory")
    .SetString("CustomerID", customerId)
    .SetResultTransformer(
    new NHibernate.Transform.AliasToBeanConstructorResultTransformer(
        typeof (HistoricalOrderSummary).GetConstructors()[0]));

Handling the NHibernate Session

At the core of integrating with NHibernate is BasicSample.Data/NHibernateSessionManager.cs. This thread-safe, lazy singleton performs the following duties:

  • Builds the ISession factory when instantiated for the first time. The class is a singleton because building the session factory is very expensive. In the previous edition of this article, an application setting within web.config declared which assembly contained the HBM files. This is no longer necessary. Instead, a mapping element should be included in the hibernate-configuration section. For example, web.config includes the following declaration, within the NHibernate configuration settings, letting NHibernate know where to find the HBMs: <mapping assembly="BasicSample.Core" />. Besides being a cleaner approach than using an application setting, this lends itself better towards managing multiple databases concurrently.
  • Handles commands sent to the context-specific ISession. Examples include creating/closing the current ISession, registering an IInterceptor (which needs to be done, incidentally, before a transaction is begun), and managing transactions.
  • Stores and retrieves the context-specific ISession. There are various ways to manage the life-cycle of the ISession; one such approach, called "Open Session in View," is presented shortly. NHibernateSessionManager does not dictate when the ISession needs to be created and destroyed, but it does dictate how the ISession is stored. In the previous edition of this article, CallContext was used as the sole means for storage. This is a very bad idea for ASP.NET applications. The corrected approach leverages HttpContext when it is available and CallContext when it is not. Although this forces you to include a reference to System.Web, this enables seamless portability between WebForms and WinForms. Incidentally, NUnit tests work like a WinForms application, and, thus, benefit from this portability. The following code, pulled from NHibernateSessionManager, demonstrates switching between the appropriate context and is worth showing in its entirety:
    private ITransaction ThreadTransaction {
        get {
            if (IsInWebContext()) {
                return (ITransaction)HttpContext.Current.
                    Items[TRANSACTION_KEY];
            }
            else {
                return (ITransaction)CallContext.GetData(TRANSACTION_KEY);
            }
        }
        set {
            if (IsInWebContext()) {
                HttpContext.Current.Items[TRANSACTION_KEY] = value;
            }
            else {
                CallContext.SetData(TRANSACTION_KEY, value);
            }
        }
    }
    
    private ISession ThreadSession {
        get {
            if (IsInWebContext()) {
                return (ISession)HttpContext.Current.Items[SESSION_KEY];
            }
            else {
                return (ISession)CallContext.GetData(SESSION_KEY); 
            }
        }
        set {
            if (IsInWebContext()) {
                HttpContext.Current.Items[SESSION_KEY] = value;
            }
            else {
                CallContext.SetData(SESSION_KEY, value);
            }
        }
    }
    
    private bool IsInWebContext() {
        return HttpContext.Current != null;
    }
    
    private const string TRANSACTION_KEY = "CONTEXT_TRANSACTION";
    private const string SESSION_KEY = "CONTEXT_SESSION";

The basic flow for retrieving a session is as follows:

  1. The client code calls NHibernateSessionManager.Instance.GetSession().
  2. If not already instantiated, this singleton object builds the ISession factory, loading HBM mapping files from the appropriate assembly.
  3. GetSession looks to see if an ISession is already bound to the appropriate context.
  4. If an open NHibernate ISession is not found, then a new one is opened (bound to an optional IInterceptor) and stored back to the appropriate context.
  5. GetSession then returns the context-specific ISession.

This flow, as well as the rest of NHibernateSessionManager, follows that closely described in Hibernate in Action, chapter 8 - Writing Hibernate Applications. We'll next see where the session and/or transaction is begun and committed.
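Tying the five steps together, a simplified GetSession might look like the following. This is a sketch that assumes the ThreadSession property shown earlier and a sessionFactory field; the sample's actual method also deals with factory initialization and interceptor registration rules.

```csharp
public ISession GetSession(IInterceptor interceptor) {
    // Step 3: look for an ISession already bound to the appropriate context.
    ISession session = ThreadSession;

    if (session == null) {
        // Step 4: open a new session, optionally bound to an interceptor,
        // and store it back to the appropriate context.
        session = (interceptor != null)
            ? sessionFactory.OpenSession(interceptor)
            : sessionFactory.OpenSession();
        ThreadSession = session;
    }

    // Step 5: return the context-specific ISession.
    return session;
}
```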

BasicSample.Web for Tying it all Together

As expected, the BasicSample.Web project contains application configuration and ASPX pages. In this sample, the code-behind pages act as controllers, communicating with the domain and data access layers directly. This is not best-practice MVC separation - code-behind pages should be seen as part of the view, pure and simple. But for now, it's simple and serves well for the demonstration. (The "enterprise" sample application, discussed later, shows a better example of keeping business logic out of the code-behind pages using Model-View-Presenter. Castle MonoRail and Maverick.NET are two other options for separating logic from views, as well. There's even talk of a Microsoft-backed MVC framework in the works. But I digress...)

Here's a closer look at some of the more interesting bits of BasicSample.Web...

Open Session in View

If you want to leverage NHibernate's lazy-loading (which you most certainly will), then the Open-Session-in-View pattern is the way to go. ("Session" in this context is the NHibernate ISession...not the ASP.NET Session object.) Essentially, this pattern suggests that one NHibernate session be opened per HTTP request. Although session management within the ASP.NET page life-cycle is clear in theory, various implementation approaches may be used. The approach I've taken is to create a dedicated IHttpModule to handle the details of the pattern. Aside from centralizing session management responsibilities, this approach provides the additional benefit that we may implement the Open-Session-in-View pattern without putting any session management code into our ASPX pages.

To see how this has been implemented, take a look at BasicSample.Web/App_Code/NHibernateSessionModule.cs. The following section is then included in web.config to activate the IHttpModule:

<httpModules>
  <add name="NHibernateSessionModule" 
       type="BasicSample.Web.NHibernateSessionModule" />
</httpModules>

The IHttpModule opens a transaction at the beginning of a web request, and commits/closes it at the end of the request. The following is an example of modifying the IHttpModule so that an IInterceptor gets bound to the session as well as being contained within a transaction:

public void Init(HttpApplication context) {
    context.BeginRequest += 
          new EventHandler(InitNHibernateSession);
    ...
}

private void InitNHibernateSession(object sender, EventArgs e) {
    IInterceptor myNHibernateInterceptor = ...

    // Bind the interceptor to the session.
    // Using open-session-in-view, an interceptor 
    // cannot be bound to an already opened session,
    // so this must be our very first step.
    NHibernateSessionManager.Instance.RegisterInterceptor(
        myNHibernateInterceptor);

    // Encapsulate the already opened session within a transaction
    NHibernateSessionManager.Instance.BeginTransaction();
}

Even if it goes unused, there's very little overhead associated with opening/closing the transaction, as NHibernate doesn't actually open a database connection until one is needed. (With that said, there's room for improvement and a point-of-susceptibility with this approach; see Where to go from here? for areas of further research.) Other strategies that you may want to consider are opening a session not associated with a transaction - hey, it works for eBay - and/or registering an IInterceptor with the NHibernate session. (Using an IInterceptor is great for auditing. See Hibernate in Action, section 8.3.2 - Audit Logging, for more details.)
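
To make the full request life-cycle concrete, the commit/close half of the module can be sketched as follows. Note that CommitTransaction and CloseSession are assumed method names for illustration; check NHibernateSessionManager in the sample for its actual members.

```csharp
public void Init(HttpApplication context) {
    context.BeginRequest += new EventHandler(BeginTransaction);
    context.EndRequest += new EventHandler(CommitAndCloseSession);
}

private void BeginTransaction(object sender, EventArgs e) {
    NHibernateSessionManager.Instance.BeginTransaction();
}

// Commits the transaction and closes the session once the page has
// finished rendering. (CommitTransaction and CloseSession are assumed
// names; rollback-on-exception is left to the session manager.)
private void CommitAndCloseSession(object sender, EventArgs e) {
    try {
        NHibernateSessionManager.Instance.CommitTransaction();
    }
    finally {
        NHibernateSessionManager.Instance.CloseSession();
    }
}
```

The key design point is that the try/finally guarantees the session is closed even if the commit throws, so a failed request can't leak a session into the next one.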

NHibernate Settings Within web.config

There are two key settings within web.config to optimize NHibernate: hibernate.connection.isolation and hibernate.default_schema. By default, NHibernate uses IsolationLevel.Unspecified as its database isolation level. In other words, NHibernate leaves it up to the ADO.NET provider to determine what the isolation level is by default. If the provider you're using has a default isolation level of Serializable, this is a very strict level of isolation that can be overkill for most application scenarios. A more reasonable setting to start with is ReadCommitted. With this setting, "reading transactions" don't block other transactions from accessing a row. However, an uncommitted "writing transaction" blocks all other transactions from accessing the row. Other provider defaults include (note that they are subject to change by version):

  • SQL Server 2000 - Read Committed
  • SQL Server 2005 - Read Committed
  • Firebird - Read Committed
  • MySQL's InnoDB - Repeatable Read
  • PostgreSQL - Read Committed
  • Oracle - Read Committed

The other key setting, hibernate.default_schema, is easily overlooked but can have a significant impact on querying performance. By default, table names within prepared NHibernate queries - such as those from CreateCriteria - are not fully qualified; e.g., Customers vs. Northwind.dbo.Customers. The crux of the problem is that sp_executesql, the stored procedure used to execute NHibernate's parameterized queries, does not efficiently optimize queries unless the table names are fully qualified. Although this is a small syntactic difference, it can slow query speeds by as much as an order of magnitude in some cases. Explicitly setting hibernate.default_schema can provide as much as a 33% overall performance gain on data intensive pages. The following is an example of declaring these settings in web.config:

<add key="hibernate.connection.isolation" value="ReadCommitted" />
<add key="hibernate.default_schema" value="Northwind.dbo" />

A Simple List, Add and Update Form

The web project contains a few web pages:

  • Default.aspx: Provides the navigational homepage for the application.
  • ListSuppliers.aspx: Fulfills the user story "User may view listing of suppliers and their products." It's a good example of using a custom collection.
  • ListCustomers.aspx: Fulfills the user story "User may view listing of existing customers." In the enterprise sample, this page loads via Model-View-Presenter (MVP).
  • EditCustomer.aspx: Fulfills the user stories "User may view details of customer," "User may view a listing of past orders placed by a customer," "User may view a listing of order summaries," and "User may edit customer details." Certainly a lot for one page to take on! The enterprise sample demonstrates how to split up these responsibilities using MVP and handle events coming from the views.
  • AddCustomer.aspx: Can you guess which user story this one fulfills?

The important thing to note is that the code-behind pages work with a DAO factory to talk to the database; i.e., the code isn't bound to a concrete implementation of a data access object. This makes it much easier to swap DAO implementations and unit test your code without depending on a live database. With everything in place, the following is an example of how easy it is to retrieve all the customers in the database:

IDaoFactory daoFactory = new NHibernateDaoFactory();
ICustomerDao customerDao = daoFactory.GetCustomerDao();
IList<Customer> allCustomers = customerDao.GetAll();

In the above code, a concrete reference to NHibernateDaoFactory is retrieved via new. In production code, as discussed in the Architectural Notes, this reference may (should?) be injected at runtime using an Inversion of Control (IoC) container such as Castle Windsor. The enterprise sample includes an example of using this utility for dependency injection.
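
For orientation, the abstractions behind the snippet above can be sketched as follows. This is a simplified sketch: the generic IDao<T, IdT> shape and the exact type parameters are assumptions, and the sample's real interfaces carry additional members.

```csharp
// Defined in the business layer (BasicSample.Core), so the domain
// never references the data layer directly.
public interface IDaoFactory {
    ICustomerDao GetCustomerDao();
    IOrderDao GetOrderDao();
}

// A generic base interface keeps each concrete DAO declaration minimal.
public interface IDao<T, IdT> {
    T GetById(IdT id, bool shouldLock);
    IList<T> GetAll();
    T Save(T entity);
    void Delete(T entity);
}

public interface ICustomerDao : IDao<Customer, string> { }
public interface IOrderDao : IDao<Order, long> { }
```

Because NHibernateDaoFactory is the only type that knows about the concrete NHibernate-backed DAOs, swapping in a different persistence mechanism (or a test double) means providing a different IDaoFactory implementation and nothing more.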

While it's acceptable to use the new keyword to create NHibernateDaoFactory within your code-behind, or controller, your domain objects should never create DAO dependencies directly. Instead, their DAO dependencies should be provided via a public setter or via a constructor. (IoC can help here as well.) As detailed previously, this greatly enhances your ability to unit test with test double DAOs. For example, the following code, found within BasicSample.Tests/Data/CustomerDaoTests.cs, retrieves a customer and gives it its DAO dependency to perform the next action:

IDaoFactory daoFactory = new NHibernateDaoFactory();
ICustomerDao customerDao = daoFactory.GetCustomerDao();

Customer customer = customerDao.GetById(Globals.TestCustomer.ID, false);
// Give the customer its DAO dependency via a public setter
customer.OrderDao = daoFactory.GetOrderDao();

Using this technique, the business layer never needs to depend directly on the data layer. Instead, it depends on interfaces defined within the same layer, as defined in the BasicSample.Core project.
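
To illustrate the testing payoff, a unit test can hand the domain object a hand-rolled test double instead of the NHibernate-backed DAO. Everything below - MockOrderDao, GetOrdersByCustomer, and Customer.GetOrders - is hypothetical, sketched only to show the shape of such a test.

```csharp
// A hypothetical test double implementing the business-layer interface;
// it returns canned data and never touches a database.
public class MockOrderDao : IOrderDao {
    public IList<Order> GetOrdersByCustomer(Customer customer) {
        IList<Order> orders = new List<Order>();
        orders.Add(new Order());
        return orders;
    }
    // ...remaining IOrderDao members would be stubbed similarly...
}

[Test]
public void CanGetCustomerOrdersWithoutDatabase() {
    Customer customer = new Customer("Acme Anvils");
    customer.OrderDao = new MockOrderDao();   // inject via public setter
    Assert.AreEqual(1, customer.GetOrders().Count);
}
```

Because the domain object only sees the IOrderDao interface, the test runs in milliseconds and never depends on the state of a live Northwind database.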

Extending the Basics to an "Enterprise" Solution

What has been discussed thus far has included basic techniques for integrating NHibernate with ASP.NET and unit tests. But what has not been sufficiently demonstrated is the inclusion of these techniques into a scalable, reusable architecture. The "Enterprise" NHibernate Sample demonstrates one such example and is discussed next. As "enterprise" carries many connotations, this article uses this word to describe any real-world, database-backed ASP.NET application.

To get the enterprise sample up and running, you'll need to perform the following steps in addition to those described previously in Running the Sample Applications:

  1. Unzip the enterprise sample to the folder of your choice.
  2. Create a new virtual directory within IIS. The alias should be EnterpriseNHibernateSample, and the directory should point to the EnterpriseSample.Web folder that was created after unzipping the application.
  3. Open EnterpriseSample.Web/Config/NorthwindNHibernate.config to modify the database connection string to connect to a Northwind database on Microsoft SQL Server.
  4. Open the following files and change the fully qualified path, which points to EnterpriseSample.Web/Config/NorthwindNHibernate.config, to the corrected path on your machine:
    • EnterpriseSample.Web/Web.config
    • EnterpriseSample.Web/Config/CastleComponents.config
    • EnterpriseSample.Tests/App.config
    • EnterpriseSample.Tests/Globals.cs
    I don't like having to set this path in this many locations and would like to centralize it to just Web.config and App.config. See Where to go from here? for additional thoughts on how this could be improved.
  5. Open your web browser to http://localhost/EnterpriseNHibernateSample/Default.aspx, and you're off and running!

Real-World Architecture

Although it's best to avoid premature generalization, it's highly beneficial to begin project development with a judiciously planned architecture. A solid architectural foundation is easy to extend, establishes a clean separation of concerns, and provides built-in guidance for developers working with it. There is a fine balance between generalizing too much and still establishing a solid architecture. (I discuss this balance further in a previous post.) Although one shoe does not fit all, the enterprise sample application demonstrates what I believe to be a solid architectural framework for medium to large sized ASP.NET projects. The general architectural structure, including direction of concrete and interface dependencies, is shown below.

Application Architecture

High-level structure, motivations, and assumptions of this architecture are discussed here.

Beyond the Basics

The enterprise sample extends the basic sample in the following ways:

  • AbstractNHibernateDao.cs, NHibernateSessionManager.cs, NHibernateSessionModule.cs, and IDao.cs have been placed into their own reusable/extendable project called ProjectBase.Data.
  • DesignByContract.cs has been placed into its own reusable/extendable project called ProjectBase.Utils.
  • The can't-debug-NHibernate-without-it log4net has been added and is configured within web.config.
  • An error logging IHttpModule has been added to the fray.
  • NHibernateSessionManager.cs has been "upgraded" to support concurrent use of multiple databases. Accordingly, the functionality described within the article Using NHibernate with Multiple Databases has been ported to the sample application. You may certainly revert it back to single database support by using code from the basic sample project.
  • The design pattern Model-View-Presenter has been employed to separate business logic from ListCustomers.aspx and EditCustomer.aspx. (The version of MVP employed is in line with the definition of Supervising Controller, an MVP specialization.) Accordingly, a new project called EnterpriseSample.Presenters has been created to hold view interfaces and presenters. Associated unit tests have also been added to EnterpriseSample.Tests.
  • A simple web service, GetCustomer.asmx, has been added which responds with DTOs populated from data retrieved by NHibernate. Please note that this example should not necessarily be considered a best practice example. I am a novice web service designer and am unsure as to what the best practices, with respect to NHibernate integration, really are. See Where to go from here? for an additional note concerning further research needed in this area. A number of related comments have also been included in GetCustomer.cs.
  • Castle Windsor has been integrated to inject the DAO factory into a Page Controller for pages, user controls and web services. Keep in mind that Castle Windsor may be leveraged much further than this.
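
As a rough sketch of what that injection looks like at runtime, the page controller can pull its DAO factory from the Windsor container rather than newing it up. The container setup shown here is illustrative rather than copied verbatim from the sample.

```csharp
// Bootstrap Windsor from its XML configuration and resolve the DAO
// factory by interface; pages then receive an IDaoFactory without
// ever referencing NHibernateDaoFactory directly.
IWindsorContainer container = new WindsorContainer(
    new XmlInterpreter("Config/CastleComponents.config"));
IDaoFactory daoFactory = container.Resolve<IDaoFactory>();
```

In practice the container would be created once (e.g., in Global.asax) and shared, not rebuilt per request.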

Taking a close look at the example code may be daunting at first glance (and for the next couple of glances after that). But once everything's in place, it is powerful, flexible, and is minimally intrusive to the code you're developing.

Where to go from here?

If you were reading a text on an emerging technology, this section would be called "Areas for Further Research." I suppose that would be just as applicable since that's exactly what's described here. There are a few items that could be modified for better flexibility or extended for greater usability. Here are a few ideas to fill any spare time you may have:

  • The Guidance Automation Toolkit could be leveraged to bundle these best practices into an installable base project.
  • Few best practices exist for NHibernate integration with web services. Exploring this area and defining at least "better" practices would be very beneficial to the NHibernate community. If you're interested in taking a stab at it, Thomas Erl's Service-Oriented Architecture is a great place to start.
  • This framework supports transactions which begin and end with the HTTP request. A more ideal scenario would be to use attributes to define when a transaction should be begun. The transaction would then commit at the end of the designated method. Using attributes would avoid a direct dependency on the NHibernateSessionManager class. An example attribute would be [Transaction] and would be placed at the top of a method. Furthermore, if working with multiple databases, the attribute would contain some sort of ID designating which database communications require a transaction, such as [Transaction("Primavera")]. Integration with Castle Project's Automatic Transaction Management facility would be a good place to start.
  • On a related note, a few developers have complained that using Open-Session-in-View, via NHibernateSessionModule, opens extra, unused transactions for digital assets such as images and CSS files. A discussion, and candidate solutions, may be found in the article's comments. I'm interested in hearing further suggestions for cleanly resolving this occasional problem. (Using attributes to start transactions, as described in the previous point, would also fix this.)
  • The multiple database implementation passes around each database's config file location to properly identify which ISession belongs with which database. This results in the config file's location being duplicated in both web.config and in the Castle Windsor configuration file. My preference would be to get rid of this duplication, pass a simpler ID string instead, and/or eliminate the need for the string to be passed to the constructor of DAOs altogether.
  • Using the provided framework, it is not possible to manage a single transaction across two databases located on different servers. Perhaps the System.Transactions namespace could be leveraged to resolve this.
  • A template Continuous Integration configuration could be created for the enterprise example including CruiseControl.NET, NAnt (or MSBuild), NDepend, FxCop, SandCastle, NDbUnit, NStatic, and FIT/FitNesse.
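
The transaction-attribute idea above might look like the following from a developer's point of view. This is purely hypothetical code: TransactionAttribute and any interception wiring behind it do not exist in the sample.

```csharp
// A hypothetical attribute marking methods that require a transaction.
// An interceptor (e.g., Castle's Automatic Transaction Management
// facility) would begin a transaction before the method runs and
// commit (or roll back) afterwards.
[AttributeUsage(AttributeTargets.Method)]
public class TransactionAttribute : Attribute {
    public TransactionAttribute() { }
    public TransactionAttribute(string databaseId) {
        this.databaseId = databaseId;
    }
    // Identifies which database's session needs the transaction
    // when working with multiple databases.
    public string DatabaseId {
        get { return databaseId; }
    }
    private string databaseId;
}

public class OrderService {
    // The transaction now spans exactly this method rather than
    // the entire HTTP request.
    [Transaction("Primavera")]
    public virtual void TransferOrders(Customer from, Customer to) {
        // ...data access work here...
    }
}
```

Besides removing the direct dependency on NHibernateSessionManager, this would also sidestep the image/CSS problem noted above, since requests for static assets would never hit an attributed method.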

Please feel free to discuss your ideas for these topics in the comments below, in the NHibernate forums, or in a new CodeProject article! You can also contact me via http://devlicio.us/blogs/billy_mccafferty if you'd be interested in contributing a solution to one of these topics.

Migrating from NHibernate 1.0x to 1.2

If you're migrating from 1.0x to 1.2, you'll most definitely run into a few migration issues. The official overview of the changes to the API may be found within the NHibernate 1.2 Migration Guide. Listed below are a few items that you should pay particular attention to:

  • Update all references to <hibernate-mapping xmlns="urn:nhibernate-mapping-2.0"> with urn:nhibernate-mapping-2.2. References may be found in HBM and config files.
  • With 1.2, all classes and collections are now loaded lazily by default. Consequently, if you run your application without modifying the lazy attribute, you'll most likely receive a number of "method x should be virtual" errors. To resolve these, either declare the members of lazily loaded classes as virtual, or set lazy="false" for each class and collection that you do not want loaded lazily.
  • Since the use of generics is now natively supported, you'll no longer need Ayende's extremely useful NHibernate.Generics utility. A detailed example of refactoring away from Ayende's NHibernate.Generics may be found at my blog at devlicio.us.
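
For example, a migrated HBM file updates the mapping namespace and, for any class you don't want proxied, declares the lazy attribute explicitly. The Customer mapping shown is illustrative; adapt the names to your own mappings.

```xml
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <!-- lazy="false" opts this class out of NHibernate 1.2's new
       lazy-by-default behavior; without it, the class's public
       members must be declared virtual. -->
  <class name="Customer" table="Customers" lazy="false">
    <id name="ID" column="CustomerID" />
    <!-- ...properties and collections... -->
  </class>
</hibernate-mapping>
```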

After performing any migration work to NHibernate 1.2, it's good to test the following for each parent/child relationship:

  • Do updates to a child, via the parent, still work?
  • Do creations of new children, added to the parent, persist (or not persist) to the database as expected?
  • Do deletions of existing children from the parent, or by deleting the child directly, work correctly?
  • Have other CRUD, cascade scenarios been tested?

Summary of NHibernate/ASP.NET Best Practices

The following is a quick summary of the best practices conveyed within the sample application:

  • Business objects should communicate with data access objects via interfaces; i.e., always depend on abstractions.
  • Concrete data access objects should implement interfaces defined by the "client", the business logic layer.
  • Expose data access objects via an abstract factory to assist with testing and to reduce coupling.
  • Keep NHibernate session management details out of the presentation and business logic layers.
  • Use unit test categories to easily turn off unit tests that depend on a database connection.
  • Set hibernate.default_schema within web.config to give NHibernate a great performance boost!

I hope this article has helped with putting best practices into place for leveraging the benefits of ASP.NET, NHibernate, and unit testing. I'm currently having great success with this approach on my own projects, and look forward to hearing your experiences as well. Please let me know if you have any questions or suggestions. If you want to see what I'm up to at any given moment, you can always follow my latest diatribes here.

Article History

  • 2006.03.12 - Initial posting.
  • 2006.03.13 - Added BasicSample.Tests along with the inclusion of a mock DAO and related discussion above.
  • 2006.03.28 - Clarified the default isolation level and added a plug for Model-View-Presenter.
  • 2006.04.27 - Modified NHibernateSessionManager to store ISession with CallContext instead of HttpContext; updated article text to reflect changes as well. [Author's note: Bad idea!]
  • 2007.04.02 - Published the 1.2nd edition of this article including compatibility with NHibernate 1.2, an expanded emphasis on test-driven development, and having updated recommendations.
  • 2007.05.01 - Fixed a number of important bugs, gave DomainObject an overhaul, added usage of custom collections, and expanded the MVP examples.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
