A while back, I had reason to look at one of the assemblies in my “toolbox” which wrapped the System.Data.SqlClient assembly. As often happens when you look at old code, you cringe and wonder what you were thinking when you wrote it, so I decided to refactor the code using SOLID principles to make it easier to understand, test, maintain and extend, and so ADO.NET DAL was born.
For those of you who follow me on Twitter, you’ll know I spent a long time trying to ensure that the library had 100% code coverage. Unfortunately I never made it, one method still eludes me, but 99% coverage is pretty good.
A Data Access Layer? Really?
In this age of ORMs, micro-ORMs and Simple.Data, are people still using ADO.NET directly? Are people writing data access layers which don’t return a fully populated POCO? I can safely say yes to both of those questions, having seen code with numerous native ADO.NET calls, and various blog posts and tweets from people who don’t like or don’t want to use an ORM.
Some people see ORMs as an anti-pattern that actually hinders development as a project becomes more complex; others may be dealing with legacy software that uses ADO.NET extensively, and this use of native ADO.NET makes the code hard to test. Often the amount of effort required to refactor it is so large that it is simply left as it is.
Why Should I Use It?
If you are working on legacy software which has ADO.NET code scattered throughout the codebase, this library should make it easy for you to refactor that code without taking a lot of time, allowing you to write tests for it and making the code a lot cleaner in the process.
If you don’t want to use an ORM, then this library wraps the ADO.NET functionality in a nice package that you can just use, meaning you don’t have to worry about connection handling, transactions or the creation of parameters.
What Does It Do For Me?
As mentioned above, the library is designed to deal with connection handling; all you need to do is provide the SQL, or the name of the stored procedure, you want to execute to the appropriate method, and the library will create the command object, configure it and execute it for you.
The intention behind the code is to simplify the use of ADO.NET by wrapping the boilerplate code and exposing just the core functionality through a facade, meaning you can focus on what you want your code to do rather than the how.
If you have native ADO.NET in your classes, you can simply replace blocks of code with calls to the appropriate methods. This means you need to make only minimal changes to the overall structure of your code, without worrying about side effects.
The library is test friendly, making it easy to mock/stub/fake the interaction between your code and the database, and it is also easy to inject into any classes that may need it using your favourite Inversion of Control container.

To further help with testing, the public API uses the System.Data.Common namespace, which is designed to work with various data providers and exposes common base types that can again be mocked/stubbed/faked.
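As a quick illustration of that test-friendliness, here is a minimal sketch of stubbing the library behind a consuming class with Moq. The `CustomerService` class is a hypothetical consumer invented for the example, and the exact `ExecuteScalar` signature is an assumption based on the calls shown in this post:

```csharp
using System.Data.Common;
using Moq; // any mocking framework would do

// A hypothetical consumer that takes the library's IDataAccess via its constructor.
public class CustomerService
{
    private readonly IDataAccess _dataAccess;

    public CustomerService(IDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public object GetValue()
    {
        return _dataAccess.ExecuteScalar("GetValue");
    }
}

public class CustomerServiceTests
{
    public void GetValue_ReturnsStubbedResult()
    {
        // Stub ExecuteScalar so that no real database is ever touched.
        var dataAccess = new Mock<IDataAccess>();
        dataAccess.Setup(d => d.ExecuteScalar("GetValue", It.IsAny<DbParameter[]>()))
                  .Returns(42);

        var service = new CustomerService(dataAccess.Object);

        Assert.AreEqual(42, service.GetValue());
    }
}
```

Because the class under test only sees the `IDataAccess` interface, the test runs entirely in memory.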
Show Me the Code….
Here is an example of some pretty standard ADO.NET code that you may find in many code bases:
```csharp
using (var connection =
    new SqlConnection(ConfigurationManager.ConnectionStrings["theDatabase"].ConnectionString))
{
    var command = new SqlCommand("GetValue", connection);
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add(new SqlParameter("firstParameter", parameterValue));

    connection.Open();

    result = command.ExecuteScalar();
}
```
The code above also has a dependency on ConfigurationManager. Whilst I’m not saying this is always the way the connection string is provided, I’ve seen it more times than I’d care to remember.
So what does this look like with the DAL?
```csharp
result = dataAccess.ExecuteScalar(storedProcedureName,
    dataAccess.ParameterFactory.Create("firstParameter", DbType.Int32, 123));
```
You can accomplish the same in a single line… OK, so I cheated a little and missed out the creation of the dataAccess object itself, but that is just:
```csharp
IDataAccess dataAccess = new DataAccess(connectionString);
```
And if you are using dependency injection, you’d get your IoC container to do this for you (you are using IoC aren’t you?) which means that you won’t have it scattered throughout the code base.
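By way of illustration, a container registration might look something like this. I’m using Ninject here purely as an example (any container will do), the connection string name is the one from the earlier snippet, the constructor parameter name is an assumption, and CustomerRepository is a hypothetical consumer, not part of the library:

```csharp
using System.Configuration;
using Ninject;

var kernel = new StandardKernel();

// Bind the interface to the concrete implementation, supplying the
// connection string that the DataAccess constructor expects
// (assuming its parameter is named "connectionString").
kernel.Bind<IDataAccess>()
      .To<DataAccess>()
      .WithConstructorArgument("connectionString",
          ConfigurationManager.ConnectionStrings["theDatabase"].ConnectionString);

// Any class that takes an IDataAccess in its constructor, such as a
// hypothetical CustomerRepository, now gets one resolved automatically.
var repository = kernel.Get<CustomerRepository>();
```

The point is that the `new DataAccess(...)` call lives in exactly one place, the composition root, rather than being scattered through the code base.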
If you look at the code, you’ll see that the main IDataAccess interface has a ParameterFactory object that you use to create any parameters that you may need, and because the method calls all take a params array of DbParameter objects, you can simply add more and more in the method call like this:
```csharp
result = dataAccess.ExecuteScalar(storedProcedureName,
    dataAccess.ParameterFactory.Create("Parameter1", DbType.Int32, 123),
    dataAccess.ParameterFactory.Create("Parameter2", DbType.AnsiString, "a"),
    dataAccess.ParameterFactory.Create("Parameter3", DbType.Double, 12.01));
```
But equally, because it’s a params array, you can call the same method with no parameters at all:
```csharp
result = dataAccess.ExecuteScalar(storedProcedureName);
```
Anything Else I Should Know?
The library also provides a basic unit of work implementation through normal ADO.NET transactions:
```csharp
dataAccess.Transactions.BeginTransaction();

try
{
    int id = (int)dataAccess.ExecuteScalar(doUpdate,
        dataAccess.ParameterFactory.Create("1", DbType.Int32, 1));

    dataAccess.ExecuteNonQuery(doUpdate2,
        dataAccess.ParameterFactory.Create("id", DbType.Int32, id),
        dataAccess.ParameterFactory.Create("value", DbType.String, "abc"));

    dataAccess.Transactions.CommitTransaction();
}
catch (Exception)
{
    dataAccess.Transactions.RollbackTransaction();
    throw;
}
```
The other feature that you may want to be aware of is that each method has an overload that gives you access to the actual DbCommand that was executed, so if you need to get to the return value, return parameters or anything else on the command, you simply provide an out parameter to populate:

```csharp
dataAccess.ExecuteNonQuery(out commandToCheck, storedProcedureName);
```
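For example, if the stored procedure has an output parameter, you could read it back off the executed command afterwards; the parameter name "result" here is purely illustrative:

```csharp
DbCommand commandToCheck;
dataAccess.ExecuteNonQuery(out commandToCheck, storedProcedureName);

// The command that was actually executed is now available, so any
// output or return parameters can be inspected on it.
var outputValue = commandToCheck.Parameters["result"].Value;
```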
Future Plans
At the moment, the library only supports SQL Server, as that was the focus of the original library that I refactored. However, I’m hoping to extend the number of database types the library supports going forward, with the first additional type being SQL Compact.
This seemingly simple change will offer plenty of challenges, as I would like to keep the public API the same, meaning I have to work out which type of database you want under the covers; quite how I’m going to do that, I haven’t decided yet.
Feedback Please
I’d love to hear your thoughts on this: is it something you like? Is it something you think you could use? Would I be wasting my time extending this further, given the alternatives that are out there?
Please leave me a comment or contact me about it and feel free to fork the code and have a play.