After due consideration (and some input from other developers at work), I found the old article somewhat lacking. This rewrite improves the old code, and explores the magic of the singleton pattern.
Preface
This article is a fairly significant rewrite of a code/article I posted last year, and reflects modifications to that code, as well as new ideas based on the way other programmers at work have used it. The code is significantly changed, so I thought it warranted a whole new article instead of just pointing back to my old code. Beyond that, this code is used in an app for which I'll be writing an article in the future, and I want to simply provide a link to this article instead of having to describe everything there (or make what will be a long article even longer). Those of you with the attention span of a two-slice toaster will rejoice in my sensitivity to your needs. See? I'm a sensitive guy. No - really.
You may notice that I have modified some of the CodeProject styles regarding headers and <pre> block background colors (mostly to delineate between code blocks and text output). Note to CP staff/editors: please don't muck around with these styles unless absolutely necessary.
There are no screen shots in this article, because there is no GUI associated with the class library or the test application. Instead, I copy/pasted the console output into <pre> blocks because it's easier to read and doesn't run afoul of the CodeProject article image width restrictions.
Finally, fair warning - this is an admittedly long read. That's the way I roll, and in my oh-so-outlaw opinion, it's the way pretty much ALL articles should be. That's why they're called "articles", instead of "tips". New guys? Watch and learn.
Introduction
Why not just use an ORM? Let's get this out of the way right up front. I don't use ORMs. I abhor the idea of them, because they try to be (and need to be) a one-size-fits-all solution, and assume that you should never stray from whatever "one true path" they tend to advocate. Furthermore, I don't know who decided it was a bad idea to use stored procedures, but ORMs appear to adhere to this paradigm, and generally provide poor support for their use. I much prefer stored procedures over in-code queries because a) stored procedures afford much higher inherent security, and b) I find it more convenient to modify the back-end instead of having to deploy a new version of an app. Be it known that I'm not the slightest bit interested in arguing the point, nor even rationally discussing it, so don't waste your time (or mine) filling the comments section with such drivel.
What databases does this code support? This code supports SQL Server (I use the 2014 Developer Edition, if you're interested), and should be compatible with all versions of SQL Server, from 2008R2 to the present. Other databases don't interest me, and besides, it would take a lot longer to write this code/article if I were to create alternatives for every "better idea" that's out there. It should be a simple matter for you - a programmer - to figure out how to make this code your own. Once again, I'm not interested in the typical fanboy claptrap regarding Oracle, MySql, YourSql, or WhateverSQL you might be using, and I won't be receptive or willing to help you modify the code to do so. You're all programmers, so work the problem, and don't bother me about it.
Does this code work in .Net Core? I understand that the girls out there that use .NET Core might also be able to use this code, but since I don't (yet), I won't be providing code that supports it, or even suggestions about how to convert it to be compatible. If you're even semi-self-aware (and rightfully assuming you're a programmer), you'll be able to easily identify the parts of the code that require modification to support your own swirling vortex of insanity. If you want to convert this code to .Net Core, be my guest, but make sure you post an article about it - you can even post it as an alternative to this article. Make sure you fully describe what it took to convert.
No .Net Core support? Really? Simply out of curiosity, I tried to convert the project to .Net Core. The steps that were required are shown at the end of this article.
Now that we have the pleasantries out of the way...
General
This article reflects a technique I implemented in an environment that supports two networks (classified and unclassified), each having five environments (on different virtual machines), supporting 20 applications, and accessing over 30 databases. In the interest of brevity, I removed everything not directly related to this code (network, environment, and applications) because a) it's proprietary, b) it just cluttered up the code, and c) it didn't apply to the desired aim of the article. If anyone is doing the math, this scheme supported the possibility of having 6000 possible connection strings (although realistically, each of our apps would only require 4-5 to be available).
Originally, I put the onus on the programmer to enable some of the ideas via base entity classes, in the form of various ways to implement SqlParameter[] creation. In this version of the article, I included a more automatic method for it, without removing the ability of the programmer to do it the old way. This new idea is explained in the section that talks about how reflection is used in the library.
To make this code suitable for public distribution, I removed the connection string list class that manages the aforementioned connection string chaos that we have to deal with at work, but left in the security-minded code, such as encrypted databases and obfuscated connection strings.
This article demonstrates a reasonably generic approach to using ADO.NET that supports a high number of connection strings. Not only that, but all connection strings are built on-demand, which absolves the programmer of maintaining config files, unless he/she wants to do that.
The Code
The following are nice-to-know facts regarding this code:
- Used Visual Studio 2017 (should be able to use 2013 or higher without significant gymnastics on your part)
- .NET 4.7.2 (you need to use 4.0 or greater)
- SQL Server 2014 (should be able to use 2008R2 or higher)
- Windows 7 VM (yes, I'm a barbarian, but the code should work in whatever bug-ridden Microsoft abomination you might be using)
- Reflection is used to achieve the generic nature of the code. I realize reflection is generally frowned upon because it's slow, but some reflection overhead is unavoidable given the generic types this code relies on.
- Singletons are used for single-instance objects in the TestHarness application. I normally just create a static global class that instantiates globally needed objects, but I thought I'd stray outside that particular comfort zone and do something I'd never had real cause to use.
- Beyond casual mention, SQL coding techniques and best practices are pretty much outside the scope of this article. Using this code (or even your favorite ORM) does NOT absolve you of the responsibility of being familiar with SQL server and how to write adequate stored procedures and queries. Your job is being a programmer, so BE A PROGRAMMER.
The sample solution architecture is implemented in such a way as to minimally exercise the code. Feel free to release your inner programmer and remake it in your own image. I retained the comments in the code snippets in an effort to reduce the typed article narrative. You may notice html-style line-breaks in the intellisense comments. The reason is that VS2019 is supposed to support line-breaks there, but older versions do not. This lack of support in older versions makes my prolific use of comments kinda unwieldy, because they end up being REALLY long and many times extend beyond the edge of the screen. Blame Microsoft.
Connection Strings with [PWConnectionString]
As stated above, the PWConnectionString class presented here was developed due to the extreme breadth of operating environments we have to support at work. In our code, we have a ConnectionStringList class that implements proprietary code to support our infrastructure requirements, but it wasn't necessary to illustrate the actual reason we're here. However, it might be useful if you want to obfuscate your own connection strings, or ease your eventual quagmire of web.config files in a multi-environment situation such as ours.
First, we have the expected connection string components. These properties support the most commonly used attributes in a connection string, and are mostly unremarkable.
Of most notable interest is the fact that the connection string is assembled from its component parts, and automatically adapts to the use of Windows login or database credentials. Also, the ConnectionString property returns the assembled connection string as a base-64 encoded string. At work, we do this to keep the connection string obfuscated until it's actually needed by a SqlConnection object. This keeps the security nazis off our backs when they're checking the code with source code security scanning apps, such as Fortify.
#region Properties
public bool EncryptTraffic { get; set; }
public string Name { get; set; }
protected string Server { get; set; }
protected string Database { get; set; }
protected string UserID { get; set; }
protected string Password { get; set; }
private bool IsValid
{
get
{
string value = string.Concat( this.Server, ","
,this.Database, ","
,this.UserID, ","
,this.Password);
string[] parts = value.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
return (parts.Length >= 2);
}
}
private string Credentials
{
get
{
string value = "Integrated Security=";
if (string.IsNullOrEmpty(this.Password) && string.IsNullOrEmpty(this.UserID))
{
value = string.Format("{0}true;", value);
}
else
{
value = string.Format("{0}false; user id={1}; password={2};", value, this.UserID, this.Password);
}
return value;
}
}
private string WithEncryptedTraffic
{
get
{
string value = string.Empty;
if (this.EncryptTraffic)
{
value="Encrypt=true; TrustServerCertificate=true;";
}
return value;
}
}
public string ConnectionString
{
get
{
string value = string.Empty;
if (this.IsValid)
{
value = string.Format("data source={0}; initial catalog={1}; {2} {3}", this.Server, this.Database, this.Credentials, this.WithEncryptedTraffic);
}
else
{
throw new InvalidOperationException("One or more required connection string parameters were not specified.");
}
return value.Base64Encode();
}
}
#endregion Properties
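To make that behavior concrete, here's a hypothetical usage sketch. It assumes the constructor signature shown later in the TestHarness (name, server, database, encryptTraffic, uid, pwd, with the last three defaulted); the decoded text shown in the comment is what the format string above produces.
PWConnectionString connStr = new PWConnectionString("Test", "localhost", "master");
// connStr.ConnectionString returns the base-64 encoded form of something like:
//   "data source=localhost; initial catalog=master; Integrated Security=true; "
string encoded = connStr.ConnectionString;
string plainText = encoded.Base64Decode();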
Finally, we have an override for the ToString() method, which simply returns the value of the ConnectionString property. This is simply a "just-in-case" approach to getting the connection string, and puts the ToString() override to good use. (Normally, if you call ToString() for a class that doesn't have an overridden version of the method, you get the name of the class instead of anything meaningful.)
public override string ToString()
{
return this.ConnectionString;
}
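The Base64Encode() and Base64Decode() string extensions used above (and later by the DAL just before it opens a SqlConnection) live in the ObjectExtensions library referenced by the solution and aren't shown in the article. A minimal sketch of what they presumably look like follows; treat it as an assumption, since the shipped implementation may differ.
public static class StringExtensions
{
    // Minimal sketch: UTF8 text to base-64 and back.
    public static string Base64Encode(this string text)
    {
        return Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(text));
    }
    public static string Base64Decode(this string encoded)
    {
        return System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(encoded));
    }
}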
The [DAL] (Data Access Layer) Class
Finally, the reason we're all actually here. The DAL object is what makes this whole thing work in a generic way. The need to manually write code to identify properties for inserting/updating into the database is, quite frankly, the main reason various corporate entities developed ORMs. ORMs can automatically identify what properties exist in a given entity, but where they go wrong is when they assume that a given entity is an exact representation of a table in the database. This is fine if you're not doing anything in the least bit complicated, but when you start throwing ORMs at real-world (and admittedly sometimes wonky) problems, they just get in the way.
NOTE: Pretty much all of the methods in the DAL are protected virtual. The reason is that it's entirely plausible that a given programmer may want to use all of those methods from a derived class, override one or more of them, or even both. I am of the personal opinion that making methods private and non-virtual just interferes with the programmer's use of a given class.
My DAL object makes heavy use of reflection (there are many good discussions on the internet about what reflection is, and how it works in .Net) so that most of the dreariness of writing code by hand is reasonably mitigated. However, it does not absolve us of writing a certain amount of code.
First, we establish some necessary properties, and initialize them in the constructor. Descriptions of each property's purpose are in the following code snippet.
#region properties
public bool FailOnMismatch { get; set; }
public int TimeoutSecs { get; set; }
public string ConnectionString { get; protected set; }
public bool AddReturnParamIfMissing { get; set; }
public int BulkInsertBatchSize { get; set; }
protected int BulkCopyTimeout { get; set; }
protected SqlBulkCopyOptions BulkCopyOptions { get; set; }
protected SqlTransaction ExternalTransaction { get; set; }
public bool CanReadWriteDB { get; set; }
#endregion properties
There are two constructors, so you can pass in either a string, or a PWConnectionString object.
#region constructors
public DAL(string connStr)
{
if (string.IsNullOrEmpty(connStr))
{
throw new ArgumentNullException("connStr");
}
this.ConnectionString = connStr;
this.Init();
}
public DAL(PWConnectionString connStr) : this(connStr.ConnectionString)
{
}
#endregion constructors
The DAL has two basic functions - getting data, and setting data. Setting data is further broken down into setting data one record at a time, and doing a "bulk" insert. All of that processing utilizes reflection, which is what makes this code work with ANY entity you could throw at it.
How it Uses Reflection
Let's start by defining an entity (the sample entities I use here are in the test harness application included in the solution). Here's a simple entity that has six public properties. Notice that none of the properties are decorated, and all of them have a set
method. They're also virtual so that I can overload them later in the article (for purposes of example).
public class MyEntity
{
public virtual string StringValue { get; set; }
public virtual int IntValue { get; set; }
public virtual double FloatValue { get; set; }
public virtual bool BoolValue { get; set; }
public virtual DateTime DateTimeValue { get; set; }
public virtual string AnotherString { get; set; }
}
Normally, when you use ADO, you're saddled with not only writing the code to access the database, but wrangling a SqlParameter array in order to save data, or manually deserializing the result set you get back when retrieving data. This DAL object eliminates both of these aspects of dealing with database access via ADO, by using reflection.
The Static [DAL] Methods
Let me start off by acknowledging that there are many ways to skin this particular cat, and my approach is just one of those. I could have added a method to the existing ObjectExtensions.TypeExtensions class, but I wanted to keep all this stuff in one place. This is purely an artistic decision, so don't assume it's the "one true way". Maybe one day, when I'm bored, I'll re-organize the code. But then again, maybe not.
The static MakeSqlParameters method allows you to build a SqlParameter array from the specified entity object.
protected static SqlParameter[] MakeSqlParameters<T>(T entity,
BulkInsertType bulkType,
ParamPrecedence precedence = ParamPrecedence.None,
string propertyName = "SqlParameters")
{
if (entity == null)
{
throw new ArgumentNullException("entity");
}
if (string.IsNullOrEmpty(propertyName))
{
throw new InvalidOperationException("It makes no sense to specify a null propertyName. Ever.");
}
SqlParameter[] parameters = null;
PropertyInfo[] properties = entity.GetType().GetProperties();
    PropertyInfo sqlParams = properties.FirstOrDefault(x => x.PropertyType.Name == "SqlParameter[]" &&
                                                            x.Name == propertyName);
if (sqlParams != null && precedence != ParamPrecedence.UseBulkType)
{
parameters = (SqlParameter[])sqlParams.GetValue(entity);
}
else
{
List<SqlParameter> list = new List<SqlParameter>();
properties = DAL.GetEntityProperties(properties, bulkType);
foreach(PropertyInfo property in properties)
{
list.Add(new SqlParameter(string.Format("@{0}", property.Name), property.GetValue(entity)));
}
parameters = list.ToArray();
}
#if DEBUG
Global.WriteLine("-----------------------------");
if (properties.Length == 0)
{
Global.WriteLine("No properties found.");
}
else
{
int length = parameters.Max(x=>x.ParameterName.Length) + 1;
string format = string.Concat(" {0,-", length.ToString(), "}{1}");
Global.WriteLine("Resulting parameters:");
foreach(SqlParameter item in parameters)
{
string text = string.Format(format, item.ParameterName, item.SqlValue);
Global.WriteLine(text);
}
}
#endif
return parameters;
}
Despite the comments being included, I feel like more explanation of the parameters is warranted.
- T entity - the entity object whose properties we'll be returning in the SqlParameter array.
- BulkInsertType bulkType - indicates how you want to populate the resulting SqlParameter array.
  - ALL - causes the MakeSqlParameters method to return all properties, regardless of their status. It is only included for completeness, and you will most likely never use this value, especially for use in the DAL.
  - DBInsertAttribute - causes the MakeSqlParameters method to return any property decorated with the [CanDbInsert] attribute. This is probably the one you'll use the most (if you're not using one or more SqlParameter[] properties in your entity).
  - HaveSetMethod - causes the MakeSqlParameters method to return any property that has a set method.
- ParamPrecedence precedence - indicates whether you want to use or ignore the specified bulk insert type (the default value is ParamPrecedence.None).
  - None - causes the MakeSqlParameters method to return the "standard" SqlParameter[] property, named "SqlParameters", if it exists. If it doesn't exist, the specified BulkInsertType value determines how to get the properties.
  - UseBulkType - causes the MakeSqlParameters method to *ignore* any/all SqlParameter[] properties in the entity, and get properties based on the specified bulkType value.
- string propertyName - the name of the property in the entity object that returns a SqlParameter[] (the default value is "SqlParameters"). This parameter is only useful when you specify a precedence of None.
As you can see, the resulting array of SqlParameter items depends on the combination of parameter values you specify when you call the MakeSqlParameters method. I tried to sufficiently comment the method so there wouldn't be any/many questions about what's going on.
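For reference, the BulkInsertType and ParamPrecedence enums used above aren't declared anywhere else in this article; declarations along these lines are assumed (the member names come from the article text, but the exact definitions in the download may differ).
public enum BulkInsertType
{
    ALL,                // return every public property
    DBInsertAttribute,  // return only properties decorated with [CanDbInsert]
    HaveSetMethod       // return only properties that have a set method
}
public enum ParamPrecedence
{
    None,        // prefer the entity's SqlParameter[] property if one exists
    UseBulkType  // ignore SqlParameter[] properties and honor the specified BulkInsertType
}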
Next up, we have some supporting static methods that are used by methods in the DAL - three overloads of the GetEntityProperties method. In the interest of brevity, I will only show the prototype for each method.
protected static PropertyInfo[] GetEntityProperties(Type type, BulkInsertType bulkType)...
This overload gets properties for the entity *type*, based on the value of the bulkType parameter, and is used to assist in the creation of columns in a DataTable object.
protected static PropertyInfo[] GetEntityProperties<T>(T entity, BulkInsertType bulkType)
This overload returns property info (including values) for an entity (or class), based on the value of the bulkType parameter (it actually calls the next overloaded method), and is used by the DAL methods that actually do the work involving the database.
protected static PropertyInfo[] GetEntityProperties(PropertyInfo[] properties, BulkInsertType bulkType)
This overload is where the actual work is done based on the value of the bulkType parameter, and can be used if you've already got a PropertyInfo[] from another source (probably not associated with this code).
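To make the filtering behavior concrete, here's a sketch of what that last overload plausibly looks like. It assumes the [CanDbInsert] marker attribute shown later, and that System.Linq and System.Reflection are in scope; treat it as an illustration rather than the shipped implementation.
protected static PropertyInfo[] GetEntityProperties(PropertyInfo[] properties, BulkInsertType bulkType)
{
    PropertyInfo[] result;
    switch (bulkType)
    {
        case BulkInsertType.DBInsertAttribute:
            // Only properties decorated with the [CanDbInsert] attribute.
            result = properties.Where(x => x.GetCustomAttribute<CanDbInsertAttribute>() != null).ToArray();
            break;
        case BulkInsertType.HaveSetMethod:
            // Only properties that expose a public set method.
            result = properties.Where(x => x.GetSetMethod() != null).ToArray();
            break;
        default: // BulkInsertType.ALL
            result = properties;
            break;
    }
    return result;
}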
Testing (Using the TestHarness Application)
NOTE: We interrupt our regularly scheduled class description to bring you this important (and prescient) announcement regarding testing, because it's the best way we can think of to allow you a moment to reflect on the effects of reflection. So without further deflection (or inappropriate erections), a discussion of type detection with reflection... in this section...
To assist in testing, I added a TestHarness application project (described later) to the solution. In that application, I create a BLL object and add some testing methods to it. What follows is the console app output for each call that the BLL makes to the inherited DAL methods. I created a list containing five MyEntity objects. In the interest of brevity, I only show the output for one item (because all five are the same).
The key points with regards to the MyEntity class - it contains six undecorated properties, and no SqlParameter[] property. This means that the two default parameters would have no effect on the result, so they're not specified.
==========================================================
The method call that was made:
ExecuteStoredProc(e.AsEnumerable(), "test", BulkInsertType.ALL);
The output that was generated:
Because there are no SqlParameter[] properties, it returned all of the parameters.
-----------------------------
Resulting parameters:
@StringValue 101
@IntValue 1
@FloatValue 1
@BoolValue True
@DateTimeValue 4/3/2021 8:54:54 AM
@AnotherString 101B
==========================================================
The method call that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.DBInsertAttribute)
The output that was generated:
Because there are no SqlParameter[] properties, and because none of the properties were decorated
with the [CanDbInsert] attribute, it returned no parameters.
-----------------------------
No properties found.
==========================================================
The method call that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.HaveSetMethod)
The output that was generated:
Because there are no SqlParameter[] properties, and because all of the properties have a "set"
method, it returned six parameters.
-----------------------------
(should get six props/params for each entity)
-----------------------------
Resulting parameters:
@StringValue 101
@IntValue 1
@FloatValue 1
@BoolValue True
@DateTimeValue 4/3/2021 8:54:54 AM
@AnotherString 101B
Now let's add a SqlParameter[] property. To make it easy on the examples, we'll simply inherit from the MyEntity class. Note that I'm using the "standard" name for this property to exercise the code a little more.
public class MyEntity2 : MyEntity
{
public virtual SqlParameter[] SqlParameters
{
get
{
SqlParameter[] value = new SqlParameter[]
{
new SqlParameter ("@StringValue", this.StringValue )
,new SqlParameter("@IntValue", this.IntValue )
,new SqlParameter("@FloatValue", this.FloatValue )
,new SqlParameter("@BoolValue", this.BoolValue )
,new SqlParameter("@DateTimeValue", this.DateTimeValue)
};
return value;
}
}
}
Because we added the SqlParameter[] property, we now have twice as many sensible ways to call the method.
==========================================================
The method call(s) that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.ALL, ParamPrecedence.None);
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.DBInsertAttribute, ParamPrecedence.None);
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.HaveSetMethod, ParamPrecedence.None);
The output that was generated:
All three of those method calls will retrieve 5 of the 6 available properties
from the [SqlParameters] property, because precedence is [None], thus
ignoring the [bulkType] value.
-----------------------------
Resulting parameters:
@StringValue 101
@IntValue 1
@FloatValue 1
@BoolValue True
@DateTimeValue 4/3/2021 9:12:54 AM
==========================================================
The method call(s) that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.ALL, ParamPrecedence.UseBulkType)
The output that was generated:
The method call above will return all properties, regardless of any status
they may have, because precedence is set to [UseBulkType], thus ignoring
any SqlParameter[] properties that might be defined. Furthermore, note that
the SqlParameter[] property is also returned, which is probably not the
desired result (this is why using [ALL] is not recommended within the
context of using the DAL).
-----------------------------
Resulting parameters:
@SqlParameters
@StringValue 101
@IntValue 1
@FloatValue 1
@BoolValue True
@DateTimeValue 4/3/2021 9:12:54 AM
@AnotherString 101B
==========================================================
The method call that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.HaveSetMethod, ParamPrecedence.UseBulkType)
The output that was generated:
This call will return any property that has a [set] method because the
precedence is set to [UseBulkType], thus ignoring any SqlParameter[]
properties that might be defined. Note that since the SqlParameter[]
property doesn't have a [set] method, it is not returned as a
parameter.
-----------------------------
Resulting parameters:
@StringValue 101
@IntValue 1
@FloatValue 1
@BoolValue True
@DateTimeValue 4/3/2021 9:12:54 AM
@AnotherString 101B
==========================================================
The method call that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.DBInsertAttribute, ParamPrecedence.UseBulkType)
The output that was generated:
This method call returns no parameters, because bulkType is set to
[DBInsertAttribute] with no properties being decorated with the [CanDbInsert]
attribute, and because precedence is set to [UseBulkType], which causes any
SqlParameter[] properties to be ignored.
-----------------------------
No properties found.
Finally, let's derive a new class - MyEntity3 - from MyEntity2, decorate five of the six properties, and add a second SqlParameter[] property.
public class MyEntity3 : MyEntity2
{
[CanDbInsert]
public override string StringValue { get; set; }
[CanDbInsert]
public override int IntValue { get; set; }
[CanDbInsert]
public override double FloatValue { get; set; }
[CanDbInsert]
public override bool BoolValue { get; set; }
[CanDbInsert]
public override DateTime DateTimeValue { get; set; }
public SqlParameter[] SqlParametersEntity3
{
get
{
SqlParameter[] value = new SqlParameter[]
{
new SqlParameter("@StringValue", this.StringValue)
,new SqlParameter("@IntValue", this.IntValue)
};
return value;
}
}
}
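The [CanDbInsert] attribute used above is just a marker; the article never shows its definition, but something this minimal is all the property filtering needs (an assumption - the library's actual attribute may carry more).
// Hypothetical minimal definition of the marker attribute.
[AttributeUsage(AttributeTargets.Property)]
public class CanDbInsertAttribute : Attribute
{
}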
Once again, the MyEntity3 class gives us many more ways to use the MakeSqlParameters method:
==========================================================
The method call(s) that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.DBInsertAttribute, ParamPrecedence.None)
The output that was generated:
This method call returns 5 of 6 parameters because precedence is [None], and the
entity contains a SqlParameters property. The [bulkType] is ignored.
-----------------------------
Resulting parameters:
@StringValue 101
@IntValue 1
@FloatValue 1
@BoolValue True
@DateTimeValue 4/3/2021 12:11:39 PM
==========================================================
The method call(s) that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.DBInsertAttribute, ParamPrecedence.None, "SqlParametersXYZ")
The output that was generated:
This method call returns 5 of 6 parameters because a) precedence is set to
[None], b) the specified [propertyName] doesn't exist, but c) the class *does*
have decorated properties.
-----------------------------
Resulting parameters:
@StringValue 101
@IntValue 1
@FloatValue 1
@BoolValue True
@DateTimeValue 4/3/2021 12:11:39 PM
==========================================================
The method call(s) that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.DBInsertAttribute, ParamPrecedence.None, "SqlParametersEntity3")
The output that was generated:
This method call gets the two properties because a) the precedence is set to
[None], and b) the named SqlParameter property was found, thus ignoring the
specified [bulkType].
-----------------------------
Resulting parameters:
@StringValue 101
@IntValue 1
==========================================================
The method call(s) that was made:
ExecuteStoredProc(e.AsEnumerable(),"test", BulkInsertType.DBInsertAttribute, ParamPrecedence.UseBulkType)
The output that was generated:
This method call returns 5 of 6 properties because the [bulkType] is set
to [DBInsertAttribute], the class has decorated properties, and the
precedence is set to [UseBulkType], thus ignoring any SqlParameter[]
properties that might be defined.
-----------------------------
Resulting parameters:
@StringValue 101
@IntValue 1
@FloatValue 1
@BoolValue True
@DateTimeValue 4/3/2021 12:11:39 PM
As you can see, the results of the MakeSqlParameters method can vary, depending on how you call it.
Now that you have an understanding of the way reflection is used in support of the DAL, let's talk about getting and setting data.
Getting and Setting Data
In order to get and set data, the DAL implements three families of overloaded protected virtual methods (ExecuteStoredProc, ExecuteQuery, and ExecuteBulkInsert). These will be the methods that are typically used by your BLL object.
Method - [ExecuteStoredProc]
The ExecuteStoredProc overloads allow you to use a stored procedure to get or set data. The overload that is chosen by .Net depends on whether you specify a generic type argument and on the parameters you pass. The last method in the block below is included to allow a given "set data" operation to use a persistent SqlConnection object for all objects in the specified collection, ostensibly to provide an efficient "upsert" functionality. One of our programmers expressed concern that opening up a SqlConnection object for each insert into the database would be inefficient. I can't imagine this not being more efficient, but honestly, I have not conducted any testing to be able to confirm or deny the idea with tangible results.
protected virtual List<T> ExecuteStoredProc<T>(string storedProc, SqlParameter[] parameters)
{
if (string.IsNullOrEmpty(storedProc))
{
throw new ArgumentNullException("storedProc");
}
DataTable data = this.GetData(storedProc, parameters, CommandType.StoredProcedure);
List<T> collection = this.MakeEntityFromDataTable<T>(data);
return collection;
}
protected virtual int ExecuteStoredProc(string storedProc, SqlParameter[] parameters)
{
if (string.IsNullOrEmpty(storedProc))
{
throw new ArgumentNullException("storedProc");
}
if (parameters == null)
{
throw new ArgumentNullException("parameters");
}
if (parameters.Length == 0)
{
throw new InvalidOperationException("The [parameters] array must contain at least one item.");
}
int result = this.SetData(storedProc, parameters, CommandType.StoredProcedure);
return result;
}
protected virtual int ExecuteStoredProc<T>(T data,
string storedProc,
BulkInsertType bulkType,
ParamPrecedence precedence = ParamPrecedence.None,
string paramArrayPropName = "SqlParameters")
{
int result = 0;
SqlParameter[] parameters = DAL.MakeSqlParameters(data, bulkType, precedence, paramArrayPropName);
result = this.ExecuteStoredProc(storedProc, parameters);
return result;
}
protected virtual int ExecuteStoredProc<T>(IEnumerable<T> data,
string storedProc,
BulkInsertType bulkType,
ParamPrecedence precedence = ParamPrecedence.None,
string paramArrayPropName = "SqlParameters")
{
if (string.IsNullOrEmpty(storedProc))
{
throw new ArgumentNullException("storedProc");
}
if (data == null)
{
throw new ArgumentNullException("data");
}
if (data.Count() == 0)
{
throw new InvalidOperationException("Data collection must contain at least onme item");
}
int result = this.DoBulkMerge(data, storedProc, bulkType, CommandType.StoredProcedure, precedence, paramArrayPropName);
return result;
}
Notice the last two overloads above. Some of you might be aware that normally, calling one of these overloads will cause .Net to execute the (T entity...) overload, even if you pass a collection of objects instead of just a single object. The reason is that when using generic types, .Net sees a collection of objects the same as a non-collection object, so it finds and uses the most appropriate overload. But all hope is not lost. All you have to do is be more specific about what you're passing to the method. See the sample code below.
MyObject oneObject = new MyObject();
int result = this.ExecuteStoredProc(oneObject, ...);

List<MyObject> manyObj = new List<MyObject>()
{
    new MyObject(),
    new MyObject()
};
// Calling AsEnumerable() on the list forces .Net to select the IEnumerable<T> overload.
result = this.ExecuteStoredProc(manyObj.AsEnumerable(), ...);
Take note that the ExecuteQuery method overloads (discussed next) also require this manipulation of parameters to finesse .Net into selecting the desired overload.
Method - [ExecuteQuery]
Like the ExecuteStoredProc method, the ExecuteQuery method allows the programmer to get or set data, and several overloads are implemented to that end. All the same comments above regarding ExecuteStoredProc apply to this method as well, except that this method executes a straight-up SQL query instead of a stored procedure.
protected virtual List<T> ExecuteQuery<T>(string query, params SqlParameter[] parameters)
{
if (string.IsNullOrEmpty(query))
{
throw new ArgumentNullException("query");
}
DataTable data = this.GetData(query, parameters, CommandType.Text);
List<T> collection = this.MakeEntityFromDataTable<T>(data);
return collection;
}
protected virtual int ExecuteQuery(string query, params SqlParameter[] parameters)
{
if (string.IsNullOrEmpty(query))
{
throw new ArgumentNullException("query");
}
int result = this.SetData(query, parameters, CommandType.Text);
return result;
}
protected virtual int ExecuteQuery<T>(T data,
string query,
BulkInsertType bulkType,
ParamPrecedence precedence = ParamPrecedence.None,
string paramArrayPropName = "SqlParameters")
{
int result = 0;
    SqlParameter[] parameters = DAL.MakeSqlParameters(data, bulkType, precedence, paramArrayPropName);
result = this.ExecuteQuery(query, parameters);
return result;
}
protected virtual int ExecuteQuery<T>(IEnumerable<T> data,
string query,
BulkInsertType bulkType,
ParamPrecedence precedence = ParamPrecedence.None,
string paramArrayPropName = "SqlParameters")
{
if (string.IsNullOrEmpty(query))
{
throw new ArgumentNullException("query");
}
if (data == null)
{
throw new ArgumentNullException("data");
}
if (data.Count() == 0)
{
throw new InvalidOperationException("Data collection must contain at least onme item");
}
int result = this.DoBulkMerge(data, query, bulkType, CommandType.Text, precedence, paramArrayPropName);
return result;
}
Method - [ExecuteBulkInsert]
An important aspect of inserting data into the database is the ability to just flog data into a table. So, this DAL object implements functionality to do just that. You have the option of hitting it with a DataTable object, or with your own entity collection.
protected virtual int ExecuteBulkInsert(DataTable dataTable)
{
if (string.IsNullOrEmpty(dataTable.TableName))
{
throw new InvalidOperationException("The table name MUST be specified in the datatable (including the schema).");
}
if (!dataTable.TableName.Contains('.') || dataTable.TableName.StartsWith("."))
{
throw new InvalidOperationException("The schema MUST be specified with the table name.");
}
if (dataTable.Rows.Count == 0)
{
throw new InvalidOperationException("The dataTable must contain at least one item");
}
int recsBefore = BulkInsertTargetCount(dataTable.TableName);
int recordsAffected = 0;
SqlConnection conn = null;
SqlBulkCopy bulk = null;
using (conn = new SqlConnection(this.ConnectionString.Base64Decode()))
{
conn.Open();
using (bulk = new SqlBulkCopy(conn, this.BulkCopyOptions, this.ExternalTransaction)
{
BatchSize = this.BulkInsertBatchSize
,BulkCopyTimeout = this.BulkCopyTimeout
,DestinationTableName = dataTable.TableName
})
{
Debug.WriteLine("DoBulkInsert - inserting {0} rows",dataTable.Rows.Count);
bulk.WriteToServer(dataTable);
}
}
int recsAfter = BulkInsertTargetCount(dataTable.TableName);
recordsAffected = recsAfter - recsBefore;
return recordsAffected;
}
protected virtual int ExecuteBulkInsert<T>(IEnumerable<T> data,
string tableName,
BulkInsertType bulkType,
ParamPrecedence precedence = ParamPrecedence.None,
string paramArrayPropName = "SqlParameters")
{
if (data == null)
{
throw new ArgumentNullException("data");
}
if (data.Count() == 0)
{
throw new InvalidOperationException("The data collection must contain at least one item");
}
if (string.IsNullOrEmpty(tableName))
{
throw new ArgumentNullException("The tableName parameter cannot be null or empty.");
}
if (!tableName.Contains('.'))
{
throw new InvalidOperationException("The schema MUST be specified with the table name.");
}
int result = 0;
DataTable dataTable = null;
if (data.Count() > 0)
{
dataTable = this.MakeDataTable(data, tableName, bulkType, precedence, paramArrayPropName);
        result = this.ExecuteBulkInsert(dataTable);
}
return result;
}
Possible [SqlParameter[]] Property Approach
If you change the static method MakeSqlParameters to be public (it's protected as presented in this article), you can call it from your entity, like so:
public SqlParameter[] SqlParameters4
{
get
{
SqlParameter[] value = DAL.MakeSqlParameters(this,
BulkInsertType.DBInsertAttribute,
ParamPrecedence.UseBulkType);
return value;
}
}
To be honest, I see this as wasted effort, since the DAL already does this for you (because it uses MakeSqlParameters as well, and in the same way). However, I wanted to illustrate that it is indeed possible to do this. It's also worth mentioning that you can make the DAL.GetEntityProperties methods public and use those from within your entities as well. To make it easy on you, I included a compiler definition at the top of the DALStaticMethods.cs file. Uncomment the compiler definition, and all of the static methods in the file become public.
Like I said earlier, there are many ways to skin this cat.
The Helper Methods
The info presented in this section is not necessary to understand/use the DAL object, and is presented only in the name of completeness of documentation.
Method - [protected virtual void Init()]
I anticipated having to create overloaded constructors, so I decided to put the initialization code into a method that could be called from each constructor. It turns out that I haven't yet found a need to do it this way, but I kept it around, because you never know what's going to happen in the future. Because it's virtual, the programmer can easily override it to perform additional (BLL-specific) initialization tasks.
protected virtual void Init()
{
this.TimeoutSecs = 300;
this.FailOnMismatch = false;
this.AddReturnParamIfMissing = true;
this.ExternalTransaction = null;
this.BulkInsertBatchSize = 250;
this.BulkCopyTimeout = 600;
this.BulkCopyOptions = SqlBulkCopyOptions.Default;
this.CanReadWriteDB = true;
}
Method - [protected virtual string TestConnection()]
Adding a method to test the connection seemed logical. Amusingly, the DAL itself doesn't use it, because I figured the programmer would want to call it just once, from his BLL object (which inherits the DAL object).
protected virtual string TestConnection()
{
string result = string.Empty;
SqlConnection conn = null;
try
{
using (conn = new SqlConnection(this.ConnectionString.Base64Decode()))
{
conn.Open();
conn.Close();
}
}
catch (Exception ex)
{
result = ex.Message;
}
return result;
}
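Typical usage from a derived BLL might look like the following hypothetical snippet - an empty return value means the connection succeeded.
// Hypothetical usage from a class that inherits DAL.
string error = this.TestConnection();
if (!string.IsNullOrEmpty(error))
{
    Global.WriteLine(string.Format("Could not connect to the database: {0}", error));
}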
Method - [protected virtual string AddTryCatchTranToQuery(...)]
If you use query text (as opposed to stored procedures) to access your database, you can call this method to wrap your query with a transaction. It wraps your query in a try/catch block and creates/commits a transaction. If an exception is thrown, the transaction will be rolled back. You can also optionally specify query text that performs logging (or whatever else you want to do if your query fails).
For stored procedures, I assume that you already include this kind of code, especially when you're doing any kind of CRUD operations. If you don't do this, you should.
protected virtual string AddTryCatchTranToQuery(string query, string logQuery, string transactionName="")
{
transactionName = transactionName.Trim();
logQuery = logQuery.Trim();
StringBuilder text = new StringBuilder();
text.AppendLine("BEGIN TRY");
text.AppendFormat(" BEGIN TRAN {0};", transactionName).AppendLine();
text.AppendLine(query).AppendLine();
text.AppendFormat(" COMMIT TRAN {0};", transactionName).AppendLine();
text.AppendLine("END TRY");
text.AppendLine("BEGIN CATCH");
text.AppendFormat(" IF @@TRANCOUNT > 0 ROLLBACK TRAN {0};", transactionName).AppendLine();
text.AppendLine(logQuery);
text.AppendLine("END CATCH");
return text.ToString();
}
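For example, wrapping a hypothetical ad-hoc UPDATE before handing it to SetData might look like this (the query, log statement, and transaction name are made up for illustration):
// Hypothetical usage of AddTryCatchTranToQuery.
string query = "UPDATE dbo.MyTable SET IntValue = @IntValue WHERE StringValue = @StringValue;";
string logQuery = "-- optional logging statement goes here";
string wrapped = this.AddTryCatchTranToQuery(query, logQuery, "MyTran");
// 'wrapped' now contains the query inside BEGIN TRY / BEGIN TRAN MyTran; ... COMMIT TRAN MyTran;
// with a BEGIN CATCH block that rolls back the transaction and runs the log statement.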
Method - [protected virtual string NormalizeTableName(...)]
One of the ExecuteBulkInsert method overloads needs a table name to be specified. To prevent the possibility of sql injection, we need to normalize the specified table name by putting square brackets around the name (it also accounts for a schema being included in the name). The idea is that anything put into square brackets will be evaluated as a sql table name, and if the table name isn't valid, your stored procedure/query will throw a sql exception.
Beyond this kind of protection, you should really go read this article regarding SQL injection. The author enumerates methods for mitigating the threat. This is REALLY important if you value your data - SQL Injection Attacks and Some Tips on How to Prevent Them [^]. Securing your SQL is entirely on you, not to mention being outside the scope of this article.
protected virtual string NormalizeTableName(string tableName)
{
string[] parts = tableName.Split('.');
tableName = string.Empty;
    foreach(string part in parts)
    {
        // Append each bracketed part instead of overwriting the accumulated name,
        // so "dbo.MyTable" becomes "[dbo].[MyTable]".
        tableName = (string.IsNullOrEmpty(tableName))
                    ? string.Format("[{0}]", part)
                    : string.Format("{0}.[{1}]", tableName, part);
    }
return tableName.Replace("[[", "[").Replace("]]","]");
}
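A couple of examples of what the normalization is intended to produce (assuming the schema is included, as the calling methods require):
// "dbo.MyTable"     -> "[dbo].[MyTable]"
// "[dbo].[MyTable]" -> "[dbo].[MyTable]"  (already-bracketed names pass through unchanged)
string normalized = this.NormalizeTableName("dbo.MyTable");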
Method - [protected virtual int BulkInsertTargetCount(...)]
The ADO SqlBulkCopy object will take an entire data table, and insert the rows into the database. Unfortunately, it does NOT tell you how many records were actually inserted. If you want to know this value, you have to handle it yourself. This method does a simple count of records in the specified table, and returns that value. The DAL calls this method before and after the use of the SqlBulkCopy object and returns the difference between the two counts.
protected virtual int BulkInsertTargetCount(string tableName)
{
if (string.IsNullOrEmpty(tableName))
{
throw new ArgumentNullException("tableName");
}
if (!tableName.Contains('.') || tableName.StartsWith("."))
{
throw new InvalidOperationException("The [tableName] must include a schema. Example: 'dbo.tableName'");
}
int result = 0;
string query = string.Format("SELECT COUNT(1) FROM {0}", this.NormalizeTableName(tableName));
List<EntityRowCount> rowCount = this.ExecuteQuery<EntityRowCount>(query);
if (rowCount != null && rowCount.Count > 0)
{
result = rowCount[0].Count;
}
return result;
}
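The EntityRowCount entity used above isn't shown in the article; presumably it's just a one-property class along these lines (note that for MakeEntityFromDataTable to populate the property, the COUNT(1) column in the query would need to be aliased to match the property name):
// Hypothetical sketch - the real class ships with the library and may be shaped differently.
public class EntityRowCount
{
    public int Count { get; set; }
}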
Method - [protected virtual DataTable GetData(...)]
When you get right down to it, getting data is pretty simple. All you need is the sql query that you want to execute, parameters to pass to the query, and what kind of query it is (stored procedure or sql query text). Where I work, we don't use any straight-up sql queries, and require that all database access be accomplished via stored procedures, but I suspect there are at least a few environments that don't enforce that paradigm (those places are probably like hell on earth, and doom be unto all those that dare to enter such a realm).
For this DAL, all "get data" requests eventually funnel into the GetData() method. There really isn't anything strange going on.
protected virtual DataTable GetData(string cmdText,
SqlParameter[] parameters=null,
CommandType cmdType = CommandType.StoredProcedure)
{
if (string.IsNullOrEmpty(cmdText))
{
throw new ArgumentNullException("cmdText");
}
SqlConnection conn = null;
SqlCommand cmd = null;
SqlDataReader reader = null;
DataTable data = null;
using (conn = new SqlConnection(this.ConnectionString.Base64Decode()))
{
conn.Open();
using (cmd = new SqlCommand(cmdText, conn) { CommandTimeout = this.TimeoutSecs, CommandType = cmdType } )
{
if (parameters != null)
{
cmd.Parameters.AddRange(parameters);
}
if (this.CanReadWriteDB)
{
using (reader = cmd.ExecuteReader())
{
data = new DataTable();
data.Load(reader);
}
}
}
}
return data;
}
Method - [protected virtual int SetData(...)]
The actual act of setting data is only a little more exciting. Along with the same set of parameters necessary to make GetData do its magic, SetData provides an optional parameter that permits the calling method to indicate that the code should include transaction support.
protected virtual int SetData(string cmdText,
SqlParameter[] parameters,
CommandType cmdType = CommandType.StoredProcedure,
bool useAdoTransaction=false)
{
if (string.IsNullOrEmpty(cmdText))
{
throw new ArgumentNullException("cmdText");
}
int result = 0;
SqlConnection conn = null;
SqlCommand cmd = null;
SqlTransaction transaction = null;
using (conn = new SqlConnection(this.ConnectionString.Base64Decode()))
{
conn.Open();
if (useAdoTransaction && cmdType != CommandType.StoredProcedure)
{
transaction = conn.BeginTransaction();
}
        // Attach the optional transaction to the command; a command on a connection with a
        // pending local transaction must have its Transaction property set.
        using (cmd = new SqlCommand(cmdText, conn) { CommandTimeout = this.TimeoutSecs, CommandType = cmdType, Transaction = transaction } )
{
SqlParameter rowsAffected = null;
if (parameters != null)
{
cmd.Parameters.AddRange(parameters);
if (cmdType == CommandType.StoredProcedure && this.AddReturnParamIfMissing)
{
rowsAffected = parameters.FirstOrDefault(x=>x.Direction == ParameterDirection.ReturnValue);
if (rowsAffected == null)
{
rowsAffected = cmd.Parameters.Add(new SqlParameter("@rowsAffected", SqlDbType.Int) { Direction = ParameterDirection.ReturnValue } );
}
}
}
            try
            {
                if (this.CanReadWriteDB)
                {
                    result = cmd.ExecuteNonQuery();
                }
                // Commit the optional ADO transaction if one was started above.
                if (transaction != null)
                {
                    transaction.Commit();
                }
            }
            catch (SqlException)
            {
                if (transaction != null && cmdType != CommandType.StoredProcedure)
                {
                    transaction.Rollback();
                }
                throw;
            }
result = (rowsAffected != null) ? (int)rowsAffected.Value : result;
}
}
return result;
}
Method - [protected virtual int DoBulkMerge(...)]
While the ADO SqlBulkCopy object is a nice way to just flog data into the database, it lacks a certain finesse that is sometimes needed, namely the ability to either update or insert records, depending on the target table's use. So, the DAL object provides an ExecuteStoredProc (and ExecuteQuery) method overload to implement this typical "upsert" functionality. We've already talked about those overloads, both of which call this method.
In it, a persistent SqlConnection object is created to provide a certain level of efficiency when inserting multiple rows of data. Beyond that, it is very similar to the SetData method (also discussed above).
It is HIGHLY advisable to use a stored procedure to upsert.
protected virtual int DoBulkMerge<T>(IEnumerable<T> data,
string queryText,
BulkInsertType bulkType,
CommandType cmdType,
ParamPrecedence precedence = ParamPrecedence.None,
string paramArrayPropName = "SqlParameters",
bool useAdoTransaction=false)
{
if (string.IsNullOrEmpty(queryText))
{
throw new ArgumentNullException("queryText");
}
int result = 0;
SqlConnection conn = null;
SqlCommand cmd = null;
SqlTransaction transaction = null;
using (conn = new SqlConnection(this.ConnectionString.Base64Decode()))
{
conn.Open();
if (useAdoTransaction && cmdType != CommandType.StoredProcedure)
{
transaction = conn.BeginTransaction();
}
        // Attach the optional transaction to the command (required when a local transaction is pending).
        using (cmd = new SqlCommand(queryText, conn) { CommandTimeout = this.TimeoutSecs, CommandType = cmdType, Transaction = transaction } )
{
try
{
foreach(T item in data)
{
SqlParameter[] parameters = DAL.MakeSqlParameters(item, bulkType, precedence, paramArrayPropName);
if (parameters != null)
{
cmd.Parameters.AddRange(parameters);
if (this.CanReadWriteDB)
{
cmd.ExecuteNonQuery();
}
cmd.Parameters.Clear();
result++;
}
                }
                // Commit the optional ADO transaction once all the items have been processed.
                if (transaction != null)
                {
                    transaction.Commit();
                }
            }
            catch (Exception)
            {
                if (transaction != null)
                {
                    transaction.Rollback();
                }
                throw;
            }
}
}
return result;
}
Here's an example of an upsert-style stored procedure. Keep in mind that this code is NOT a panacea for preventing duplicates from being inserted into the table. You should implement constraints that enforce unique records (google is your friend), and additionally, you can use the SQL MERGE statement instead of the simple upsert code provided below.
DECLARE @rowCount INT = -1;
BEGIN TRY
BEGIN TRANSACTION MergeData_Transaction;
UPDATE dbo.MyTable
SET StringValue = @StringValue
,IntValue = @IntValue
WHERE IntValue = @IntValue;
SET @rowCount = @@ROWCOUNT;
IF @rowCount = 0
BEGIN
        INSERT INTO dbo.MyTable
(
StringValue
,IntValue
)
VALUES
(
@StringValue
,@IntValue
)
END
SET @rowCount = @rowCount + @@ROWCOUNT;
COMMIT TRANSACTION MergeData_Transaction;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
SET @rowCount = 0;
ROLLBACK TRANSACTION MergeData_Transaction;
-- do something with the error if desired
END
END CATCH
Method - [protected static T ConvertFromDBValue<T>(...)]
This method supports the generic types we depend on so heavily in the DAL object. Its purpose is to convert a value retrieved from the database to a value that C# can handle when an entity is being populated from a DataTable.
protected static T ConvertFromDBValue<T>(object obj, T defaultValue)
{
    // Fall back to the supplied default value when the database returned null/DBNull.
    T result = (obj == null || obj == DBNull.Value) ? defaultValue : (T)obj;
return result;
}
Method - [protected virtual List<T> MakeEntityFromDataTable<T>(...)]
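This method is the mirror image of MakeDataTable (shown below): it walks each row of the DataTable returned by GetData, creates an instance of the entity via Activator.CreateInstance, and uses reflection to copy each column into the property of the same name, converting DBNull values along the way. If the columns don't line up with the entity's properties and FailOnMismatch is true, an exception is thrown.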
protected virtual List<T> MakeEntityFromDataTable<T>(DataTable data)
{
if (data == null)
{
throw new ArgumentNullException("data");
}
Type objType = typeof(T);
List<T> collection = new List<T>();
if (data != null && data.Rows.Count > 0)
{
        foreach(DataRow row in data.Rows)
        {
            // Reset the column match count for each row so the mismatch check below is per-row.
            int matched = 0;
T item = (T)Activator.CreateInstance(objType);
PropertyInfo[] properties = objType.GetProperties();
foreach (PropertyInfo property in properties)
{
if (data.Columns.Contains(property.Name))
{
Type pType = property.PropertyType;
var defaultValue = pType.GetDefaultValue();
var value = row[property.Name];
value = DAL.ConvertFromDBValue(value, defaultValue );
property.SetValue(item, value);
matched++;
}
}
if (matched != data.Columns.Count && this.FailOnMismatch)
{
throw new Exception("Data retrieved does not match specified model.");
}
collection.Add(item);
}
}
return collection;
}
Method - [protected virtual DataTable MakeDataTable<T>(...)]
This method creates a DataTable object from the specified collection. Columns are automatically created, and rows are added based on the definition of the collection entity. This method is called from most of the previously discussed methods that are responsible for putting data INTO the database.
protected virtual DataTable MakeDataTable<T>(IEnumerable<T> data,
string tableName,
BulkInsertType bulkType,
ParamPrecedence precedence = ParamPrecedence.None,
string paramArrayPropName = "SqlParameters")
{
DataTable dataTable = null;
Debug.WriteLine(string.Format("MakeDataTable - {0} data items specified.", data.Count()));
using (dataTable = new DataTable(){TableName = tableName })
{
Type type = typeof(T);
PropertyInfo[] properties = DAL.GetEntityProperties(type, bulkType);
Debug.WriteLine(string.Format("MakeDataTable - {0} item properties per item.", properties.Length));
foreach (PropertyInfo property in properties)
{
dataTable.Columns.Add(new DataColumn(property.Name, property.PropertyType));
}
Debug.WriteLine(string.Format("MakeDataTable - {0} dataTable columns created.", dataTable.Columns.Count));
foreach (T entity in data)
{
DataRow row = dataTable.NewRow();
foreach (PropertyInfo property in properties)
{
row[property.Name] = property.GetValue(entity);
}
dataTable.Rows.Add(row);
}
}
Debug.WriteLine(string.Format("MakeDataTable - {0} rows created.", dataTable.Rows.Count));
return dataTable;
}
The TestHarness Application
What would a library be without a test vehicle of some description? In this solution, the TestHarness is a console application that allows you to play with the DAL
object without having to put it into your own code until you're comfortable with doing so. As presented, the app illustrates the following aspects of my insanity.
Use of Singletons
While using the original PWConnectionString class, one of the programmers hinted that it would be nice if we could create a globally available instance. I suggested that he create a static Globals class to do just that, but after thinking about it for a bit, I figured I'd provide a thread-safe singleton container class that does pretty much the same thing without having to create the previously mentioned static Globals class.
This singleton container simply allows the programmer to instantiate the class once, and use that instance throughout the application. For those not familiar with the singleton pattern, I found a decent explanation of singletons here - Implementing the Singleton Pattern in C# [^].
The [Singleton<T>] Base Class
This is my first real experience with the singleton pattern, so much of the stuff I've done is completely unnecessary in order to "make it go", but I like to play around with new stuff that I'm learning. In the interest of experimentation, I decided to see if I could abstract away what little nastiness is involved in the implementation of a singleton into a base class that can then be inherited in the outward-facing code. In my search for an example of what I wanted to do, I found this CodeProject article - A Reusable Base Class for the Singleton Pattern in C# [^], by Boris Brock.
Singleton classes should be sealed to prevent inheritance, but in this case, we WANT to inherit it, yet at the same time we don't want it to be instantiated itself, so instead of using sealed, we make the class abstract. Furthermore, a singleton class has a private constructor, but once again, we need to make the constructor accessible so we can inherit it, so we change the constructor's accessor to protected. After we've done that, we can use generic types to make it digestible by any deriving class.
public abstract class Singleton<T> where T : class
{
private static readonly Lazy<T> lazy = new Lazy<T>(()=>Create());
public static T Instance { get { return lazy.Value; } }
protected Singleton()
{
Debug.WriteLine(string.Format("{0} instantiated.", typeof(T).GetType().Name));
}
private static T Create()
{
return Activator.CreateInstance(typeof(T), true) as T;
}
}
The [PWConnectionStringleton] Class
Get it? "Stringleton"? Sounds like "Singleton"? I make myself laugh out loud!
I treat my singletons as containers for the actual instantiated objects. This segregates the singleton pattern itself from the instantiated object, thus making the code more flexible (to use, or NOT use, singletons) with little/no code duplication.
As you can see, the derived singleton class looks just like a regular class. Since a singleton object's constructor cannot accept parameters, we have to include a mechanism that allows us to initialize the contained object, namely, the PWConnectionString object.
public sealed partial class PWConnectionStringleton:Singleton<PWConnectionStringleton>
{
public PWConnectionString PWConnString { get; set; }
public void Init(string name, string server, string database, bool encryptTraffic=false, string uid="", string pwd="")
{
this.PWConnString = new PWConnectionString(name, server, database, encryptTraffic, uid, pwd);
}
}
The [BLLSingleton] Class
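The article doesn't list the BLLSingleton class, but it's just another container that follows the same pattern as PWConnectionStringleton. A sketch of what it presumably looks like is below (the downloadable code may differ slightly); it matches the way the Program class calls BLLSingleton.Instance.Init(...) and BLLSingleton.Instance.BLL.
public sealed partial class BLLSingleton : Singleton<BLLSingleton>
{
    // The contained BLL instance, created once via Init().
    public BLL BLL { get; private set; }

    public void Init(PWConnectionString connStr)
    {
        this.BLL = new BLL(connStr);
    }
}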
The [BLL] Class
The DAL is intended to be inherited by your BLL object. If you have any experience at all with .Net and object-oriented development, you'll understand how all that works. Since most of the methods and properties are protected, they can only be used from the DAL itself, or from classes that inherit from it. The reason you want to simply inherit the DAL is so that you can extend it with your own methods, provide public methods for use by external code, and avoid polluting the DAL with code that you add.
Inheriting the [DAL] Object
Below, you'll see how it's done. For the purposes of testing in this application, we prevent the DAL from actually reading from or writing to the database by setting the DAL.CanReadWriteDB property to false. This will allow us to test various aspects of the DAL without executing stored procedures or queries.
public partial class BLL : DAL
{
public BLL(PWConnectionString connStr) : base(connStr)
{
this.CanReadWriteDB = false;
}
}
An example of a method you might write is the act of retrieving data and storing it into a collection of entities that you've defined.
public class BLL:DAL
{
...
public List<MyObjects> GetSalesData()
{
        List<MyObjects> list = this.ExecuteStoredProc<MyObjects>("dbo.GetSalesData", null);
return list;
}
...
}
List<MyObjects> list = BLL.GetSalesData();
...or a method to save data to the database (remember, there are several overloads to this method that make this possible):
public class BLL : DAL
{
    ...
    public int SaveSalesData(MyObject model)
    {
        int recordsAffected = this.ExecuteStoredProc(model);
        return recordsAffected;
    }

    public int SaveSalesDataCollection(List<MyObject> model)
    {
        int recordsAffected = this.ExecuteStoredProc(model.AsEnumerable());
        return recordsAffected;
    }
    ...
}
List<MyObject> myObjects = new List<MyObject>(){new MyObject(),new MyObject(),...};
int result = BLL.SaveSalesData(myObjects[0]);
result = BLL.SaveSalesDataCollection(myObjects);
And let's not forget the ability to perform bulk inserts:
public class BLL : DAL
{
    ...
    public int BulkInsertSalesData(List<MyObject> model)
    {
        int recordsAffected = this.ExecuteBulkInsert(model,
                                                     "sales",
                                                     BulkInsertType.DBInsertAttribute);
        return recordsAffected;
    }

    public int UpsertSalesData(List<MyObject> model)
    {
        int recordsAffected = this.ExecuteStoredProc(model.AsEnumerable(),
                                                     "storedprocname",
                                                     BulkInsertType.DBInsertAttribute);
        return recordsAffected;
    }
    ...
}
List<MyObject> myObjects = new List<MyObject>(){new MyObject(),new MyObject(),...};
int result = BLL.BulkInsertSalesData(myObjects);
result = BLL.UpsertSalesData(myObjects);
I personally believe that the closer you get to the outward-facing implementation, the simpler the interface should be. You can see how that works in the code samples above.
The Program Class
This is where the magic happens. We instantiate the singleton ConnectionString and BLL objects, check the contents of the connection string, and then exercise the test methods. Notice the compiler directive that turns the use of singletons on and off. Keep in mind that if you turn singletons off, you have to do it in the MyClass object as well.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using PaddedwallDAL;
using ObjectExtensions;

namespace TestHarness
{
    class Program
    {
        static PWConnectionString PWConnString
        {
#if __USE_SINGLETONS__
            get { return PWConnectionStringleton.Instance.PWConnString; }
#else
            get; set;
#endif
        }

        static BLL BLL
        {
#if __USE_SINGLETONS__
            get { return BLLSingleton.Instance.BLL; }
#else
            get; set;
#endif
        }

        static void Main(string[] args)
        {
            try
            {
                Global.IsConsoleApp = true;

#if __USE_SINGLETONS__
                PWConnectionStringleton.Instance.Init("Test", "localhost", "master");
                BLLSingleton.Instance.Init(PWConnectionStringleton.Instance.PWConnString);
                Global.WriteLine(string.Format("ConnectionString (base64) = {0}", PWConnString.ConnectionString));
                Global.WriteLine(string.Format("ConnectionString (ascii) = {0}", PWConnString.ConnectionString.Base64Decode()));
                BLL.MyMethod();
                BLLSingleton.Instance.BLL.MyMethod();
                MyClass myClass = new MyClass();
#else
                PWConnString = new PWConnectionString(Guid.NewGuid().ToString(), "localhost", "Test");
                BLL = new BLL(PWConnString);
#endif

                BLL.TestEntity1();
                BLL.TestEntity2();
                BLL.TestEntity3();
            }
            catch (Exception ex)
            {
                Global.WriteLine(string.Format("{0}{1}{2}", ex.Message, Environment.NewLine, ex.StackTrace));
            }

            Console.WriteLine();
            Console.WriteLine("Press a key...");
            Console.ReadKey();
        }
    }
}
Other Notes
It wasn't necessary to instantiate the connection string or BLL objects as singletons, but I figured the code needed to be tested, so here we are. For those of you who wanted to see the difference between using a singleton and NOT using a singleton, I created a compiler definition called __USE_SINGLETONS__ in the project properties, which tells the code to use singletons, but allows the programmer to switch between the two paradigms by simply undefining the compiler definition (see the top of the program.cs file, and the sketch below).
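Purely as an illustration (the actual settings are in the download, so treat this as a sketch rather than the project's exact mechanism), the switch might look something like this at the top of program.cs:
// __USE_SINGLETONS__ is defined project-wide (project properties > Build > conditional compilation symbols).
// To exercise the non-singleton code path, un-comment the #undef below - it must appear
// before any using statements at the top of program.cs.
//#undef __USE_SINGLETONS__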
Closing Comments
I've tried to make using ADO as generic as possible because I gotta back up my general dislike of ORMs. Like my dad always told me, "If you're gonna talk the talk, make sure you can walk the walk."
If this article violates your warped sense of "best practice", well, that's on you. However, don't feel like you can't offer up where you think I went wrong, and how society as a whole is much the less for it, or certain protected classes of individuals are likely to fall into a fiery pit.
Breaking Change - A .Net Core Conversion Version
Let me start off by saying this was done ONLY out of curiosity, and I am in no way advocating the use of .Net Core. In point of fact, after my brief exposure to VS2019 and the whole Core paradigm, I ain't impressed. That being said, here we go...
First Things First
NOTE: If you don't want to convert it yourself, I provided a new download for this version (at the top of this article).
This code uses .Net Core 3.1 (which is, as of this writing, in "pre-release" - make of that what you will), and was created in VS2019 (I don't know if Core 3.1 is available for VS2017). Keep in mind that beyond a few using statement changes, NONE of the actual class code had to change to support Core. So if you're ready, here's what I did:
- Create a new .Net Core class library project, called PaddedwallDalCore, and delete the default Class1.cs file.
- Add a new .Net Core console application to the solution, called TestHarness, and set it as the "startup project".
- Add a reference to the class library project in the console app project.
- Use NuGet to add Microsoft.Data.SqlClient to the solution.
- Copy the following folders/files into the appropriate solution projects from the original PaddedwallDAL solution. For those that didn't notice, VS2019 doesn't require you to show hidden files and "include in project" the folders/files that you just copied. Once copied, they are automatically added to the project (and you can still choose to exclude them like you could in past versions of Visual Studio).
- In the entire solution, "Replace in Files" all instances of using System.Data.SqlClient with using Microsoft.Data.SqlClient (see the snippet below).
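For clarity, the namespace swap from that last step is the only source-level change the class code needs:
// Before (.Net Framework version of the solution):
using System.Data.SqlClient;

// After (.Net Core version, via the Microsoft.Data.SqlClient NuGet package):
using Microsoft.Data.SqlClient;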
Unless I forgot a step, all of the above takes about 15 minutes. When you compile/run it, it acts just like the .Net Framework version.
Comments about ADO in .Net Core
IMHO, it seems kinda half-assed. Instead of bringing it ALL into the Core Microsoft namespace, they require you to keep using System.Data. Not only that, Core's ADO support is STILL in the pre-release phase, and they're changing - or worse, deleting - stuff from Core (without telling anyone). IMHO, this does NOT exactly promote its rapid adoption in enterprise-level code, nor advocate for its use.
Lastly, I often hear the term "cross-platform" associated with Core. On Linux/iOS, without Mono, your Core stuff WILL NOT RUN. That makes it just like .Net Framework, and thus, NOT cross-platform in the truest sense of the term.
So, if you wanna do Core, more power to you, I suppose, but I won't be until it's STABLE - and been stable for at least a year. Not a fan...
Points of Interest
If you want to see an example of how to extend this code, go to this article - Generic DAL Revisit - Making it Your Own, A Practical Example [^].
Why are they releasing dangerous felons because of the coronavirus, but arresting people for violating the stay-at-home orders? Things that make you go "hmmmm."
History
- 2020.03.06 Added Core 3.1 version.
- 2020.03.05 Initial release.