Introduction
Enhancing an existing large code base has its own set of challenges, and one hugely painful one is code that eschews typesafety for short-term expediency, particularly around the different forms of object-to-database and object-to-datagrid mapping.
This article uses some advanced techniques based around the sparsely documented TypedReference and DynamicMethod types to put some typesafety back in:
- Fully compiled code for setting, getting and normalising public fields of classes or structures without boxing;
- Special handling of nullable types, including user-defined structures and "embedded" nulls;
- Normalisation of empty strings, NaN, Infinity, DateTime.MinValue and more;
- Declarative typesafe wrappers for calling SQL Server Stored Procedures and reading the results;
- Optimised support for Table Valued Parameters, eliminating the inefficient DataTable normally required;
- Full support for SQL Server T-SQL type rules, including handling of non-finite floating point values like NaN;
- Lightweight Database Manager class with support for cancelling all SQL calls via a CancellationToken.
Motivation
This section shows examples taken from the Database side of things, followed by some performance numbers.
The Simple
Given a stored procedure with output parameters:
CREATE PROCEDURE dbo.uspGetNVPIds
@idCount int = 1,
@first_value bigint OUTPUT,
@increment bigint OUTPUT
Then declare a mapping structure (field order must match):
public struct uspGetNVPIds
{
public int idCount;
[DB.Parameter(ParameterDirection.Output)]
public long first_value, increment;
}
And call it via DB, a simple Database Manager:
var token = CancellationToken.None;
var args = new DB.Parameters<uspGetNVPIds>("dbo.uspGetNVPIds");
args.Data.idCount = 5;
DB.ExecuteNonQuery(token, ref args);
var first_value = args.Data.first_value;
var increment = args.Data.increment;
The Complex
Given a stored procedure with one result set and no arguments:
CREATE PROCEDURE dbo.uspNVPLoad AS
select d.Id, d.Name, d.Value, d.updatedAt, d.updatedBy,
CAST(d.[version] AS BIGINT) as [Version]
from tNVPData d
Then declare a mapping structure (field order does not matter):
public struct uspNVPLoad
{
public long Id;
public string Name, Value;
public long Version;
}
And call it via DB again:
var token = CancellationToken.None;
using (var reader = DB.ExecuteReader<uspNVPLoad>(token, "dbo.uspNVPLoad"))
{
while (reader.Read())
{
var id = reader.Row.Id;
var name = reader.Row.Name;
var value = reader.Row.Value;
var dbVersion = reader.Row.Version;
}
}
And The Painful
Given a typical hand-written bulk updater and corresponding UDT:
CREATE TYPE dbo.udtNVPairs AS TABLE
(
Id BIGINT NOT NULL PRIMARY KEY,
TraceId INT,
Name NVarChar(255) NOT NULL,
Value NVarChar(255) NOT NULL
)
CREATE PROCEDURE dbo.uspNVPSaveMany
@values dbo.udtNVPairs readonly,
@updatedBy nvarchar(16) OUTPUT,
@updatedAt smalldatetime OUTPUT
AS
    -- body elided (see source); the statement ends with an OUTPUT clause:
    OUTPUT src.TraceId, cast(inserted.[version] as bigint) as [Version]
Then declare mapping structures:
public struct udtNVPairs
{
public long Id;
public int TraceId;
public string Name, Value;
}
public struct uspNVPSaveMany
{
public DB.Table<udtNVPairs> values;
[DB.Parameter("@updatedBy", ParameterDirection.Output)]
public string LastSavedBy;
[DB.Parameter("@updatedAt", ParameterDirection.Output)]
public DateTime LastSavedAtUtc;
public struct Result
{
public int TraceId;
public long Version;
}
}
And call it via DB again:
var token = CancellationToken.None;
var table = new DB.Table<udtNVPairs>(Data.Count);
for (int i = 0; i < Data.Count; i++)
{
    table.Row.Id = i;           // Id and Value are NOT NULL in the UDT
    table.Row.TraceId = i;
    table.Row.Name = "Data";
    table.Row.Value = "Value" + i;
    table.RowCommitAsNew();
}
var args = new DB.Parameters<uspNVPSaveMany>("dbo.uspNVPSaveMany");
args.Data.values = table;
using (var reader = DB.ExecuteReader<uspNVPSaveMany.Result, uspNVPSaveMany>(token, ref args))
{
while (reader.Read())
{
var traceId = reader.Row.TraceId;
var version = reader.Row.Version;
}
}
args.ReadAnyOutputs();
var lastSavedBy = args.Data.LastSavedBy;
var lastSavedAt = args.Data.LastSavedAtUtc;
Normalisation
Normalisation occurs on every set and get: all fields are normalised to their Primary value, including values embedded within nullable types, as follows:
| Type     | Primary           | Alternatives                       |
|----------|-------------------|------------------------------------|
| float    | NaN               | PositiveInfinity, NegativeInfinity |
| double   | NaN               | PositiveInfinity, NegativeInfinity |
| char     | '\0'              |                                    |
| DateTime | default(DateTime) | <= DateTime.MinValue               |
| long     | long.MaxValue     |                                    |
| int      | int.MaxValue      |                                    |
| decimal  | decimal.MaxValue  |                                    |
| string   | null              | string.Empty                       |
Note that this table describes the DynamicField class itself; the additional SQL wrappers on top will also handle required string fields in the appropriate way by normalising them to string.Empty.
The basic rule is that if a value matches any entry in its row, it is regarded as empty and the returned result is the Primary empty value; for example, a double field holding PositiveInfinity is read back as NaN.
The following example will be normalised as per the table above, and the SQL wrappers will then send DBNull (recall that T-SQL does not allow non-finite values such as NaN or Infinity):
struct SubNormals
{
    [FieldOptional]
    public double One;
    public double? Two;
    public double Required;
}
// struct fields cannot have initializers, so the values are assigned on construction:
var empty = new SubNormals
{
    One = double.PositiveInfinity,
    Two = double.NegativeInfinity,
    Required = double.NaN
};
The above mapping is essential in financial applications where non-finite values are common. When dealing with a high volume of such calculations it is far better to avoid nullable types, so the attributed approach is preferred.
Performance
It is difficult to get repeatable timings, especially the "startup" time for the first use of DynamicMethods within an AppDomain, so those values are ignored here. The timings are also presented only for the non-Database examples, to avoid the impact of the delay-access behaviour of ADO.NET. The timings for the original non-typesafe approach and the new approach from the samples are:
- AddingTypeSafety - setup for New is 42ms and Original is only 8ms, but subsequent runs for both are 0ms
- DTOsToDataGrid - setup for New is 37ms and Original is only 12ms, but subsequent runs for both are 0ms
Conclusion
The setup for this approach is around four times slower, but the compiled code and metadata caching make subsequent runs no slower than direct manual coding. The Database side of things should perform similarly, though the Reader may be a little slower to set up each time.
Note that because .NET is optimised for fields rather than properties (e.g. you cannot take the address of a property), and because this solution uses structs and avoids Linq, it is also highly memory efficient, both in the terse IL generated and in the (almost zero) heap usage.
Object Mapping
This section presents two examples of real-world code revolving around object rather than more typesafe approaches; these form the core of the timing tests just presented.
Fields on Objects
The existing code is maintaining an OO hierarchy for the objects but uses a basic non-typesafe object bag for the field storage as follows:
public enum FieldName
{
FieldString,
FieldDouble,
FieldDateTime,
}
readonly IDictionary<FieldName, object> Fields =
new SortedList<FieldName, object>
{
{FieldName.FieldString, "MyFieldString"},
{FieldName.FieldDouble, 1.23},
};
public T GetFieldAs<T>(FieldName name) where T : class
{
object res;
Fields.TryGetValue(name, out res);
return res as T;
}
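The GetField<double> call used below is a value-typed counterpart to GetFieldAs<T> that this excerpt does not show; a minimal sketch of what it presumably looks like:
public T? GetField<T>(FieldName name) where T : struct
{
    object res;
    Fields.TryGetValue(name, out res);
    // null when the field is missing or holds a value of the wrong type
    return res is T ? (T?)(T)res : default(T?);
}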
In the methods of the OO hierarchy all the fields needed by that method then need to be extracted as follows:
public bool OldExtractValues(out FieldName? errorField)
{
    var firstField = GetFieldAs<string>(FieldName.FieldString);
    if (string.IsNullOrEmpty(firstField))
    {
        errorField = FieldName.FieldString;
        return false;
    }
    var secondField = GetField<double>(FieldName.FieldDouble);
    if (!secondField.HasValue)
    {
        errorField = FieldName.FieldDouble;
        return false;
    }
    var someBusinessTask = firstField + secondField.ToString();
    errorField = null;
    return true;
}
This can be replaced by a struct with the fields marked up within the OO hierarchy as follows:
public struct Values
{
public string FieldString;
public double FieldDouble;
[FieldOptional]
public DateTime FieldDateTime;
}
Then the methods can be simplified to focus on the business logic:
public static bool NewExtractValues(out FieldName? errorField)
{
    var values = new Extractor<Values>();
    if (!values.Extract(Fields, out errorField)) return false;
    var someBusinessTask = values.Fields.FieldString + values.Fields.FieldDouble.ToString();
    return true;
}
And this shared routine wraps the DynamicFields class from this article:
public struct Extractor<T> where T : struct
{
public T Fields;
public bool Extract(IDictionary<FieldName, object> fields, out FieldName? errorField)
{
var dynamic = new DynamicFields<T>();
var ptr = __makeref(Fields);
foreach (var field in dynamic)
{
var key = Enum<FieldName>.Parse(field.Name);
object value;
fields.TryGetValue(key, out value);
var code = field.TrySetUntyped(ptr, value);
if (code > DynamicField.Code.IsEmpty)
{
errorField = key;
return false;
}
}
errorField = null;
return true;
}
}
Fields to DataGrid
The existing code has a Data Provider emitting DTOs in an OO hierarchy.
Fields with the same name generally have the same type and other parts of the code (e.g. a DataGrid) expect boxed normalised values.
public abstract class DTO
{
    // Abridged in the article; TypeId (used by the maps below) is assumed
    // to be derived from the concrete type's MetadataToken
    public readonly int TypeId;
    protected DTO(string name = null)  // name presumably feeds a display property elided here
    {
        TypeId = GetType().MetadataToken;
    }
}
public class Data1DTO : DTO
{
    public string FieldString;
    [FieldOptional]
    public double FieldOptionalDouble;
    public int FieldInteger;
    public Data1DTO()
        : base("Data1")
    {
    }
}
public class Data2DTO : DTO
{
    [FieldOptional]
    public string FieldString;
    public DateTime FieldDate;
}
readonly DTO[] Data = new DTO[]
{
new Data1DTO { FieldString = "Data1Field1", FieldInteger = 1,
FieldOptionalDouble = 1.23 },
new Data1DTO { FieldString = "Data1Field2", FieldInteger = 2,
FieldOptionalDouble = double.NaN },
new Data2DTO { FieldString = "Data2Field1", FieldDate = DateTime.Today },
new Data2DTO { FieldDate = DateTime.Today.AddDays(7) },
};
There is an existing map, generated either at run-time using reflection or at compile time using T4 etc.:
public enum OldFieldNames
{
FieldString,
FieldInteger,
FieldDate,
FieldOptionalDouble,
};
static readonly IDictionary<int, IDictionary<OldFieldNames, Func<DTO, object>>> OldMap
= new SortedList<int, IDictionary<OldFieldNames, Func<DTO, object>>>()
{
{typeof(Data1DTO).MetadataToken, new SortedList<OldFieldNames, Func<DTO, object>>{
{OldFieldNames.FieldString, dto => ((Data1DTO)dto).FieldString},
{OldFieldNames.FieldInteger, dto => ((Data1DTO)dto).FieldInteger},
{OldFieldNames.FieldOptionalDouble, dto => ((Data1DTO)dto).FieldOptionalDouble},
}},
{typeof(Data2DTO).MetadataToken, new SortedList<OldFieldNames, Func<DTO, object>>{
{OldFieldNames.FieldString, dto => ((Data2DTO)dto).FieldString},
{OldFieldNames.FieldDate, dto => ((Data2DTO)dto).FieldDate},
}},
};
The existing approach to obtaining fields and columns etc is roughly like this (see sample code for full details):
public static object OldGetField(DTO dto, OldFieldNames field)
{
object res = null;
IDictionary<OldFieldNames, Func<DTO, object>> fields;
if (OldMap.TryGetValue(dto.TypeId, out fields))
{
Func<DTO, object> extractor;
if (fields.TryGetValue(field, out extractor))
{
res = extractor(dto);
}
}
return res;
}
public static object[] OldGetColumn(OldFieldNames field)
{
    var res = new object[Data.Length];
    for (int i = 0; i < Data.Length; i++) res[i] = OldGetField(Data[i], field);
    return res;
}
public static object[][] OldGetMatrix(params OldFieldNames[] fields)
{
    var res = new object[fields.Length][];
    for (int i = 0; i < fields.Length; i++) res[i] = OldGetColumn(fields[i]);
    return res;
}
A cleaner approach using the DynamicFields class from this article follows.
Note that the same options for generating the NewMap below are still open:
static IDictionary<int, DynamicFields> NewMap;
static ISet<string> NewFieldNames;
static void NewCreateMap()
{
NewMap = new SortedList<int, DynamicFields>()
{
{typeof(Data1DTO).MetadataToken, new DynamicFields(typeof(Data1DTO), typeof(DTO))},
{typeof(Data2DTO).MetadataToken, new DynamicFields(typeof(Data2DTO), typeof(DTO))},
};
NewFieldNames = new SortedSet<string>();
foreach (var pair in NewMap)
{
foreach (var field in pair.Value)
{
NewFieldNames.Add(field.Name);
}
}
}
Extraction with normalisation can now be done in one step:
public static object NewGetField(DTO dto, DynamicField.SearchKey key)
{
object res = null;
DynamicFields fields;
if (NewMap.TryGetValue(dto.TypeId, out fields))
{
DynamicField field;
if (fields.TryGet(key, out field) >= 0)
{
res = field.GetUntyped(dto);
}
}
return res;
}
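For completeness, a hedged sketch of column extraction in the new style, paralleling OldGetColumn above (NewGetColumn itself is not part of the original excerpt):
public static object[] NewGetColumn(DynamicField.SearchKey key)
{
    var res = new object[Data.Length];
    for (int i = 0; i < Data.Length; i++) res[i] = NewGetField(Data[i], key);
    return res;
}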
The DynamicField
This forms the core of the approach and is responsible for dissecting the Type and telling the DynamicGenerator to emit the IL for the DynamicMethods needed.
It looks for one or more of the following attributes on each public field:
- FieldOptionalAttribute - the one included in the source.
Cached Values
The class makes the following data available post construction:
public readonly string ParentName;
public string Name { get; protected set; }
public readonly Type ParentType;
public readonly Type RootType;
public readonly Type Type;
public readonly FieldInfo Field;
public readonly Type TypeNormalised;
public readonly Type TypeGenericTemplate;
public readonly TypeCode TypeCodeNormalised;
public readonly bool IsOptional;
public readonly bool IsNullableType;
public readonly bool IsUDT;
public readonly Attribute[] Attributes;
Result Codes
The setters and getters can return codes that take into account the definition of empty for that type and whether it is semantically optional or not:
public enum Code
{
Ok = 0,
IsEmpty,
TypeMismatch,
NotOptional,
}
Getters
The getters exist in specialised typed and untyped versions and rely on TypedReferences:
public Code TryGetUntyped(TypedReference root, out object res)
{
var code = _GetterBoxed(root, out res);
return MapResult(code);
}
public Code TryGet(TypedReference root, out T res)
{
var code = _Getter(root, out res);
return MapResult(code);
}
Setters
The setters are similar:
public Code TrySetUntyped(TypedReference root, object value)
{
var code = _SetterBoxed(root, value);
return MapResult(code);
}
public Code TrySet(TypedReference root, T value)
{
var code = _Setter(root, value);
return MapResult(code);
}
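A hedged usage sketch tying the result codes, getters and setters together, using the SubNormals struct from the Normalisation section and the enumeration behaviour shown in the Extractor example earlier:
var data = new SubNormals();
var ptr = __makeref(data);
foreach (var field in new DynamicFields<SubNormals>())
{
    if (field.Name != "One") continue;  // the [FieldOptional] double
    // PositiveInfinity is an Alternative empty value for double, so it is
    // normalised to the Primary (NaN) and reported as empty
    var setCode = field.TrySetUntyped(ptr, double.PositiveInfinity);
    object boxed;
    var getCode = field.TryGetUntyped(ptr, out boxed);  // boxed NaN, Code.IsEmpty
}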
What is a TypedReference?
Officially, a TypedReference is just a managed pointer combined with the Type of the value being pointed to.
TypedReferences are first-class citizens of the IL world but remain largely undocumented in the C# language, even though it has support for them.
C#
An example of the C# language support for them is more helpful:
struct Test1
{
public double Field;
}
struct Test2
{
public Test1 Embedded;
}
static void Test()
{
Test2 instance = new Test2();
TypedReference ptr = __makeref(instance.Embedded.Field);
Type typeOfPtr = __reftype(ptr);
double myValue = __refvalue(ptr, double);
__refvalue(ptr, double) = 1.35;
}
IL
Note that there is no boxing/unboxing, and this compiles straight down to concise, verifiable IL using the mkrefany and refanyval opcodes:
.locals init ([0] valuetype Test2 'instance', [1] typedref ptr)
ldloca.s 'instance'
initobj Test2
ldloca.s 'instance'
ldflda valuetype Test1 Test2::Embedded
ldflda float64 Test1::Field
mkrefany System.Double
stloc.1
ldloc.1
refanyval System.Double
ldind.r8
ldloc.1
refanyval System.Double
ldc.r8 1.3500000000000001
stind.r8
The DynamicGenerator
This is responsible for generating the DynamicMethods and creating delegates for subsequent use by the DynamicFields.
Some highlights include:
Dynamic Methods
The main thing to note in CreateMethod is that when creating a DynamicMethod a complex set of security rules applies to both a) which types are visible, and b) whether the IL inside the method must be verifiable. The best approach is to neutralise both issues as follows:
static MethodInfo CreateMethod(out ILGenerator ilg, string name, Type returnType, Type rootType = null, params Type[] args)
{
var moduleForSecurity = rootType != null ? rootType.Module : typeof(DynamicGenerator).Module;
var res = new DynamicMethod(name, returnType, args, moduleForSecurity, skipVisibility: true);
ilg = res.GetILGenerator();
return res;
}
static Delegate CreateDelegate(Type type, MethodInfo method)
{
return ((DynamicMethod)method).CreateDelegate(type);
}
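As a hypothetical illustration of these helpers (not from the source), here is a hand-rolled getter for the Test1 struct from the TypedReference example earlier, built with CreateMethod and CreateDelegate:
delegate double DemoGetter(TypedReference root);

static DemoGetter CreateDemoGetter()
{
    ILGenerator il;
    var method = CreateMethod(out il, "Demo_Get_Test1_Field",
        typeof(double), typeof(Test1), typeof(TypedReference));
    il.Emit(OpCodes.Ldarg_0);                                // the TypedReference
    il.Emit(OpCodes.Refanyval, typeof(Test1));               // -> managed pointer to the Test1
    il.Emit(OpCodes.Ldfld, typeof(Test1).GetField("Field")); // -> float64, no boxing
    il.Emit(OpCodes.Ret);
    return (DemoGetter)CreateDelegate(typeof(DemoGetter), method);
}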
Fast Creators
It is useful to be able to create class instances on the fly without using reflection, even if they have non-public constructors with arguments.
The CreateCreator<> and related overloads perform this task and all end up here:
static Delegate _CreateCreator(Type @delegate, string name, Type retType, Type typeToCreate, Type rootType, params Type[] constructorArgs)
{
    ILGenerator il;
    var res = CreateMethod(out il, name, retType, rootType, constructorArgs);
    var constructor = typeToCreate.GetConstructor(
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
        null, constructorArgs, null);
    _EmitCreator(il, constructor, retType);
    return CreateDelegate(@delegate, res);
}
The emission of the IL can often be more elegant and easier to understand than the Linq Expressions alternative:
static void _EmitCreator(ILGenerator il, ConstructorInfo constructor, Type typeToReturn)
{
    var typeToCreate = constructor.DeclaringType;
    var argc = constructor.GetParameters().Length;
    for (int i = 0; i < argc; i++) il.Emit(OpCodes.Ldarg, (short)i); // ldarg takes an int16 operand
    il.Emit(OpCodes.Newobj, constructor);
    if (typeToReturn != null && typeToReturn != typeToCreate) il.Emit(OpCodes.Castclass, typeToReturn);
    il.Emit(OpCodes.Ret);
}
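A hedged usage sketch (the public CreateCreator<> overloads are not shown above, so this calls _CreateCreator directly; the exact shape is an assumption):
// create Data1DTO instances on the fly via its parameterless constructor
var create = (Func<Data1DTO>)_CreateCreator(
    typeof(Func<Data1DTO>), "Create_Data1DTO",
    typeof(Data1DTO), typeof(Data1DTO), typeof(Data1DTO));
DTO dto = create();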
Using Callbacks
A callback approach is used to allow combining different typed routines without generating everything through IL.
The API is slightly different from the usual Getters and makes heavy use of TypedReferences:
public Code TryGet(TypedReference root, DynamicGenerator.DelegateGetterCallback callback, TypedReference state)
{
var code = _GetterViaCallback(root, callback, state);
return MapResult(code);
}
As an example, assume that there is a set of pre-defined delegates each one handling a specific type:
// a concrete delegate type is needed here: Delegate.DynamicInvoke cannot be
// used because a TypedReference can never be boxed
delegate void TypedRefHandler(TypedReference value);
struct myState
{
    public TypedRefHandler SomeRoutineThatTakesATypedReference;
}
void MyGetterCallback(TypedReference state, TypedReference value, DynamicGenerator.Result code)
{
    if (code == DynamicGenerator.Result.TypeMismatch) return;
    myState typedState = __refvalue(state, myState);
    typedState.SomeRoutineThatTakesATypedReference(value);
}
Then a routine like this can be created that accepts the instance (via the usual TypedReference) along with a DynamicField on it, chooses the right delegate, and indirectly causes the normalised value to be sent to it:
void Test(TypedReference root, DynamicField field)
{
    var state = new myState();
    // illustrative: pick the pre-defined delegate matching field.Type
    state.SomeRoutineThatTakesATypedReference = PickHandlerFor(field.Type);
    var code = field.TryGet(root, MyGetterCallback, __makeref(state));
}
Note that this approach is the crux of how the SQL wrappers handle reading/writing values from/to the SQL classes without boxing.
The SQL Wrappers
Applying the described techniques to creating SQL wrappers involved a lot of spelunking through reverse-engineered code for the SQL ADO.NET provider.
This resulted in three classes: DB.Parameters for SqlParameters, DB.Table for Table Valued Parameters, and DB.Reader for the underlying SqlDataReader itself.
DB.Field
The DB.Field class is built on top of the DynamicField and the DynamicGenerator.
It looks for one or more of the following additional attributes on each public field:
- SqlFacetAttribute - primarily to control the length of strings;
- DB.ParameterAttribute - to control the SqlParameter direction and name.
It contains the following information:
public readonly int Ordinal;
public string Name { get; protected set; }
public readonly DynamicField Binder;
public string SqlName;
public readonly SqlDbType SqlType;
public readonly int SqlMaxLength;
public readonly ParameterDirection SqlDirection;
// ParameterDirection: Input = 1, Output = 2, InputOutput = 3, so the low bit marks an input
public bool IsInput { get { return (((int)SqlDirection) & 1) != 0; } }
public bool IsOutput { get { return !IsInput; } }
public bool IsTVP { get { return SqlType == SqlDbType.Structured; } }
The callback technique described earlier is used to set values into a SqlDataRecord and also to create the right SqlType around the given typed value. Here is the callback for the former:
void SetValueCallback(TypedReference stateRef, TypedReference valueRef, DynamicGenerator.Result code)
{
if (code == DynamicGenerator.Result.Ok)
{
var state = __refvalue(stateRef, StateForDataRecord);
state.Setter(state, valueRef);
}
}
The callback for the latter involves storing the created (boxed) SqlType back into the state structure:
void CreateParameterCallback(TypedReference stateRef, TypedReference valueRef, DynamicGenerator.Result code)
{
if (code == DynamicGenerator.Result.Ok)
{
var state = __refvalue(stateRef, StateForParameter);
__refvalue(stateRef, StateForParameter).SqlValue = state.Creator(state, valueRef);
}
}
This is then exposed to the other DB classes to extract fields as boxed SqlTypes as follows:
internal DynamicField.Code TryCreateSqlValue(TypedReference source, out object sqlValue)
{
var state = new StateForParameter
{
IsNullable = Binder.IsNullableType,
Creator = _map[Binder.TypeCodeNormalised].Value2,
SqlValue = null
};
var res = Binder.TryGet(source, CreateParameterCallback, __makeref(state));
if (res == DynamicField.Code.NotOptional && SqlType == SqlDbType.NVarChar)
{
sqlValue = _SqlStringEmptyBoxed;
return DynamicField.Code.Ok;
}
sqlValue = state.SqlValue;
return res;
}
DB.Parameters<T>
This generic struct wraps a marked-up one and uses DB.Fields to read/write values from/to SqlParameters.
A rough overview of the key routine follows (see source for full details):
internal SqlParameter[] CreateParameters()
{
    _parameters = new SqlParameter[_metaData.Length];
    var ptr = __makeref(Data);
    for (int i = 0; i < _metaData.Length; i++)
    {
        var info = _metaData[i];
        var sql = _parameters[i] = new SqlParameter(info.SqlName, info.SqlType)
            { Direction = info.SqlDirection };
        object value;
        DynamicField.Code code;
        if (info.IsTVP)
        {
            code = info.Binder.TryGetUntyped(ptr, out value);
            if (code == DynamicField.Code.Ok)
            {
                sql.Value = (IEnumerable<SqlDataRecord>)value;
                continue;
            }
        }
        else
        {
            code = info.TryCreateSqlValue(ptr, out value);
            if (code == DynamicField.Code.IsEmpty)
            {
                sql.Value = null;
                continue;
            }
            if (code == DynamicField.Code.Ok)
            {
                sql.SqlValue = value;
                continue;
            }
        }
    }
    return _parameters;
}
DB.Reader<T>
This generic struct wraps a marked-up one and uses DB.Fields to stream values from the underlying SqlDataReader.
A rough overview of the key routine follows (see source for full details):
public bool Read()
{
    if (!_reader.Read()) return false;
    var ptr = __makeref(Row);
    var state = new StateForGetter { Reader = _reader };
    DynamicField.Code code;
    for (int i = 0; i < _metaData.Length; i++)
    {
        var md = _metaData[i];
        if (_reader.IsDBNull(i))
        {
            if (md.IsOptional)
            {
                code = md.TrySetUntyped(ptr, null);
            }
            else
            {
                code = DynamicField.Code.NotOptional;
            }
        }
        else
        {
            state.Ordinal = i;
            state.Field = md;
            code = _map[md.TypeCodeNormalised](state, ptr);
        }
        if (code <= DynamicField.Code.IsEmpty) continue;
        // (error handling elided — see source)
    }
    return true;
}
The approach used here relies on generics and the .NET TypeCode to choose the right delegate to call.
Note how the TypedReference is simply passed through to the DynamicField:
static readonly IDictionary<TypeCode, SqlGetterDelegate> _map =
new SortedList<TypeCode, SqlGetterDelegate>
(Enum<TypeCode>.Count, Enum<TypeCode>.Comparer)
{
{TypeCode.Boolean, (s, tr) => s.Set<bool> (tr, s.Reader.GetBoolean(s.Ordinal))},
{TypeCode.Byte, (s, tr) => s.Set<byte> (tr, s.Reader.GetByte(s.Ordinal))},
{TypeCode.Char, (s, tr) => s.Set<char> (tr, s.Reader.GetString(s.Ordinal)[0])},
{TypeCode.DateTime, (s, tr) => s.Set<DateTime>(tr, s.Reader.GetDateTime(s.Ordinal))},
{TypeCode.Decimal, (s, tr) => s.Set<decimal> (tr, s.Reader.GetDecimal(s.Ordinal))},
{TypeCode.Double, (s, tr) => s.Set<double> (tr, s.Reader.GetDouble(s.Ordinal))},
{TypeCode.Int16, (s, tr) => s.Set<short> (tr, s.Reader.GetInt16(s.Ordinal))},
{TypeCode.Int32, (s, tr) => s.Set<int> (tr, s.Reader.GetInt32(s.Ordinal))},
{TypeCode.Int64, (s, tr) => s.Set<long> (tr, s.Reader.GetInt64(s.Ordinal))},
{TypeCode.Single, (s, tr) => s.Set<float> (tr, s.Reader.GetFloat(s.Ordinal))},
{TypeCode.String, (s, tr) => s.Set<string> (tr, s.Reader.GetString(s.Ordinal))},
};
delegate DynamicField.Code SqlGetterDelegate(StateForGetter state, TypedReference tr);
struct StateForGetter
{
public SqlDataReader Reader;
public int Ordinal;
public DynamicField Field;
public DynamicField.Code Set<FT>(TypedReference ptr, FT value)
{
return ((DynamicField<FT>)Field).TrySet(ptr, value);
}
}
DB.Table<T>
This generic class wraps a Row and uses DB.Fields to store values for later playback to the SqlDataRecord - i.e. this is a Table Valued Parameter.
Note that the normal approach uses the inefficient DataTable; this implementation instead relies on a List<T> of Rows and therefore has to manage the SqlMetaData manually.
A rough overview follows (see source for full details):
public class Table<T> : IEnumerable<SqlDataRecord>, IEnumerator<SqlDataRecord>
{
public T Row;
readonly List<T> _rows;
readonly DB.Field[] _metaData;
int _current;
DB.Field.SqlDataRecordExtended _record;
Table<T> PrepareSqlDataRecord()
{
if (_record == null)
{
var count = _metaData.Length;
var md = new SqlMetaData[count];
for (int i = 0; i < count; i++)
{
DB.Field info = _metaData[i];
var m = info.SqlMaxLength != 0 ?
new SqlMetaData(info.Name, info.SqlType, info.SqlMaxLength)
: new SqlMetaData(info.Name, info.SqlType);
var ord = info.Ordinal;
md[ord] = m;
}
_record = new DB.Field.SqlDataRecordExtended(md);
}
ReadReset();
return this;
}
bool System.Collections.IEnumerator.MoveNext()
{
    if (ReadMoveNext())
    {
        var ptr = __makeref(Row);
        for (int i = 0; i < _metaData.Length; i++)
        {
            DB.Field info = _metaData[i];
            var code = info.TrySetValue(_record, ptr);
            if (code == DynamicField.Code.Ok) continue;
            if (code == DynamicField.Code.IsEmpty)
            {
                _record.SetDBNull(info.Ordinal);
                continue;
            }
        }
        return true;
    }
    return false;
}
}
The Database Manager
The DB static class is a lightweight singleton that contains the other types and provides simple helper routines.
Initialisation
This wraps a SqlConnectionStringBuilder with the normal settings required for Enterprise environments:
public static void Initialise(string displayName, string dataSource, string initialCatalog, string applicationName)
{
}
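The body is elided above; a minimal sketch of the sort of settings meant, assuming a private _connectionString field (the field name and the exact settings are assumptions, not the article's actual code):
static string _connectionString;

public static void Initialise(string displayName, string dataSource, string initialCatalog, string applicationName)
{
    // displayName is presumably kept for logging/diagnostics
    var csb = new SqlConnectionStringBuilder
    {
        DataSource = dataSource,
        InitialCatalog = initialCatalog,
        ApplicationName = applicationName,
        IntegratedSecurity = true,   // typical Enterprise setting
        Pooling = true,
    };
    _connectionString = csb.ConnectionString;
}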
Execution
The following generic typed routines are available, all with or without DB.Parameters and cancellation support:
- ExecuteNonQuery
- ExecuteScalar
- ExecuteReader - returning a typed DB.Reader
- ExecuteReaderRaw - returning the usual SqlDataReader

For very simple fixed-position private stored procedures it can be easier to just use ExecuteReaderRaw, in which case the following extension routines become useful:
internal static string GetOptionalString(this SqlDataReader @this, int index)
{
var res = @this[index];
if (res != DBNull.Value)
{
var str = (string)res;
if (!string.IsNullOrWhiteSpace(str)) return str;
}
return null;
}
internal static T? GetOptionalValue<T>(this SqlDataReader @this, int index) where T : struct
{
var res = @this[index];
return res != DBNull.Value ? (T?)res : default(T?);
}
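A hedged usage sketch of these extensions (the stored procedure name here is hypothetical):
using (var reader = DB.ExecuteReaderRaw(token, "dbo.uspSomeSimpleProc"))
{
    while (reader.Read())
    {
        var name = reader.GetOptionalString(0);          // null when DBNull or whitespace
        var when = reader.GetOptionalValue<DateTime>(1); // null when DBNull
    }
}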
TVP Helpers
Table Valued Parameters only have to match the position and type of each field defined in the SQL UDT.
As a result, the following helper routines can provide the benefits of typesafe streaming without the typing overhead:
public static Table<KeyStruct<T>> CreateTVP<T>(int count, IEnumerable<T> items)
{
var table = new Table<KeyStruct<T>>(count);
foreach (var item in items)
{
table.Row.Key = item;
table.RowCommitAsNew();
}
return table;
}
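For example, streaming a list of ids as a single-column TVP (KeyStruct<T> is assumed to be a one-field wrapper struct supplied by the library):
var ids = new List<long> { 1, 2, 3 };
var tvp = DB.CreateTVP(ids.Count, ids); // a Table<KeyStruct<long>> streaming one bigint column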
Surrogates
The examples describe a situation where there might be a high number of instances of some Key class and only a subset of the fields within that class are required to be streamed into a TVP. In this case, the DB.Table class can use the Key instances directly as follows:
public struct udtKeySurrogate
{
public readonly static DB.Field[] MD = DB.Table<Key>.CreateMetaData<udtKeySurrogate>();
public long Value;
}
public struct uspNVPLoadById
{
[DB.Parameter("@ids")]
public DB.Table<Key> Keys;
}
In this unique case a helper on DB allows for easy streaming of the data:
var keys = new List<Key>();
var table = DB.CreateTVP(keys, udtKeySurrogate.MD);
Then setup continues as usual (see the examples for the full details):
var args = new DB.Parameters<uspNVPLoadById>("dbo.uspNVPLoadById");
args.Data.Keys = table;
Running the Examples
There are two examples showing the timing of the AddingTypeSafety and DTOsToDataGrid situations, which do not require a Database.
To run the other samples, the following should be done:
- Visual Studio 2012 will work as-is; on Visual Studio 2010 the Data Tools may need to be installed;
- Double-click the SQL.publish.xml file;
- If this says it cannot connect, then locate sqllocaldb.exe and run it with 'start' and 'Projects' as the arguments. The SqlLocalDB management utility is in the SqlLocalDB.MSI on MSDN.
History
- 1.0 - Initial write up
- 1.1 - Added section on subnormals and memory efficiency