Introduction
Under normal circumstances, it is strongly recommended that you don't concatenate strings to produce an SQL command, because of the risk of SQL Injection - and this is indeed a very good idea. But if you are inserting 1000 rows, it doesn't make much sense to issue 1000 ExecuteNonQuery instructions - as you would have to if you used an SqlCommand instance and parameterised queries.
All of the examples I had found used either strings or a StringBuilder to create a monolithic command and issue a single ExecuteNonQuery. Nasty! Unless you use an SqlBulkCopy instance instead...
Background
Basically, I have a Tracks table, a Playlists table and a LinkTrackToPlaylist table, which contains the row ID, the Guid Track Id, the Guid Playlist Id and an integer PlayOrder - I could have used a linked list for this, but an integer is fine and easier to maintain / debug. The row ID is an integer Identity field: a track can appear in several playlists, or even several times in the same playlist, and an integer is all I need to make the row unique.
I had to update the track list for a playlist recently, and I could either wait for the user to press OK and then work out which rows to delete, which to add, and which need renumbering (and thus a database UPDATE command) - or I could just delete the existing rows and insert the whole list as new rows.
Obviously, the second route is quicker to develop, probably easier to maintain, and definitely more reliable. It may even be quicker when used in an SQL Transaction as well (though I haven't checked that bit.)
But I don't want to issue 100 or 1000 ExecuteNonQuery instructions - each of those needs a round-trip to the server and back, so it's not exactly going to be efficient. Instead, I decided to use an SqlBulkCopy.
SqlBulkCopy
If you haven't met this, have a look - MSDN: SqlBulkCopy - it's a really handy class if you have a lot of work to do, and it is designed to work with Transactions, so it's simple to fit in with your code as well.
What it basically does is allow you to provide a data source and perform a large number of database operations (INSERT, for example) from it. It even has a "background" mode which lets it do its thing and then tells you when it's finished.
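As a rough sketch of that "background" usage (this is illustrative only - the table name, connection string and async pattern are my assumptions here, not code from this article, and it assumes .NET 4.5+ with System.Data.SqlClient):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

// Minimal sketch only - "SomeTable" and the connection string are placeholders.
static async Task BulkInsertAsync(string connectionString, DataTable data)
{
    using (SqlConnection con = new SqlConnection(connectionString))
    {
        await con.OpenAsync();
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(con))
        {
            bulkCopy.DestinationTableName = "SomeTable";
            bulkCopy.NotifyAfter = 500;                  // raise an event every 500 rows
            bulkCopy.SqlRowsCopied += (sender, e) =>
                Console.WriteLine(e.RowsCopied + " rows copied so far");
            await bulkCopy.WriteToServerAsync(data);     // runs without blocking the caller
        }
    }
}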
Using the Code
The first thing I needed was a DataTable with the appropriate data in, so I created this with a simple method call:
Type[] columnTypes = new Type[] { typeof(Guid), typeof(Guid), typeof(int) };
string[] columnNames = new string[] { "TrackId", "PlaylistId", "PlayOrder" };
DataTable dt = AudioPlaylist.MakeTable(columnTypes, columnNames);
private static DataTable MakeTable(IEnumerable<Type> columnTypes, IEnumerable<string> columnNames)
{
    if (columnTypes.Count() != columnNames.Count())
    {
        throw new ArgumentException("types and names do not match");
    }
    DataTable dt = new DataTable();
    // Walk the two sequences in step, adding a DataTable column for each name/type pair.
    IEnumerator<Type> types = columnTypes.GetEnumerator();
    IEnumerator<string> names = columnNames.GetEnumerator();
    while (types.MoveNext() && names.MoveNext())
    {
        dt.Columns.Add(names.Current, types.Current);
    }
    return dt;
}
The column names are the names of the database columns, as well as the names of the columns in the DataTable (the array of names will also be used later).
All the method does is create a new DataTable and add a column for each column we want to set in the database. Note that in this case the database table's ID column is not specified - I do not need to use it at any time, it is an Identity int field, and the database will cope with it. Indeed, it would cause an error if we tried to set it!
Because we aren't setting all the columns (and because I don't like to rely on the column order in the database being unchanged), we need to set the ColumnMappings:
foreach (string name in columnNames)
{
    bulkCopy.ColumnMappings.Add(name, name);
}
A ColumnMapping entry tells the SqlBulkCopy object which DataTable column maps to which database column - they can be different names, but in this case it is a lot easier just to use the same ones.
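If the names did differ, the mapping would simply spell out both sides - for example (the database column names here are made up purely for illustration):
bulkCopy.ColumnMappings.Add("TrackId", "FK_TrackId");       // DataTable column -> database column
bulkCopy.ColumnMappings.Add("PlaylistId", "FK_PlaylistId");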
Next, we add the actual data:
dt.Columns["PlayOrder"].AutoIncrement = true;
foreach (AudioTrack track in tracks)
{
dt.Rows.Add(track.Id, Id);
}
Since I want the play order to be just an increasing number, I don't have to set that - using AutoIncrement on the column does that for me. All I have to do is specify the actual column data - in this case the Guid Track ID and the Guid Playlist ID.
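If you need the numbering to start or step differently, the DataColumn also has AutoIncrementSeed and AutoIncrementStep properties - something like this (the seed and step values shown are just for illustration, not what my code uses):
DataColumn playOrder = dt.Columns["PlayOrder"];
playOrder.AutoIncrement = true;
playOrder.AutoIncrementSeed = 1;   // first generated PlayOrder value
playOrder.AutoIncrementStep = 1;   // increment between rows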
Then, the actual work is just one line of code:
bulkCopy.WriteToServer(dt);
The Whole Method
Obviously, there is more code needed to support this stuff - but it's all pretty simple, and a non-static member of the AudioPlaylist class:
public string SetTracks(IEnumerable<AudioTrack> tracks)
{
    using (SqlConnection con = DBAccess.DBCon)
    {
        SqlTransaction trans = con.BeginTransaction();
        // Remove the existing entries for this playlist.
        using (SqlCommand cmd =
            new SqlCommand("DELETE FROM LinkTrackToPlaylist WHERE PlayListId=@PID", con, trans))
        {
            cmd.Parameters.AddWithValue("@PID", Id);
            cmd.ExecuteNonQuery();
        }
        // Bulk insert the new track list within the same transaction.
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(con, SqlBulkCopyOptions.Default, trans))
        {
            Type[] columnTypes = new Type[] { typeof(Guid), typeof(Guid), typeof(int) };
            string[] columnNames = new string[] { "TrackId", "PlaylistId", "PlayOrder" };
            bulkCopy.DestinationTableName = "LinkTrackToPlaylist";
            foreach (string name in columnNames)
            {
                bulkCopy.ColumnMappings.Add(name, name);
            }
            DataTable dt = AudioPlaylist.MakeTable(columnTypes, columnNames);
            dt.Columns["PlayOrder"].AutoIncrement = true;
            foreach (AudioTrack track in tracks)
            {
                dt.Rows.Add(track.Id, Id);
            }
            try
            {
                bulkCopy.WriteToServer(dt);
                trans.Commit();
            }
            catch (Exception ex)
            {
                trans.Rollback();
                return ex.Message;
            }
        }
    }
    return null;
}
All of the code not described above is concerned with establishing the connection and transaction, deleting the old entries, or error handling.
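Just to show it in context, a hypothetical call might look like this (the playlist and newTracks variables are assumptions for the sake of the example, not part of the code above):
// 'playlist' is an AudioPlaylist instance; 'newTracks' is an IEnumerable<AudioTrack>.
string error = playlist.SetTracks(newTracks);
if (error != null)
{
    Console.WriteLine("Failed to save playlist: " + error);
}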
What? No Download?
No download! Why not? Because the code is going to have to be modified to work with your database structure. If I provided a sample, it would have to identify if SQL Server was installed, find the name of the instance, create a database and tables, and provide a clean-up mechanism afterwards. Too much work for something completely useless!
Points of Interest
Why do people recommend concatenating SQL statements to do this? It's stupid, dangerous, and unnecessary!
History