
Generating Insert or Update statements from table data using SQL Server

13 Nov 2007 · CPOL · 8 min read
Writing a SQL script to generate insert or update scripts for a table full of data.

[Screenshot: source code output]

Introduction

Shifting data around is a requirement for every developer and DBA. Whether it is building (or rebuilding!) a database, migrating data for releases, or just creating test data, I at least always seem to be opening a new query page and typing in 'insert into ....'. I know that SQL Server 2005 has great little widgets for creating insert statements based off the table, but what I have always needed was a way to generate an update or insert statement based on data in an existing table in a different database. Sometimes, a DTS / BCP import/export job is too much work when there is only one or two rows, or maybe you can't get a file anywhere near the server because of security restrictions. Maybe you just like doing things the hard way.

What you need is a T-SQL script which will read the contents of a table, then create update / insert statements for that data to go into the same table in a different server and database. This can be run wherever you have a query window, and you don't need to create a Stored Procedure or install any programs.

Background

I've always had little scripts written for specific tables, but on request from a friend, I decided to get serious and create a one-size-fits-all SQL generator which will generate a line of SQL for each row in a source table, which, when run on the target database, will perfectly replicate the data. I know there are tools around to do this, but this is quick and very easy. It will also generate statements either for a specific row or for all rows in the source table. I think it is a very useful little script to have in the 'My SQL Helpers' folder.

Using the code

The first port of call for a project like this is the system tables. I've always loved the system tables, which are one of those weird circumstances where the structure of the product you are using is described by metadata within the product. Like Reflection and other self-describing metadata, you can get a bit chicken-and-egg if you think about it for too long.

The first thing you need to do is get a list of the columns within the target table and the data type of each of those columns. This is done with a pretty simple piece of SQL:

SQL
SELECT so.name AS TableName, sc.name AS ColumnName, st.name AS TypeName, sc.length AS ColLength
     -- the 0x80 bit of syscolumns.status flags an identity column
     , CASE WHEN sc.status & 0x80 = 0x80 THEN 'Y' ELSE 'N' END AS IsIdent
     , sc.colorder AS ColOrder
FROM sysobjects so
INNER JOIN syscolumns sc
  ON so.id = sc.id
INNER JOIN systypes st
  ON sc.xtype = st.xusertype
WHERE so.name = 'SourceTableName'
ORDER BY sc.colorder

This will give you the list of columns within a source table, in the same order they are in the database. This is the basic structure needed to get the data out. The tricky column with the Case statement checks the Status binary column and generates a Y/N depending on whether or not the column is an identity column - which we will come to later.
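As an aside, if you are on SQL Server 2005 or later, the same column list can be pulled from the newer catalog views, which expose the identity flag directly rather than via the status bit. This is just a minimal sketch, not part of the downloadable script, and 'SourceTableName' is a placeholder:

SQL
-- Alternative for SQL Server 2005+: sys.columns exposes identity status directly
-- ('SourceTableName' is a placeholder).
SELECT c.name       AS ColumnName,
       t.name       AS TypeName,
       c.max_length AS ColLength,
       CASE WHEN c.is_identity = 1 THEN 'Y' ELSE 'N' END AS IsIdent,
       c.column_id  AS ColOrder
FROM sys.columns c
INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID('SourceTableName')
ORDER BY c.column_id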

The problem with getting the information out is that we need to loop over sets - which is awkward to do in SQL Server (well, pre .NET anyway). In order to collate the results, a temporary table is created to hold the actual output of the script. Another temporary table is created as a staging point for the data held in each column.

SQL
create table #output (Line varChar(4000), LineOrder int)
-- Holds the output for the script

create table #ColumnValues (ColName varChar(250), ColOrder int, RowNumber int
                           ,ColValue varchar(4000), ColType varchar(50)) 
 -- Holds the values for each column/row combination

Working out the algorithm

The basic algorithm is:

for each column in the source table
    if the column is not an identity column
        insert the column name and its value from the source table
        into the #ColumnValues table
    end if
end for

for each row in the #ColumnValues table
    while each column in the table belongs to the same row in the source table
        concatenate the update/insert statement together into a string variable
    end while

    at the end of each column set for a row:
    if an insert statement is desired
        create the insert-statement-specific text
    else
        create the update-statement-specific text
    end if
end for

select all rows from the #Output table
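Before diving into the tricky parts, here is a minimal sketch of how the outer "for each column" loop might look as a T-SQL cursor over the metadata query shown earlier. The variable names are illustrative and may differ from those in the downloadable script:

SQL
-- Illustrative cursor skeleton for the outer loop; @colName, @colType and
-- @isIdent are placeholder names.
DECLARE @colName varchar(250), @colType varchar(50), @isIdent char(1)

DECLARE colCursor CURSOR FOR
    SELECT sc.name, st.name,
           CASE WHEN sc.status & 0x80 = 0x80 THEN 'Y' ELSE 'N' END
    FROM sysobjects so
    INNER JOIN syscolumns sc ON so.id = sc.id
    INNER JOIN systypes st ON sc.xtype = st.xusertype
    WHERE so.name = 'SourceTableName'
    ORDER BY sc.colorder

OPEN colCursor
FETCH NEXT FROM colCursor INTO @colName, @colType, @isIdent

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @isIdent = 'N'
    BEGIN
        -- the dynamic SQL shown below copies this column's values
        -- into #ColumnValues
        PRINT 'processing column ' + @colName
    END
    FETCH NEXT FROM colCursor INTO @colName, @colType, @isIdent
END

CLOSE colCursor
DEALLOCATE colCursor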

This basic algorithm has some complicated parts when translated into T-SQL - the most conceptually difficult is selecting the values from a particular column in a table. The problem is that you need dynamic SQL - no problem there - but you need to store the result into some type of variable. You can't use locally declared T-SQL @ variables because they are not in the scope of the dynamic SQL. This is why a temporary table is used instead. This can be used in the dynamic SQL, which looks like this:

SQL
-- @colName, @colType, @tabName, @ColOrder, @numCols and @SortCol are populated
-- by the cursor loop; everything here is concatenated into one dynamic batch.
exec ('insert into #ColumnValues (ColName, ColOrder, ColValue, ColType)
select ''' + @colName + ''', ' + @ColOrder + '
     , Convert(nvarchar(4000), ' + @colName + '), ''' + @colType + '''
from ' + @tabName + ' order by ' + @SortCol + '
declare @counter int
set @counter = 0
update #ColumnValues Set @Counter = RowNumber = @Counter + (' + @numCols + ' * 10)
where ColName = ''' + @colName + '''')

This rather scary looking piece of code does a couple of things. The @colName variable is taken from the cursor loop, and is simply the name of the column in the source table (as are @colType and @tabName). The code works one column at a time across the table - like learning to read - left to right, then down to the next row, and so on. The insert statement takes the value read from that specific column of the source table and inserts it into the temp table (#ColumnValues). The next lines declare a counter and then assign an incrementing RowNumber to each row - this is to keep all the columns of the same row together; otherwise, the results would be all over the place, with the values in different columns matched with the wrong primary keys - a bit like a board game accidentally bumped and all the pieces in the wrong place. The last update statement uses the rather oblique Set @value = Column = @Value + number syntax - a bit of a hidden secret of SQL Server which allows you to update a running counter for each row in a table. You can either try to work out how it works, or accept that it does and move on.
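If you have never seen that syntax before, here is a tiny stand-alone demonstration you can run against a throwaway temp table (#Demo is purely illustrative and not part of the script):

SQL
-- Stand-alone demonstration of the Set @variable = Column = expression syntax.
CREATE TABLE #Demo (Name varchar(20), RowNumber int)
INSERT INTO #Demo (Name) VALUES ('a')
INSERT INTO #Demo (Name) VALUES ('b')
INSERT INTO #Demo (Name) VALUES ('c')

DECLARE @Counter int
SET @Counter = 0

-- Every row gets @Counter + 10, and @Counter keeps the value it just wrote,
-- so RowNumber ends up as 10, 20, 30 in the order the rows were updated.
UPDATE #Demo SET @Counter = RowNumber = @Counter + 10

SELECT Name, RowNumber FROM #Demo
DROP TABLE #Demo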

The row counter is then used a bit later on when creating the insert/update statement from the values in the #ColumnValues table. It is used as a loop switch so the code knows when one row's columns finish and the next row's begin. This is because the structure of the original table:

row1 : col1 col2 col3 col4 
row2 : col1 col2 col3 col4

is now represented in a table as:

row1 col1
row1 col2
row1 col3
row1 col4
row2 col1
row2 col2
etc....

The row counter (@RowNumber) is checked after each column in a post-loop check, to determine whether or not we have switched rows - i.e., from row1 to row2. At each row switch reading from the table, the insert/update statement is finished off and a new one is started. Each time an insert/update statement is finished off, it is inserted into the results table. When the script is finished, all of the statements are read from the temporary table, and the result will give you a nice list of insert statements which can then be executed into the server/database of your choice (providing, of course, that the exact same table definition exists in that database).
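To make the row-switch logic concrete, here is a simplified, insert-only sketch of that loop. It skips the quoting, NULL handling, and update-statement branch that the real script does, and the table and variable names are illustrative:

SQL
-- Simplified sketch of the statement-building loop over #ColumnValues.
DECLARE @curRow int, @lastRow int, @cols varchar(4000), @vals varchar(4000)
DECLARE @colName varchar(250), @colValue varchar(4000)
SET @lastRow = -1

DECLARE rowCursor CURSOR FOR
    SELECT RowNumber, ColName, ColValue
    FROM #ColumnValues
    ORDER BY RowNumber, ColOrder

OPEN rowCursor
FETCH NEXT FROM rowCursor INTO @curRow, @colName, @colValue

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @curRow <> @lastRow AND @lastRow <> -1
    BEGIN
        -- row switch: finish the previous statement and store it
        INSERT INTO #output (Line, LineOrder)
        VALUES ('insert into SourceTableName (' + @cols + ') values (' + @vals + ')', @lastRow)
        SET @cols = NULL
        SET @vals = NULL
    END

    SET @cols = ISNULL(@cols + ', ', '') + @colName
    SET @vals = ISNULL(@vals + ', ', '') + '''' + @colValue + ''''
    SET @lastRow = @curRow

    FETCH NEXT FROM rowCursor INTO @curRow, @colName, @colValue
END

-- flush the final row
IF @lastRow <> -1
    INSERT INTO #output (Line, LineOrder)
    VALUES ('insert into SourceTableName (' + @cols + ') values (' + @vals + ')', @lastRow)

CLOSE rowCursor
DEALLOCATE rowCursor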

To run the script, simply load it up in Query Analyzer or SQL Server Management Studio. Edit the @tab variable to the name of the source table, and type 'INSERT' or 'UPDATE' into the @StatementType variable. If you want an update statement, or a single-row insert statement, also supply the name of the primary key column and the value of that column for the row you want (currently, this only works with a single-column primary key).
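A typical configuration might look like the sketch below. @tab and @StatementType are the variable names described above; the primary-key variable names are placeholders, since the actual names in the downloadable script may differ, and the real script already declares its own variables:

SQL
-- The handful of variables to edit before running (declares included here only
-- to make the sketch self-contained).
DECLARE @tab varchar(250), @StatementType varchar(10)
DECLARE @pkCol varchar(250), @pkValue varchar(250)

SET @tab = 'PaymentStatusTest'   -- source table to script out
SET @StatementType = 'INSERT'    -- 'INSERT' or 'UPDATE'
SET @pkCol = 'PaymentStatusID'   -- primary key column name (placeholder)
SET @pkValue = '1'               -- leave empty to script every row (placeholder)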

[Screenshot: source code example]

The above example shows the script ready to run an all-rows set of insert statements for a table called 'PaymentStatusTest'.

This is the structure of the 'PaymentStatusTest' table:

[Screenshot: the 'PaymentStatusTest' table column list]

Execute the script, then copy the results from the query results window to the location of your choice. It helps to set the query window to 'Results to Text' output.
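The generated statements look something like this - the column names and values here are purely illustrative, since they depend entirely on your table:

SQL
-- Purely illustrative output; the real columns and values come from your table.
insert into PaymentStatusTest (StatusCode, Description) values ('OPN', 'Open')
insert into PaymentStatusTest (StatusCode, Description) values ('PAD', 'Paid')
insert into PaymentStatusTest (StatusCode, Description) values ('CAN', 'Cancelled')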

Points of interest

This script can be easily expanded to take in more complex situations, such as generating a list of update statements for a table (although a delete all/insert might be better?), or for dealing with multiple primary keys, or for partial matching across primary keys. I also considered making the script detect the primary key automatically, but this adds a lot of complexity for very little benefit.
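If you did want to detect the key automatically, one possible starting point - not part of the downloadable script - is the standard INFORMATION_SCHEMA views:

SQL
-- A sketch of automatic primary key detection via INFORMATION_SCHEMA
-- ('SourceTableName' is a placeholder).
SELECT kcu.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
    ON tc.CONSTRAINT_NAME = kcu.CONSTRAINT_NAME
   AND tc.TABLE_NAME = kcu.TABLE_NAME
WHERE tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
  AND tc.TABLE_NAME = 'SourceTableName'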

Anyone using SQL Server on a frequent basis should learn the ins and outs of the metadata built into the system tables - there is virtually nothing that can't be accomplished with a bit of tinkering, some lateral thinking, and a clever script. You could also turn this into a Stored Procedure, but I prefer the instant edit-and-run style of SQL scripts.

Updates

  • 5/4/2007: Added a second script to the download source, which puts in an extra step that collates the insert/update statements and writes them out so that there is one SQL statement per line of output. This is more useful when generating a large number of statements for an entire table. Also added support for varchar, char, and text fields, as per the feedback left.
  • 13/11/2007: I've fixed a couple of bugs which I found myself or which were reported in the comments section. These include: the inability to specify a table owner, the last row being appended to the end of the second-last row, and ' characters in the data not being handled. I've also added support for insert statements into tables with identity columns, and the ability to specify a 'where' clause to generate the insert/update statements for a subset of data in a table. There is also a small (commented out) section that will generate a progress report so that you know the script is still working on very large tables - just uncomment it and it will report every 5000 rows. I once ran it for 3 hours straight - thankfully, I have no DBA to report to! Thanks to everyone who has given me feedback and helped to make a better script.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)