|
I'd like to add a note that cursors are really bad for performance. A cursor is very easily converted to a while loop. If you need help, let me know.
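For example, a typical row-by-row cursor can be rewritten as a keyed while loop like this (table and column names are made up for illustration):

```sql
-- Sketch: replacing a cursor with a while loop over a keyed table.
-- Assumes a table Customer with an indexed CustomerID column.
DECLARE @id int;

SELECT @id = MIN(CustomerID) FROM Customer;

WHILE @id IS NOT NULL
BEGIN
    -- per-row work goes here, e.g.:
    -- EXEC dbo.ProcessCustomer @id;

    -- advance to the next key
    SELECT @id = MIN(CustomerID) FROM Customer WHERE CustomerID > @id;
END
```

This only pays off if CustomerID is indexed; otherwise each MIN() is a scan.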
|
|
|
|
|
Cursors are slow because they process results row by row, but a while loop does exactly the same thing, so how can it be faster?
|
|
|
|
|
Avoid cursors where possible. You could do this with one statement...
insert into test (id, vfirstname, vlastname)
select newid(), vFirstname, vLastName from customer
|
|
|
|
|
Hi,
Try to understand the output of NewId()
If I issue select NewId() the output will be something like
3B311450-65BB-4AF8-898B-0CD02C5086E2 .
It is a hyphenated alphanumeric string (a GUID). If your destination table has the Id column as an integer type, then you need to either change the column type from integer to varchar, or extract only the numeric characters from NewId() for your future use.
I have created a sample for you. Look into it (the id datatype is varchar). I hope you will get some insight.
declare @tblSource table(id int identity, FirstName varchar(50),LastName varchar(50))
declare @tblDest table(id varchar(max) , FirstName varchar(50),LastName varchar(50))
insert into @tblSource
select 'firstname1', 'lastname1' union all
select 'firstname2', 'lastname2' union all
select 'firstname3', 'lastname3' union all
select 'firstname4', 'lastname4' union all
select 'firstname5', 'lastname5'
Query:
insert into @tblDest (id,FirstName,LastName)
select NEWID(),FirstName,LastName
from @tblSource
select * from @tblDest
Output:
id FirstName LastName
D2A3D81C-F76E-44A4-9D47-83FBE4DDB76B firstname1 lastname1
0B827C0E-EB68-43EA-A423-B10575EAF572 firstname2 lastname2
EB4DB402-3D8F-4C09-9E37-004CE37DE1FD firstname3 lastname3
E6CE9239-1E95-4660-B5AC-00DBFEE28474 firstname4 lastname4
345F70CC-B957-41CC-977C-9FBA7A81E912 firstname5 lastname5
In case you need to strip out only the numbers, here is an example
declare @str varchar(max)
set @str = 'D2A3D81C-F76E-44A4-9D47-83FBE4DDB76B'
;with cte as(
select 1 as rn
union all
select rn+1 from cte where rn<LEN(@str)),
cte2 as(
select rn,chars
from cte
cross apply(select SUBSTRING(@str,rn,1) chars)X
where chars like '%[0-9]%'
)
select numericData from (
select cast(chars as varchar(max)) from cte2
for xml path(''))X(numericData)
Output:
numericData
23817644494783476
Niladri Biswas
modified on Tuesday, December 8, 2009 4:12 AM
|
|
|
|
|
I am retrieving a lot of data and want this to be as fast as possible.
Let's say I have a database with two tables: tablePerson and tableFiles. A person has one file, which is stored in a BLOB. In my solution as it is now, I select * from the person table, and while reading from it I select one row from the table with the blob. This adds up to a lot of one-row queries against the blob table. The blob table has 678,000 rows, and all these queries are taking a lot of time...
So.. How can I improve performance on this? Any tips?
This is something similar to my code:
using (SqlCommand cmd = new SqlCommand("select * from tablePerson", _conn))
{
SqlDataReader r = cmd.ExecuteReader();
while (r.Read())
{
Person p = new Person();
string documentID = Convert.ToString(r["p_docid"]);
p.LocationToDocument = ExtractDocument(documentID);
}
r.Close();
r.Dispose();
}
...
private string ExtractDocument(string docid)
{
string filename = "";
using (SqlCommand cmd = new SqlCommand("select d_docid, d_title, d_BLOB from tableDocument where d_docid=@paramDocID", _conn))
{
SqlParameter p = new SqlParameter("@paramDocID", System.Data.SqlDbType.Text);
p.Value = docid;
cmd.Parameters.Add(p);
SqlDataReader r = cmd.ExecuteReader(System.Data.CommandBehavior.SequentialAccess);
while (r.Read())
{
}
r.Close();
}
return filename;
}
|
|
|
|
|
Why not do it in one select with a join?
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
|
|
|
|
|
As Ashfield said, join the two tables so that you retrieve everything in a single query. I would add that you probably also want paging, so that instead of processing 1M records at once you process 1,000 records at a time.
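A rough sketch of what that could look like against the tables from the question (column names are assumed from the posted code), using ROW_NUMBER for paging, which is available from SQL Server 2005 on:

```sql
-- One round trip per page instead of one query per person.
-- Table and column names are taken from the question; adjust as needed.
;WITH paged AS (
    SELECT p.*, d.d_title, d.d_BLOB,
           ROW_NUMBER() OVER (ORDER BY p.p_docid) AS rn
    FROM tablePerson p
    JOIN tableDocument d ON d.d_docid = p.p_docid
)
SELECT *
FROM paged
WHERE rn BETWEEN 1 AND 1000;   -- next page: 1001..2000, and so on
```

With an index on d_docid the join is cheap, and the reader streams each page in one go.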
|
|
|
|
|
Good Day All
I have a table with an ID field that was not an identity field. This field has no duplicates. Now I want to change this numeric field into an identity field, like this:
--STEP 7 ADD THE IDENTITY BACK IN TABLE MTM_ACTV_STAFF
ALTER TABLE [New_Jaco].[dbo].[MTM_ACTV_STAFF]
ALTER COLUMN [ID] IDENTITY(1,1)
but i get an Error that says
Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'IDENTITY'.
Thanks
Vuyiswa Maseko,
Spoted in Daniweb-- Sorry to rant. I hate websites. They are just wierd. They don't behave like normal code.
C#/VB.NET/ASP.NET/SQL7/2000/2005/2008
http://www.vuyiswamaseko.com
vuyiswa@its.co.za
http://www.itsabacus.co.za/itsabacus/
|
|
|
|
|
A quick guess (I never script these things, I use SSMS), but don't you need a data type in the ALTER, just as in the CREATE?
ALTER COLUMN [ID] INT IDENTITY(1,1)
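For what it's worth, even with the data type SQL Server won't accept ALTER COLUMN ... IDENTITY on an existing column; the identity property can't be added in place. The usual workaround is to add a new identity column and swap it in. A rough sketch (verify against your schema first):

```sql
-- Sketch only: adds a new identity column, drops the old one, renames.
-- NOTE: this renumbers the rows; if the existing ID values must be
-- preserved, rebuild the table and use SET IDENTITY_INSERT instead.
ALTER TABLE [New_Jaco].[dbo].[MTM_ACTV_STAFF]
    ADD [NewID] INT IDENTITY(1,1) NOT NULL;

ALTER TABLE [New_Jaco].[dbo].[MTM_ACTV_STAFF]
    DROP COLUMN [ID];

EXEC sp_rename 'New_Jaco.dbo.MTM_ACTV_STAFF.NewID', 'ID', 'COLUMN';
```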
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
|
Hi all, I have what might well be considered a stupid question, but fortunately none of you are my employer or former varsity prof, so I'll ask away.
I've always endeavored to keep my databases normalized to the best of my ability and, for the most part, I think I'm fairly well set in that line of thinking.
At my current company though some of our databases are anything but normalized. When I challenged my boss about it he responded that it was done deliberately because it is not supposed to be a relational database. It is a DB that accepts, stores and processes transactions at quite a fierce tempo and they didn't want every INSERT or SELECT statement to have to internally cross reference lots of other tables.
Fair enough, it makes sense. But then it occurred to me that I probably don't know enough about DB design because I would never have thought of that. I would simply have designed a normalized DB because that's the way I'm programmed.
So I turned to Wikipedia, but even there I can't seem to find a lot of information on non-normalized DBs, though that's most likely because I don't really know what search terms to use.
So my question is:
1. When should a DB be normalized and when not?
2. Can anyone point me to an online article or two that deals with the topic?
3. Am I correct in assuming that a normalized DB is called a "Relational Database"?
4. What would you call a DB like ours here that is not normalized?
|
|
|
|
|
Your boss may actually be right (I work with batch processes, not transactions), because with every FK and index on a table the CRUD operations need to do additional work to maintain the relationships. I still doubt it is a valid answer though; only a reporting database should not be normalised, IMHO.
Can't help more than that and I hope you find the ammo to shoot this one down.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Thanks, but I'm not disputing that my boss is right. I think he has a valid point, so I'm not really looking for ammo to shoot his argument down. I'd just like to make sure I really understand DB design and not blindly stick to normalized DBs if that's not always the best way to go.
|
|
|
|
|
Ah, but I think blindly sticking to normalised databases is the correct thing to do. The only exception I have found is when creating a summarised table for reporting; I often include the report descriptors in the summarised table. This has gone out of fashion as we are starting to use cubes to support our reporting, and they are a whole different matter.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
The term that you are looking for is Denormalization.
In general, databases should always be normalized to at least 3NF unless there are specific areas where read/write performance is a problem. However, most modern RDBMSs like SQL Server are extremely efficient at joining tables using indexes, so many of the traditional reasons for denormalization no longer apply.
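As a small sketch of the trade-off (hypothetical tables): in a normalized design a value like the customer name lives in one place and reads need a join, while a denormalized design copies it onto each order row:

```sql
-- Normalized: name stored once in Customers, reads require a join.
SELECT o.OrderID, c.CustomerName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID;

-- Denormalized: name copied onto every order row, so reads are
-- join-free, but every name change must be propagated to all copies.
SELECT OrderID, CustomerName
FROM OrdersDenormalized;
```

The second form reads faster at the cost of redundancy and update anomalies, which is exactly the trade the boss in the question is making.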
|
|
|
|
|
Dewald wrote: 1. When should a DB be normalized and when not?
You normalize it by default, removing redundancy. Once the database becomes too slow (which is usually not due to the normalization, but lack of indexes and such) you start to optimize/de-normalize/add redundancy.
You'll waste a bit more space and gain some speed.
Dewald wrote: 2. Can anyone point me to an online article or two that deals with the topic?
The Oracle FAQ[^] also contains a link to the reverse process.
Dewald wrote: 3. Am I correct in assuming that a normalized DB is called a "Relational Database"?
The fact that there are relations between the data doesn't mean that these have been normalized.
Dewald wrote: 4. What would you call a DB like ours here that is not normalized?
That depends on what you're using it for. If it's a blob-store or a container for hashtables, then I'd say that it's an optimized datastore and I'd make sure to give credit to the optimizer.
If it's relational data that you're storing, then I'd call it an "accident waiting to happen"
I are Troll
|
|
|
|
|
|
Actually, Wikipedia has a good explanation of normalization.
http://en.wikipedia.org/wiki/Database_normalization[^]
Chris Date and Edgar Codd have written numerous books on the topic, and these are frequently used in universities as course textbooks. I had to study them when I was in graduate school.
|
|
|
|
|
|
Check the link in my reply to the post below. Hope it helps.
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
|
|
|
|
|
Dear friends,
I need to export records from SQL to Excel. I know how to export the whole record set to Excel, but my requirement is to insert 10 records into Sheet1, another 10 records into Sheet2, and so on... Any help or links would be highly appreciated (I don't want any packages like DTS).
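If it helps, the grouping into sheets of 10 can at least be computed on the SQL side (table and column names here are made up); your export code then just starts a new worksheet whenever SheetNo changes:

```sql
-- Assign a sheet number to every block of 10 rows.
-- dbo.SourceTable and its id column are assumptions for illustration.
SELECT SheetNo = (ROW_NUMBER() OVER (ORDER BY id) - 1) / 10 + 1,
       *
FROM dbo.SourceTable
ORDER BY SheetNo, id;
```

Rows 1-10 get SheetNo 1, rows 11-20 get SheetNo 2, and so on; the actual sheet creation still has to happen in the client code.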
|
|
|
|
|
The only direct way I know how to do that with separate sheets is with ADO and COM.
|
|
|
|
|
Check this[^]. I think this should help you.
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
|
|
|
|
|
|
A seven-field composite primary key sounds like a disaster that has found somewhere to happen. This is a BAD idea; I have no knowledge of the performance issues with such a mess, but the sheer cumbersomeness of forever doing a 7-key join would drive me to drink.
I recommend that you add a primary key column to the central table and populate it, then turn autonumber (identity) on.
Now add the foreign key field to the other table(s) and use the 7-key horror to populate that field.
Make your relationships using these fields. You must make sure you support the change within your CRUD code.
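The steps above might look roughly like this (table and column names k1..k7 are invented for the sketch):

```sql
-- 1. Add a surrogate key to the central table and make it the PK.
ALTER TABLE dbo.CentralTable
    ADD SurrogateID INT IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.CentralTable
    ADD CONSTRAINT PK_CentralTable PRIMARY KEY (SurrogateID);

-- 2. Add the new FK column to each child table and populate it
--    by joining on the old seven-column key.
ALTER TABLE dbo.ChildTable ADD CentralID INT NULL;

UPDATE c
SET c.CentralID = ct.SurrogateID
FROM dbo.ChildTable c
JOIN dbo.CentralTable ct
  ON  ct.k1 = c.k1 AND ct.k2 = c.k2 AND ct.k3 = c.k3
  AND ct.k4 = c.k4 AND ct.k5 = c.k5 AND ct.k6 = c.k6
  AND ct.k7 = c.k7;

-- 3. Create the relationship on the new columns.
ALTER TABLE dbo.ChildTable
  ADD CONSTRAINT FK_Child_Central FOREIGN KEY (CentralID)
      REFERENCES dbo.CentralTable (SurrogateID);
```

Once the new FK is verified and made NOT NULL, the seven old key columns can be dropped from the child tables.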
Never underestimate the power of human stupidity
RAH
|
|
|
|