Yvon Bourgouin wrote: At the beginning, it's quite fast but the performances are degrading quite fast
I'm not surprised. Think about what you are asking the database to do here:
SELECT TOP 200...
SELECT TOP 200 WHERE NOT IN (SELECT TOP 200...)
SELECT TOP 200 WHERE NOT IN (SELECT TOP 400...)
SELECT TOP 200 WHERE NOT IN (SELECT TOP 600...)
... repeat lots of times ...
SELECT TOP 200 WHERE NOT IN (SELECT TOP 1,000,000...)
If I understand you correctly, you are running the same query over and over, asking the database to do more work every time.
You say that you are only processing a small number of the rows. So why fetch them all? Why not put something into the WHERE clause to cut down the number of rows you select? Once you have fetched these rows, how do you decide which ones to process and which to ignore, and what is stopping you from moving that selection logic back to the database? Sorry if that's obvious, and I'm sure you have considered it, but I don't quite understand why you don't do that.
---
Returning millions of records takes time. If you can't limit the results via a WHERE clause, you might like to try the following approach:
CREATE PROCEDURE GetData
(
    @start INT
)
AS
WITH Query AS
(
    SELECT
        ROW_NUMBER() OVER (ORDER BY [IJK] ASC) AS position,
        [IJK],
        [XC],
        [YC],
        [ZC]
    FROM [MSDP]
)
SELECT [IJK],
       [XC],
       [YC],
       [ZC]
FROM Query
WHERE position BETWEEN @start AND @start + 199;  -- 200 rows per page (BETWEEN is inclusive at both ends)
To run the sp, you use
EXEC GetData 200000 (replace the 200000 with the start value). You could also try returning more than 200 records by changing the sp.
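If you only need to walk through the table once, another option is keyset ("seek") paging: each call asks only for rows after the last one you processed, so the database never re-reads the rows it has already returned. This is a sketch under the assumption that [IJK] is a unique, indexed key:

```sql
-- Keyset paging: remember the highest [IJK] you have processed and
-- fetch only rows after it. Assumes [IJK] is unique and indexed.
CREATE PROCEDURE GetDataAfter
(
    @lastIJK INT   -- highest [IJK] already processed; pass a value
                   -- below the table minimum (e.g. -1) for the first page
)
AS
SELECT TOP 200 [IJK], [XC], [YC], [ZC]
FROM [MSDP]
WHERE [IJK] > @lastIJK
ORDER BY [IJK] ASC;
```

Unlike the ROW_NUMBER approach, the cost of each call stays constant no matter how far into the table you are, because the WHERE clause lets the index seek straight to the next page.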
---
Thanks, that might solve my problem...
---
Yvon Bourgouin wrote: million of records
Yvon Bourgouin wrote: I would like to read the records one by one
Can you spot the problem here? Trying to iterate over millions of rows will, of course, be slow.
Why would you not limit the returned results to the rows you actually want?
---
I just had to go through and add VIEW DEFINITION and EXECUTE permissions for a user on a couple dozen scalar-valued functions. I had to right-click on each one, click Permissions, click Search, type the login, click Check Names, then check the two permissions and click OK. I could have sworn there was a way to manage object permissions in bulk for a given user, but I can't seem to find it.
---
It really depends on your set-up. You could have set up an AD group, managed the users via AD, and simply added the group (when searching) to the functions.
You could have used a script to do the same job. We have a set-up where the admin can assign procs to users via the app's UI and script the change.
If you restore the database to another server, you may run into the situation where the permissions are named the same but have a different ID and are no longer valid; you then need to repeat the process.
Never underestimate the power of human stupidity
RAH
---
The problem isn't the number of users. The problem is the number of database objects. I want to bring up a list of scalar-valued functions and apply permissions for a single user or group to multiple functions at once. Even if I scripted it, that would mean a separate GRANT statement for each function.
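A separate GRANT per function doesn't have to mean typing each one by hand: the catalog views can generate the statements for you. A sketch, where the user name is a placeholder to adapt:

```sql
-- Generate GRANT statements for every scalar-valued function in the
-- current database for one principal. [SomeUser] is a placeholder.
-- Run this SELECT, then execute its output.
SELECT 'GRANT EXECUTE, VIEW DEFINITION ON '
     + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
     + ' TO [SomeUser];'
FROM sys.objects o
WHERE o.type = 'FN'            -- FN = SQL scalar function
ORDER BY o.name;
```

Copy the result set into a query window and run it; the same pattern works for procs (type = 'P') or table-valued functions (type IN ('IF', 'TF')).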
---
You could employ schemas to segregate your objects (I don't, and think it would be irritating to have to include the schema in every query); otherwise you have to live with your decision to apply user-level permissions to database objects.
I'm curious: do your users access your database directly (SSMS/QA) or only through an application?
Never underestimate the power of human stupidity
RAH
---
Hi All,
Recently, when I tried to restore files from my backup (MyDB.BAK), I found two entries: one saying full backup and another backup which contains the date of the backup.
Why I am getting two different entries I really don't know. Previously, I only used to get a single file for the current date's backup.
Have I changed something to get a duplicate backup?
The size of the file is now doubled from the one I had two months ago.
I am just doing a full backup.
In short, this backup (MyDB.bak) contains two files of the same type; it's a duplicate. As a result it's giving me double the size of my DB.
Is there any way I could get only one backup?
Thanks
---
I suspect you checked Append when selecting the destination (in SSMS), so each new backup is added to the same file. When restoring via SSMS, you can select the backup set you want to restore. I am surprised that restoring both sets doubles the size of your database, though; I would expect it to overwrite.
Never underestimate the power of human stupidity
RAH
---
Thanks so much for your response.
I did run the Database Tuning Advisor and SQL Profiler a couple of weeks ago. Can this increase my transaction log files tremendously? At the moment the log file is about 1.1 GB. Is there any tool I can use to get the history of this log file over the last few months?
Otherwise, how can I shrink the log file? Is there any other tool I can use which can tell me why the database has doubled in size in only a few weeks?
Many thanks.
---
Yes, you can shrink the log file; look for shrink in BOL.
If you have changed your recovery model from simple to full, then that will certainly make a big difference.
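As a sketch of both checks (MyDB and the log file name are placeholders; find the actual logical file name with SELECT name FROM sys.database_files):

```sql
-- Check the recovery model first: FULL without regular log backups
-- is the usual cause of unbounded log growth.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDB';               -- placeholder database name

-- Then shrink the log file itself.
USE MyDB;
DBCC SHRINKFILE (MyDB_log, 100);   -- placeholder logical name; target size in MB
```

Note that shrinking treats the symptom; if the recovery model is FULL, you need to either back up the log regularly or switch to SIMPLE, or the file will just grow again.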
Never underestimate the power of human stupidity
RAH
---
Hi,
I am getting following error:
error LNK2001: unresolved external symbol "void __cdecl sqlcxt(void * *,unsigned int *,struct sqlexd *,struct sqlcxp const *)" (?sqlcxt@@YAXPEAPEAXPEAIPEAUsqlexd@@PEBUsqlcxp@@@Z)
But after adding extern "C" around the declarations in the C++ code:
extern "C"
{
    void sqlcxt (void **, unsigned int *, struct sqlexd *, const struct sqlcxp *);
    void sqlcx2t(void **, unsigned int *, struct sqlexd *, const struct sqlcxp *);
    void sqlbuft(void **, char *);
    void sqlgs2t(void **, char *);
    void sqlorat(void **, unsigned int *, void *);
}
I am getting the following error:
error LNK2001: unresolved external symbol sqlcxt
Please help!
I am using Oracle 9i client on x64 platform.
---
There should be a 'lib' directory under the folder where the Oracle 9i client is installed. Add that directory to the directories that are searched during linking.
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
---
Hi guys,
I would like to create a trigger that must execute at a certain time, e.g.
'2010-06-08 12:00:00'.
I know there are things such as INSERT, UPDATE, and DELETE triggers. But is there
a way that I can set a time when that trigger should fire, other than using SQL jobs?
Can anyone help me, please?
Thanks, Morgs
---
MorgSim wrote: But is there
a way that i can set a time when that trigger should fire
Not as far as I know. A trigger, by definition, is activated by an event; how are you going to activate the trigger if you don't use a job? You are trying to use the wrong tool for the job. Just use a procedure and call it from a job, which is exactly what jobs are designed for!
Never underestimate the power of human stupidity
RAH
---
OK, thanks guys. I was actually not sure about this either; I will try some other logic instead.
---
You do not need a trigger here. All you need is a job that executes a stored procedure or a set of queries at the specified time.
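A minimal sketch of such a one-time SQL Server Agent job in T-SQL; the job name, schedule name, procedure, date, and time below are all placeholders:

```sql
-- Create an Agent job that runs a stored procedure once at a fixed
-- date/time. All names here are placeholders to adapt.
USE msdb;

EXEC dbo.sp_add_job
    @job_name = N'RunMyProcOnce';

EXEC dbo.sp_add_jobstep
    @job_name  = N'RunMyProcOnce',
    @step_name = N'Run procedure',
    @subsystem = N'TSQL',
    @command   = N'EXEC MyDatabase.dbo.MyScheduledProc;';

EXEC dbo.sp_add_schedule
    @schedule_name     = N'OnceAtNoon',
    @freq_type         = 1,         -- 1 = one time only
    @active_start_date = 20100608,  -- YYYYMMDD
    @active_start_time = 120000;    -- HHMMSS

EXEC dbo.sp_attach_schedule
    @job_name      = N'RunMyProcOnce',
    @schedule_name = N'OnceAtNoon';

EXEC dbo.sp_add_jobserver
    @job_name = N'RunMyProcOnce';   -- defaults to the local server
```

The same thing can be set up point-and-click under SQL Server Agent > Jobs in SSMS if you prefer not to script it.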
---
If you don't want to use a SQL job, use a Windows scheduled task instead; it executes outside of the control of the SQL Server Agent service.
Not necessarily pretty, but it provides a different level of control.
Tim
---
I have two tables with the following structure.
Table 1
Name Field1
n1 a
n2 b
n2 c
n3 d
Table 2
ID Name Field1
1 n1 NULL
2 n2 NULL
3 n3 NULL
Usually, Table1.Name and Table1.Field1 have a 1..1 relationship; however, in very rare circumstances it is 1..n.
I need to update Table2.Field1 with Table1.Field1; In SQL Server we could write the following update statement to satisfy our business conditions.
UPDATE Table2 SET Table2.Field1 = t1.Field1
FROM Table1 t1, Table2 t2
WHERE t1.Name = t2.Name;
In oracle, we were having the below statement.
UPDATE Table2 SET Field2 = (SELECT t1.FIELD1 FROM Table1 t1, Table2 t2
WHERE t1.Name = t2.Name);
This statement works when Table1 has a 1..1 relationship between Table1.Name and Table1.Field1.
For a 1..n relationship the Oracle version fails with the following error: ORA-01427: single-row subquery returns more than one row.
The SQL Server statement works for both cases, 1..1 and 1..n, given the nature of the join.
Could someone guide me with appropriate SQL for the above issue?
I am looking for a bulk update statement rather than using PL/SQL to go through each record in Table2.
Thanks for your help.
Kalyan
modified on Monday, June 7, 2010 10:55 AM
---
Fairly obvious: your sub-select statement is returning 2 rows for n2, which is illegal. Try putting DISTINCT in there:
UPDATE Table2 SET Field2 = (SELECT DISTINCT t1.FIELD1 FROM Table1 t1, Table2 t2
WHERE t1.Name = t2.Name);
Never underestimate the power of human stupidity
RAH
---
Thanks for your reply.
I do know that multiple values are what is causing this issue.
Using DISTINCT would not work, as the values being returned are already distinct; it would still return 2 rows for n2.
This issue is due to Oracle not supporting the same syntax as SQL Server (as far as I know).
I am more comfortable with SQL Server than Oracle, and hence was checking whether anyone else had come across this issue.
Kalyan
---
Sorry, I was looking at it from the wrong end.
What do you expect Table2.Field2 (missing from the design given) to contain, b or c, and what does it contain when you use SQL Server? I'm betting it has c.
Does Oracle not support UPDATE with a join? Try using the INNER JOIN style instead of listing the tables in the WHERE clause (horrible format).
No matter how you work it, your data structure sucks; the relationship is screwed, and I consider Oracle correct in spitting the dummy.
Never underestimate the power of human stupidity
RAH
---
You are right; (for now) it's not important which value gets updated. The SQL Server version of the script simply updates t2.Field1 with the last record found, so for 'n2', t2.Field1 will always end up with value 'c'.
I was looking for an equivalent in Oracle so that it will always update with the last row found.
The actual issue is much bigger, and I was looking for a quick fix without introducing a major change.
Looks like we need to address the issue with the current design before spending any further time on this.
Thanks for your inputs.
Cheers
Kalyan
---
I'm confused. Given the example data above, what value would you expect to see in Table2.Field1 for row ID 2? It could be either b or c. Which one do you want, and how would you distinguish between them?
If you don't care, then you could just use SELECT MAX(t1.FIELD1). That would give you one value.
As far as I can see, with your current SQL Server statement you have no control over which value goes in there, so I'm guessing it's not important to you.
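Following that suggestion, a sketch of the Oracle statement with the subquery correlated to the row being updated and collapsed with MAX (Field2 is kept from the original post):

```sql
-- Oracle: correlate the subquery to the outer row and collapse any
-- 1..n matches with MAX, so ORA-01427 cannot occur. Which of b/c
-- wins is arbitrary (MAX picks the largest), matching the
-- "don't care" behaviour described above.
UPDATE Table2 t2
SET t2.Field2 = (SELECT MAX(t1.Field1)
                 FROM Table1 t1
                 WHERE t1.Name = t2.Name);
```

Note that, unlike the version quoted earlier, the subquery here does not mention Table2 again; it references the outer alias t2, so it is evaluated once per row being updated.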