|
The most efficient way to return single values is with OUTPUT parameters. When the call returns, the parameters will hold the values; just check the cmd.Parameters("@actorName").Value and cmd.Parameters("@actorLogon").Value properties.
If you did not have any output parameters and did the SELECTs the way you had them, then the return value would have been 0 (zero), I think. If you just had regular SELECT statements without any OUTPUT parameters, you would have returned two result sets, each with one value in it. You would then use a DataReader to read the value from the first result set, call .NextResult(), and then read the value out of the second result set.
OUTPUT parameters are much more efficient.
Notice that I changed your SET statements to just SELECT. The SET statements will work, but the SELECT form is more common among db developers.
CREATE PROCEDURE sp_GetAssignedDetails
    @roleName   nVarChar(50),
    @division   nVarChar(50),
    @actorName  nVarChar(50) OUTPUT,
    @actorLogon nVarChar(5)  OUTPUT
AS
IF @division = 'North'
BEGIN
    --Get ActorName and ActorLogon if division is North
    SELECT @actorName = dbo.Actor.ActorName
    FROM dbo.Actor
        INNER JOIN dbo.ActorRole ON dbo.Actor.Id = dbo.ActorRole.Id
    WHERE dbo.ActorRole.RoleName = @roleName
        AND dbo.ActorRole.North = '1'

    SELECT @actorLogon = dbo.Actor.ActorLogon
    FROM dbo.Actor
        INNER JOIN dbo.ActorRole ON dbo.Actor.Id = dbo.ActorRole.Id
    WHERE dbo.ActorRole.RoleName = @roleName
        AND dbo.ActorRole.North = '1'
END
--Get ActorName and ActorLogon if division is South
ELSE IF @division = 'South'
BEGIN
    SELECT @actorName = dbo.Actor.ActorName
    FROM dbo.Actor
        INNER JOIN dbo.ActorRole ON dbo.Actor.Id = dbo.ActorRole.Id
    WHERE dbo.ActorRole.RoleName = @roleName
        AND dbo.ActorRole.South = '1'

    SELECT @actorLogon = dbo.Actor.ActorLogon
    FROM dbo.Actor
        INNER JOIN dbo.ActorRole ON dbo.Actor.Id = dbo.ActorRole.Id
    WHERE dbo.ActorRole.RoleName = @roleName
        AND dbo.ActorRole.South = '1'
END
--Get ActorName and ActorLogon if division is West
ELSE IF @division = 'West'
BEGIN
    SELECT @actorName = dbo.Actor.ActorName
    FROM dbo.Actor
        INNER JOIN dbo.ActorRole ON dbo.Actor.Id = dbo.ActorRole.Id
    WHERE dbo.ActorRole.RoleName = @roleName
        AND dbo.ActorRole.West = '1'

    SELECT @actorLogon = dbo.Actor.ActorLogon
    FROM dbo.Actor
        INNER JOIN dbo.ActorRole ON dbo.Actor.Id = dbo.ActorRole.Id
    WHERE dbo.ActorRole.RoleName = @roleName
        AND dbo.ActorRole.West = '1'
END
Scott
|
|
|
|
|
Cheers Scott,
That was very informative.
Yeah, unfortunately I'm not a DB developer.
Just trying to think this stuff out for myself as we have no DB expertise in work.
From the above code I'm trying to sequentially choose the @actorName so that the workload is spread evenly amongst employees from the different divisions. At the moment it is picking the first one off the database table.
Are there any handy stored proc techniques that allow a sequential rotation of names, so that each @actorName in the database gets picked in turn from their appropriate division??
Any help would be appreciated.
Kind Regards,
BC
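PS. To be clearer about what I mean, the rotation I'm after would behave something like this (a Python sketch; the division list and the idea of persisting a "last picked" index are just illustrations, not anything in our database yet):

```python
# Round-robin selection sketch: pick the next actor in a division,
# cycling back to the first when the end of the list is reached.
# In the real app the last-picked index would be persisted in a table.

def pick_next_actor(actors, last_index):
    """Return (next_actor, new_index), cycling through the list."""
    if not actors:
        raise ValueError("no actors in this division")
    next_index = (last_index + 1) % len(actors)
    return actors[next_index], next_index

north = ["Alice", "Bob", "Carol"]          # hypothetical division members
actor, idx = pick_next_actor(north, -1)    # first call  -> ("Alice", 0)
actor, idx = pick_next_actor(north, idx)   # second call -> ("Bob", 1)
actor, idx = pick_next_actor(north, idx)   # third call  -> ("Carol", 2)
actor, idx = pick_next_actor(north, idx)   # wraps around -> ("Alice", 0)
```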
|
|
|
|
|
Hi,
This is the scenario:
TABLES
Project and Request
Schema
Project(ProjectID, ....., PercentageOfProgress)
Request(RequestID, ....., Progress[Pending, On Going, Completed])
The PercentageOfProgress field in the Project table is to be calculated. It should be a percentage based on the total number of records in the Request table (for that project) which are 'Completed'.
If anyone could push me in the right direction, that would be great. Thank you.
|
|
|
|
|
You can't use a computed column for this, as the source data for a computed column must come from the same row the calculation is applied to.
You may wish to set up a trigger so that when any Request row is updated it recalculates the value in the Project table. This is a good solution if the PercentageOfProgress is accessed frequently and changes to the database are infrequent.
Alternatively you may wish to drop the column altogether and recalculate the percentage of progress manually when needed. This is a good solution when the data changes frequently, but the percentage of progress information is accessed infrequently.
|
|
|
|
|
Thank you very much for the reply,
In light of the information presented, I have realised that the calculation must be generated from SQL statements, ideally kept in stored procedures.
You have suggested two approaches depending on how frequently the tables change and the field is accessed. My tables will change frequently, and the field will be accessed frequently as well. In any case, I will play around with some SQL statements and see what I come up with. If you have anything further to suggest, it can only help =D
Thanks again. Much appreciated.
|
|
|
|
|
What is the relationship between Project and Request? I assume there is a ProjectID field in the Request table.
select *,
       (select count(*)
        from request
        where request.projectid = project.projectid
          and Progress = 'Completed') * 100.0  -- * 100.0 turns the fraction into a percentage
       / cast(TotalRequests as decimal) as PercentageOfProgress
from project
    left join (select projectid, count(*) as TotalRequests
               from request
               group by projectid) as tbl
        on project.projectid = tbl.projectid
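To sanity-check the shape of that query, here is a self-contained version using Python's built-in sqlite3 module. The table and column names follow the schema above; the sample rows are made up for illustration:

```python
import sqlite3

# Build the Project/Request schema in memory and seed invented data.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE project (projectid INTEGER PRIMARY KEY);
    CREATE TABLE request (requestid INTEGER PRIMARY KEY,
                          projectid INTEGER,
                          progress  TEXT);
    INSERT INTO project VALUES (1), (2);
    INSERT INTO request VALUES
        (10, 1, 'Completed'), (11, 1, 'Pending'),
        (12, 1, 'Completed'), (13, 1, 'On Going'),
        (20, 2, 'Pending');
""")

# Same pattern: correlated count of completed requests divided by a
# per-project total computed in a joined subquery.
rows = con.execute("""
    SELECT project.projectid,
           (SELECT COUNT(*) FROM request
            WHERE request.projectid = project.projectid
              AND progress = 'Completed') * 100.0
           / CAST(TotalRequests AS REAL) AS PercentageOfProgress
    FROM project
    LEFT JOIN (SELECT projectid, COUNT(*) AS TotalRequests
               FROM request GROUP BY projectid) AS tbl
           ON project.projectid = tbl.projectid
""").fetchall()
print(rows)  # project 1: 2 of 4 completed -> 50.0; project 2: 0 of 1 -> 0.0
```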
--EricDV Sig---------
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them.
- Laurence J. Peters
|
|
|
|
|
Thanks for this reply,
The problem is solved, now I just have to integrate it appropriately in my program. Thanks a lot. Appreciate the replies! =D
PS. Nice quote mate lol.
|
|
|
|
|
Hi All
I have been tasked with migrating an asp\db app from IIS5 to IIS6. The app basically asks users to fill in a search form which then returns relevant records. If the result returns more than about 500 records then the page fails to load (page cannot be displayed error). There are no errors in the event logs, IIS logs or the httperr log. With fewer records then it works fine. It all worked fine on the IIS 5 box.
If I remove 4 of the displayed fields (there are about 12 in total) by just deleting the field from the code below (i.e. deleting everything between the <tr> </tr> tags) so that there are fewer fields to display for each record, then it works again. It looks like it's complaining about the amount of data returned, but it's only a few hundred records.
Can anyone shed any light?
Thanks for your help
Mike
The code that displays the records:
.
.
.
'create a recordset
set sqlResults = server.createobject("ADODB.recordset")
sqlResults.open selectPart, dBconnection, 1, 3
CountRecords = sqlResults.recordcount
%>
<font color="navy" face="arial">
<p></p>
<center>
<table align="center" border="2" width="589" bordercolor="navy" bgcolor="#e6e6e6">
  <tr>
    <td width="571" height="26" valign="top" align="middle">
      <font color="navy" size=+1><b><%=CountRecords%> record(s) held</b></font>
    </td>
  </tr>
</table>
<% do while not sqlResults.EOF %>
<table align="center" border="2" width="589" height="95%" bordercolor="navy"
       bgcolor="lightblue" style="HEIGHT: 103px; WIDTH: 589px">
  <tr>
    <td align="right" valign="top" width="199" bgcolor="lightblue">
      <font color="navy"><b>Part No: </b></font>
    </td>
    <td align="middle" width="372" bgcolor="#e6e6e6">
      <font color="navy"><b><%=sqlResults("Part No")%></b></font>
    </td>
  </tr>
  <tr>
    <td align="right" valign="top" bgcolor="lightblue">
      <font color="navy"><b>Manufacturers Part No:</b></font>
    </td>
    <td align="middle" bgcolor="#e6e6e6">
      <font color="navy"><b><%=sqlResults("Manuf Part No")%></b></font>
    </td>
  </tr>
  .
  .
  About 10 more fields listed here in the same way as the above ones.
  .
  .
</table>
<%
  sqlResults.MoveNext
loop
%>
|
|
|
|
|
How can I manage database transactions between layers? I'm using presentation, business and data layers. One solution I found is to declare the transaction in the business layer and then pass it as a parameter to the DAL, but I think that's the worst thing I can do. Any ideas?
Never argue with an idiot. They drag you down to their level, then beat you with experience. - Dilbert
|
|
|
|
|
The DAL deals with the transactions. From the business layer you can call the DAL's StartTransaction() method (or whatever you call it to start one), and if everything goes okay you call the DAL's CommitTransaction(); if not, RollbackTransaction().
Remember to ensure that one of the Commit/RollbackTransaction() calls always runs even if something goes wrong - for example, commit in the try block and roll back in the exception handler (or a finally block that checks whether the commit happened) - because you never want to leave a transaction open.
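As a rough sketch of that shape (shown in Python with the stdlib sqlite3 module; the class name, method names, and table are assumptions for illustration, not an existing API):

```python
import sqlite3

# DAL owns the connection and the transaction; the business layer
# decides when a unit of work starts, commits, or rolls back.
class Dal:
    def __init__(self):
        self.con = sqlite3.connect(":memory:")
        self.con.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
        self.con.commit()

    def start_transaction(self):
        self.con.execute("BEGIN")

    def commit_transaction(self):
        self.con.commit()

    def rollback_transaction(self):
        self.con.rollback()

    def insert_account(self, name, balance):
        self.con.execute("INSERT INTO accounts VALUES (?, ?)", (name, balance))

# Business layer: several DAL calls succeed or fail as one unit.
def create_accounts(dal, accounts):
    dal.start_transaction()
    try:
        for name, balance in accounts:
            dal.insert_account(name, balance)
        dal.commit_transaction()
    except Exception:
        dal.rollback_transaction()
        raise

dal = Dal()
create_accounts(dal, [("alice", 100), ("bob", 50)])
try:
    # The malformed row makes the second batch fail part-way through...
    create_accounts(dal, [("carol", 10), (None, None, "bad row")])
except Exception:
    pass  # ...so the whole batch rolls back; "carol" was never committed.
count = dal.con.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
print(count)  # 2
```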
|
|
|
|
|
Thanks Colin, someone told me that I can use the TransactionScope statement. I think this would solve my problem. By the way, I read your SQL injection article and it is one of my favorites. Thanks again.
Never argue with an idiot. They drag you down to their level, then beat you with experience. - Dilbert
|
|
|
|
|
Hey
I have developed a webpage that has a search feature. Now I have read articles in the past that warn about security - how hackers could type bad input that could drop or damage your tables, and ways to go about stopping this from happening.
Now does this still apply today with .NET v2? As I have noticed it has some built-in security that stops you from entering certain characters.
If I should still add extra security, does anyone know of any good articles or have any pointers?
Thanks!
|
|
|
|
|
OK - don't use inline SQL. Use Stored Procedures.
HTMLEncode your input.
Validate the input to detect common injection patterns.
Colin has a good article on how to do this.
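To make the first point concrete, here is a minimal illustration of the difference between concatenating user input into SQL and parameterizing it (shown with Python's stdlib sqlite3 for brevity; the same idea applies to SqlParameter objects in ADO.NET):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

malicious = "x' OR '1'='1"

# Vulnerable: attacker-controlled text becomes part of the SQL itself,
# so the injected OR clause matches every row in the table.
unsafe = con.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the value is passed as a parameter and can never change the
# query's structure, so nothing matches this odd literal string.
safe = con.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(unsafe)  # both rows leak: [('alice',), ('bob',)]
print(safe)    # []
```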
Arthur Dent - "That would explain it. All my life I've had this strange feeling that there's something big and sinister going on in the world."
Slartibartfast - "No. That's perfectly normal paranoia. Everybody in the universe gets that."
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
Hi all,
I have a question on designing the database for my project. Here is the story. I am using VC++ connecting to SQL Express 2005. I have several tables that will get between 1000-5000 new entries a day. Will the "AddNew()", "Edit()", and "Find()" functions get slower on those tables as the entries grow beyond, let's say, one million? Should I create one table per year (but then I need to run queries on each table, which I prefer not to do)? One note: these tables have indexes, of course.
Other questions that I have:
- which one is faster: FIND/FINDFIRST or SEEK?
- (VC++) Is querying with a "SELECT" SQL command much faster than running Find/Seek through a _RecordsetPtr? AFAIK, both are done by the DB engine on the server side, not by the client side, correct?
Could anyone please give me some light on these? thanks.
|
|
|
|
|
If the tables are indexed, then as your table grows, the insert (and sometimes the edit) operations tend to slow down - this is because the database needs to maintain the indexes during the insert. However, a million rows (on a properly indexed table) should not be a problem for a select statement. The reverse holds true on a non-indexed table - inserts tend to remain fast but select operations slow down.
There are a couple of design patterns that are applicable. One is to create multiple tables - one per month, one per year, whatever is necessary. This keeps the individual table size small. You would then create a VIEW that represents a union of all your tables, so your select statements always pull from a single entity. Another design pattern is to keep all of your data in a single, indexed table but have a separate, non-indexed table that acts as an insert queue. New data gets inserted into the queue, and that data is moved into the permanent table on a nightly basis (inserted into the permanent table, deleted from the queue). You would still need a VIEW to provide a union between your queue and the permanent table, but you don't need to update the view as new tables are added.
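A rough sketch of the queue pattern's VIEW (using Python's stdlib sqlite3; the table names are assumptions). Readers query the view, so rows are visible whether they are still in the queue or already in the master table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE master (id INTEGER, payload TEXT);  -- indexed in real life
    CREATE TABLE queue  (id INTEGER, payload TEXT);  -- no indexes: fast inserts
    -- The view unions both physical tables into one logical entity.
    CREATE VIEW all_rows AS
        SELECT id, payload FROM master
        UNION ALL
        SELECT id, payload FROM queue;
""")
con.execute("INSERT INTO master VALUES (1, 'old row')")
con.execute("INSERT INTO queue  VALUES (2, 'new row')")

rows = con.execute("SELECT id, payload FROM all_rows ORDER BY id").fetchall()
print(rows)  # [(1, 'old row'), (2, 'new row')]
```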
|
|
|
|
|
Thanks a lot for your input. I think I prefer your second option (using a queue), although I have no idea what a VIEW is. I am pretty much a newbie in DB. What is the quickest/simplest way to copy data from one table (the queue) to another (the master)? Do I need to use a stored procedure (which I don't know yet)?
Since my program will be run by multiple users in a 24/7 warehouse, I cannot do a nightly transfer. So this is what I think: have an idle counter; if it hits (let's say) one hour, then that workstation starts transferring one entry at a time (checking that it's still idle after each transfer). Of course I need a "setting" table in the database indicating "transfer in progress" so that no other 1-hour-idle workstation tries doing the same thing.
Will this work? Thanks again for any help.
|
|
|
|
|
This isn't a trivial process. Though simple on the surface, there is a lot you need to think about. First, transferring one entry at a time will hurt performance. You would be better off ignoring the queuing concept entirely and just inserting directly into your main table. The general logic you want to follow is this:
Start the transaction
insert from queue into master
delete from queue
commit transaction
syntax for inserting from one table to another is:
INSERT INTO TABLE2 (COL1, COL2, COL3) SELECT COL1, COL4, COL7 FROM TABLE1
Use a lock hint to lock your tables until the transaction is completed, so for example you might have the following code:
BEGIN TRANSACTION
INSERT INTO TABLE2 (COL1, COL2, COL3) SELECT COL1, COL4, COL7 FROM TABLE1 WITH (TABLOCKX)
DELETE FROM TABLE1
COMMIT
For performance purposes, you may also want to drop the indexes in your main table prior to the insert and then recreate them after the insert.
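The same move-and-delete inside one transaction, sketched runnably with Python's stdlib sqlite3 (sqlite has no lock hints, so the TABLOCKX part is SQL Server-specific; TABLE1/TABLE2 and their columns are the hypothetical names from above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (col1 INTEGER, col4 TEXT, col7 TEXT);  -- queue
    CREATE TABLE table2 (col1 INTEGER, col2 TEXT, col3 TEXT);  -- master
    INSERT INTO table1 VALUES (1, 'a', 'x'), (2, 'b', 'y');
""")

# Move everything from the queue into the master table atomically:
# either both statements take effect or neither does.
try:
    con.execute("BEGIN")
    con.execute("INSERT INTO table2 (col1, col2, col3) "
                "SELECT col1, col4, col7 FROM table1")
    con.execute("DELETE FROM table1")
    con.commit()
except Exception:
    con.rollback()
    raise

moved = con.execute("SELECT COUNT(*) FROM table2").fetchone()[0]
left = con.execute("SELECT COUNT(*) FROM table1").fetchone()[0]
print(moved, left)  # 2 0
```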
|
|
|
|
|
Won't it take a long time to re-create the index after dropping it?
After re-thinking this, I started to wonder: how beneficial is it to use a temp queue? How slow is it to insert an entry into a one-million-entry table? Do you have any approximate numbers?
If it's still acceptable (maybe less than 10 seconds), then I'll just drop this queue idea. I will still create one table per year, though.
Thanks for all your help.
|
|
|
|
|
Typically, anything greater than a few milliseconds is considered poor performance. You can easily achieve sub-10-second performance on an insert into a million-entry table - assuming that your server is appropriately scaled for the job. For a million-record table, you should still be able to achieve sub-second inserts. Make sure you do the math on your performance analysis. For example, if you shoot for 10 seconds per insert:
10 seconds * 5000 inserts/day
= 50,000 seconds/day spent inserting
= about 13.9 hours/day spent inserting records
Create your table, inject a few million records into the table and then do some performance analysis. If the performance is acceptable, then you have your answer!
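That back-of-the-envelope check as a few lines of Python (the 5000 inserts/day figure comes from the earlier post; the 10-second target is the example above, and the one-hour budget is just a further illustration):

```python
# Budget check: total daily time spent inserting at a given per-insert cost.
inserts_per_day = 5000
seconds_per_insert = 10

total_seconds = inserts_per_day * seconds_per_insert
hours = total_seconds / 3600
print(total_seconds, round(hours, 1))  # 50000 13.9

# Working backwards: to keep inserts to one hour per day in total,
# each insert must average 3600 / 5000 = 0.72 seconds.
budget = 3600 / inserts_per_day
print(budget)  # 0.72
```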
|
|
|
|
|
I have to gather some info from a database - now the issue comes into play with either coding something or writing an extremely difficult query - what to do???
Here is the scenario:
Data from a single table.
Have to calculate column sums according to certain criteria and have a total at the bottom. Also have to calculate column sums that subtract from a main column, and then sum that column. I am just confused as to which: 1) is easier (as the system does not have many users), 2) is more efficient, 3) which would you guys do (code or SQL query)? 4) Is a query even possible?
Any suggestions or comments in the right direction would be greatly appreciated.
Thank You.
|
|
|
|
|
A stored procedure should do the trick nicely (and is more efficient, as you don't need to bring all those records to the client side - only the sums).
In your SP, calculate the sum of the main column, calculate the sums of the side columns, and then return the sums that you need through output parameters of the SP.
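A rough illustration of computing criteria-based sums in a single pass (Python's stdlib sqlite3 with an invented table; in SQL Server the same SELECT would live inside the stored procedure, using conditional aggregation with CASE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE figures (amount INTEGER, approved TEXT);
    INSERT INTO figures VALUES
        (100, 'yes'), (40, 'no'), (60, 'yes'), (25, 'no');
""")

# One pass over the table: an overall total, a criteria-based sum, and
# the difference between them (the "subtract from a main column" case).
row = con.execute("""
    SELECT SUM(amount) AS total,
           SUM(CASE WHEN approved = 'yes' THEN amount ELSE 0 END) AS approved_total,
           SUM(amount) -
           SUM(CASE WHEN approved = 'yes' THEN amount ELSE 0 END) AS remainder
    FROM figures
""").fetchone()
print(row)  # (225, 160, 65)
```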
"A democracy is nothing more than mob rule, where fifty-one percent of the people may take away the rights of the other forty-nine." - Thomas Jefferson
"Democracy is two wolves and a lamb voting on what to have for lunch. Liberty is a well-armed lamb contesting the vote." - Benjamin Franklin
Edbert
Sydney, Australia
|
|
|
|
|
I would create stored procs to perform the required calculations and return the results. SQL Server is pretty efficient at calculating aggregates. In any case, this is bound to be more efficient than returning all the raw data over the network and then calculating the results. No matter how much faster your calculation is than SQL Server's, it will never make up for the overhead of marshalling all that raw data over the network pipe.
|
|
|
|
|
Only 2 rows have actual data and 4 of them are just either yes or no; then calculate from those fields depending on yes or no across the 4 fields, and only one field can be yes. Then you sum the row and that column.
Sorry to be a pest - I am not very good at stored procs.
|
|
|
|
|
Use Excel.
"When I get a little money, I buy books and if any is left, I buy food and clothes." --Erasmus
|
|
|
|