|
The problem for me is that I'm often enough set on the wrong track by how the question is formulated.
|
|
|
|
|
kbalias wrote: I get the TimeSpan value by subtracting the official working hours and the actual working hours.
That suggests you would need to store the "official working hours" as well. Given that, there is no point in storing the difference as a second value. Instead, just store the "actual working hours"; if a caller needs the difference, it can subtract the two.
If you want to store the difference anyway, store it as seconds. That ensures enough precision and is probably a more business-friendly value than Ticks.
|
|
|
|
|
<Getting my brain back into gear>
The standard way is to store the login and logout times in a table. The net time is easily calculated from those two values. It is part of normalization not to store values you can calculate from other columns in the database.
|
|
|
|
|
Agree with this. Answer upvoted.
|
|
|
|
|
I agree with Jörgen. It isn't advisable to store calculated values in the database: if any of the values used in a calculation changes, you have to recalculate to keep the data up to date.
Consider the following:
create table timetest (
    starttime datetime,
    endtime datetime
);
insert into timetest values ( getdate(), getdate() - 1 );
insert into timetest values ( getdate(), getdate() + 1 );
-- DATEDIFF(n, ...) returns the difference in minutes
select starttime, endtime, DATEDIFF(n, starttime, endtime) from timetest;
If either one of the datetime columns changes, the result is still always correct.
However, what you can do is specify a computed column if you like. To extend the previous example:
alter table timetest add timediff as DATEDIFF(n, starttime, endtime);
Now if you run a select * query against the table you will see the same results as in the previous example, because the calculation is now part of the table structure (and always up to date).
|
|
|
|
|
hi,
I'm building an ASP.NET application with some SQL SELECT statements, and I want to ask: when is it worthwhile to do the paging in the SQL query itself?
I guess that if I'm paging 10-row pages out of a 25-row table it isn't profitable, and I should do the paging in the server-side code. But when the table has 10,000 rows, it becomes worthwhile.
Am I right?
|
|
|
|
|
In my opinion you're right on track.
However, things can get complicated if the number of rows in the result set changes (because rows are added or removed, the criteria change, etc.). So a straightforward solution would be to use database-side paging nevertheless. If the connection between the application and the database isn't slow and the database isn't over-utilized, this shouldn't cause too much wait time.
Of course, if you have tables that you know are always small, fetching everything once and then paging locally would be an easy thing to do.
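To make the trade-off concrete, here is a minimal sketch of server-side paging in SQL Server (2005/2008 style, before OFFSET/FETCH existed); the table and column names are made up for illustration:

```sql
-- Page 3 of a result set, 10 rows per page (rows 21-30).
-- ROW_NUMBER() numbers the rows in the chosen sort order;
-- the outer query keeps only the requested page.
SELECT id, name
FROM (
    SELECT id, name,
           ROW_NUMBER() OVER (ORDER BY name) AS rn
    FROM customers
) AS numbered
WHERE rn BETWEEN 21 AND 30;
```

The database still does the numbering work, but only one page of rows crosses the wire to the application.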
|
|
|
|
|
At some point you are presenting too much data to a user.
If you are looking at a 25 row limit per page then probably the limit is well below 500 total rows.
At that point you know that the user knows what they are looking for. They are not just randomly scanning records. So make them tell you what they are looking for. Use that to create a query that restricts the total rows returned.
When building servers I usually have a configurable maximum: user queries are run with a count(...) first, and if the result exceeds that maximum I return an error. The GUI screens are responsible for gathering enough query-specific input from the user to get below the limit.
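The count-first guard described above can be sketched like this in T-SQL; the limit, table, and parameter names are all invented for illustration:

```sql
-- Hypothetical guard: count the matches first, refuse to run
-- the real query if it would exceed a configured maximum.
DECLARE @maxRows int;
DECLARE @cnt int;
DECLARE @customerId int;
SET @maxRows = 500;      -- configurable limit
SET @customerId = 42;    -- stands in for user-supplied criteria

SELECT @cnt = COUNT(*) FROM orders WHERE customer_id = @customerId;

IF @cnt > @maxRows
    RAISERROR('Query would return %d rows; please narrow your search.', 16, 1, @cnt)
ELSE
    SELECT order_id, order_date FROM orders WHERE customer_id = @customerId
```

In practice the COUNT and the real query should use identical WHERE clauses, otherwise the guard and the result can disagree.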
|
|
|
|
|
If you are returning 10k rows to a web browser you are sacked on the spot! One of the issues we have is that users are always after data dumps to analyse in Excel; the continuous whine of "just give me a data dump" becomes really irritating. Now that we have volume policies it is easier to limit these.
As others have said, you need to filter the results to minimise the volume returned. I never use paging, although I do use a TOP 1000 in procs with a potential for high volume and then inform the user they have exceeded the volume policy. By the time you chuck 1000 records into a grid with local filtering and sorting, I see no requirement for paging.
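A common variant of that TOP technique, sketched with made-up table and column names: fetch one row more than the policy limit, so the client can tell "exactly 1000 rows" apart from "more than 1000":

```sql
-- Fetch limit + 1 rows; if 1001 come back, the client shows a
-- "volume policy exceeded" message and discards the extra row.
SELECT TOP (1001) order_id, order_date, total
FROM orders
WHERE order_date >= '2009-01-01'
ORDER BY order_date DESC;
```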
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: One of the issues we have is that the users are always after data dumps to analyse in excel
Does that not imply that something is missing from the application? I'm guessing because I don't know the business requirements, but if they are constantly extracting data into another tool to analyse, that suggests that they need some sort of information they can't get from the source application.
|
|
|
|
|
I have encountered this with several servers: CPU utilization with SQL Server on multi-core systems is very low, i.e. one core does all the work while the others are idle. What is worse, the queries are queued serially.
Has anyone encountered this? Is there a solution?
Its the man, not the machine - Chuck Yeager
If at first you don't succeed... get a better publicist
If the final destination is death, then we should enjoy every second of the journey.
|
|
|
|
|
Have you checked whether processor affinity is defined? It could be that only one core is allowed. It could also be that the disk subsystem is actually the bottleneck, so the CPU has to wait for answers.
As for the serial execution, I think there are at least a few possibilities:
- again, waiting for the disk subsystem
- serial execution is forced because of locking issues
- transactions are run at the serializable isolation level.
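If you want to inspect the affinity and parallelism settings mentioned above, something along these lines (assuming sufficient permissions on the instance) shows the current values:

```sql
-- 'affinity mask' = 0 means SQL Server may use all CPUs;
-- 'max degree of parallelism' = 0 means a single query may be
-- parallelized across all available CPUs.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask';
EXEC sp_configure 'max degree of parallelism';
```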
|
|
|
|
|
Thanks Mika, I will look into it...
In the meantime more information:
- We have a Windows application server service connecting to SQL Server via integrated security, on an internal thread pool of 10 threads. Each thread creates a connection when needed and closes it when finished.
- The server is an HP with two quad-core hyperthreaded Xeons (16 logical processors in Windows) and 10k SATA RAID 5 storage.
- Clients connect to the application server via tcp (no direct connection to sql).
|
|
|
|
|
OK, that's good background info. If you find anything suspicious, or more information regarding the problem, let us know.
|
|
|
|
|
How do you know that the application server is not the one that is queuing them serially?
|
|
|
|
|
Fair question. I presumed that when you have parallel threads, each with its own connection, they query in parallel.
|
|
|
|
|
Seems reasonable, as long as you have verified that, and also verified that it is not serializing the requests in some other way.
|
|
|
|
|
I just spent 2 hours googling best practices for storing pictures for an ASP.NET MVC project that I am starting.
There are two schools of thought. One is to store the pictures in a database (for ease of backups); the other is to use the site's file system (for speed).
This project will be for my family (a very large family, I might add). I want to keep pictures referenced to the user that uploaded them, and only share some of them with other members.
My question: what do you think about storing the files in the database? And why?
Thanks in advance.
Any good resources or links appreciated.
Frazzle: the name says it all
|
|
|
|
|
I use the file system, but then I have thousands of images; the economy of backing up the database and moving it to my dev environment alone dictates that I do not want the image files inside that backup. What do I care if the images are trashed on the server? I have less timely backups of them elsewhere, and I certainly don't want to move them over the wire every time I take a backup of my data.
|
|
|
|
|
Mycroft Holmes wrote: I certainly don't want to move them over the wire every time I take a backup of my data.
That is a good point. I don't think I will have that many images, but at design time you never know if the project will be a hit with the users.
|
|
|
|
|
I put them in the database; they're safer there. In the file system, someone may overwrite* or delete a file. I don't back up my databases, though.
* Replace your favorite shot of Granny with a picture from your cousin's bachelor party (or vice versa).
|
|
|
|
|
PIEBALDconsult wrote: * Replace your favorite shot of Granny with a picture from your cousin's bachelor party (or vice versa).
As I am almost a grandfather, this would make me upset!
Thank you for the reply. I will consider this issue when I finally decide which way to go.
|
|
|
|
|
There are a few questions you should answer to make this decision.
The file system suits you when:
- image sizes are smaller than about 2 MB
- no editing of the images is required in future
- faster access matters
The database suits you when:
- image sizes are on the higher side (2+ MB)
- easy maintenance and backup are required
There is another approach, FILESTREAM, if you are planning to use SQL Server 2008:
http://msdn.microsoft.com/en-us/library/cc716724.aspx
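For reference, here is a minimal sketch of a FILESTREAM-backed table in SQL Server 2008, assuming the instance and database already have FILESTREAM enabled; the table and column names are illustrative:

```sql
-- A FILESTREAM table requires a UNIQUEIDENTIFIER ROWGUIDCOL
-- column. The varbinary(max) data is stored in the NTFS file
-- system, but read and written through normal T-SQL.
CREATE TABLE UserPictures (
    PictureId  UNIQUEIDENTIFIER NOT NULL ROWGUIDCOL UNIQUE
               DEFAULT NEWID(),
    UserId     INT NOT NULL,
    FileName   NVARCHAR(260) NOT NULL,
    Picture    VARBINARY(MAX) FILESTREAM NULL
);
```

This gives file-system storage performance for large images while keeping the data inside the database's transactional and backup story.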
Hope it helps.
|
|
|
|
|
|
I just read your article. Interesting: so basically all that gets stored in the database is a GUID and, correct me if I'm wrong, a hash of the file size? And the file itself goes in the file system? I don't see the benefit in storing the info twice, other than maybe so it can be strongly typed.
Maybe I missed the point; I will reread the article.
|
|
|
|