|
A fourth table with a table type, an id field, an inactive start date, an inactive end date, and a reason code, where the type and id form the primary key, will give you a good starting point. It is better design to create three tables, but if done right the one table will serve you well.
Need a C# Consultant? I'm available.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
|
If this is the main aspect of the design of the database, I would create the three tables' primary keys the same way I would if it were an accounting system: a character key with the first three digits designating the building, the next three the floor, and the last three the room (altered to your specs, of course). This way you could include ranges of any size for your shutdowns without having to create multiple records for each structure.
It would come in very handy when developing a calendar of building projects. I usually try to avoid having any real meaning (besides row identifier) in my primary keys, but in this case it may make sense.
Building 5: 005000000
Second floor, bldg 5: 005002000
Room 234, 2nd floor, bldg 5: 005002234
Inactive Table
--------------
PrimaryKey
StartStructureId
EndStructureId
StartDate
EndDate
Reason
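As a sketch, that table could be declared like this in T-SQL (the names and column types are my own; the structure IDs are CHAR(9) per the building/floor/room encoding above):

```sql
CREATE TABLE Inactive (
    PrimaryKey       INT IDENTITY PRIMARY KEY,
    StartStructureId CHAR(9) NOT NULL,  -- e.g. '005002000' = bldg 5, floor 2
    EndStructureId   CHAR(9) NOT NULL,  -- e.g. '005002999' = every room on that floor
    StartDate        DATETIME NOT NULL,
    EndDate          DATETIME NULL,
    Reason           VARCHAR(200) NULL
);

-- One record then covers a whole range of structures:
-- everything on floor 2 of building 5, shut down for the month of July.
INSERT INTO Inactive (StartStructureId, EndStructureId, StartDate, EndDate, Reason)
VALUES ('005002000', '005002999', '20080701', '20080731', 'Floor renovation');
```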
|
I would create the additional table on Room with an ID and from/to dates. If you deactivate a floor, then create a record for each room on that floor. Please, do not prefix your ID field with any intelligence - it is fundamentally bad design.
The reason for the room-level table is to give you the most flexibility without inflicting a complex set of rules (around the level you want to disable). You could then create views that count the number of disabled rooms and compare it to the floor's room count to identify a disabled floor or building.
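A sketch of that approach, with hypothetical table and column names (a floor is flagged as disabled when every one of its rooms has an active inactivity record):

```sql
-- Hypothetical schema: one inactivity record per room.
CREATE TABLE RoomInactive (
    RoomInactiveId INT IDENTITY PRIMARY KEY,  -- surrogate key, no embedded meaning
    RoomId         INT NOT NULL REFERENCES Room(RoomId),
    FromDate       DATETIME NOT NULL,
    ToDate         DATETIME NULL              -- NULL = still inactive
);

-- A floor is considered disabled when its disabled-room count
-- equals its total room count.
CREATE VIEW DisabledFloors AS
SELECT r.FloorId
FROM Room r
JOIN RoomInactive ri ON ri.RoomId = r.RoomId
    AND ri.FromDate <= GETDATE()
    AND (ri.ToDate IS NULL OR ri.ToDate >= GETDATE())
GROUP BY r.FloorId
HAVING COUNT(DISTINCT r.RoomId) = (SELECT COUNT(*) FROM Room r2
                                   WHERE r2.FloorId = r.FloorId);
```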
Never underestimate the power of human stupidity
RAH
|
Ok, I would set this up and try it but I don't have the equipment handy.
What would you say the TPS on SQL Server 2005 on a 64 bit Windows Server would be for the following scenario:
Table 1: Each record is 1k
Table 2: An audit table matching access records to Table 1, each record is 1k
All operations are either inserts or selects. The other data in the other tables can be moved to another server if necessary. Can anyone with experience with a dedicated SQL Server machine tell me what I can expect as average TPS on, let's say, an 8-core machine with 64 GB of RAM and SCSI drives of sufficient size? (Assuming the machine specs are realistic.)
|
Your performance will be I/O bound.
It seems to be very hard for programmers to understand this, but RAM and processor cores have very little to do with database server performance when writing. The bounding factor for writes (inserts, updates, deletes) is how fast the transaction log records can be written out to the log file. Transaction log writes are ideally sequential and are fastest when the drive is dedicated to the transaction log, as the disk head will always remain in approximately the right position to write the next batch of log records. Putting the data on the same disk as the log will cause dirty page flushes to be interleaved with log writes, causing the log writes to be held up while the head moves to and from the data pages.
If you do have dedicated drives, it's best to preallocate the files to close to the capacity of the drive (or array) and to ensure each file is contiguous. There is a benefit to sequential I/O, but file fragmentation will turn what looks sequential to SQL Server into random access on the drive itself. File-grow operations are very expensive.
You should consider the capacity of the drives in terms of I/Os per second, not in terms of raw gigabytes writeable. Unfortunately you won't see this quoted for drives as it depends so much on the workload and pattern of access. SQL Server does batch up log records and will write them in batches of anything from 512 bytes up to 16kB and possibly even more, but remember that the log records for a transaction must be written fully to disk before SQL Server will consider the transaction committed (the Durable property, the D of the ACID properties).
In terms of selecting data, that depends on how you access it. SQL Server will cache data in RAM when it is accessed (which will probably be randomly, one 8KB page at a time as it's required but SQL Server also does 'read ahead' to perform larger I/Os in one command). Ideally you want an index that covers all of the terms in your WHERE clause. The query plan will show this as an 'index seek' (clustered index seek for the clustered index). However, with sufficient RAM, and the data already cached in RAM, you may not notice that the query is in fact simply reading every row from start to finish, shown as 'table scan' or (clustered) index scan. This will appear as 100% CPU, but adding more or faster CPUs generally won't make it go any faster and it'll still show 100% CPU: what you're actually seeing is the processor bringing the data from main memory into its caches, which is bound by the CPU and/or memory bus speeds. The OS can't account for how many CPU cycles were burnt retrieving data through the cache hierarchy, it just sees them as cycles consumed by this thread. In contrast when retrieving data from disk, it knows the thread can't be scheduled. (SQL Server actually uses asynchronous I/O, it puts a thread waiting for I/O onto another request so it doesn't incur the OS scheduling overhead, unless there's no more work to do.)
On a lightly-loaded multiprocessor/multicore system, you might also see some parallelism being invoked so each core processes one part of the data set, if the table is already cached in RAM. Don't be fooled by this: once you have a lot of concurrent transactions, the other cores will be busy processing those requests and you'll see very little parallelism.
Best case, you can get very high SELECT performance if the query uses a highly selective index (so that one unique row will be found) and the data is already cached in RAM: it will only need to follow a few page references before locating the actual row. Worst case is a non-clustered index scan of a large table not currently in memory or that won't fit in memory. (Non-clustered index lookups require a clustered index or bookmark lookup on the base table if you've SELECTed a column that doesn't appear in the non-clustered index.)
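To illustrate the covering-index point (table and column names are hypothetical, loosely based on the audit-table scenario above): if the index contains every column the query touches, the plan shows an index seek with no lookup on the base table.

```sql
-- Hypothetical audit table from the scenario above.
CREATE TABLE AccessAudit (
    AuditId    BIGINT IDENTITY PRIMARY KEY,  -- clustered by default
    RecordId   BIGINT NOT NULL,
    AccessedAt DATETIME NOT NULL,
    UserName   VARCHAR(50) NOT NULL
);

-- Covers the query below: the WHERE column is the index key,
-- and the selected columns ride along in the index leaf pages.
CREATE NONCLUSTERED INDEX IX_AccessAudit_Record
    ON AccessAudit (RecordId)
    INCLUDE (AccessedAt, UserName);

-- Plan: Index Seek on IX_AccessAudit_Record, no bookmark/clustered lookup.
SELECT AccessedAt, UserName
FROM AccessAudit
WHERE RecordId = 12345;
```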
All you can really do is try to provide a representative data set and workload for the system and try it out.
DoEvents: Generating unexpected recursion since 1991
|
And as I don't have the machine, I am still left with my question. I suppose I can rephrase for clarity: estimated TPS on a $5,000 machine for the given simple table structure. I am hoping to fit at least 7 records per data page to keep life simple.
The key structure is actually fairly complicated, making the index question too hard to ask for such a simple task, so for brevity assume simple 64-bit integer keys, randomly distributed, an equal number of inserts and selects (my scenario will involve a clustered and a non-clustered index), and millions of records, perhaps more.
|
Mike's answer is correct. You really need more info even to estimate the answer. What's the sequential I/O rate of the transaction log drive for the database, and is it dedicated to the database or shared between databases? Are the log files on a dedicated drive? As far as reads go, how spread out do you expect the queries to be - will the most-queried results be cached in memory, or will the queries be spread out, making reads more I/O bound?
The last time I specced out a high-performance database server, most of the money went into a 20-drive RAID pod and memory for read cache; the processor wasn't much of an issue. But I haven't really looked at any of that stuff since SANs have become more common.
I can imagine the sinking feeling one would have after ordering my book,
only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon
|
Is it really so hard to either not reply to my question or to give rough estimates?
|
Sorry, but I would have found my response useful, and the same goes for Mike's. I have 20 orange trees; how many apples can I harvest?
|
My question was more akin to: I have 20 acres and a decent tractor; how many bales of hay could I expect to harvest? I can pick up the phone right now and get many answers to that question, including ranges that account for many factors. Yet for some reason there seems to be a desire for people to make posts which don't offer substantive support. If you don't have an answer or a range, and your response is exactly the same as someone else's, why bother with it? It certainly doesn't help me out. If I wanted the answer "It depends on the exact configuration used" I would never have asked the question.
Again, not to point out the obvious, but unless you think I am a complete moron, what possible help do you think your post or Mike's offered me?
|
It should tell you that you need to get more information. For instance, you could run some transactions through your dev database, note the number of I/Os generated per transaction, and get a rough estimate based on the number of I/Os your server can handle.
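One way to take that measurement, sketched against a hypothetical Table1 with an integer Id and a Payload column (SET STATISTICS IO is the standard T-SQL switch for this):

```sql
-- Run a representative transaction with per-statement I/O reporting enabled.
SET STATISTICS IO ON;

INSERT INTO Table1 (Id, Payload) VALUES (42, REPLICATE('x', 900));
SELECT Payload FROM Table1 WHERE Id = 42;

SET STATISTICS IO OFF;
-- The Messages tab reports logical and physical reads per statement;
-- multiply by your target TPS to size the disk subsystem.
```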
|
I really fail to understand why it is so hard to give a simple question an answer. Your statement of what I need to do is exactly what I asked of someone else. I hate to be rude, but are you stupid?
|
WTF is wrong with you? You didn't ask a simple question; you asked a complicated question with no clear answer. If you want me to sum it up simply so you might be able to understand: maybe. It depends on a lot of different factors.
|
Hi! I have a problem with a query in SQL Server. I want to get all the records for a date provided by the user. The column type is datetime, and I want to search by date only. For example, the stored value is '2008-06-23 10:03:44.627', and I am trying something like '2008-06-23%':
SELECT * FROM Products WHERE Date = '2008-06-23%'
and it gives me the error "Conversion failed when converting datetime from character string."
Any help on how I can get the data?
|
Your column stores a date together with a time component, so you first have to convert away the time part; only then can you compare by date alone.
Rajendran.AL
|
This should do it
SELECT * FROM Products WHERE CONVERT(VARCHAR(20),DATE(),105) = '23-06-2008'
Bob
Ashfield Consultants Ltd
|
it says:
'DATA' is not a recognized built-in function name.
|
That must be because you typed DATA instead of DATE.
|
same error...
'DATE' is not a recognized built-in function name
|
Sorry, it's my fault. I did the query using getdate() and forgot to take off the brackets when I posted it with your column.
SELECT * FROM Products WHERE CONVERT(VARCHAR(20),DATE,105) = '23-06-2008'
|
Thanks... that was the answer. Have a nice day.
|
That's OK; sorry I messed up in my first reply.
|
If you have time, can you please describe what all that stuff in the CONVERT block means? Thanks.
|
CONVERT(VARCHAR(20), DATE, 105)
VARCHAR is the datatype to convert to. It should really have a length, say VARCHAR(20), but even defined without one it just returns the correct number of characters.
DATE was your database column. For testing I tend to use getdate().
105 is the date style. There are loads of them, from 101 to about 114, which return the date and/or time in different formats. Experiment to see which you like best.
Use BOL (Books Online) to see the different styles.
Hope this helps.
|
Bob, use of most functions on the fields rather than on the parameter will inhibit the use of indexes. This could cause a fast index seek to turn into an index scan.
For this query I would use WHERE DATE >= '20080623' AND DATE < '20080624'. When you construct a date in this fashion it actually produces a datetime where the time parts are set to zero.
Also be aware that all date formats are open to interpretation based on server locale and user language, except ISO 8601 format YYYYMMDD.
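A minimal sketch of that range approach, assuming the Products table from the earlier posts with its datetime column named DATE:

```sql
-- Sargable version: the predicate is on the bare column, so an index
-- on the DATE column can be used for a seek rather than a scan.
SELECT *
FROM Products
WHERE [DATE] >= '20080623'   -- midnight at the start of 23 June 2008
  AND [DATE] <  '20080624';  -- up to, but excluding, midnight the next day
```

The half-open range also catches every time of day, including values like '2008-06-23 23:59:59.997' that an equality test against a converted string could mishandle.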
|