|
Well, I've used them for a few projects now (by choice) and I find them superior in ways that matter to me.
|
|
|
|
|
PIEBALDconsult wrote: I find them superior in ways that matter to me
Would you care to elaborate? I'm very interested to hear how they are of benefit, other than the distributed requirement I am aware of.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
I'm using GUIDs as keys in another database where multiple servers need to write to the same table, so that is the distributed model you were talking about.
But even there it's not ideal; just yesterday I was thinking about changing this structure. In another table of the same database I'm storing rows that contain this GUID as a FK, and there are many rows with the same GUID that I need to group by that GUID.
The problem is obviously performance. I have a clustered index on that FK table, and the GUID is the clustering key. Since new GUIDs don't arrive in sequential order (they're generated randomly), each insert with a new GUID lands in the middle of the GUID-clustered table, splitting pages and fragmenting the index. This wouldn't happen with sequential auto-generated values like int.
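If the GUIDs could be generated on the SQL Server side, one thing I'm considering is using NEWSEQUENTIALID() as the column default, so new values arrive in roughly increasing order and land at the end of the clustered index instead of splitting pages. A rough sketch with made-up names:
-- hypothetical parent table; NEWSEQUENTIALID() is only allowed in a DEFAULT constraint
CREATE TABLE dbo.Batches
(
    BatchId uniqueidentifier NOT NULL
        CONSTRAINT DF_Batches_BatchId DEFAULT NEWSEQUENTIALID(),
    CreatedAt datetime NOT NULL DEFAULT GETUTCDATE(),
    CONSTRAINT PK_Batches PRIMARY KEY CLUSTERED (BatchId)
);
It wouldn't fully fix the child table if rows for old GUIDs keep arriving, but at least new GUIDs wouldn't scatter randomly across the index.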
Michal
|
|
|
|
|
If you opt for the identity-type column, the cost is a composite primary key on the merged table (ID & server), and I'd think that may have a higher cost than the GUID. As Piebald suggested, a custom GUID based on an identity field and server may be a good idea!
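Something along these lines, with made-up names (each server stamps its own ServerId and keeps a plain local identity):
-- hypothetical merged table keyed on (ServerId, RowId)
CREATE TABLE dbo.MergedRows
(
    ServerId smallint NOT NULL,   -- which server the row came from
    RowId    int      NOT NULL,   -- the IDENTITY value generated on that server
    Payload  varchar(100) NULL,
    CONSTRAINT PK_MergedRows PRIMARY KEY CLUSTERED (ServerId, RowId)
);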
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
So did Gerald's ideas help?
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hello guys,
Thanks a lot for your suggestions!
Update: I have selected the 1.6 billion rows from the old table into a new table without any problems (it took a while, though), doing the cast to bigint on the way. The log file growth was nothing compared to my earlier attempt to change the Int32 column to Int64 directly in the old table.
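The copy itself was roughly along these lines (real table and column names differ):
-- SELECT INTO creates the new table and casts the key on the way;
-- under the simple recovery model it can be minimally logged
SELECT CAST(Id AS bigint) AS Id,
       ColA, ColB
INTO   dbo.BigTable_new
FROM   dbo.BigTable;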
I scripted out the constraints from the old table and I'm now adding them to the new table (this will take a while, I guess, since the constraints are re-checked after they are added).
After that I'll note which indexes were on the old table, drop the old table, rename the new one and build the indexes on it.
I'll let you know how it goes. Obviously, I have a full backup of the last state of the database (file copy of the DB + LOG).
Thanks again for your help so far!
Michal
|
|
|
|
|
Update: so I'm back at square one.
The attempt to change the bigint column in the new table into an identity column results in SQL Server growing the LOG until it fills the whole disk, and then it stops doing anything.
Why should this seemingly simple operation be so dramatically demanding? (I suspect the designer rebuilds the whole table behind the scenes, copying every row in a single fully logged transaction.)
I'll now repeat the whole process, but this time I'll create the new table with the identity column already defined, and only then insert into it from the original table.
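So the plan, roughly (made-up names again):
-- create the target table with the identity already defined, then copy into it
CREATE TABLE dbo.BigTable_new
(
    Id   bigint IDENTITY(1,1) NOT NULL,
    ColA int NOT NULL,
    ColB varchar(50) NULL
);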
I'd appreciate any suggestions.
Thanks,
Michal
|
|
|
|
|
Don't forget to turn IDENTITY_INSERT on.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Yes, I know, thanks; I've done this before with the same DB.
Let's see, I'll keep you updated. Thanks a lot.
Michal
|
|
|
|
|
Actually, I'm thinking it might be a good idea to also set up all the indexes (most importantly, the clustered one) on the blank new table before I do the bulk INSERT INTO.
This way SQL Server will maintain the indexes as it copies the data. I think this is potentially less costly in time and space than having it physically reorder the data into the clustered index later, after the INSERT INTO has completed.
Let's see.
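Concretely, something like this on the blank table (made-up names):
-- clustered primary key plus the non-clustered indexes, created before the load
ALTER TABLE dbo.BigTable_new
    ADD CONSTRAINT PK_BigTable_new PRIMARY KEY CLUSTERED (Id);

CREATE NONCLUSTERED INDEX IX_BigTable_new_ColA ON dbo.BigTable_new (ColA);
CREATE NONCLUSTERED INDEX IX_BigTable_new_ColB ON dbo.BigTable_new (ColB);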
|
|
|
|
|
Update:
So even this way, the LOG file grows until it fills the whole 1000 GB disk, which is ridiculous given that the DB itself is only about 110 GB without indexes and 315 GB with all of them. In the new table I'm only using 2 of the 6 original indexes; as always happens when you change something, I found out I'm actually no longer using the remaining 4, so I dropped them from the new table.
I think the problem is that although the recovery model is set to simple, the log still has to keep every transaction until the statement has completed and all the data has been written to disk; simple recovery only lets the log be reused after the active transaction has committed, and a single huge INSERT is one transaction.
So I think the solution now is to do this INSERT INTO in batches and make sure the log file gets truncated after each batch completes.
I'm wondering just how much disk space this operation would eventually need if done in one go. It would be useful if SQL Server provided a way to run these bulk operations in some kind of "unsafe" mode with no logging at all; unless I'm missing something, it's not possible to turn off logging completely in MS SQL Server.
I'll keep you posted.
|
|
|
|
|
Final update:
All done. The procedure that finally worked was:
1) Create a new table with the same columns, changing the Int32 column to Int64.
2) Script out all keys and indexes from the old table and create them on the new table.
3) Make sure the DB is in the simple recovery model.
4) INSERT the rows from the old table into the new table in batches, truncating the LOG after each batch. I was also inserting the identity column values, so I set IDENTITY_INSERT to ON before the batch loop; a rough sketch of the loop is below.
That's it.
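The batch loop looked roughly like this (table and column names are made up and the batch size is arbitrary; under the simple recovery model the CHECKPOINT after each batch allows the inactive part of the log to be reused):
SET IDENTITY_INSERT dbo.BigTable_new ON;

DECLARE @batch bigint = 10000000,   -- rows per batch, tune to taste
        @minId bigint, @maxId bigint, @from bigint;

SELECT @minId = MIN(Id), @maxId = MAX(Id) FROM dbo.BigTable;
SET @from = @minId;

WHILE @from <= @maxId
BEGIN
    INSERT INTO dbo.BigTable_new (Id, ColA, ColB)
    SELECT CAST(Id AS bigint), ColA, ColB
    FROM dbo.BigTable
    WHERE Id >= @from AND Id < @from + @batch;

    CHECKPOINT;   -- simple recovery: lets the log be truncated after each batch
    SET @from = @from + @batch;
END;

SET IDENTITY_INSERT dbo.BigTable_new OFF;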
thanks for your help, guys.
Michal
|
|
|
|
|
Hi guys, I hope you can help me with this query. Maybe because it's Saturday morning, I can't seem to work this out. I have two tables, Sales and SalesDetails. To simplify things, say the Sales table consists of ID and InvoiceDate, and SalesDetails consists of ID, Quantity, SaleID and ProductID. Now I want to return a list of the total daily sales for a specific month for a specific product, i.e. 01/03/2010 - 5, 02/03/2010 - 10, etc. Hope someone can help.
|
|
|
|
|
Try this
SELECT
    S.SaleDate,
    SUM(SD.Amount) AS Amount
FROM SalesDetail SD
INNER JOIN Sales S ON S.SaleID = SD.SaleID
WHERE S.SaleDate BETWEEN ThisDate AND ThatDate
GROUP BY S.SaleDate
The tricky bit will be if you store a time component in your SaleDate; you will need to strip the time off.
You can also use the month/year components of SaleDate instead of the BETWEEN, but it is messier.
Never underestimate the power of human stupidity
RAH
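For example, if SaleDate did include a time component, one way on SQL Server 2008 and later would be to group on just the date part (same placeholder names as above):
SELECT
    CAST(S.SaleDate AS date) AS SaleDay,
    SUM(SD.Amount) AS Amount
FROM SalesDetail SD
INNER JOIN Sales S ON S.SaleID = SD.SaleID
WHERE S.SaleDate >= ThisDate AND S.SaleDate < DATEADD(day, 1, ThatDate)
GROUP BY CAST(S.SaleDate AS date)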
|
|
|
|
|
It worked. Thank you muchly. The dates are stored with no time component, so that was not an issue. The final SQL came out thus:
SELECT
    s.InvDate, SUM(sd.Qty) AS Amount
FROM SalesDetails sd
INNER JOIN Sales s ON s.ID = sd.SaleID
WHERE s.InvDate BETWEEN DATE(dteStart) AND DATE(dteEnd)
    AND sd.ProdID = ProductID
GROUP BY s.InvDate;
|
|
|
|
|
I'm curious, what database are you using? This does not work on SQL Server:
DATE(dteStart)
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi
I am using MySQL, querying it from VB.NET. dteStart and dteEnd are VB variables. MySQL does not like VB date variables, so it is best to convert to date in your SQL query.
|
|
|
|
|
Hi,
I have a few questions. First of all, what are the strengths and flexibility of MS Access 2007, and how much data and how many transactions can it support?
Second, I would like to know whether it is a good choice to use MS Access 2007 as the database for a small banking system, given that it's a real-life application and not a demo/tutorial one. The system will have approximately 250-300 customers per year, and their daily transactions for the whole year will add up to about 110,000.
If MS Access isn't suitable, then which lightweight DBMS can I use? How about MySQL or something else?
Third, is there any lightweight edition of MS SQL Server that fits my scenario? Almost all the versions I have seen are Developer/Enterprise editions.
Regards
Priyamtheone
|
|
|
|
|
The strength of MS Access is that it has a built-in GUI, a small footprint and xcopy deployment.
The weakness is that it doesn't scale; I wouldn't recommend it for more than ten concurrent users.
I would instead start with either Oracle Express Edition or MS SQL Server Express Edition. Both are free; in both cases the biggest limitations are that they won't use more than one processor or 1 GB of RAM, and the maximum DB size is 4 GB.
This limits you less than you might think, and both options are easy to upgrade.
"When did ignorance become a point of view" - Dilbert
|
|
|
|
|
I'm not sure about the 2007 version, but Access was NOT multi-user; it worked OK as a multi-user DB until it didn't, and when you asked MS for support they would stand on the disclaimer that it is NOT multi-user.
Definitely use the Express versions (I use MSSQL, but that is because of my skill set) and set up a proper client-server environment. MSSQL is reasonably easy to set up and very easy to work with.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Imagine an app where modifications of values in the GUI are immediately written to a SQL database.
Given this scenario, would you still open and dispose of a SqlConnection object for each update? My guess would be yes, because the real connection to the database should be managed via the connection pool.
Any rationale for either tactic would be greatly appreciated.
|
|
|
|
|
treefirmy wrote: Any rationale for either tactic would be greatly appreciated.
I'd open it just before the connection is used, and close it as soon as I'm done with it; take a look at tip #3 on Connection Pooling[^].
I are Troll
|
|
|
|
|
I have a database that holds information about books, including ISBN numbers. I discovered that an ISBN is not strictly a 10-digit number; it can actually contain a character, so I need to change the column to nvarchar. But when I try to do so, it says I can't and that the table needs to be dropped. Surely I don't have to delete the table and re-enter thousands of records just to change that property, or do I?
Please help, thanks.
|
|
|
|
|
If you can't change a field directly, perform these steps:
- add new field (with a new name)
- copy data from old to new field
- delete old field
- optional: rename new field
You can perform these interactively, using SQL Server Management Studio, phpMyAdmin, or whatever fits your database; a rough sketch of the same steps as plain SQL is below.
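A sketch in T-SQL, with made-up table and column names (adjust the length to suit ISBN-13):
-- 1) add the new field (with a new name)
ALTER TABLE dbo.Books ADD IsbnText nvarchar(13) NULL;
GO
-- 2) copy the data from the old field to the new one
UPDATE dbo.Books SET IsbnText = CAST(Isbn AS nvarchar(13));
GO
-- 3) delete the old field
ALTER TABLE dbo.Books DROP COLUMN Isbn;
GO
-- 4) optional: give the new field the old name
EXEC sp_rename 'dbo.Books.IsbnText', 'Isbn', 'COLUMN';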
|
|
|
|
|
Wow, that worked perfectly. Luc, I can't thank you and the others enough for all the help you have given me. Hopefully one day I will be able to help others and return the favour you have shown me.
Thanks again.
|
|
|
|