|
Why don't you use a varchar column datatype in your DB? Why does the field have to be an integer? I suggest using a varchar column in the DB.
I Love T-SQL
"Don't torture yourself,let the life to do it for you."
If my post helps you kindly save my time by voting my post.
|
That is why I'm asking the question: I don't know. Won't performance degrade enough to make the CPR field's datatype worth considering?
|
As you know, if you store a value such as 000012 as an integer in the DB, you will get back only 12; the leading zeros are dropped automatically. So in your case I would use a varchar datatype in the DB.
|
That's an easily corrected formatting issue, not a reason to prefer varchar over integer.
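For example, if the value were stored as an integer, the leading zeros could be restored at display time with a one-liner like this (T-SQL; the table and column names here are made up for illustration):
SELECT RIGHT('000000' + CAST(CprNumber AS VARCHAR(6)), 6) AS FormattedCpr
FROM Person
A stored value of 12 comes back as '000012', so nothing is lost by the integer representation except the formatting.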
|
|
Store it as a string. We have a very similar format for SSN and it has proven best as a string.
If you want to search the whole SSN (using the equality operator) or by its leading characters (using LIKE '...%'), a string will be efficient as long as you have an index on it.
If you want to search efficiently on subparts of the SSN (for example Year BETWEEN XX AND YY), you can break it down into derived fields such as year, month, etc. and index them. Create a trigger that fills the derived fields and never let the user modify them.
Also, if you can, enforce uniqueness with a unique index on the SSN; the search will be faster. Typically an SSN should be unique, but I don't know whether you have exceptions and, even if you do, whether you encounter them in your application.
Also use triggers to verify the correct format of the SSN so you won't have accidental mistakes/duplicates.
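To illustrate the above, here is one way it could look in T-SQL (all table, column and constraint names are my own invention, the format check is simplified, and I am assuming the year occupies the first two digits; adjust the SUBSTRING to your format). A persisted computed column is shown as an alternative to the trigger-maintained fields described above:
CREATE TABLE Person (
    PersonId INT IDENTITY PRIMARY KEY,
    Ssn      VARCHAR(11) NOT NULL,
    -- derived search field, kept in sync automatically
    SsnYear  AS CAST(SUBSTRING(Ssn, 1, 2) AS INT) PERSISTED,
    CONSTRAINT UQ_Person_Ssn UNIQUE (Ssn),
    CONSTRAINT CK_Person_Ssn CHECK (Ssn LIKE
        '[0-9][0-9][0-9][0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]')
)
CREATE INDEX IX_Person_SsnYear ON Person (SsnYear)
With this in place, both Ssn = '...' and Ssn LIKE '12%' can use the unique index, and range searches on the year can use IX_Person_SsnYear.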
Hope this helps,
Mika
|
Mika > Thanks a lot! That answer was very helpful.
Blue_boy > You could format the integers when you show them to the user and easily insert the missing 0. But I'm convinced: I shall use varchar for SSN.
|
Jan Sommer wrote: Thanks a lot! That answer was very helpful
No problem
|
Jan Sommer wrote: You could format the integers when you show it to the user and easily insert the missing 0
Yes, but your question was how to store the value in the DB, as a string or as an integer. By the way, you cannot store the SSN format 000000-0000 in the DB as an integer, so you must use varchar.
|
You have no idea what you are talking about. You are confusing data storage representation and UI visualization. They are not related.
|
Rob Graham wrote: You have no idea what you are talking about.
|
I have started to use SSIS lately and I need to know whether it can extract data from GBase and Universe data sources.
All I know is that it can extract from almost everywhere, but I'm not sure. I came across the site below, but the above data sources are not listed. Any help!
http://ssis.wik.is/Data_Sources
|
samerh wrote: started to use SSIS lately
Poor bugger - programming by GUI, I hate it.
If you can get a driver for GBase (never heard of it) you should be able to interact with the data. Isn't Universe a Business Objects/Crystal OLAP structure? In which case you will need the relevant drivers for that too.
Having said that, the pretty GUI tools probably won't support your drivers anyway unless there is a generic dialog.
Never underestimate the power of human stupidity
RAH
|
I have just created a trigger in a database and I want to get it to run now for every row in the table. I figured the best way to do this would be a table-wide update that does nothing, but I can't work out how to write such a thing.
I have tried
UPDATE table
SET column = column
But no luck!
|
I suspect your trigger isn't doing what you expected, as your update will fire it just as you wanted.
Bob
Ashfield Consultants Ltd
|
My bad. I tried a no-effect update query (SET col1 = col1) on my database and, according to the client statistics captured in SQL Server Management Studio, 43,657 rows were affected.
I would therefore believe that the trigger logic would have been invoked.
|
If this is a one-time thing, why don't you take the basic code from your trigger and wrap it in some T-SQL to call it? You may have to use a cursor to loop through your set of records to get it to work.
I'm not 100% convinced that your UPDATE statement would do anything; the optimizer might be smart enough to see that no action is actually required.
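A rough T-SQL sketch of that idea, assuming the trigger's per-row work has been moved into a hypothetical stored procedure ProcessRow and the table (here called MyTable) has an integer key Id:
DECLARE @Id INT
DECLARE c CURSOR FOR SELECT Id FROM MyTable
OPEN c
FETCH NEXT FROM c INTO @Id
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ProcessRow @Id  -- the same work the trigger would do for this row
    FETCH NEXT FROM c INTO @Id
END
CLOSE c
DEALLOCATE c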
|
Based on a test I just performed, I believe the trigger should have fired. (See earlier post ...)
Go back and revisit your trigger logic.
|
I'm pretty sure it will; I've used the same technique in the past. A quick way to check: start a transaction, do the update and, without a commit or rollback, see (on another connection) if you can do anything with the table. If the optimiser was smart enough not to do the update, the table will not be locked.
My best guess (without seeing the code) is that the trigger is not working correctly - maybe it expects single records in the inserted/deleted tables?
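Sketching that check in T-SQL (MyTable stands in for the real table):
-- Connection 1: run the update but leave the transaction open
BEGIN TRANSACTION
UPDATE MyTable SET col1 = col1
-- no COMMIT or ROLLBACK yet

-- Connection 2: this will block if the update really took row locks
SELECT COUNT(*) FROM MyTable

-- Connection 1: release the locks when done
ROLLBACK TRANSACTION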
|
David Mujica wrote: I'm not 100% convinced that your UPDATE statement would do anything, the optimizer might be smart enough to see that no action is actually required
That would be a serious bug. The optimizer cannot make decisions that compromise or change the logic.
If it eliminated row updates just because no data is changed by the statement itself, it would also have to understand the trigger logic. And what if the table contains a timestamp column, foreign keys, etc.?
Typically optimizers ensure that the logic is not changed; if there is even a slight chance that the logic may be affected, the optimizer rejects such transformations because of the uncertainty.
|
Hi,
I am trying to create a custom installation package that won't take my end users very many steps.
Currently the steps are:
-- Install .NET (which I am not worried about, because most of the machines have it already, and it runs from the setup.exe that I created in Visual Studio)
-- Install SQL Server Express with Management Studio
-- Create the database by running the SQL scripts within Management Studio, and install the software itself
Though this process does work, I would like to make it a little easier. Our end users aren't exactly technical experts and will run into problems. Also, with my limited experience of SQL Server, it is hard to fix wrongly created instances, etc. without a lot of work that they won't understand.
My question is: can we have a custom package that consists of all the above and runs in a single step?
modified on Monday, December 22, 2008 1:59 AM
|
Hi,
I'm fairly new to database programming. So, here is the problem I'm having.
Here is a basic description of the two tables I'm using. (The database is a MySQL RDBMS)
PersonTable
PersonId, INT, AUTO_INCREMENT (Primary Key)
Name, VARCHAR(32)
ImageTable
ImageId, INT, AUTO_INCREMENT (Primary Key)
ImageBlob, LONGBLOB
PersonId, INT (Foreign Key)
Basically, I want to insert the person record into the person table, next insert the image for that person into the image table, and finally link the ImageTable.PersonId to the PersonTable.PersonId.
So, here is the code I'm currently using
CRecordset personTable;
CRecordset imageTable;
personTable.Open(CRecordset::snapshot, "SELECT * FROM person");
imageTable.Open(CRecordset::snapshot, "SELECT * FROM image");
personTable.AddNew();
personTable.m_Name = "Joe";
personTable.Update();
personTable.Requery();
personTable.MoveLast();
long personId_ForeignKey = personTable.m_PersonId;
imageTable.AddNew();
imageTable.m_PersonId = personId_ForeignKey;
imageTable.Update();
The above code is not safe when clients concurrently submit records to the database. This is due to my faulty assumption that the last record is the one that the current client inserted, which may or may not be true depending on the interleaving of operations.
My question is this, is there a way to safely get the value of the primary key after inserting the record into the table, so that I can use it to populate the foreign key columns of the other table?
-Will
|
Have a look at the LAST_INSERT_ID() function. It will give you the generated value for the auto-increment column. Also, you no longer need to requery the row if you just capture the value of PersonId.
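For example (using the table and column names from your post; @imageData stands in for the actual blob parameter):
INSERT INTO person (Name) VALUES ('Joe');
SET @personId = LAST_INSERT_ID();  -- generated key from this connection's last insert
INSERT INTO image (ImageBlob, PersonId) VALUES (@imageData, @personId);
Because LAST_INSERT_ID() is maintained per connection, concurrent inserts from other clients cannot interfere with the value you read.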
|
Hi,
Probably this is not a question for this forum, but I could not decide where else to ask. Putting things at an abstract level: I have a primary function which inserts some rows into DB0 tables and calls SubFun1 and SubFun2, which connect to different databases and insert some rows into tables there.
Say I get an error in SubFun2: I need to roll back everything that was inserted up to that point in the DB0 tables, the DB1 tables and maybe the DB2 tables as well. How do I achieve that here?
If an exception occurs in SubFun2, it will propagate to the primary function, and only the DB2 and DB0 transactions will be rolled back, not the DB1 transaction. Any workaround for this?
sub primaryfunction()
try
    SqlConnection0 = connection to DB0
    SqlTransaction0
    Insert/Update some table
    call function SubFun1
    call function SubFun2
    SqlTransaction0.commit()
catch exception
    SqlTransaction0.rollback()
end sub
sub SubFun1
try
    SqlConnection1 = connection to DB1
    SqlTransaction1
    Insert/Update some table
    SqlTransaction1.commit()
catch exception
    SqlTransaction1.rollback()
    throw exception
end sub
sub SubFun2
try
SqlConnection2 = connection to DB2
SqlTransaction2
Insert/Update some table
SqlTransaction2.commit()
catch exception
SqlTransaction2.rollback()
throw exception
end sub
|
If I understood correctly, based on your description you need distributed transactions. You start a transaction, perform all operations on all databases, and commit after that; MS DTC will take care of the transaction coordination.
In the pseudo-code you have a commit at the end of SubFun1, which results in a commit point in DB1, and DB2 has its own transaction. Instead of using a database-scoped SqlTransaction, have a look at System.Transactions Integration with SQL Server.
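In the same pseudo-code style as your question, the System.Transactions approach would look roughly like this (an outline only, not exact API usage):
sub primaryfunction()
    using TransactionScope scope
        SqlConnection0 = connection to DB0   ' auto-enlists in the ambient transaction
        Insert/Update some table
        call function SubFun1                ' its connection auto-enlists too
        call function SubFun2
        scope.Complete()                     ' all three databases commit together via MS DTC
    end using
end sub
If any SubFun throws, Complete() is never called and the work on DB0, DB1 and DB2 is all rolled back when the scope is disposed. The SubFuns keep their own connections but drop their local commits and rollbacks.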
Mika
|