|
Have a read of my posts - I am saying that, yes, this can be done; however, there are too many risks involved for me to consider it a suitable pattern for administering a database.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
vkEE wrote: Do I have to issue a rollback statement
In general, yes. Transactions remain pending until either a commit or a rollback.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
But like GuyThiebaut mentioned, since the update query errored out, the connection would be lost and hence it is as good as a rollback.
|
You are guessing that the data has not been updated.
How do you know that the data has not been updated?
Because the connection was dropped?
Because the Microsoft manual said so or because GuyThiebaut told you so?
That is very sloppy thinking and will get you into trouble one day...
You need to check your updates and KNOW that the update did or did not happen.
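For instance, a minimal sketch of verifying an update rather than assuming it happened - the table and column names here are made up for illustration:
UPDATE dbo.Accounts
SET Balance = Balance + 10
WHERE AccountId = 42;

-- @@ROWCOUNT reports how many rows the previous statement actually touched
IF @@ROWCOUNT = 0
    PRINT 'No rows updated - investigate before assuming success';

-- or re-read the row and compare it against the value you expect
SELECT Balance FROM dbo.Accounts WHERE AccountId = 42;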
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
I know the update did not go through because the table has a trigger on a field called updatedate, which sets that field to the current datetime when a change occurs, and there were no rows in the database stamped with the time at which I performed the update.
Thanks.
|
vkEE wrote: and there were no rows in the database stamped with the time at which I performed the update
...and you are the only person who could have updated data at that time?
It pains me to see you so confident in such a precarious system.
At least I won't be there to say "I told you so" when the SHTF the next time you rely on this sort of method to update a large number of rows.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
vkEE wrote: But like GuyThiebaut mentioned, since the update query errored out, the connection would be lost and hence it is as good as a rollback.
More importantly: no single part of it was committed! (A rollback also ends the transaction and cleans it up, but a pending transaction is hardly a disaster - confirming the change by committing it would have been.)
You can check whether there are pending transactions:
SELECT @@TRANCOUNT;
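And, as a minimal sketch of acting on that check:
-- a non-zero count means a transaction is still open on this connection
IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION; -- undoes all pending work and ends the transaction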
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
Run the rollback - it cannot hurt
|
Hi,
I'm not sure of the best way to do this. I have an agent running as a service on thousands of servers. Each server can delete and add several hundred records into one table every day.
I have set the KEY as an integer using IDENTITY(1,1). During the initial testing I noticed that SQL Server does not reuse the KEY. This makes me wonder what happens once I've added and deleted 4 billion records (the max size of an integer). Does it automatically wrap, so I don't need to worry about this?
This is SQL Server, and I'm wondering if I should be doing database REORGs like we do with DB2.
I know it would have been nice to be able to use the serial number as the KEY and not worry about this, but while from what I've read the serial number from each distributor is supposed to be unique, I've run into duplicates right from the start, where the distributor is using zeros as the serial number.
Thanks,
Glenn
|
Yes, I can define it that way, but that only moves the potential problem, doesn't fix it.
Glenn
|
gmhanna wrote: that only moves the potential problem, doesn't fix it.
How much data are you inserting?
Assuming 1,000,000 servers, each inserting 1,000,000 records per day, it would take you just under 25,252 years 332 days to use up the capacity of a bigint column starting at 1 and ignoring negative keys. If you start at the minimum value, it would take just under 50,505 years 298 days.
At that point, I think it's fair to say it's Someone Else's Problem.
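If you want to sanity-check that arithmetic yourself, here is a rough back-of-an-envelope query using the same assumptions (one million servers, one million rows per server per day):
SELECT CAST(9223372036854775807 AS float) -- max bigint, starting at 1
       / (1000000.0 * 1000000.0)          -- rows consumed per day
       / 365.25 AS YearsToExhaustion;     -- roughly 25,000 years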
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
But it sure is nice to see someone planning for the future. He definitely learned from Y2K.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
Go with bigint on all transaction tables; int breaks when it hits the limit. The horizon for bigint is beyond your lifetime, so settle for that.
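A minimal sketch of what that looks like, with hypothetical table and column names:
CREATE TABLE dbo.AgentRecords
(
    Id BIGINT IDENTITY(1, 1) PRIMARY KEY, -- same identity pattern as int, vastly larger horizon
    SerialNumber VARCHAR(50) NOT NULL,
    CreatedDate DATETIME NOT NULL DEFAULT GETDATE()
);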
Never underestimate the power of human stupidity
RAH
|
gmhanna wrote: wonder what happens when I've added and deleted 4 Billion records
1) Instead of running a delete command, use truncate table. That will reseed the identity field, allowing the numbers to be generated again from the start. (Not that there is anything wrong with what others suggested of just using a BIGINT for the ID field; with an INT, once MAX INT has been reached you will no longer be able to insert records into that table.)
2) If you are unable to truncate the table, you can reseed the key. More information can be found here: http://blog.sqlauthority.com/2007/03/15/sql-server-dbcc-reseed-table-identity-value-reset-table-identity/
A quick example:
CREATE TABLE Test(id INT IDENTITY(1, 1), TestValue INT)
GO
-- first batch: ids 1 to 5 are generated
INSERT INTO test VALUES(1)
INSERT INTO test VALUES(2)
INSERT INTO test VALUES(3)
INSERT INTO test VALUES(4)
INSERT INTO test VALUES(5)
GO
SELECT * FROM test
GO
-- DELETE removes the rows but does NOT reset the identity counter
DELETE test
GO
INSERT INTO test VALUES(1)
INSERT INTO test VALUES(2)
INSERT INTO test VALUES(3)
INSERT INTO test VALUES(4)
INSERT INTO test VALUES(5)
GO
SELECT * FROM test -- ids continue from 6
GO
-- TRUNCATE removes the rows AND reseeds the identity back to its seed value
TRUNCATE TABLE test
GO
INSERT INTO test VALUES(1)
INSERT INTO test VALUES(2)
INSERT INTO test VALUES(3)
INSERT INTO test VALUES(4)
INSERT INTO test VALUES(5)
GO
SELECT * FROM test -- ids start again at 1
GO
-- CHECKIDENT reseeds manually without touching the existing rows
DBCC CHECKIDENT(test, RESEED, 1)
GO
INSERT INTO test VALUES(1)
INSERT INTO test VALUES(2)
INSERT INTO test VALUES(3)
INSERT INTO test VALUES(4)
INSERT INTO test VALUES(5)
GO
SELECT * FROM test -- new ids continue from 2, colliding with existing rows (no unique constraint here)
GO
DROP TABLE test
gmhanna wrote: This is SQL Server, and I'm wondering if I should be doing database REORGs like we do with DB2.
From IBM's website, it looks like the equivalent feature in SQL Server is clustered index management (more information: http://technet.microsoft.com/en-us/library/ms189858.aspx)
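As a minimal sketch of the SQL Server equivalents of a REORG, assuming the indexes live on a table named dbo.Test (see the TechNet link for guidance on when to use each):
-- lightweight, online defragmentation; the closest analogue to a DB2 REORG
ALTER INDEX ALL ON dbo.Test REORGANIZE;

-- full rebuild, for heavily fragmented indexes
ALTER INDEX ALL ON dbo.Test REBUILD;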
Common sense is admitting there is cause and effect and that you can exert some control over what you understand.
|
Hi everyone!
I'm starting a project in BIDS. I want to create cubes based on replication databases for Salesforce and PRISM as data sources.
Has anyone had any experience developing Salesforce reports?
I would really appreciate any suggestions you might have!
Thanks in advance.
Kind Regards
Karina
|
It's entirely doable; our EDW team does it.
Loosely:
1) Pull data from Salesforce.com into a staging environment (we use their dbamp application to do it)
2) Pull data from Prism into a staging environment (AS400 ODBC connection)
3) Pair the data together into Facts and Dimensions, and log data mismatches for user reporting (see the sketch below)
Both systems should have customer numbers that match up; it is entirely probable that somewhere Salesforce.com is getting your Prism customer numbers (or should be).
4) Once a solid data model has been built, generating a cube off of that data is easy.
For more conceptual information -> http://en.wikipedia.org/wiki/Data_warehouse
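As an illustration of step 3, a minimal sketch of pairing staged data and logging mismatches; every table and column name below is an assumption:
-- load rows whose customer numbers pair up into the fact table
INSERT INTO dw.FactSales (CustomerKey, Amount, LoadDate)
SELECT d.CustomerKey, s.Amount, GETDATE()
FROM staging.SalesforceSales s
JOIN dw.DimCustomer d ON d.CustomerNumber = s.PrismCustomerNumber;

-- rows with no matching customer number go to a log table for user reporting
INSERT INTO dw.LoadMismatchLog (SourceSystem, CustomerNumber, LoggedAt)
SELECT 'Salesforce', s.PrismCustomerNumber, GETDATE()
FROM staging.SalesforceSales s
LEFT JOIN dw.DimCustomer d ON d.CustomerNumber = s.PrismCustomerNumber
WHERE d.CustomerKey IS NULL;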
Common sense is admitting there is cause and effect and that you can exert some control over what you understand.
|
Hi!
I am trying to create a statement that deletes rows where UpdateDate is older than 7 days and then selects those rows for me. I need to get those rows back to my application to write the log file. In 2 steps (select and then delete by ID) it is no problem. But is there a way to do this in one single step?
What I need to get back is table.ID, table.CreationDate, table.Name ... It is nearly the complete row.
THX for your help!
|
Member 9935038 wrote: But is there a way to do this in one single step?
What do you mean by "single step"?
If you need them to be a single operation, then it'd be wrapped in a transaction. That'd mean two extra statements.
If you're wondering whether there's a command that does both a select and a delete, then the answer is no, but if you need that you can wrap 'em both in a stored procedure.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
I imagine MySQL has temp tables like Microsoft SQL does. You can insert the values you want to log into a temp table, then delete the records, and then select from your temp table at the end.
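A minimal sketch of that pattern in T-SQL, with assumed table and column names (MySQL's CREATE TEMPORARY TABLE works along the same lines):
-- capture the rows that are about to be deleted
SELECT ID, CreationDate, Name
INTO #DeletedRows
FROM dbo.MyTable
WHERE UpdateDate < DATEADD(DAY, -7, GETDATE());

-- delete exactly the captured rows
DELETE t
FROM dbo.MyTable t
JOIN #DeletedRows d ON d.ID = t.ID;

-- return the deleted rows to the application for the log file
SELECT ID, CreationDate, Name FROM #DeletedRows;
If this happens to be SQL Server, the OUTPUT clause (DELETE ... OUTPUT deleted.*) can even collapse the capture and the delete into a single statement.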
There are only 10 types of people in the world, those who understand binary and those who don't.
|
I have 2 databases and an Oracle transaction with multiple OracleConnections in C#.
Table 1 is in database 1.
Table 2 is in database 2.
When saving new data to Table 1, I also want to save data to Table 2 in database 2.
How can I do this in an Oracle transaction?
using (OracleConnection connection1 = new OracleConnection(connectionstring1)) // database 1
{
    OracleConnection connection2 = new OracleConnection(connectionstring2); // database 2
    connection1.Open();
    OracleCommand command = connection1.CreateCommand();
    OracleTransaction trans = connection1.BeginTransaction(IsolationLevel.ReadCommitted);
    command.Transaction = trans;
    try
    {
        command.CommandText = sql1; // insert into table1 in database1
        command.ExecuteNonQuery();
        command.CommandText = sql2; // insert into table2 in database2
        // Here is the problem I am looking for a solution to:
        // how can I execute sql2 against database 2 inside this transaction?
        trans.Commit();
    }
    catch (Exception ex)
    {
        trans.Rollback();
    }
}
|
Create a new command on connection2 and execute your sql2 with it. Bear in mind that an OracleTransaction is tied to the connection that began it, so connection2 needs its own transaction: commit both transactions only after both commands have succeeded (true atomicity across two databases would need a distributed transaction or a database link).
|
Just a thought:
If the table structures are the same, have you thought about using database triggers?
Every day, thousands of innocent plants are killed by vegetarians.
Help end the violence EAT BACON
|
Hi,
First, I didn't see a MongoDB forum here, so I think this is the closest place to ask a DB question.
OK, so I'm using MongoDB in my web application and the C# driver to interact with it.
I ran across a problem that a lot of people seem to have encountered: how can I deserialize a JSON string that I get from Mongo when it has a field called DateAndTime whose value looks much like this one:
ISODate("2014-04-19T06:01:17.171Z")
I get an exception saying that this is (and of course it is) an invalid JSON type.
I came across a solution on Stack Overflow which said to do this:
string json;
var v = Newtonsoft.Json.JsonConvert.DeserializeObjectAsync<T>(json);
but this didn't help either.
How can I get around this issue or solve it?
Or perhaps is there a way to configure how mongo will save dates?
|
We have a requirement to do some serious record-by-record processing, and I have had some suggestions to do the processing in C# rather than use a CTE or cursor. Personally I prefer T-SQL, but I suspect that is because it is what I am used to.
What has been suggested is that we use Dapper[^] and loops or LINQ to process the records, then update back to the database. I would prefer using bulk copy to write the records back, but that requires a DataTable, and converting from an ORM List<> to a DataTable for multi-million-row recordsets may be detrimental to performance.
Both processes would run on the same server, or at least on servers of the same spec.
Any opinions on which would be the fastest method of processing?
[edit] link recovery [\edit]
Never underestimate the power of human stupidity
RAH
modified 16-Feb-14 7:09am.
|