I'm doing some homework (well, kind of homework; it's not graded or handed in, but it's recommended that you do it) and I know how to do all the basic stuff, but I want to put it all in a transaction that can roll back if something goes wrong. I know some of the keywords (BEGIN TRANSACTION, ROLLBACK, COMMIT), but I just don't know how to put them together. Any help would be appreciated.
Matt Newman
...armed with what? spitballs!? - Zell Miller
Well, it depends on what you are doing, but the basics are the same regardless:
BEGIN TRANSACTION
-- Do stuff
IF @SOME_SUCCESS_CONDITION = 1
    COMMIT TRANSACTION
ELSE
    ROLLBACK TRANSACTION
Obviously you can check for failure at each point where you do something with the database, and roll back and exit there if it didn't work. The idea is that either everything works, or it appears that nothing ever happened.
Do you want to know more?
Not getting the response you want from a question asked in an online forum: How to Ask Questions the Smart Way!
OK, basically I'm just doing INSERT commands. Do I have to check whether each insert succeeded, or will it automatically roll back if one fails? I guess I'm just not sure how to get the success condition.
Matt Newman
...armed with what? spitballs!? - Zell Miller
If it is a single statement then you don't need to wrap it in a transaction; SQL Server treats each individual statement as a transaction on its own (hence Transact-SQL).
If you have a stored procedure with more than one statement, then you can check that each statement succeeded, e.g.
UPDATE authors SET au_id = '172 32 1176'
WHERE au_id = '172-32-1176'
IF @@ERROR = 547
    PRINT 'A check constraint violation occurred'
Remember that @@ERROR only reflects the last statement executed, so if you want to check its value more than once (say, to perform different actions depending on what the error is), you need to copy it into a variable in the very next statement:
DECLARE @errorCode int
BEGIN TRANSACTION
UPDATE authors SET au_id = '172 32 1176'
WHERE au_id = '172-32-1176'
SELECT @errorCode = @@ERROR
IF @errorCode = 0
    COMMIT TRANSACTION
ELSE
BEGIN
    IF @errorCode = 547
        PRINT 'A check constraint violation occurred'
    ROLLBACK TRANSACTION
END
Does this help?
Do you want to know more?
Not getting the response you want from a question asked in an online forum: How to Ask Questions the Smart Way!
Colin Angus Mackay wrote:
Does this help?
Yeah, I think so. Basically, what I am working on now requires multiple inserts, and if one fails then they all have to fail.
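That all-or-nothing shape is exactly what a transaction gives you. A minimal sketch, with made-up table and column names, in the SQL Server 2000 style using @@ERROR and GOTO:

```sql
BEGIN TRANSACTION

INSERT INTO Orders (OrderId, CustomerId) VALUES (1, 42)
IF @@ERROR <> 0 GOTO OnError

INSERT INTO OrderLines (OrderId, Qty) VALUES (1, 3)
IF @@ERROR <> 0 GOTO OnError

COMMIT TRANSACTION
RETURN

OnError:
ROLLBACK TRANSACTION
```

If any insert fails, control jumps to the label and the rollback undoes every insert that came before it.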
Matt Newman
...armed with what? spitballs!? - Zell Miller
Hi people.
I am using the ADO 2.5 type library in C++Builder 5. I have a memory leak and need to find a solution.
Basically I open a connection in a thread, keep it open, and make regular queries to the connection to get data. After I am done with each record set, I close the record set. Even if I end the thread and restart another, memory is not freed and keeps climbing. I will include relevant code here:
I have confirmed that the queries are one of the sources of the leak, because if I increase the frequency of the queries, the memory leak accelerates accordingly.
CoInitialize(NULL);
cn = CoConnection::Create();
rs = CoRecordset::Create();
cm = CoCommand::Create();
AnsiString ConnString = TagService->GetSetupConnectString();
// setup connection parameters
cn->ConnectionString = WideString(ConnString.c_str()).Detach();
cn->CommandTimeout = 15;
cn->CursorLocation = Adodb_tlb::adUseClient;
// open connection
cn->Open(NULL,NULL,NULL,adConnectUnspecified);
while (....)
{
    cn->Errors->Clear();
    rs->Open(WideString(Query.c_str()).Detach(), (IDispatch*)cn, Adodb_tlb::adOpenStatic, Adodb_tlb::adLockOptimistic, 0);
    // do stuff with the data here
    dummy2Var = 0;
    p1 = rs->Fields->get_Item(dummy2Var);
    PrinterName = (AnsiString)p1->get_Value();
    rs->Close();
    Sleep(....);
}
cn->Close();
cn->Release();
CoUninitialize();
Any help would be very appreciated
Best wishes to all!
André
MC++
OK. You could repost it in the MC++ forum. Sometimes that may help you out, even though it is an ADO.NET discussion.
Sreejith Nair
[ My Articles ]
How can I run a SQL Server Agent job from VB.NET and get the execution status (i.e., success or failure)? Using VB 6.0 I was able to declare the connection WithEvents and then use the _ExecuteComplete event of the connection to determine when and how the SQL Server Agent job completed. The same method doesn't seem to work in VB.NET.
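If the event route stays broken, one server-side fallback (a sketch; the job name here is made up) is to start the job and then poll its status using the msdb system procedures:

```sql
-- Kick the job off (sp_start_job is asynchronous and returns immediately)
EXEC msdb.dbo.sp_start_job @job_name = 'MyNightlyJob'

-- Poll from VB.NET on a timer: the result set includes the
-- current_execution_status and last_run_outcome columns
EXEC msdb.dbo.sp_help_job @job_name = 'MyNightlyJob', @job_aspect = 'JOB'
```

This is polling rather than an event, so it is less elegant than _ExecuteComplete, but it does not depend on the connection raising events.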
Any help would be appreciated.
Thanks,
Emilio Crespo
ecrespo@pollotropical.com
I'll try to explain this.
I have a Windows form with a datagrid.
The datagrid binding has been set as MyDataGrid.SetDataBinding(MyDataView, "")
The dataview has also a RowFilter to show only some records.
Now, the problem is that I need to do some calculations whenever something is changed in that table.
For that I use the RowChanged and RowDeleted events.
Now, I have no problems with rows deleted, or rows that are modified, but I do with rows added.
When I add a new row, the RowChanged event does fire, and checking the row property of the event argument I can verify all the data in that row is correct.
However, at that point, none of the dataset methods seem to "see" that row.
Very strange, because the row is being shown on the datagrid.
Stranger still, if I make any other change, the row becomes "available".
It was driving me crazy, because a Select method executed on the datatable would not return the newly added row, but it would if another change was made subsequently.
Just to be sure, in the RowChanged event I saved the dataset at that stage using the WriteXml method; when I looked at the output, it did not include the new row.
But if I check the Count property of the datatable, it has increased by one.
So it seems that by the time the RowChanged event is fired, the row is not available to the dataset methods, even though it IS available to the dataview.
To make things even weirder, I have some code that checks for data integrity in the RowChanged event; if the data is wrong, it displays a message box and then calls the RejectChanges() method of the row.
That in turn triggers the event again, and it exits the procedure because the row is null.
Now if that happens (meaning, if I enter wrong data to trigger the rollback), all subsequent insertions work fine.
Even weirder: if I detach the event handler before calling RejectChanges() (so the RowChanged event won't be triggered by the rollback), the problem persists, meaning new rows are still not "seen".
This does not make any sense to me, and it only happens if I do my calculations from the RowChanged event. If I add a button and use the button's Click event, everything is fine. So it looks like, when the RowChanged event is fired, the new row is in a sort of intermediate state (not the Added state, but some unforeseen state).
This looks like a bug, but I really have no clue what's going on.
I hope this is understandable, it's so weird that I can't make any sense of it.
Hello All,
I'll start by professing my ignorance on this subject! I am calling a stored proc in a SQL Server 2000 db. The stored proc is as follows:
ALTER Procedure dbo.glpImportFromProcurement
@Account varchar(8000),
@Amount varchar(8000),
@JVType Char(3),
@SourceDocumentString varchar(80),
@SourceDocumentID Int,
@Vendor int
AS
/* Purpose: This imports all pre-encumbrance, encumbrance, expense and commitment transactions into elite from procurement
Input: @Account - the account number(s) (comma separated) that the transaction hits
@Amount - the amount(s) (comma separated) that match the account numbers. The accounts must be in double quotes.
@JVType - "pre" (pre-encumbrance), "enc" (encumbrance), "exp" (expense). When an expense, it will also create the commitment
@SourceDocumentString - either the requisition #, PO # or invoice #. This is for display purposes, so the client can link it to procurement manually
@SourceDocumentID - either the requisition ID, PO ID or receipt ID. This is so we know where the transaction came from on a system level
@Vendor - the vendor ID for the transaction. We need this to finish the other side of the journal and for the commitment.
Output: Creates the appropriate journal and, if needed (expense items), a commitment
Return code
Zero = success
< Zero nothing to process
> Zero (failed) and raises
Last Modified:
*/
return 0
(This is just a shell proc that will be filled in later)
So... when I execute this through my command object, it seems to execute fine, but I never get the zero the proc is attempting to return, regardless of which execution method I try. Is there something I'm missing here? I had thought that it was necessary to declare one of the parameters as output in order to return a value.
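For what it's worth, the return code does come back without any OUTPUT parameter when the proc is executed from T-SQL; the argument values below are made up just to exercise the shell:

```sql
DECLARE @rc int
EXEC @rc = dbo.glpImportFromProcurement
    @Account = '"12345678"',
    @Amount = '100.00',
    @JVType = 'exp',
    @SourceDocumentString = 'PO-1001',
    @SourceDocumentID = 1,
    @Vendor = 7
SELECT @rc AS ReturnCode
```

From ADO.NET, the usual route is to add a parameter whose Direction is set to ReturnValue (not Output); it is filled in after the command executes.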
Thanks,
Ryan
Hello,
I have a question regarding stored procedure design for optimal performance. Let's say we have a table Products that consists of three columns: Name, Status and RegistrationTime. All columns are indexed, and users should be able to look up data by any of them. We have two main options when designing stored procedures for data retrieval:
1. Design a separate stored procedure for each search criterion: LookupProductsByName, LookupProductsByStatus, LookupProductsByTime.
2. Write one generic stored procedure that fits any search criteria:
CREATE PROCEDURE GetProducts (
    @Name varchar(20) = NULL,
    @Status int = NULL,
    @FromTime datetime = NULL,
    @ToTime datetime = NULL)
AS BEGIN
    SELECT
        [Name],
        [Status],
        [RegistrationTime]
    FROM [Products]
    WHERE [Name] = CASE
            WHEN @Name IS NOT NULL THEN @Name
            ELSE [Name]
        END
        AND [Status] = CASE
            WHEN @Status IS NOT NULL THEN @Status
            ELSE [Status]
        END
        AND [RegistrationTime] >= CASE
            WHEN @FromTime IS NOT NULL THEN @FromTime
            ELSE [RegistrationTime]
        END
        AND [RegistrationTime] <= CASE
            WHEN @ToTime IS NOT NULL THEN @ToTime
            ELSE [RegistrationTime]
        END
    ORDER BY [RegistrationTime]
END;
The second option is very attractive, because such code is obviously easier to maintain. However, I am a little concerned about the performance of such a stored procedure. It is not possible to foresee which index should be used; the index can only be selected during each procedure execution, because the search criteria can include Name, Status or RegistrationTime. Will this make the SP inefficient? Or is the performance difference in such a case small (if any), so that we should choose the second option because of its significant code reduction?
Thanks in advance
Вагиф Абилов
MCP (Visual C++)
Oslo, Norway
Hex is for sissies. Real men use binary. And the most hardcore types use only zeros - uppercase zeros and lowercase zeros.
Tomasz Sowinski
The second option is not only less efficient than the first; it is less efficient than not using a stored procedure at all. It is also harder to maintain and optimize, because of the complexity.
In my experience, stored procedures are a poor fit for user searches. Users demand complex, dynamic searches, and stored procedures are unable to handle those efficiently. In addition, such stored procedures become hellishly long and complex.
my blog
But you wouldn't defend using SQL statements directly, would you? While I see your point regarding performance, I can't understand the reasons for not using stored procedures, or at least user-defined functions. Yes, I agree that for Web searches it is hard to write tens of SPs just to cover all possible cases, but at least a UDF makes it possible to keep the code inside the database and not expose SQL directly to clients.
BTW, thanks for the opinion regarding efficiency. It looks like too much generalisation will harm performance.
Вагиф Абилов
MCP (Visual C++)
Oslo, Norway
Hex is for sissies. Real men use binary. And the most hardcore types use only zeros - uppercase zeros and lowercase zeros.
Tomasz Sowinski
Well, since you asked...
I would (and do) use SQL statements directly, especially in web-based applications. Stored procedures offer some benefits, but they also have some drawbacks, such as portability and maintainability. Personally, I almost never use them, but I accept that many people do.
The main benefit of stored procedures is security: you can protect your database against ad hoc user queries. In a web application this benefit does not exist, because the user never has direct access anyway.
In other situations there are alternatives, such as writing a network service component (a web service or .NET Remoting) as an intermediate layer. The business value of such a web service is far greater than that of stored procedures.
my blog
I see your point, although in our team we try to avoid direct use of SQL. We have multiple tiers, and letting Web developers write SQL directly would make the system uncontrollable.
BTW, I just received a reference to this outstanding article: Dynamic Search Conditions in T-SQL[^]. Exactly on this subject.
Вагиф Абилов
MCP (Visual C++)
Oslo, Norway
Hex is for sissies. Real men use binary. And the most hardcore types use only zeros - uppercase zeros and lowercase zeros.
Tomasz Sowinski
Steven Campbell wrote:
The main benefit of stored procedures is one of security -- you can protect your database against adhoc user queries. In a web application, this benefit does not exist, because the user never has direct access anyway.
It is always wise to reduce the attack surface of your application. Even if you think nobody can get that far in, it is still wise to protect all layers: there will always be some bit of code, written by a recent graduate or by someone working late on a Friday night, that is not as well thought out and could let someone in.
Stored procedures also have the benefit of being precompiled, and therefore pre-optimised. SQL Server 2000 has gone some way towards optimising directly called SQL statements by caching the compiled versions, so that if the same query is run again it doesn't need to be recompiled. But SPs are always precompiled.
I would suggest that the portability argument for direct SQL over stored procedures is a false one. Having worked on many database systems over the years, I have found that even directly called SQL statements need to change when moving from one database system to another, so calling SQL directly still does not guarantee portability.
I would also suggest that the argument that directly called SQL statements are easier to maintain is incorrect. If you create stored procedures with a defined interface and the internal structure of the data model changes, all you need to do is change the stored procedure, and everything that calls it will be none the wiser about the changes further down. It has all the benefits of an extra layer of abstraction. Calling SQL directly on tables means that if a table changes, then all the calls to it have to be updated, and that can be a lot of work (especially if the database is used by many applications).
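To illustrate that abstraction with a made-up example: suppose V2.0 splits a FullName column into FirstName and LastName. Only the procedure body changes; every caller still sees the same interface:

```sql
-- V1.0 (hypothetical table and procedure)
CREATE PROCEDURE GetAuthorName @AuthorId int
AS
    SELECT [FullName] FROM [Authors] WHERE [AuthorId] = @AuthorId
GO

-- V2.0: the schema changed underneath, but the interface did not
ALTER PROCEDURE GetAuthorName @AuthorId int
AS
    SELECT [FirstName] + ' ' + [LastName] AS [FullName]
    FROM [Authors] WHERE [AuthorId] = @AuthorId
GO
```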
Do you want to know more?
Not getting the response you want from a question asked in an online forum: How to Ask Questions the Smart Way!
I cannot echo Colin's last point strongly enough. Modularity is A GOOD THING. As Colin suggests, using sprocs frees the rest of your application from needing any knowledge of the underlying table structure. This will matter when you have to make a schema change for V2.0 and spend an eternity tracking down broken selects/inserts/deletes all over your code. If you use sprocs, all you need to do is ensure that they still behave the same, and you're done.
Web apps are actually quite risky, especially if you use query-string data to build your SQL statements; there are ways to exploit this (SQL injection) to run malicious code on your SQL box. Passing the query-string data as parameters to a sproc alleviates much of this problem. (My experience here is a little limited, so if I am way off, someone please contradict me.)
Bill
Reducing the attack surface of your application is a good thing, but there are many other things I would recommend before getting to stored procedures:
* do not use sa to login
* encrypt your database connection string
* use parameterized queries
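On the third point, the same idea is available server-side through sp_executesql; this is a sketch against the pubs sample database, where the parameter is bound rather than concatenated into the SQL string:

```sql
EXEC sp_executesql
    N'SELECT au_lname FROM authors WHERE au_id = @id',
    N'@id varchar(11)',
    @id = '172-32-1176'
```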
The efficiency/speed argument is bogus, especially considering Murphy's law. In the time it takes to finish the average six-month project, processors will have improved enough to make the small "precompile" benefit meaningless. The same could be said of many performance techniques. Just let it go, and concentrate on creating a well-structured application that is easy to maintain. Performance is a concern, but one more likely to be solved by a well-placed index.
I'll admit that stored procedures are a useful abstraction for programmers who do not break down their applications into multiple well-defined areas of responsibility. However, if you are in the habit of creating multi-tiered, well-structured applications:
* SQL is in one place, or sometimes there is no SQL at all (O/R Mapping).
* Code is easily ported to different database platforms (again, the SQL is in one component that can be swapped out or regenerated)
* There is a layer of abstraction between the business code and the database already
I am content to let people use stored procedures if they want to. Most applications have bigger design problems than the decision whether to use stored procedures or not.
my blog
Steven Campbell wrote:
* do not use sa to login
* encrypt your database connection string
* use parameterized queries
The first and third of these I actually discussed in a blog entry of mine about a week ago, so I am well aware of them.
Talking about Murphy's Law: encrypting a connection string can be more trouble than it is worth. A project I saw recently had encrypted strings, but eventually comments started to appear next to them showing the decrypted form, to make them easier to work with. That defeats the purpose; it is better to design a security system that people will not be inclined to break in order to do their everyday jobs. The other problem with encrypted connection strings is that you have to store the decryption key somewhere. Better to use a trusted connection: then it doesn't matter whether you know my connection string, as you would still need the name and password of a trusted account and access through a trusted domain.
Steven Campbell wrote:
I'll admit that stored procedures are a useful abstraction for programmers that do not break down their applications into multiple well defined areas of responsibility
Even if the application is broken into well-defined areas, I still think it is better to use stored procedures, because they allow extra boundary checking. There are instances where a value on a record must be cross-checked against other data to ensure validity. Constraints on tables can only go so far; for more complex checks, SPs are better.
Again, security comes into play here, as more security checks can be performed; even if they should have been performed elsewhere, you cannot guarantee that another part of the application has not already been compromised.
Do you want to know more?
Not getting the response you want from a question asked in an online forum: How to Ask Questions the Smart Way!
Steven Campbell wrote:
* do not use sa to login
I strongly agree. That's one of the reasons I like sprocs: I can create a user that has only execute permissions on the sprocs, with no direct select, insert or update (much less schema-change) permissions.
Steven Campbell wrote:
* encrypt your database connection string
One point to make here: no matter how you encrypt or protect your connection string, your application has to be able to decrypt it, so it is inherently vulnerable. It has to be, or your application could not read it. Certainly it adds another layer of protection; I am just not sure it's worth the effort.
Just my thoughts...I certainly cannot quantify any of this.
Bill
btw: I have enjoyed reading this thread. Thanks!
I won't go into the optimization discussion, but another way to write the same query without the CASE expressions in the WHERE clause is as follows:
CREATE PROCEDURE GetProducts (
    @Name varchar(20) = NULL,
    @Status int = NULL,
    @FromTime datetime = NULL,
    @ToTime datetime = NULL)
AS BEGIN
    SELECT
        [Name],
        [Status],
        [RegistrationTime]
    FROM [Products]
    WHERE [Name] = IsNull(@Name, [Name])
        AND [Status] = IsNull(@Status, [Status])
        AND [RegistrationTime] >= IsNull(@FromTime, [RegistrationTime])
        AND [RegistrationTime] <= IsNull(@ToTime, [RegistrationTime])
    ORDER BY [RegistrationTime]
END;
Hope this helps.
Jeremy Oldham
Thanks. This is of course preferable syntax. But I ran some tests and figured out that the optimization really sucks, so I will go back to specialized stored procedures.
Вагиф Абилов
MCP (Visual C++)
Oslo, Norway
Hex is for sissies. Real men use binary. And the most hardcore types use only zeros - uppercase zeros and lowercase zeros.
Tomasz Sowinski
Hi All,
Can anybody tell me what the equivalent of Oracle's "FOR UPDATE" statement is in Sybase?
Alternatively, it would be very helpful if you could suggest or comment on this:
I'm using VB.NET and Sybase. How do I handle row locking during updates?
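For Sybase (and Transact-SQL dialects generally), the closest equivalent I know of to Oracle's SELECT ... FOR UPDATE is the HOLDLOCK hint inside a transaction. This is a sketch with made-up table and column names, so check it against your ASE version's documentation:

```sql
BEGIN TRANSACTION

-- HOLDLOCK keeps the shared lock until the transaction ends, so the row
-- cannot be changed by anyone else between the read and the write
SELECT balance FROM accounts HOLDLOCK WHERE account_id = 42

UPDATE accounts SET balance = balance - 100 WHERE account_id = 42

COMMIT TRANSACTION
```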
Thanks in advance.
Vinay