|
The following is quoted from a book about ADO.NET. Does anyone understand it? Could you please explain it a little, or refer me to a link that explains it?
"The .Net framework provides client-side connection pooling for both data providers for connections in a single process, and it also contains provision for working with databases that use server-side connection pooling."
Thanks in advance.
|
|
|
|
|
Are the functions in ADO.NET asynchronous or synchronous?
For example, myConnection.Open, myCommand.ExecuteReader, and so on.
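For what it's worth, in ADO.NET 1.x these calls are synchronous: Open and ExecuteReader block the calling thread until they complete. A minimal sketch (the connection string and table name are made up) of keeping the caller responsive by doing the blocking work on a worker thread:

```csharp
using System.Data.SqlClient;
using System.Threading;

class QueryRunner
{
    static void Main()
    {
        // ExecuteReader/Open block the calling thread, so run them on a worker.
        Thread worker = new Thread(new ThreadStart(RunQuery));
        worker.Start();
        worker.Join();
    }

    static void RunQuery()
    {
        // Hypothetical connection string and table name.
        using (SqlConnection conn = new SqlConnection(
            "Server=(local);Database=Test;Integrated Security=SSPI"))
        {
            conn.Open();                        // blocks until connected
            SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM SomeTable", conn);
            object count = cmd.ExecuteScalar(); // blocks until the query returns
        }
    }
}
```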
|
|
|
|
|
SQL 2000 has a method in the query analyzer to debug a stored procedure. (Use object browser, select the stored proc, then right click and select debug).
This is really nifty, with the exception of passing dates to a datetime parameter. I cannot figure out the format for the date. String mm/dd/yy doesn't work, either with or without single quote.
I am stuck, any help would be greatly appreciated.
"It takes a minimum of redesign to turn a crucifix into a pogo stick"
|
|
|
|
|
yyyy-mm-dd works, IIRC. If all else fails, try converting to a number:
SELECT CAST( CAST( '2003-01-01' AS datetime ) AS int )
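To expand on that, a few format sketches for SQL Server 2000 (from memory, so check against your server). The separated yyyy-mm-dd form is still affected by the DATEFORMAT setting, but these two are unambiguous regardless of language settings:

```sql
-- ISO 8601 with the 'T' time separator is always interpreted the same way:
SELECT CAST('2003-01-01T00:00:00' AS datetime)
-- The unseparated yyyymmdd form (CONVERT style 112) is also safe:
SELECT CONVERT(datetime, '20030101', 112)
-- mm/dd/yy only works when DATEFORMAT is mdy (the us_english default):
SET DATEFORMAT mdy
SELECT CAST('01/01/03' AS datetime)
```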
|
|
|
|
|
Dave S wrote:
String mm/dd/yy doesn't work, either with or without single quote.
That depends entirely on the configuration of the database. There are always the date functions and/or the ISO format "yyyy-mm-dd".
--
I am of The Clan of Xymox. I wear a pink kilt!
|
|
|
|
|
Does anybody know of a good tutorial / reference for the expression syntax of DataTable.Select? I am trying to craft a simple query that traverses a child table; the MSDN documentation is shockingly bad.
Ryan.
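For anyone else searching: the expression syntax can reach a child table through the relation name with Child(RelationName).Column, and aggregates (Sum, Count, Max...) are required when a parent row can have many children. A sketch, assuming a made-up Orders/OrderItems pair linked by a relation named "OrderItems":

```csharp
using System.Data;

class SelectDemo
{
    static void Main()
    {
        DataSet ds = new DataSet();
        DataTable orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        DataTable items = ds.Tables.Add("OrderItems");
        items.Columns.Add("OrderID", typeof(int));
        items.Columns.Add("Qty", typeof(int));
        ds.Relations.Add("OrderItems",
            orders.Columns["OrderID"], items.Columns["OrderID"]);

        orders.Rows.Add(new object[] { 1 });
        items.Rows.Add(new object[] { 1, 5 });
        items.Rows.Add(new object[] { 1, 7 });

        // Select all orders whose child rows sum to more than 10 units.
        DataRow[] big = orders.Select("Sum(Child(OrderItems).Qty) > 10");
    }
}
```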
|
|
|
|
|
I am pretty new to C# and the .NET Framework. I am currently developing a C# application that will use both ADO.NET and ASP.NET. I will do my best to describe what my application needs to do, even though I am not sure of the best way to implement it. I have a web application that I would like to use as a client consuming a web service. This web service, which I am trying to create, will run SQL commands against an Oracle DB on our server, read the needed information, generate a Crystal Report, and make this report file accessible from the client side, so the user can choose which report they want to view and which parameters they want to pass. The main stipulation is:
1. My clients must not have to have the Oracle Client installed on their machines. This is why I think I need a remote web service that can use the client software on the server to access the Oracle DB, so that the client can communicate with the web service and get the information needed to display a Crystal Report based on data from the DB.
Can anyone give me any ideas on how to structure this project? What should the client do? The service? Please be as specific as possible, as I am somewhat of a newbie. Thanks for all of your help!!
Frank
|
|
|
|
|
From your description, it looks like you want to create a reporting application whose data comes from an Oracle DB. Is that right?
Well, you have many choices, but in my opinion you can create a web application that pulls the data from Oracle (there is ODP.NET; check it out on Google) and displays it in the Crystal Report Viewer for ASP.NET.
Of course, you can create your own SQL query before retrieving the data.
What is the benefit for your clients?
1. Zero installation: clients only need a web browser, not the Oracle Client.
2. Easy to manage: just update the server and, voila, all clients get the latest software.
Does that help? Just email me if you need an explanation of the technical details.
Cheers,
"Courage choose who will follow, Fate choose who will lead" - Lord Gunner, Septerra Core
|
|
|
|
|
Hi,
I have a program to test the limits of SQL Server. My idea is to create threads simulating many clients that connect to SQL Server and run a query over and over again, until one of them fails.
But I am not sure about the connection pooling. Should I set pooling='false'? Because if I use pooling, there will be a limit due to 'max pool size'. But if I turn off pooling, will connecting take too much time?
I still don't quite get connection pooling even after reading some articles, so could anyone please tell me something about that too?
Thanks in advance.
|
|
|
|
|
yyf wrote:
But if I turn off pooling, will it be taking too much time in connecting?
Connecting to a database is a costly operation. Pooling is much more effective as it "garbage collects" database connections.
First the pool is empty. 5 threads allocate 5 connections. The pool manager sets up 5 connections to the database, and returns handles of them to the client. Now there are 5 connections in the pool, but marked as "not available". The threads return the connections to the pool, leaving the pool with 5 connections, marked as "available". The next thread requesting a connection will get a connection instantly out of the pool. No connecting required.
The pool may also disconnect connections that have been idle for some period of time to reduce resource load.
If there are going to be a lot of connects/disconnects, then I would suggest using the pool, even if you're not running several threads, as it may speed things up considerably.
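To make that concrete for the SqlClient provider, the pooling behaviour is controlled from the connection string (keyword names are the standard ones; the server name here is made up):

```csharp
using System.Data.SqlClient;

class PoolingDemo
{
    static void Main()
    {
        // Pooling is on by default; Max Pool Size caps the connections
        // per distinct connection string (the default is 100).
        string pooled = "Server=(local);Database=Test;Integrated Security=SSPI;" +
                        "Max Pool Size=200";

        // Pooling=false forces a real connect/disconnect each time --
        // useful for a stress test, but every Open() pays the full cost.
        string unpooled = "Server=(local);Database=Test;Integrated Security=SSPI;" +
                          "Pooling=false";

        using (SqlConnection conn = new SqlConnection(unpooled))
        {
            conn.Open();   // full connection handshake every time
        }                  // Dispose really closes the connection
    }
}
```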
--
I am of The Clan of Xymox. I wear a pink kilt!
|
|
|
|
|
Hi,
Environment - VB.NET, ASP.NET, SQL Server 2000.
In SQL 2000, I am sending an XML, which carries data for two tables. Let's say, I am inserting half of the fields in TABLE1 and rest in TABLE2.
Specifically, I want to use transaction processing for inserting the rows into both tables. If insertion into one table fails, any inserted data should be rolled back, and the procedure should return a relevant error code.
Please advise, or send me any example links. Thanks
Pankaj
Follow your goals, Means will follow you ---Gandhi---
|
|
|
|
|
In the query, assuming use of SQLXML facilities:
BEGIN TRAN
INSERT INTO Table1
SELECT
FROM OPENXML( )
WITH ( )
IF @@ERROR <> 0
ROLLBACK TRAN
ELSE
BEGIN
INSERT INTO Table2
SELECT
FROM OPENXML( )
WITH ( )
IF @@ERROR <> 0
ROLLBACK TRAN
ELSE
COMMIT TRAN
END
Basic rule: if you want transactional behaviour, use a TRANSACTION. If your code is implemented as a number of SqlCommand s, it might look more like:
Dim connection As New SqlConnection( "connectionstring" )
Dim trans As SqlTransaction
Dim cmd1 As New SqlCommand
Dim cmd2 As New SqlCommand
' Set CommandText for cmd1, cmd2
connection.Open()
Try
trans = connection.BeginTransaction()
cmd1.Connection = connection
cmd1.Transaction = trans
cmd2.Connection = connection
cmd2.Transaction = trans
cmd1.ExecuteNonQuery()
cmd2.ExecuteNonQuery()
trans.Commit()
Catch ex As SqlException
If Not trans Is Nothing Then trans.Rollback()
Finally
connection.Close()
End Try
|
|
|
|
|
Thanks for the response.
I am inserting data into two separate tables, getting the data from one XML document. So I want to make sure the data either goes into both tables or neither. For that, I think transactions are the best choice, as you explained in the first example. Right?
Follow your goals, Means will follow you ---Gandhi---
|
|
|
|
|
Hi All!
Can anyone help me with this? It's driving me crazy!
I've reduced a problem application down to the following lines of code.
void TestFunc()
{
::CoInitialize(NULL);
char strTEST[1024];
_ConnectionPtr spConn(__uuidof(Connection));
spConn->Open( "Provider=SQLOLEDB.1;Persist Security Info=False;User ID=sa;Password=;Initial Catalog=TEST_DP;Data Source=(local)", OLESTR(""), OLESTR(""), -1 );
for ( int i = 0; i < 50000; i++ ){
strcpy( strTEST, "SELECT COUNT(*) FROM KGTEST WHERE COL1 = '" );
itoa( i, &strTEST[strlen(strTEST)], 10 );
strcat( strTEST, "'");
_RecordsetPtr spRS(__uuidof(Recordset));
spRS->CursorLocation = adUseClient;
spRS->Open( strTEST, spConn.GetInterfacePtr(), adOpenForwardOnly, adLockBatchOptimistic, -1);
spRS->Close();
spRS = NULL;
}
spConn->Close();
spConn = NULL;
::CoUninitialize();
}
This leaks memory like there's no tomorrow. It appears to be related to a known Microsoft problem (Q312575) and the way that memory is cached in MDAC. However, this was reported as fixed and as not occurring with MDAC 2.7 (according to Microsoft), and this machine is running 2.8 (I've re-installed just in case!).
The critical bit seems to be the SQL statement: if it is a constant (for example, SELECT COUNT(*) FROM KGTEST WHERE COL1 = '1' ), everything is OK; however, if the statement is constantly changing, then the memory just leaks!
Can anyone shed any light on this?
|
|
|
|
|
I can't reproduce it on this system (Win2k SP4, SQL Server 2000 SP3, MDAC 2.7 SP1).
You should be able to hoist creation of the Recordset outside the loop.
As a matter of style, I would recommend using a Command object with a Parameters collection rather than concatenating strings.
If you still think it's leaking, take it up with MS.
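To illustrate the Command-with-Parameters suggestion against the poster's KGTEST table (a sketch; error handling and the usual `#import "msado15.dll"` setup are omitted). Because the SQL text never changes, the provider sees one statement instead of 50,000 distinct ones:

```cpp
// Assumes the same imported ADO smart-pointer types as the original code.
_CommandPtr spCmd(__uuidof(Command));
spCmd->ActiveConnection = spConn;
// Constant command text; only the parameter value varies per iteration.
spCmd->CommandText = "SELECT COUNT(*) FROM KGTEST WHERE COL1 = ?";
_ParameterPtr spParam = spCmd->CreateParameter(
    _bstr_t("col1"), adVarChar, adParamInput, 10, vtMissing);
spCmd->Parameters->Append(spParam);

for (int i = 0; i < 50000; i++) {
    char buf[16];
    itoa(i, buf, 10);
    spParam->Value = _variant_t(buf);
    _RecordsetPtr spRS = spCmd->Execute(NULL, NULL, adCmdText);
    spRS->Close();
}
```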
|
|
|
|
|
Thanks for checking that. I'm using WinXP SP1 (+ hot fixes), SQL Server 2000 SP3, MDAC 2.8, so it could be MDAC. I'm going to try to recreate your configuration on a separate machine to try to prove it!
Moving the Recordset outside the loop does not solve it, but moving the connection inside the loop nearly does (though it obviously slows things down dramatically).
The concatenating came about because I was slowly reducing my program to the simplest form that still demonstrated the problem (I even changed from CString to char[] in case they were leaking!).
Regards,
Kevin.
|
|
|
|
|
Kevin,
Please! If you find an answer to this forward it to me.
Last time I was looking into this it wasn't listed as a
known problem. If you move the recordset creation outside
of the loop, as suggested by Mike Dimmick:
_RecordsetPtr spRS(__uuidof(Recordset));
spRS->CursorLocation = adUseClient;
for(...) {
spRS->Open(...);
spRS->Close();
}
spRS = NULL;
Does the situation change any?
Thanx for any info...
>>>-----> MikeO
|
|
|
|
|
Hi Mike,
I've spent hours looking at this and can get lots of different results.
The best solution (although not practical for performance reasons) is to move the connection and Recordset inside the loop; this seems to leak less!
My understanding of MDAC is that it buffers memory allocations, and the default setting is 500 queries * 1 MB. This is why, running the test program, the memory allocated seemed to drop rapidly at first and then level off (the problem is that I never seem to get it all back!). With the connection and Recordset inside the loop, a much smaller leak is seen. The trick is the changing WHERE clause: if it is static, there is no problem (presumably because no mallocs/frees are occurring).
According to a Microsoft article, you can adjust the 500-cached-queries setting and the amount of memory allocated. I've tried setting these but cannot get any different results.
I'll let you know if I discover anything further.
Regards,
Kevin.
|
|
|
|
|
What is a db scattered file read? How do I rectify it?
venkat
Cheers,
Venkatraman Kalyanam
Bangalore - India
"Being Excellent is not a skill, it is an attitude"
Reality is an illusion caused by caffeine deficiency (a Microsoft Research scholar)
|
|
|
|
|
Hi all,
Is there any way to write an SQL SELECT statement that will tell me whether a column is NULL or not, without having to download the column's content?
I'm working on a content-management system that stores HTML pages as text records in a MS SQL Server table. Some pages have a 'default' page which is displayed by setting the text column to NULL. For the web interface I'd like a way to summarize the state of the content (e.g. whether it has been supplied or not), but it seems ridiculous to download the entire HTML content of a section just to see if it's NULL or not.
Oh, and I can't use Stored Procedures for reasons I won't go into
TIA,
Pete
|
|
|
|
|
Well, you could always do something like this:
SELECT id
FROM tblContent
WHERE ...
AND Content IS NULL
Modify it as you want; the most important thing is that the condition "column IS NULL" can be used to find out whether a column is null without having to download it.
Hope it helps,
Edbert
|
|
|
|
|
Hi Edbert, thanks for the reply.
I'd thought of that, but I'm not sure how well it works if I'm displaying info about /every/ row in the table. I'd have to do a SELECT for each row. Plus, each row has about 5 different text fields (each 'page' of content has 5 'sub-pages'). So that would mean 5*num_rows SELECTs!
I guess a similar approach would be to initially do the SELECT you suggest (for each of the 5 content columns), and then programmatically compare each id to the results. Hmmm, maybe that's what you meant in the first place.
It's annoying that there doesn't seem to be a 'cleaner' way to do it, but then I guess SQL isn't really the most powerful thing for this kind of problem...
Thanks for your help,
Pete
|
|
|
|
|
If you post the database structure and how you want your query to be I might be able to help you build the query.
|
|
|
|
|
You could use ISNULL( column, valueIfNull ), although you will still end up with the data if the column is not null. Your best bet would be to use a CASE expression, e.g. CASE WHEN col IS NULL THEN 1 ELSE 0 END AS IsColNull. Please note this is not checked for syntax.
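Combining that with the five text columns the original poster mentioned, one round trip can summarize every row without shipping any of the HTML to the client (table and column names here are made up):

```sql
-- One row per page, one 0/1 flag per sub-page; only ints cross the wire.
SELECT id,
       CASE WHEN SubPage1 IS NULL THEN 0 ELSE 1 END AS HasSubPage1,
       CASE WHEN SubPage2 IS NULL THEN 0 ELSE 1 END AS HasSubPage2,
       CASE WHEN SubPage3 IS NULL THEN 0 ELSE 1 END AS HasSubPage3,
       CASE WHEN SubPage4 IS NULL THEN 0 ELSE 1 END AS HasSubPage4,
       CASE WHEN SubPage5 IS NULL THEN 0 ELSE 1 END AS HasSubPage5
FROM tblContent
```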
|
|
|
|
|
Hi guys,
I ended up re-normalising the database schema, a side-effect of which was that my NULL-checking problem is no longer an issue. Which is nice. I just wanted to say thanks for your help.
Cheers,
Pete
|
|
|
|
|