Can we use arrays in SQL Server? If yes, can anybody tell me how to use them?
Is it possible to call a stored procedure recursively?
Any answer to the above questions will be highly appreciated.
|
Can we use arrays in SQL Server?
Yes, but in an indirect way. Create a temporary table like this:
CREATE TABLE #temp(Field1 INT PRIMARY KEY)
(put the above code in a stored proc or trigger, etc.)
Note the # before the table name: #temp makes the temporary table available only to the current connection; ## would make it available to all connections.
To free this table use:
DROP TABLE #temp
Is it possible to call a stored procedure recursively?
Yes, but there is a limited number of nesting levels (31, I think).
Hope this helps.
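To make the "array" idea concrete, here is a minimal sketch of filling and reading the temp table (the table and column names are just examples):

```sql
-- create the "array" for the current connection
CREATE TABLE #temp (Field1 INT PRIMARY KEY)

-- "fill" it
INSERT INTO #temp (Field1) VALUES (10)
INSERT INTO #temp (Field1) VALUES (20)
INSERT INTO #temp (Field1) VALUES (30)

-- "read" it back (or join it against your real tables)
SELECT Field1 FROM #temp ORDER BY Field1

-- free it when done
DROP TABLE #temp
```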
|
Thanks for your valuable suggestions.
Can I have a sample stored procedure that is called recursively?
|
Simply call the same stored proc again in its own body, like this:
CREATE PROCEDURE MyProc
AS
    -- your code here
    IF @@NESTLEVEL < 32      -- guard so you never exceed the nesting limit
        EXEC MyProc          -- the recursive call
GO
The nesting limit is 32 levels (this time I'm sure; I got it from MSDN).
You can test for the current nesting level using @@NESTLEVEL, like this:
SELECT @@NESTLEVEL AS 'Outer Level'
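As a complete, minimal sketch (the procedure name and logic are made up for illustration), a countdown procedure with a proper base case might look like this:

```sql
CREATE PROCEDURE CountDown
    @n INT
AS
    PRINT @n
    IF @n > 0                   -- base case: stop recursing at 0
    BEGIN
        DECLARE @next INT
        SET @next = @n - 1      -- EXEC cannot take an expression as an argument
        EXEC CountDown @next    -- the recursive call
    END
GO

-- usage: prints 5, 4, 3, 2, 1, 0
EXEC CountDown 5
```

Without a base case like the IF above, the call chain just runs until it hits the 32-level nesting limit and fails.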
|
What is the best data access method that can be used in a large-scale (.NET/C#) enterprise application?
For data retrieval:
For data insert & update:
Gaurika Wijeratne. // www.gaurika.com
|
For data retrieval:
It depends! If you want to populate a listbox, for example, use a DataReader (it's fast, read-only, forward-only).
For data insert & update:
SqlCommands are good enough.
|
I made a Windows service with ATL, and when I try to open a database in the service's worker thread (I create a new thread), it doesn't work. When I start the service in the SCM, an error appears: "debug error". I don't know why.
Please help me, and if you have a sample, please share it.
My code is below:
// Neither of these two Open calls works:
// if( !m_db.Open( _T("SmsSendBuff"), FALSE, TRUE, _T("ODBC;UID=sa;PWD=") ) )
// if( !m_db.OpenEx( "DSN=SmsSendBuff;UID=sa;PWD=", CDatabase::noOdbcDialog ) )
_ConnectionPtr Conn1("ADODB.Connection") ;
HRESULT hr=Conn1.CreateInstance( __uuidof( Connection ) );
_bstr_t bsDSN = (L"SmsSendBuff");
_bstr_t bsEmpty = L"";
Conn1->ConnectionString = bsDSN;
//hr = Conn1->Open("dsn=SmsSendBuff;uid=sa;pwd=;","","",0);
AfxMessageBox("open...db");
hr=Conn1->Open(bsDSN,_bstr_t("sa"),_bstr_t(""),0);/*bsEmpty, bsEmpty, bsEmpty, -1 );*/
my email and msn: diyzhaodiyzhao@hotmail.com
|
I want to know how I can create/design a DB that represents information in a hierarchical or tree style, where the tree can have many levels.
|
Here's an example of a table that mimics a directory structure:
create table directories(directory_id int primary key, directory_name varchar(255), parent_directory_id int)
Then add a foreign key constraint on parent_directory_id referencing directory_id. This makes it a "self-referencing" table: any row with a NULL parent_directory_id is a root node, and all other rows are sub-nodes of other rows.
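For completeness, the constraint and a couple of typical queries might look like this (an untested sketch using the table above):

```sql
-- make the table self-referencing
ALTER TABLE directories
    ADD CONSTRAINT fk_parent_directory
    FOREIGN KEY (parent_directory_id) REFERENCES directories (directory_id)

-- find the root nodes
SELECT directory_id, directory_name
FROM directories
WHERE parent_directory_id IS NULL

-- find the direct children of directory 1
SELECT directory_id, directory_name
FROM directories
WHERE parent_directory_id = 1
```

Walking the whole tree to arbitrary depth then means repeating the children query level by level (e.g. in a loop or a recursive stored procedure).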
|
In the .NET documentation, Microsoft recommends using the SQL Server .NET data provider for accessing MSSQL 7 and above.
Will anyone please tell me the advantages of using the .NET provider over OLE DB?
Uday
|
Uday Patil wrote:
In the .NET documentation, Microsoft recommends using the SQL Server .NET data provider for accessing MSSQL 7 and above.
I believe the main differences are in optimization for speed.
-Nick Parker
|
Hi Uday,
I think using SqlClient to access SQL Server 2000 in ADO.NET gives good speed enhancements with respect to making and breaking connections, query/update speeds, and a lot more.
DataSet also seems to have really good methods for updating directly to the db. I haven't fully explored this, since I am currently working with OleDb and an IBM DB2 database.
Did this answer your query?
Deepak Kumar Vasudevan
http://deepak.portland.co.uk/
|
Thank you Deepak and Nick,
I have gone through many documents related to this and found some good things about SqlClient, but I don't think they are sufficient to justify changing my application over to SqlClient.
To be sure, I am conducting a benchmark test. I will have the results in my hands within a day or two, and I will definitely update you on them.
Thanks and regards
Uday
|
Have any of you figured out a way to export and import text blobs with content greater than 64512 bytes? I have hunted and hunted through Google and the MSDN library, but nothing indicates a way to tell BCP to export more than the first 64512 bytes of the text on a row.
It would also help if there were a way to set the default TEXTSIZE for all new connections, but I haven't found a way to do that either.
Any ideas? (If you have some nice little utility that can generate a series of INSERT statements to copy the content of the table, I could get that to work too. In fact, I'm starting to look for one, but hoping that I don't have to go that route.)
John
|
Hello:
I return a DataSet from a Web Service's Web Method, but I don't want to use the DataSet's default XML schema; I want to use a custom schema file for the returned DataSet.
I found the DataSet.InferXmlSchema(ByVal fileName As String, ByVal nsArray() As String) method, but I am not sure about its second parameter.
Can you give me some help or more detailed information on this topic?
Thank you very much.
liuage
|
Hello,
I need to delete, for instance, the tenth row from a table returned by a subquery in MSSQL. The best solution would be to have each row carry some index; I guess there is some "function" for this.
I mean something like this:
DELETE FROM (
SELECT SomeFunction AS 'index'
FROM table
WHERE conditions
) t
WHERE t.index = 10
Thanks
Daniel Balas - Student
|
There is no T-SQL functionality that takes care of that. It would be impractical, since MSSQL (and other databases) return rows in "random order"; you'll need an ORDER BY over an indexed column to make the numbering deterministic.
However, you can probably fake the desired functionality by using a subquery/join in your DELETE statement...
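One way to sketch that on SQL Server 2000 (which has no built-in row-numbering function) is to copy the matching keys into a temp table with an IDENTITY column and delete by position. The table and column names below are placeholders, and note that relying on ORDER BY to drive IDENTITY assignment in a SELECT ... INTO has known caveats:

```sql
-- number the matching rows
SELECT IDENTITY(INT, 1, 1) AS rownum, key_column
INTO #numbered
FROM some_table
WHERE some_condition = 1
ORDER BY key_column          -- the ordering makes the numbering deterministic

-- delete the tenth one
DELETE FROM some_table
WHERE key_column = (SELECT key_column FROM #numbered WHERE rownum = 10)

DROP TABLE #numbered
```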
|
Hmm... thanks
Daniel Balas - Student
|
I have read a table from my db and want to create a list of the items in my table, for later use etc.
I have the following code:
foreach( DataRow row in table.Rows )
{
    Client client = new Client();
    client.SeqNo = 1;
    client.ClientCode = row[ "ClientCode" ].ToString();
    client.ClientID = row[ "ClientID" ];
    m_arlClients.Add( client );
}
but when it gets to client.ClientID = row[ "ClientID" ]; I get a cast exception, "cannot convert string to long" or something along those lines. My underlying database stores the ClientID as a long, and that is how I want to store it in my client object.
How can I read the ClientID from the row object as a long?
|
First you should try a cast... (int)row["ClientID"]
If that doesn't work, you could try int.Parse(row["ClientID"].ToString())
BTW: Have you checked for (row["ClientID"] == DBNull.Value)?
|
Cheers for your reply.
Yes, I had tried a cast ... (long)row["ClientID"];
The field isn't null, but long.Parse( row["ClientID"].ToString() ); did the trick, thanks. I haven't been using C# very long and have yet to learn all the tricks.
Thanks once again.
|
Hello,
I'm looking for a way to determine and display the differences between two datasets.
I'd like to display both datasets in DataList format and highlight only those items that are different.
My plan so far has been to:
(1) fill one dataset with the results of a query for all data associated with Project1 (call it "ds1")
(2) fill a second dataset with all data associated with Project2 (call it "ds2")
(3) iterate through each item in each row of ds1 and check its value against the corresponding item in ds2
Note: each dataset will have exactly the same schema.
Once I've been able to determine a difference, I'm stumped on how to change the backcolor of only that DataItem in the DataList. I've found a few examples on changing the color during the ItemDataBound and ItemCreated events, but those examples didn't take into account a comparison to another dataset rather than evaluation to static criteria...
Any help is greatly appreciated!!!
|
I've written an app using ODBC.NET in C# with parameterized queries. Timing these queries gives me larger times than issuing ad-hoc SQL statements to the database. This isn't what I'd expect, as I thought the database would cache my query plans and then be able to execute them faster.
I've tried this on Oracle and SQL Server and seen the same results.
Has anyone else seen similar results, i.e. parameterized queries taking longer to execute than ad-hoc SQL? Or is it just me?
Ta,
Dave.
|
Does anyone know why I am getting this exception while using ADO, and what to do about it?
Exception thrown for classes generated by #import Code = 8007007e
Code meaning = The specified module could not be found.
Source = (null)
Description = (null)
Thank you !
Also, another question: which app puts the ado210.chm in the system/ado folder?
|
Hello, I need to work out what seems to be a pretty advanced query for SQL Server, and despite a lot of books and all of the internet, I can't seem to classify this as an easily searchable problem to get a handle on it. Any help classifying it would be appreciated:
I am making an interface for quickly searching a database of text documents. I've already done the indexing part (extracting the unique words, etc.); it's the searching part that I need to refine.
I have three tables involved:
The first, "documents", contains many plain-text documents, each with a unique identifier.
The second, "srchdict", is a dictionary index containing two columns: a list of unique words culled from the documents to be searched, and a unique identifier for each word.
The third, "srchkey", is the key between "srchdict" and "documents". It contains two columns, both identifiers: one is the id (from the dictionary table) of a unique word found in a document, and the other identifies the document that word was found in.
So far so good, and no problem populating those tables at all.
Here is the problem: without resorting to a whole bunch of multiple queries in code (i.e. I want to do it all on the SQL server), is there a way (stored procedure or single query) to pass in a list of words being searched for and return a list of document IDs for documents that contain *all* the words in the list?
I've done this before in C++ with an Access database. It involved querying for each word separately and storing the resulting list of matched document IDs in a temporary table until the list of search words was exhausted, then querying the temporary results table to pull out only the document IDs that appear as many times as there are search terms (this ensures that only documents matching *all* search words appear in the results; any that do not match all are discarded).
Any help, either along the lines of "that's the only way to do it" or just a classification of what type of query this is, would be tremendously appreciated.
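The duplicate-counting trick described above collapses into a single GROUP BY/HAVING query, a pattern sometimes called "relational division". Column names here are assumed, since the post doesn't name them; the idea is to count how many of the searched words each document matches:

```sql
SELECT sk.document_id
FROM srchkey sk
WHERE sk.word_id IN
    (SELECT sd.word_id FROM srchdict sd
     WHERE sd.word IN ('alpha', 'beta', 'gamma'))   -- the searched words
GROUP BY sk.document_id
HAVING COUNT(DISTINCT sk.word_id) = 3               -- must match ALL 3 search words
```

Wrapped in a stored procedure, the word list could arrive via a temp table and the HAVING count would be the number of rows in it, so no per-word round trips are needed.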
|