|
Here's an example of a table that mimics a directory structure:
CREATE TABLE directories (
    directory_id INT PRIMARY KEY,
    directory_name VARCHAR(255),
    parent_directory_id INT
)
Then add a constraint on parent_directory_id to reference directory_id.
This then becomes a "self-referencing" table. Any row with a NULL parent_directory_id is a root node; every other row is a child of another row.
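A minimal sketch of that constraint in T-SQL (the constraint name is illustrative):
ALTER TABLE directories
    ADD CONSTRAINT fk_directories_parent
    FOREIGN KEY (parent_directory_id)
    REFERENCES directories (directory_id)
A row can now only point at an existing directory as its parent, and a NULL parent still marks a root.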
|
|
|
|
|
In the .NET documentation, Microsoft recommends using the SQL Server .NET data provider for accessing MSSQL 7 and above.
Can anyone please tell me the advantages of using the .NET provider over OLE DB?
Uday
|
|
|
|
|
Uday Patil wrote:
In the .NET documentation, Microsoft recommends using the SQL Server .NET data provider for accessing MSSQL 7 and above.
I believe the main differences are in optimization for speed.
-Nick Parker
|
|
|
|
|
Hi Uday,
I think using SqlClient for accessing SQL Server 2000 in ADO.NET brings good speed enhancements with respect to making and breaking connections, query/update speeds, and a lot more.
DataSet also seems to have really good methods for updating directly to the db. I haven't fully explored this, since I am currently working with OleDb and an IBM DB2 database.
Did this answer your query?
Deepak Kumar Vasudevan
http://deepak.portland.co.uk/
|
|
|
|
|
Thank you Deepak and Nick,
I have gone through many documents related to this. I found some good things about SqlClient, but I don't think they are sufficient to justify changing my application to support SqlClient.
To verify this, I am conducting a benchmark test. I will have the results in my hands within a day or two, and I will definitely update you with the results.
Thanks and regards
Uday
|
|
|
|
|
Have any of you figured out a way to export and import text blobs with content greater than 64512 bytes? I have hunted and hunted through Google and the MSDN library, but nothing indicates a way to tell BCP to export more than the first 64512 bytes of the text on a row.
It may also help if there were a way I could set the default TEXTSIZE for all new connections, but I haven't found a way to do that either.
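(For reference, the per-session version of that setting is straightforward; it just doesn't persist across connections. The table and column names below are illustrative:)
SET TEXTSIZE 2147483647   -- documented maximum; applies only to the current session
SELECT doc_text FROM documents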
Any ideas? (If you have some nice little utility that can generate a series of INSERT statements to copy the content of the table, I could get that to work too. In fact, I'm starting to look for one, but hoping that I don't have to go that route.)
John
|
|
|
|
|
Hello:
I return a DataSet from a Web Service's Web Method, but I don't want to use the default XML Schema of the DataSet. I want to use a custom schema file for this returned DataSet.
I found the DataSet.InferXmlSchema(ByVal fileName As String, ByVal nsArray() As String) method, but I am not sure about the second parameter of this method.
Can you give some help or more detailed information about this topic?
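For reference, a sketch of the call alongside the related ReadXmlSchema, which loads a predefined schema file (file names and the namespace URI are illustrative; nsArray is the set of namespace URIs to exclude from inference):
DataSet ds = new DataSet();
// Load a predefined schema from an .xsd file instead of inferring one
ds.ReadXmlSchema("CustomSchema.xsd");
// By contrast, InferXmlSchema reads an XML data file and infers a schema
// from it, skipping any namespaces listed in the second parameter
ds.InferXmlSchema("Data.xml", new string[] { "urn:ignore-this-namespace" });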
Thank you very much.
liuage
|
|
|
|
|
Hello,
I need to delete, for instance, the tenth item from a table returned by a subquery in MSSQL. The best solution would be for each row to carry some index, and I guess there is some "function" for this.
I mean something like this:
DELETE FROM (
SELECT SomeFunction AS 'index'
FROM table
WHERE conditions
) t
WHERE t.index = 10
Thanks
Daniel Balas - Student
|
|
|
|
|
There is no T-SQL function that takes care of that. It would be impractical, since MSSQL (and others) return rows in "random order"; you'll need an ORDER BY on a key column to make the row order deterministic.
However, you can probably fake the desired functionality by using a subquery/join in your delete statement, as sketched below...
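For example (a sketch; the table name, key column pk, and WHERE conditions are illustrative, and the ORDER BY is what makes "tenth" well-defined):
DELETE FROM someTable
WHERE pk = (SELECT TOP 1 pk
            FROM (SELECT TOP 10 pk
                  FROM someTable
                  WHERE conditions
                  ORDER BY pk) AS firstTen
            ORDER BY pk DESC)
The derived table takes the first ten rows in key order, and the outer SELECT picks the last of those, i.e. the tenth row.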
|
|
|
|
|
Hmm... thanx
Daniel Balas - Student
|
|
|
|
|
I have read a table from my db and want to create a list of the items in my table for later use, etc.
I have the following code:
foreach( DataRow row in table.Rows )
{
    Client client = new Client();
    client.SeqNo = 1;
    client.ClientCode = row[ "ClientCode" ].ToString();
    client.ClientID = row[ "ClientID" ];
    m_arlClients.Add( client );
}
but when it gets to client.ClientID = row[ "ClientID" ]; I get a cast exception, "cannot convert string to long" or something along those lines. My underlying database stores the ClientID as a long, and that is how I want to store it in my client object.
How can I read the ClientID from the row object as a long?
|
|
|
|
|
First you should try a cast... (long)row["ClientID"]
If that doesn't work, you could try long.Parse(row["ClientID"].ToString())
BTW: Have you checked for (row["ClientID"] == DBNull.Value)?
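Putting those together, a defensive version might look like this (a sketch using the names from the original post):
object value = row["ClientID"];
if (value != DBNull.Value)
{
    // Convert.ToInt64 copes with the provider returning the value
    // as a long, an int, or a numeric string
    client.ClientID = Convert.ToInt64(value);
}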
|
|
|
|
|
Cheers for your reply.
Yes, I had tried a cast ... (long)row["ClientID"];
The field isn't null. But long.Parse( row["ClientID"].ToString() ); did the trick, thanks. I haven't been using C# very long, and have yet to learn all the tricks
Thanks once again.
|
|
|
|
|
Hello,
I'm looking for a way to determine and display the differences between two datasets.
I'd like to display both datasets in DataList format and highlight only those items that are different.
My plan so far has been to:
(1) fill one dataset with the results of a query for all data associated with Project1 (call it "ds1")
(2) fill a second dataset with all data associated with Project2 (call it "ds2")
(3) iterate through each item in each row of ds1 and check its value against the corresponding item in ds2
Note: Each dataset will have exactly the same schema.
Once I've been able to determine a difference, I'm stumped on how to change the backcolor of only that DataItem in the DataList. I've found a few examples on changing the color during the ItemDataBound and ItemCreated events, but those examples didn't take into account a comparison to another dataset rather than evaluation to static criteria...
Any help is greatly appreciated!!!
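For what it's worth, here is one rough sketch of step (3) wired into the DataList's ItemDataBound event (it assumes the rows of ds1 and ds2 line up by index and that ds2 is in scope, and it highlights the whole item rather than a single field):
void DataList1_ItemDataBound(object sender, DataListItemEventArgs e)
{
    if (e.Item.ItemType != ListItemType.Item &&
        e.Item.ItemType != ListItemType.AlternatingItem)
        return;

    DataRowView row1 = (DataRowView)e.Item.DataItem;
    DataRow row2 = ds2.Tables[0].Rows[e.Item.ItemIndex];

    foreach (DataColumn col in row2.Table.Columns)
    {
        if (!row1[col.ColumnName].Equals(row2[col.ColumnName]))
        {
            e.Item.BackColor = System.Drawing.Color.Yellow;
            break;
        }
    }
}
To color only the differing field, you would instead look up the specific control with e.Item.FindControl and set its BackColor.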
|
|
|
|
|
I've written an app using ODBC.NET in C# using parameterized queries. Timing these queries gives me larger times than issuing ad-hoc SQL statements to the database. This isn't what I'd expect to see, as I thought the database would cache my queries and then be able to execute them faster.
I've tried this on Oracle and SQL Server and seen the same results.
Has anyone else seen similar results, i.e. parameterized queries taking longer to execute than ad-hoc SQL? Or is it just me?
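For reference, the parameterized pattern I'm timing looks roughly like this (a sketch; the connection string, SQL, and values are illustrative):
using (OdbcConnection conn = new OdbcConnection("DSN=MyDsn"))
{
    conn.Open();
    OdbcCommand cmd = new OdbcCommand(
        "SELECT name FROM clients WHERE client_id = ?", conn);
    cmd.Parameters.Add("@client_id", OdbcType.Int).Value = 42;
    cmd.Prepare();   // ask the driver to compile and cache the statement
    using (OdbcDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* consume the row */ }
    }
}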
Ta,
Dave.
|
|
|
|
|
Does anyone know why I am getting this exception while using ADO, and what to do about it?
Exception thrown for classes generated by #import Code = 8007007e
Code meaning = The specified module could not be found.
Source = (null)
Description = (null)
Thank you !
Also, another question: which app puts the ado210.chm in the system/ado folder?
|
|
|
|
|
Hello, I need to work out what seems to be a pretty advanced query for SQL Server, and despite a lot of books and all the internet, I can't seem to classify this into an easily searchable problem to get a handle on it. Any help classifying it would be appreciated:
I am making an interface for quickly searching a database of text documents. I've already done the indexing part (sucking out the unique words etc), it's the searching part that I need to refine.
I have three tables involved:
The first "documents" is a table that contains many plain text documents and each document has a unique identifier.
The second table, "srchdict", is a dictionary index containing two columns: a list of unique words culled from the documents to be searched, and a unique identifier for each word.
The third table, "srchkey", is the key between "srchdict" and "documents". It contains two columns, each a unique identifier: one is the id from the dictionary table of a unique word found in a document, and the other identifies the document that word was found in.
So far so good and no problem populating those tables at all.
Here is the problem: without resorting to a whole bunch of multiple queries in code (i.e. I want to do it all at the SQL server), is there a way (stored procedure or single query) to pass in a list of words being searched for, and return a list of document IDs for documents that contain *all* the words in the list?
I've done this before in C++ with an Access database. It involved querying for each word separately and storing the resulting list of matched document IDs in a temporary table until the list of search words was exhausted, then querying the temporary results table to pull out only the document IDs that appear as many times as there are search terms (this ensures that only documents matching *all* search words appear in the results; any that do not match all are discarded).
Any help, either along the lines of "that's the only way to do it" or just a classification of what type of query/algorithm this is, would be tremendously appreciated.
|
|
|
|
|
You could try something like:
Select DocId, Count(Distinct WordKey) From srchkey
Where WordKey In (Select WordKey From srchdict
Where WordText In ('list', 'of', 'words'))
Group By DocId
Having Count(Distinct WordKey) = 3
If you wanted to use a stored-procedure then you could pass the list of words in using a temporary table.
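A rough sketch of that approach (the procedure name is hypothetical):
CREATE TABLE #SearchTerms (WordText VARCHAR(255))
INSERT INTO #SearchTerms VALUES ('list')
INSERT INTO #SearchTerms VALUES ('of')
INSERT INTO #SearchTerms VALUES ('words')
EXEC FindMatchingDocs
A temporary table created on the caller's connection is visible inside a stored procedure executed on that same connection, so FindMatchingDocs can join against #SearchTerms directly.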
Have you tried using SQL-Server's own free-text indexing?
Regards
Andy
|
|
|
|
|
Thank you Andy! I'll give that a try and look into the text indexing for SQL.
"Things are more like they are now than they ever were before."
-- Dwight Eisenhower
|
|
|
|
|
Hi Andy, just wanted to let you know that works perfectly and saved me a huge amount of time! Thank you again.
(slight modification: Count can't be used on a uniqueidentifier, so I changed it to Count(*))
- Cheers!
|
|
|
|
|
Select DocId, Count(Distinct WordKey) From srchkey
Where WordKey In (Select WordKey From srchdict
Where WordText In ('list', 'of', 'words'))
Group By DocId
Having Count(Distinct WordKey) = 3
Using the IN keyword is generally pretty slow. Using IN with a SELECT sub-query is even slower. It's generally a good idea to start out (i.e. prototype) like this, to see if you've got the right basic idea, but you'll want to rewrite the query to use JOINs for production use. As the poster noted, this will require a temporary table containing the search terms.
tblDocuments ( id INT PRIMARY KEY, document TEXT )
tblKeywords ( id INT PRIMARY KEY, keyword NVARCHAR(50) )
tblStatistics ( keyword_id INT, document_id INT, frequency INT )
with PRIMARY KEY as combination of keyword_id and document_id
tblCriteria ( term NVARCHAR(50) ) <--- a temporary table containing the search terms
SELECT D.id, count(D.id)
FROM tblCriteria C
LEFT OUTER JOIN tblKeywords K ON K.keyword = C.term
RIGHT OUTER JOIN tblStatistics S ON S.keyword_id = K.id
LEFT OUTER JOIN tblDocuments D ON D.id = S.document_id
GROUP BY D.id
HAVING count(D.id) = count(C.term)
This is what you should start out with. You probably want to rank the results based on the combined frequency of terms (from the tblStatistics.frequency column).
Make sure to have good indexes on the tblKeywords table, and actually on all the tables. If the data remains fairly constant, you should not have a problem with indexes.
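For instance (index names are illustrative):
CREATE INDEX ix_keywords_keyword ON tblKeywords (keyword)
CREATE INDEX ix_statistics_document ON tblStatistics (document_id)
The composite primary key on tblStatistics already covers lookups by keyword_id, so the extra index only needs to cover the document side of the join.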
|
|
|
|
|
For each XVal, YVal in tblCurrent there exists at least one record in tblPrevious. Each XVal, YVal in tblPrevious has at least one record where EndTime will equal @currstart. All Col18 and Col19 values in tblPrevious are non-NULL decimals. All Col18 and Col19 values in tblCurrent are NULL.
Given these conditions can anyone explain why after running the following stored procedure Col18 will contain values for each XVal, YVal; but Col19 will sometimes contain NULL for a YVal greater than some apparently random number?
CREATE PROCEDURE CopyIDs
    @myXVal INTEGER,
    @currstart DATETIME
AS
DECLARE @myCntr INTEGER
SET @myCntr = 1
WHILE (@myCntr <= 250)
BEGIN
UPDATE tblCurrent
SET Col18 = prev.Col18, Col19 = prev.IDCol19
FROM tblPrevious prev INNER JOIN tblCurrent curr
ON prev.XVal = curr.XVal AND prev.YVal = curr.YVal
AND prev.EndTime = @currstart
WHERE curr.XVal = @myXVal AND curr.YVal = @myCntr
SET @myCntr = @myCntr + 1
END
About 80% of the time, the values will be copied for both fields. For the other 20%, after the first 60 to 200 records have been updated correctly, the remainder of the records will contain NULL in the Col19 field.
This procedure is typically called multiple times with different XVals. When there is a failure, 95% of the time only the first call will fail to copy Col19 correctly.
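(For reference, the loop above is logically equivalent to this single set-based UPDATE over all 250 YVals, using the same names:)
UPDATE tblCurrent
SET Col18 = prev.Col18, Col19 = prev.IDCol19
FROM tblPrevious prev INNER JOIN tblCurrent curr
    ON prev.XVal = curr.XVal AND prev.YVal = curr.YVal
    AND prev.EndTime = @currstart
WHERE curr.XVal = @myXVal AND curr.YVal BETWEEN 1 AND 250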
>>>-----> MikeO
|
|
|
|
|
After further review I realize that both Col18 and Col19 fail to copy. Another process overwrites the value in Col18 very shortly after the update fails.
Sorry for my confusion, but the question remains: why would a stored procedure fail after the first several iterations through the loop?
>>>-----> MikeO
|
|
|
|
|
Here is my situation...
I have to develop tables for a specific product. Currently I have a "Product" table which has the "ModelId", "Features", etc. There is also a table I call "ProductDetails" which holds some special performance data.
The problem is, this product can be "Mounted" in three different configurations. The product information for each of the three different mounts is identical, but each has a different "ProductId".
What is the best way to design this?
Thanks
Mark Sanders
sanderssolutions.com
|
|
|
|
|
Why not have a Configuration table that specifies the attributes of the configuration? Then put a ConfigurationId in the Product or ProductDetails table, whichever one makes more sense to you, to link the configuration to the ProductId.
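A minimal sketch of that idea (table and column names are illustrative):
CREATE TABLE MountConfiguration (
    ConfigurationId INT PRIMARY KEY,
    MountType VARCHAR(50)    -- one row per mounting arrangement
)

CREATE TABLE Product (
    ProductId INT PRIMARY KEY,
    ModelId INT,
    ConfigurationId INT REFERENCES MountConfiguration (ConfigurationId)
)
The shared model/feature data then lives once per model, and each of the three mounted variants is just a Product row pointing at a different ConfigurationId.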
Jeremy Oldham
|
|
|
|