|
kenn_rosie wrote: I created the sub SortEventHandler as below but I am lost as to
what is next to do?
You will probably get an answer in the correct forum; try the Web Development forum[^]
Good luck
BTW, if you want to sort the data before presenting it to the user, use ORDER BY [column] in your SELECT statement.
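For instance (the table and column names here are made up purely for illustration), the rows then arrive already sorted and the grid can display them as-is:
SELECT EmployeeName, HireDate
FROM Employees
ORDER BY HireDate DESC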
|
|
|
|
|
Hi,
I wonder if there is any limitation on the length of an SQL statement.
I have the following situation:
I need to update a single column of a lot of records in a table.
Updating those 20K (20,000) records one at a time consumes a lot of time.
In order to speed things up I collect those queries to create one big SQL statement.
Now the problem is the following: whenever I place more than 455 update queries in the batch, only the
first 455 are executed. The others are silently ignored, and no error is raised (neither
in my try/catch block nor on the SQL Server, checked with the Profiler). The SQL statement
containing those 455 queries is about 20 KB.
Any thoughts on this would be very appreciated.
BTW:
I'm developing in VC++ 2005
on a SQL Server V2000
codito ergo sum
|
|
|
|
|
BadKarma wrote: Updating those 20K (20,000) records one at a time consumes a lot of time.
In order to speed things up I collect those queries to create one big SQL statement.
Now the problem is the following: whenever I place more than 455 update queries in the batch, only the
first 455 are executed. The others are silently ignored, and no error is raised (neither
You did what???
You have a big problem in your design. Like you've found out, if one statement fails, you have to consider the entire batch a failure. You can't catch an error because nothing throws one.
Best practice says that you call a stored procedure on the SQL Server to do the update for you and just pass the parameters to it, 20,000 times. The stored procedure can check whether the update worked and RAISERROR if it failed for some reason. You can catch that error in your code. You also get the benefit of knowing exactly which one of these calls failed.
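Purely as an illustration of that approach, here is a minimal T-SQL sketch; the procedure, table and column names (UpdateMyField, MyTable, MyField, Data) are made up for the example, not taken from your code:

-- Hypothetical procedure: updates one row and raises a catchable error if nothing matched.
CREATE PROCEDURE dbo.UpdateMyField
    @Data  INT,
    @Value INT
AS
BEGIN
    UPDATE MyTable SET MyField = @Value WHERE Data = @Data

    IF @@ROWCOUNT = 0
        RAISERROR ('No row found for Data = %d', 16, 1, @Data)
END

The client then calls this once per record and can catch the error per call, so it knows exactly which update failed.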
RageInTheMachine9532
"...a pungent, ghastly, stinky piece of cheese!" -- The Roaming Gnome
|
|
|
|
|
Dave Kreskowiak wrote: if one statement fails, you have to consider the entire batch a failure
I know for certain that all the queries are valid. If I execute 400 of them in a batch there is no
problem, even 455 of them. Only when I exceed the 455-query 'limit' in the batch are the last ones ignored.
For what reason can a valid query be ignored???
codito ergo sum
|
|
|
|
|
Because the query IS invalid: the last lines are being cut off. There is a limit to the size of a query string you can pass. I think it's 8192 bytes, but I can't find solid documentation on it.
RageInTheMachine9532
"...a pungent, ghastly, stinky piece of cheese!" -- The Roaming Gnome
|
|
|
|
|
First of all, thanks for taking the time to reply.
The queries I execute look like this:
UPDATE MyTable SET MyField = 1 WHERE Data = 1;
UPDATE MyTable SET MyField = 1 WHERE Data = 2;
UPDATE MyTable SET MyField = 1 WHERE Data = 3;
UPDATE MyTable SET MyField = 1 WHERE Data = 4;
this goes on up until Data reaches 1000.
So as you can see, the size of each query is relatively small, not even 50 characters.
The batch statement, though, can reach a size of more than 20 K characters, but that shouldn't be
a problem because the maximum batch size is 65,536 * network packet size (see the response from Sl0n[^]).
When I execute this batch with up to 455 queries, there is no problem. The size of that batch is already over the 8192-byte limit (it's around 18 K). When I add more queries to the batch, every query past this 455 limit just doesn't get executed, and the batch statement is returned as a success.
So even though I stay well under the limits, those queries aren't executed???
Or am I missing something?
codito ergo sum
|
|
|
|
|
BadKarma wrote: So as you can see, the size of each query is relatively small, not even 50 characters.
The batch statement, though, can reach a size of more than 20 K characters, but that shouldn't be
With emphasis on "shouldn't". It's not so much the size of the query in characters, but I'm thinking the number of statements in the batch might be your problem. It's not listed in the specs anywhere. There are other minor possibilities, but I've got no time to test them.
Maximum statements cacheable on a connection, limits on uncompiled vs. compiled vs. batch code, ...
What you're doing is just never done outside a stored procedure in the database (at least I've never seen it done), so data on your technique is very hard to come by.
You can try the theory out by batching together statements 400 at a time. ORrrrrrr, you could just write the batch query using an SQL loop, since your data doesn't appear to change from iteration to iteration, just the WHERE clause changes.
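To illustrate that last point, a rough sketch in T-SQL, using the MyTable/MyField/Data names from your example; since the value being assigned never changes, a single set-based UPDATE replaces the whole batch, and the WHILE loop shows the loop variant:

-- Set-based: one statement instead of 1000 single-row updates.
UPDATE MyTable SET MyField = 1 WHERE Data BETWEEN 1 AND 1000

-- Loop variant, if each iteration really had to be issued separately:
DECLARE @i INT
SET @i = 1
WHILE @i <= 1000
BEGIN
    UPDATE MyTable SET MyField = 1 WHERE Data = @i
    SET @i = @i + 1
END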
RageInTheMachine9532
"...a pungent, ghastly, stinky piece of cheese!" -- The Roaming Gnome
-- modified at 11:46 Monday 30th January, 2006
|
|
|
|
|
SQL Server 2000 has a limit on the batch size of 65,536 times the network packet size, which is 4096 bytes by default, so that works out to roughly 256 MB. Batch size is therefore not the issue in your case. You may try issuing the same batch from Query Analyzer to see the results.
|
|
|
|
|
I am developing an n-tier application. I am using Business Entities to pass data between the tiers. I have a few properties that hold data in the form of a DataSet. Whenever I need a result set (get all employees of the XYZ dept.), I fill the DataSet property of my Business Entity and pass the Business Entity to the presentation layer. In the presentation layer I bind the DataSet to my GridView. Even if I have only one record set I am using a DataSet to pass the data.
My question is what difference it makes to performance if I use a DataTable instead of a DataSet.
Note:
1) I expect 200 simultaneous users accessing my application. On each request I pass 4-5 DataSets from the DataAccess layer to the Business layer to the Presentation layer.
2) On average I pass 25 records in each DataSet.
regards,
Himalay Oza
|
|
|
|
|
Himalay,
IMO, DataSets are better for binding to a GridView. Assuming you are using SQL 2000/2005 (you didn't specify which database you are using), with 200 users at the same time and 25 records in each DataSet, you should not have much concern about performance.
I think your question is a matter of preference.
Paul
|
|
|
|
|
I created a form with a dataset (filled from 3 data adapters).
My form works OK, and it has a textbox and a datagrid with a relation to the records shown in the textbox.
I created a dataview to filter the records.
After that, the textbox doesn't change.
I have this code, and it doesn't work:
editNome.DataBindings.Add(New Binding("Text", DsFunc, "TFuncionarios.Nome"))
I tried this code, and my textbox shows the result:
editNome.DataBindings.Clear()
editNome.DataBindings.Add(New Binding("Text", DsFunc.Tables("TFuncionarios"), "Nome"))
-----------------------------------
Private Sub btnNext_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnSeguinte.Click
    If Me.BindingContext(DsFunc.Tables("TFuncionarios")).Position < DsFunc.Tables("TFuncionarios").Rows.Count - 1 Then
        Me.BindingContext(DsFunc.Tables("TFuncionarios")).Position += 1
    End If
End Sub
But another thing happens: the datagrid doesn't move when I go to the next record.
I think it is something with the databinding.
What must I do to keep the textbox related to my datagrid???
|
|
|
|
|
Hi,
I am using SQL Server 2005 Beta 2. When I try to import data from an Access database into one of my DBs using Server Management Studio (right-clicking the DB and selecting the Import menu item), the import fails with an error:
Error 0xc02020f6: Data Flow Task: Column "Description" cannot convert between unicode and non-unicode string data types.
(SQL Server Import and Export Wizard)
How can I solve this problem?
|
|
|
|
|
I'm an SQL newbie, so bear with me:
I need to collect the following data after parsing a bunch of C/C++ files and then store the parsed data as a series of records holding the symbol type (is it a function, class/struct/union, variable, enum, etc.), the symbol name, the offset, and a few other values.
I would like to enter this data into one or more tables for easy searching later on.
My question is: is it better to design multiple tables, for example one table for classes/structs/unions, another table for functions, another for variables/member vars, and a fourth for miscellaneous stuff, or one single table? I would prefer whatever results in a faster return of the results.
If I have multiple tables, what is the proper way to glob all the results (from selecting across multiple tables) into one result set?
¡El diablo está en mis pantalones! ¡Mire, mire!
Real Mentats use only 100% pure, unfooled around with Sapho Juice(tm)!
SELECT * FROM User WHERE Clue > 0
0 rows returned
Save an Orange - Use the VCF!
|
|
|
|
|
My initial reaction is that this would best be in several tables.
I would create a small symbol_types table with an identity field as the primary key and a unique index on the symbol_type field (NVarChar).
Similarly, a Files table with the file path and an ID (identity) field.
The main table would contain symbol_name, TypeID (the ID from symbol_types), PathID (the ID from Files) and the rest of the story (but look for other opportunities to segregate repeated string data into a lookup table like symbol_types). This table should likely have a compound primary key on symbol_name, TypeID and PathID to allow for multiple occurrences of the same symbol_name (either that, or an identity field as the primary key and just a unique index on the three fields mentioned previously). The idea here is to consolidate repeated strings into a single table so they can be represented by the integer key of the row that contains the one instance of the string. This makes it much easier and cheaper to update a symbol type name (to correct spelling, for instance), and it reduces the record size in the main table by replacing the repeated strings with an integer value. It also makes for much faster selection queries, since extracting groups of records from the main table (all symbols of a certain type, for instance) becomes a match on integer IDs rather than a string comparison: "Select [symbol name] from [symbol table] inner join [symbol types] on [symbol table].[TypeID] = [symbol types].id where [symbol types].[typename] = 'member'".
The proper way to "glob them all together" is to use an inner join query: "select S.[symbol name], T.[Typename], P.[Path] ... from [symbol table] as S inner join [symbol_types] as T on S.TypeID = T.ID inner join Files as P on S.PathID = P.ID" (warning: the syntax may not be quite correct; check your database's documentation for the proper way to write an inner join). Note that you often cannot directly update the database using the resulting recordset, but must instead update the underlying base tables one by one.
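To make that concrete, here is a minimal sketch of the schema and join just described, written in SQL Server style SQL; every table and column name here is only illustrative, so adjust names and types for whatever database you use:

CREATE TABLE symbol_types (
    ID          INT IDENTITY PRIMARY KEY,
    symbol_type NVARCHAR(50) NOT NULL UNIQUE
)

CREATE TABLE Files (
    ID   INT IDENTITY PRIMARY KEY,
    Path NVARCHAR(255) NOT NULL
)

CREATE TABLE Symbols (
    symbol_name NVARCHAR(125) NOT NULL,
    TypeID      INT NOT NULL REFERENCES symbol_types(ID),
    PathID      INT NOT NULL REFERENCES Files(ID),
    [Offset]    INT,
    PRIMARY KEY (symbol_name, TypeID, PathID)
)

-- Glob the results back together with inner joins; the WHERE clause matches on the lookup value.
SELECT S.symbol_name, T.symbol_type, P.Path
FROM Symbols AS S
INNER JOIN symbol_types AS T ON S.TypeID = T.ID
INNER JOIN Files AS P ON S.PathID = P.ID
WHERE T.symbol_type = 'member'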
Absolute faith corrupts as absolutely as absolute power
Eric Hoffer
All that is necessary for the triumph of evil is that good men do nothing.
Edmund Burke
-- modified at 23:15 Monday 23rd January, 2006
|
|
|
|
|
Wow! Thanks for the info. I appreciate the detailed response, since I don't know that much about SQL!
¡El diablo está en mis pantalones! ¡Mire, mire!
Real Mentats use only 100% pure, unfooled around with Sapho Juice(tm)!
SELECT * FROM User WHERE Clue > 0
0 rows returned
Save an Orange - Use the VCF!
|
|
|
|
|
Well I tried this technique with SQLite. And got all the data entered, my main symbols table had over 11,000 entries, and the functions table over 7200. And getting things to join takes FOREVER (about 20 secs)!!! It's insanely slow (this on a P4 3 GHz with HT). So while I much prefer the organization of this method, it seems like it just isn't going to work well with sqlite. I suspect that this is a problem with sqlite.
Just for kicks here's how I created the tables:
CREATE TABLE Types (
TypeID INTEGER PRIMARY KEY
, TypeInfo CHAR(125)
);
CREATE TABLE Templates (
TemplateID INTEGER PRIMARY KEY
, TemplateSignature CHAR(125)
);
CREATE TABLE Files (
FileID INTEGER PRIMARY KEY
, Path CHAR(255)
);
CREATE TABLE Symbols (
Id INTEGER PRIMARY KEY
, Kind INTEGER
, Name CHAR(125)
, FileID INTEGER
, TypeID INTEGER
, Offset INTEGER
, LineNumber INTEGER
, TemplateID INTEGER DEFAULT 0 NOT NULL
, Parent INTEGER DEFAULT 0
);
CREATE TABLE Functions (
FuncID INTEGER PRIMARY KEY
, Signature CHAR(125)
, SymbolID INTEGER
, ReturnTypeID INTEGER NOT NULL
);
I left out the FK stuff as sqlite ignores it apparently.
For the query I tried:
select symbols.name,functions.signature from symbols, functions where
functions.symbolid = symbols.id;
It does eventually return, but it's impossible to use, so for now I think I'm just going to go with the single table approach. The query is a little more difficult to formulate, but it's *really* fast.
¡El diablo está en mis pantalones! ¡Mire, mire!
Real Mentats use only 100% pure, unfooled around with Sapho Juice(tm)!
SELECT * FROM User WHERE Clue > 0
0 rows returned
Save an Orange - Use the VCF!
|
|
|
|
|
Well, it appears I spoke too soon. The problem was that I didn't have an index built for the functions.symbolid column. Once I did that the query completed instantaneously! Woot!
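For reference, the fix is a one-liner against the tables from the earlier post (the index name itself is arbitrary):

-- Index the join column so the join no longer scans the whole Functions table.
CREATE INDEX idx_functions_symbolid ON Functions (SymbolID);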
¡El diablo está en mis pantalones! ¡Mire, mire!
Real Mentats use only 100% pure, unfooled around with Sapho Juice(tm)!
SELECT * FROM User WHERE Clue > 0
0 rows returned
Save an Orange - Use the VCF!
|
|
|
|
|
If your use cases are such that in most places you need to look up objects of one particular type (e.g., give me a class by this name, give me all the variables with this property, ...), you may be better off storing objects of each type in a separate table. This helps performance because fewer records need to be filtered during a query for a particular type of object. It also provides better data normalization if the information for each type is quite different from that of the other types.
If you are developing your application in C# or some other managed language, you may use an OR-Mapping product to avoid dealing with tedious low-level infrastructure code in ADO.NET or OleDb for data integration. For example, with the NJDX OR-Mapper, you may do the following:
1- Define your domain model classes (say AClass, AStruct, AUnion, AVariable, AnEnum, etc.) with properties for name, offset, and so on. For example,
class AClass {
    string name;
    int offset;
    ...
}
If some classes are very similar, you may define them in a class-hierarchy.
2- Define OR-Mapping declaratively like:
CLASS AClass TABLE CLASSES
    PRIMARY_KEY name
;

CLASS AUnion TABLE UNIONS
    PRIMARY_KEY name
;
3- Create the database schema using NJDXSchema tool. This will create tables CLASSES, UNIONS, etc, with proper columns and primary keys.
4- Write your application using NJDX APIs. For example, the following code will insert a new object c1 of type AClass in the database:
njdx.insert(c1, 0, null);
The following code will fetch an AUnion object having name="someUnion"
Object oid = ObjectId.createObjectId("AUnion;name=someUnion");
AUnion myUnion = njdx.getObjectById(oid, true, 0, null);
The following code will fetch all AClass objects into the myClasses variable.
ArrayList myClasses = njdx.query("AClass", null, -1, 0, null);
If you have defined some of your classes in a hierarchy, you can fetch all the qualifying objects of all the classes in that hierarchy with one query call.
Essentially, your code will be more object-oriented and easier to evolve. And you will avoid all the complexities of SQL.
Damodar Periwal
Software Tree, Inc.
Simplify Data Integration
http://www.softwaretree.com
-- modified at 18:55 Tuesday 24th January, 2006
|
|
|
|
|
The database is intended to hold a standard description of potentially tens of thousands of class types. It's going to be populated after scanning a directory (and potentially subdirectories) and parsing all the C/C++ files there. So creating a table per class might get a little unwieldy.
¡El diablo está en mis pantalones! ¡Mire, mire!
Real Mentats use only 100% pure, unfooled around with Sapho Juice(tm)!
SELECT * FROM User WHERE Clue > 0
0 rows returned
Save an Orange - Use the VCF!
|
|
|
|
|
I did not mean creating one table per end-user class. In my example, the CLASSES table will hold information for all the instances of the AClass class. So essentially, you would be dealing with only a handful of tables - one for all the end-user classes, one for all the end-user structs, one for all the end-user unions, etc.
Damodar Periwal
Software Tree, Inc.
Simplify Data Integration
http://www.softwaretree.com
|
|
|
|
|
Hi,
I need to implement a function that sends emails on a particular date and time. The date and time are specified by the user when he posts a new event into an online calendar. The user should be allowed to decide when an event notification email will be sent to a group of people.
The database used for the project is SQL Server. I have learned that the SQL Server job scheduler can do this, but I don't know how to add a job to the SQL Server job list when a record is submitted to the database from a web form.
Does anybody have any advice?
Thanks in advance!
|
|
|
|
|
First, store the date and time from the web form in a database table for each user.
Then add a single job, under Management --> SQL Server Agent --> Jobs, that runs every minute.
What will this job do?
It will first check each user's date and time, and then send the mail to every user whose scheduled time has been reached.
You can view the mail-sending code at the following link:
http://www.vbforums.com/archive/index.php/t-280616.html
Also read in detail about how to add a new SQL Server job to the database.
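Purely as an illustration of what the every-minute job step might run, here is a rough T-SQL sketch. The table and column names (EventNotifications, NotifyAt, Recipients, Subject, Body, Sent) are invented for the example, and it uses SQL Server 2000's xp_sendmail, which requires SQL Mail to be configured:

-- Hypothetical table: one row per scheduled notification entered from the web form.
DECLARE @id INT, @recipients VARCHAR(255), @subject VARCHAR(255), @body VARCHAR(4000)

DECLARE due CURSOR FOR
    SELECT ID, Recipients, Subject, Body
    FROM EventNotifications
    WHERE Sent = 0 AND NotifyAt <= GETDATE()

OPEN due
FETCH NEXT FROM due INTO @id, @recipients, @subject, @body
WHILE @@FETCH_STATUS = 0
BEGIN
    -- SQL Server 2000 extended procedure; SQL Mail must be set up for this to work.
    EXEC master.dbo.xp_sendmail @recipients = @recipients,
                                @subject    = @subject,
                                @message    = @body

    -- Mark the notification as sent so it is not mailed again next minute.
    UPDATE EventNotifications SET Sent = 1 WHERE ID = @id

    FETCH NEXT FROM due INTO @id, @recipients, @subject, @body
END
CLOSE due
DEALLOCATE due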
Qaisar Mehmood Mughal
...............................................................................
May Allah gave u success in all fields of life. (Aameen)
...............................................................................
Software Engineer
ISF - Islamabad Software Factory
3rd Floor, Software Technology Park
Constitution Avenue, Islamabad 44000, Pakistan
www.isf.com.pk
qaisar@isf.com.pk , qmmughal@gmail.com
Phone: +92 (51) 2825387 : 0321-5226623
|
|
|
|
|
Thank you very much, Qaisar!
It definitely will help a lot.
|
|
|
|
|
Hello everyone,
I have a rather large query that returns some fields, including a date field. Some dates are in the future, some are in the past, and some of the results are NULL. I need to find a way to arrange them so that the dates nearest to today are displayed first, then the ones further away, then the older ones, then the NULLs.
Example: given that getDate() = Jan 23 2006 2:41PM
------------
1 Jan 30 2006 3:00PM
2 Jan 20 2006 1:14PM
3 NULL
4 Dec 31 2006 11:00AM
5 Jan 24 2006 7:00PM
6 NULL
I want the result set to be arranged like so:
-----------
1 Jan 24 2006 7:00PM
2 Jan 30 2006 3:00PM
3 Jan 20 2006 1:14PM
4 Dec 31 2006 11:00AM
5 NULL
6 NULL
As previously mentioned, the query is huge, so I'd like to avoid using a UNION if at all possible, just to reduce clutter. But if it needs to be done, it needs to be done.
Any ideas? Thanks in advance!
-------------------
abort, retry, fail?
|
|
|
|
|
jszpila wrote: As previously mentioned, the query is huge, so I'd like to avoid using a UNION if at all possible, just to reduce clutter
Yes, it can be done without UNIONs
SELECT dateColumn,
ABS(DATEDIFF(minute, dateColumn, @distanceFromThisTime)) AS TimeDifference,
CASE WHEN dateColumn IS NULL THEN 0 ELSE 1 END AS Nothing
FROM myTable
ORDER BY CASE WHEN dateColumn IS NULL THEN 0 ELSE 1 END DESC,
ABS(DATEDIFF(minute, dateColumn, @distanceFromThisTime)) ASC
In the above code:
dateColumn is the column that contains the date information.
@distanceFromThisTime is a datetime variable that contains the time that the query is centred on, probably getdate() from your description.
The ORDER BY clause first sinks the NULLs to the bottom, then orders the remaining rows by their distance from the reference time, in ascending order.
Finally, you can obviously ditch the TimeDifference and Nothing columns; I put them in to illustrate what values might be returned if you want to try this code in Query Analyzer first.
Does this help?
ColinMackay.net
"Man who stand on hill with mouth open will wait long time for roast duck to drop in." -- Confucius
"If a man empties his purse into his head, no man can take it away from him, for an investment in knowledge pays the best interest." -- Joseph E. O'Donnell
-- modified at 17:48 Monday 23rd January, 2006
|
|
|
|
|