Hi all,
I want to bind a RadioButton in the DataGrid to a column in the DB of datatype BIT, and I want to make it checked if the bit column value is false, and vice versa.
Thanks,
abdelhameed81
|
I think a check box would be a better option. Radio buttons are generally used when there are more than two options and only one is possible, whereas check boxes are used for true/false values.
Mike Lasseter
|
Hi,
I am using SQL Server 2005.
I have a script that was generated to create all my tables.
How do I create a script (INSERT) that takes all the data in my current tables and creates an INSERT statement for each row? I want to take all the data I have locally and insert it into the online database.
Regards,
ma se
|
I usually write some SQL like this:
SELECT 'INSERT INTO MyTable(Column1, Column2) VALUES ('+Column1+', '+Column2+')'
FROM MyTable
Then I run it (using text output rather than the grid), and I can copy and paste the result back into the query window.
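For what it's worth, this kind of script generation is also easy to automate from client code. Below is a minimal sketch in Python using SQLite purely for illustration (the table and column names are hypothetical; the thread targets SQL Server, where the same idea applies), including the NULL and quote handling a hand-written SELECT would also need:

```python
import sqlite3

# Hypothetical table standing in for MyTable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Column1 TEXT, Column2 INTEGER)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)",
                 [("alpha", 1), ("beta", None)])

def quote(value):
    """Render a Python value as a SQL literal, handling NULLs and quotes."""
    if value is None:
        return "NULL"
    if isinstance(value, str):
        return "'" + value.replace("'", "''") + "'"
    return str(value)

rows = conn.execute("SELECT Column1, Column2 FROM MyTable").fetchall()
script = [
    f"INSERT INTO MyTable(Column1, Column2) VALUES ({quote(c1)}, {quote(c2)})"
    for c1, c2 in rows
]
print("\n".join(script))
```

Running this prints one INSERT statement per row, with string values quoted and NULLs emitted as the literal NULL.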
|
Thanks for this.
Please explain what you mean by +Column1+. Is this the column name? Or a value of a column?
Does this create a whole bunch of INSERT statements? Does it work?
You have MyTable listed twice. Is this correct?
|
ma se wrote: Please explain what you mean by +Column1+. Is this the column name?
Yes - It is the column name. I don't know what your column names are so I had to use something.
ma se wrote: Or a value of a column?
No. It is the name of the column.
ma se wrote: Does this create a whole bunch of Insert statements?
Yes.
ma se wrote: Does it work?
Why would I answer this if I didn't know it would work?
ma se wrote: You have my MyTable listed twice, is this correct?
Yes - As a learning exercise tell me why they are both needed.
The only other comment I have is that the +Column1+ may have to be CAST or CONVERTed to coerce the value of a column into a string, or a NULL column value may have to be replaced with a string that says 'NULL'. Again, I don't know your data model, so I can't show you how to do that.
|
Thanks for the answers, and I want to apologise for "Does it work?" It was supposed to be "How does it work?"
I'll try it soon and get back to you to let you know.
|
Hi,
I haven't tried this out yet (going to do so soon), but I was wondering if there isn't a faster method? What happens if I have, say, 500 tables?
The reason I want to go this way is that whenever I import/export my data, the Identity Specification is set back to No. How can I export my tables and stored procedures to the online database without the Identity Specification being reset from Yes to No?
Regards
ma se
|
ma se wrote: I haven't tried this out yet, but I was wondering if there isn't a faster method? What happens if I have, say, 500 tables?
You could always create a DTS (in SQL Server 2000) or SSIS (in SQL Server 2005) package - but that isn't an "INSERT script".
If you have 500 tables, then you could always come up with a program to automate the generation of the script by having it examine the schema of the tables and auto-generate the statements.
ma se wrote: How can I export my tables and stored procedures to the online database with setting the Identity Specification to No, and keeping it Yes?
Use SET IDENTITY_INSERT as appropriate.
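SET IDENTITY_INSERT is T-SQL, but the underlying idea (a migration script supplying explicit key values while normal inserts keep auto-generating them) can be sketched with SQLite, which permits explicit inserts into an INTEGER PRIMARY KEY column by default. The table here is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT)")

# A migration script preserves existing keys by supplying them explicitly...
conn.execute("INSERT INTO Customers (Id, Name) VALUES (42, 'Alice')")
# ...while ordinary inserts still get auto-generated keys afterwards.
conn.execute("INSERT INTO Customers (Name) VALUES ('Bob')")

rows = conn.execute("SELECT Id, Name FROM Customers ORDER BY Id").fetchall()
```

In SQL Server you would instead wrap the explicit-key inserts in SET IDENTITY_INSERT MyTable ON / OFF.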
|
Hello,
I'm trying to gather the schema information for a table's columns using ADO.NET with OleDb (an Access database) and Sql (a SQL Server database) and found some problems:
For OleDb/Access:
Method 1, using the metadata collections:
OleDbConnection currentConnection = ...;
DataTable schema = currentConnection.GetSchema("COLUMNS");
Method 2, using the schema of the command's result:
OleDbCommand cmd = ...;
OleDbDataReader reader = cmd.ExecuteReader(CommandBehavior.KeyInfo);
DataTable schema = reader.GetSchemaTable();
Both give me the correct list of columns, but the values for IsNullable (from method 1) and AllowDBNull (method 2) differ!?
I'm using the Northwind database for example purposes and for one table (Order Details) I get IsNullable=true for the first foreign key and IsNullable=false for the second foreign key.
I also get IsNullable=false for the other three columns that have a Column_Default set.
Method 2 seems to give the correct values: AllowDBNull=false for the two foreign keys and AllowDBNull=true for the default-value-columns.
In another table (Categories) method 1 returns IsNullable=false for a VarChar column and method 2 returns AllowDBNull=true for the same column, while this time I think method 1 gives the correct answer because SQL Server says IsNullable=false and AllowDBNull=false for this column.
How come? Am I missing a difference between these two values? I ask because I saw many websites using them as synonyms, e.g. http://www.koders.com/csharp/fid74A751F1A6F8ACEB258469216B7C263679DAD88A.aspx
The other problem is when using SQL Server (again using the Northwind database). This time both methods return identical values: if IsNullable=false then AllowDBNull=false too.
But: it returns AllowDBNull=false for default-value columns too. How can I then do an INSERT without setting these fields, so that the default values are used?
Sorry for this long text; I'm looking forward to your help,
regards J.Schumann
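I can't speak to the OleDb provider's quirks, but the distinction between "column is nullable" and "a value must be supplied on insert" is real in most databases. A small sketch in Python with SQLite (hypothetical table) shows that a NOT NULL column with a default can simply be omitted from the INSERT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Orders (
        Id INTEGER PRIMARY KEY,
        Qty INTEGER NOT NULL DEFAULT 1,  -- not nullable, but has a default
        Note TEXT                        -- nullable, no default
    )
""")

# The catalog reports nullability and defaults as separate facts.
info = {row[1]: {"notnull": bool(row[3]), "default": row[4]}
        for row in conn.execute("PRAGMA table_info(Orders)")}

# Qty is NOT NULL, yet omitting it is fine because the default fills it in.
conn.execute("INSERT INTO Orders (Note) VALUES ('x')")
qty = conn.execute("SELECT Qty FROM Orders").fetchone()[0]
```

So "not nullable" does not imply "must appear in the INSERT"; a column with a default satisfies NOT NULL on its own.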
|
Did no one get this problem before?
Can somebody help me, please!
Thank you in advance,
J.Schumann
|
I am drawing a blank on this, and I have none of my samples where I am, so I can't for the life of me remember how to do this, but...
When doing an INSERT into a table that has Identity Seed turned on for the primary key, how do you return the id of the new row?
I know you end the INSERT statement, then it is something like this:
INSERT INTO dbtable_x (Stuff) VALUES (@stuff); @selectID
I know I'm not that far off the mark, but computers are so damn picky about syntax...
Ooookie dokie... SELECT @@IDENTITY
At least I figured it out...
-- modified at 13:00 Monday 2nd October, 2006
______________________
Mr Griffin, eleventy billion is not a number...
|
You're safer using SCOPE_IDENTITY() instead of @@IDENTITY.
Below is an excerpt from the SQLTeam:
The gist: Use scope_identity() rather than @@IDENTITY or you might not get what you're looking for.
"ManyTSQL books show you how to use @@Identity to get the identity of the most recently added row. Many articles online, or in magazines show the same. What you might not know is that it is potentially a source for some very hard to trace bugs in your application.
@@Identity is potentially a very, very bad thing! In almost every case, you should use scope_identity() instead.
Why? @@Identity returns the most recently created identity for your current connection. When you first use it, it might be fine. Until someone adds a trigger. If the trigger causes another identity to be created, guess which identity you'll get in your call to @@Identity? Not nice.
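For comparison, most client libraries expose a similarly scoped way to fetch the new key. A sketch in Python with SQLite (hypothetical table, named after the one in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dbtable_x (Id INTEGER PRIMARY KEY, Stuff TEXT)")

cur = conn.execute("INSERT INTO dbtable_x (Stuff) VALUES ('hello')")
# lastrowid reports the key generated by this cursor's most recent INSERT,
# which is the id of the new row.
new_id = cur.lastrowid
```

In T-SQL the equivalent one-liner after the INSERT is SELECT SCOPE_IDENTITY().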
"A democracy is nothing more than mob rule, where fifty-one percent of the people may take away the rights of the other forty-nine." - Thomas Jefferson
"Democracy is two wolves and a lamb voting on what to have for lunch. Liberty is a well-armed lamb contesting the vote." - Benjamin Franklin
Edbert
Sydney, Australia
|
Dear CPians,
I need to keep a mirror table of an existing table within the same database.
I need my clone to reflect the original one.
I need to have all inserts and updates but NO deletes!
Has anyone got any ideas of how I should do that?
Thanking you in advance.
|
Triggers on the source table to copy the changes to the mirrored table.
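As a concrete illustration of that approach, here is a sketch in Python with SQLite (table names are hypothetical): insert and update triggers copy changes across, and the deliberate absence of a delete trigger means deletes never reach the mirror.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Source (Id INTEGER PRIMARY KEY, Val TEXT);
    CREATE TABLE Mirror (Id INTEGER PRIMARY KEY, Val TEXT);

    CREATE TRIGGER trg_mirror_ins AFTER INSERT ON Source BEGIN
        INSERT INTO Mirror (Id, Val) VALUES (NEW.Id, NEW.Val);
    END;

    CREATE TRIGGER trg_mirror_upd AFTER UPDATE ON Source BEGIN
        UPDATE Mirror SET Val = NEW.Val WHERE Id = NEW.Id;
    END;
    -- deliberately no AFTER DELETE trigger, so deletes never reach the mirror
""")

conn.execute("INSERT INTO Source (Val) VALUES ('a')")
conn.execute("UPDATE Source SET Val = 'b' WHERE Id = 1")
conn.execute("DELETE FROM Source")

mirror = conn.execute("SELECT Id, Val FROM Mirror").fetchall()
```

After the delete, Source is empty but the mirror still holds the last inserted-and-updated row.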
|
Hi All,
I need to send BIGINT values as a parameter for an IN clause, but my example is not working. It says:
Error converting data type varchar to bigint.
Please check the tested sample below:
DECLARE @UidAgentGroupID VARCHAR(100)
SET @UidAgentGroupID='1,2,3,4'
SELECT markup_id FROM Agent_Markup_Fare WHERE UidAgentGroupID IN(@UidAgentGroupID)
So how can I send BIGINT values (more than one) to the IN clause (IN(@UidAgentGroupID))? I can't use a WHILE loop for this scenario.
Can you please help me to solve this?
Thanks in Advance.
|
The SQL compiler treats your comma-delimited string as a single value. It can't find a way to convert it to the BIGINT it needs, and it doesn't embed the string into your SQL statement to compose a new statement.
You must use dynamic SQL to solve these types of issues. This allows you to build the statement you want on the fly.
DECLARE @UidAgentGroupID VARCHAR(100)
SET @UidAgentGroupID= N'1,2,3,4'
DECLARE @sql NVARCHAR(1000)
SET @sql = N'SELECT markup_id FROM Agent_Markup_Fare
WHERE UidAgentGroupID IN(?INVARIABLES?)'
SET @sql = REPLACE(@sql,'?INVARIABLES?',@UidAgentGroupID)
EXEC(@sql)
The REPLACE statement can be replaced with simple string concatenation. I find that the REPLACE method is easier to read and therefore more maintainable. It can save you a lot of debug time when you are using multiple replacement variables.
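When the CSV comes from client code, an alternative to dynamic SQL is to expand it into one parameter placeholder per value, which also sidesteps SQL injection. A sketch in Python with SQLite, borrowing the table name from the thread (the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Agent_Markup_Fare "
             "(markup_id INTEGER, UidAgentGroupID INTEGER)")
conn.executemany("INSERT INTO Agent_Markup_Fare VALUES (?, ?)",
                 [(10, 1), (20, 2), (30, 9)])

csv_ids = "1,2,3,4"  # the caller's comma-delimited parameter
ids = [int(s) for s in csv_ids.split(",")]

# Build one placeholder per value ("?,?,?,?"); the values travel as
# parameters, so there is no string splicing and no injection risk.
placeholders = ",".join("?" for _ in ids)
sql = (f"SELECT markup_id FROM Agent_Markup_Fare "
       f"WHERE UidAgentGroupID IN ({placeholders})")
result = [r[0] for r in conn.execute(sql, ids)]
```

Only the rows whose group id appears in the list come back; the third row (group 9) is filtered out.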
|
Hi,
Many Thanks for reply.
I need to send parameter values to the SQL statement. I am using SQL Server 2000.
So I have tried this:
DECLARE @sqlStatement VARCHAR(1000)
SET @sqlStatement='SELECT @adultmarkup=SUM(dbo.CalculateMarkup(@fareprice,adultMarkup)),@childmarkup=SUM(dbo.CalculateMarkup(@fareprice,childMarkup)),@infantmarkup=SUM(dbo.CalculateMarkup(@fareprice,infantMarkup)),@studentmarkup=SUM(dbo.CalculateMarkup(@fareprice,studentMarkup))
FROM Agent_Markup_Fare WHERE fareid=@fareid AND (AgentsSubGroupID IN(INVARIABLES))'
SET @sqlStatement = REPLACE(@sqlStatement,'INVARIABLES',@AgentPreferencesIDS)
EXEC(@sqlStatement)
But an error message appeared:
Must declare the variable '@fareprice'.
But all the variables are declared in the stored procedure.
Can you please tell me how to solve the problem?
Thanks in advance.
|
Consider the EXEC call to be a separate function call. It is not aware of the local variables in your stored procedure; therefore, they are undefined inside the EXEC. Try something like this:
DECLARE @sqlStatement NVARCHAR(1000)
SET @sqlStatement='SELECT AgentsSubGroupID
FROM Agent_Markup_Fare
WHERE AgentsSubGroupID IN(INVARIABLES)'
SET @sqlStatement = REPLACE(@sqlStatement,'INVARIABLES',@AgentPreferencesIDS)
CREATE TABLE #tmpTbl (AgentsSubGroupId INT NOT NULL)
INSERT INTO #tmpTbl
EXEC(@sqlStatement)
SELECT
@adultmarkup=SUM(dbo.CalculateMarkup(@fareprice,adultMarkup)),
@childmarkup=SUM(dbo.CalculateMarkup(@fareprice,childMarkup)),
@infantmarkup=SUM(dbo.CalculateMarkup(@fareprice,infantMarkup)),
@studentmarkup=SUM(dbo.CalculateMarkup(@fareprice,studentMarkup))
FROM
Agent_Markup_Fare
WHERE
fareid=@fareid AND
AgentsSubGroupID IN (SELECT AgentsSubGroupID FROM #tmpTbl)
DROP TABLE #tmpTbl
This turns your CSV list into a temporary table, which allows you to use SQL's set-based operations as normal. It may not be optimal, but it should be enough to get you started in the right direction. Don't forget to drop the table when you are done with it. It has connection scope, but explicit cleanup is always advised.
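The temp-table technique can be sketched end to end in Python with SQLite (made-up data; a SUM over a single markup column stands in for the four SUMs in the real procedure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Agent_Markup_Fare "
             "(fareid INTEGER, AgentsSubGroupID INTEGER, adultMarkup REAL)")
conn.executemany("INSERT INTO Agent_Markup_Fare VALUES (?, ?, ?)",
                 [(7, 1, 1.5), (7, 2, 2.5), (7, 9, 9.0)])

# Materialise the CSV parameter into a temp table...
conn.execute("CREATE TEMP TABLE tmpTbl (AgentsSubGroupId INTEGER NOT NULL)")
conn.executemany("INSERT INTO tmpTbl VALUES (?)",
                 [(int(s),) for s in "1,2,3,4".split(",")])

# ...then the main query stays an ordinary set-based statement.
total = conn.execute(
    "SELECT SUM(adultMarkup) FROM Agent_Markup_Fare "
    "WHERE fareid = 7 "
    "AND AgentsSubGroupID IN (SELECT AgentsSubGroupId FROM tmpTbl)"
).fetchone()[0]

conn.execute("DROP TABLE tmpTbl")  # explicit cleanup when done
```

Only the rows whose subgroup id appears in the temp table contribute to the sum.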
|
Hello database gurus,
Say I have a table 'List' which represents a forward-only linked list:
Id   NextId   Text
---------------------------------
1    6        This is item 1
2    5        This is item 5
3    10       This is item 7
4    2        This is item 4
5    3        This is item 6
6    8        This is item 2
7    9        This is item 9
8    4        This is item 3
9    NULL     This is item 10
10   7        This is item 8
and I want to create a query which outputs 'This is item 1', 'This is item 2'..... down to 'This is item 10', what's the best way to do this?
I've managed to do it with recursion, but this can't be good for long lists. What about using a cursor or some clever index? Any ideas? Thank God I've never been asked anything like this at interview...
Regards,
Rob Philpott.
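For what it's worth, databases that support recursive common table expressions can walk the list in a single query. Here is a sketch in Python with SQLite using the table from the post (SQL Server 2005 has recursive CTEs too, written without the RECURSIVE keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE List (Id INTEGER PRIMARY KEY, NextId INTEGER, Text TEXT)")
rows = [(1, 6, "This is item 1"), (2, 5, "This is item 5"),
        (3, 10, "This is item 7"), (4, 2, "This is item 4"),
        (5, 3, "This is item 6"), (6, 8, "This is item 2"),
        (7, 9, "This is item 9"), (8, 4, "This is item 3"),
        (9, None, "This is item 10"), (10, 7, "This is item 8")]
conn.executemany("INSERT INTO List VALUES (?, ?, ?)", rows)

# Walk the list from the head (Id = 1), numbering each step so the
# final result can be ordered.
ordered = conn.execute("""
    WITH RECURSIVE walk(Id, NextId, Text, Step) AS (
        SELECT Id, NextId, Text, 1 FROM List WHERE Id = 1
        UNION ALL
        SELECT l.Id, l.NextId, l.Text, w.Step + 1
        FROM List l JOIN walk w ON l.Id = w.NextId
    )
    SELECT Text FROM walk ORDER BY Step
""").fetchall()
```

The recursion still touches one row per hop internally, so an Order column is faster for reads, but this keeps it to a single statement.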
|
This isn't really what SQL is designed to do. If you already know the order in which the rows are supposed to come, why not add an "OutputOrder" column and then use ORDER BY OutputOrder?
Assuming the start is always Id=1
DECLARE @Id int;
SELECT @Id = 1;
WHILE @Id IS NOT NULL
BEGIN
-- Do stuff you need to do here.
-- The next line moves to the next item in the linked list.
SELECT @Id = NextId FROM MyTable WHERE Id = @Id;
END
Please note, that this is incredibly inefficient. You might want to use this to populate an OutputOrder column which you can then use later to ensure the output is efficiently selected in the correct order.
|
Thanks for the reply Colin,
Why do you say this isn't really what SQL is designed to do? It seems perfectly acceptable, desirable even (well, to me at least!), to have an abstraction where a database can maintain a linked list of data. My operations would be 'return in order', 'delete item in list', 'insert item in list'.
As you suggest, you could have a column which maintains the order, but then inserts and deletes would be slow because you'd need to update this field in all later records. I guess what I'm looking for is a solution with good worst-case insert/delete/select performance.
Regards,
Rob Philpott.
|
Rob Philpott wrote: Why do you say this isn't really what Sql is designed to do? It seems perfectly acceptable, desirable even (well to me at least!) to have an abstraction where a database can maintain a linked list of data. My operations would be 'return in order', 'delete item in list', 'insert item in list'.
Because SQL is a set based language. It operates more efficiently on small numbers of large set operations rather than large numbers of small set operations.
Traversing a linked list requires a large number of small set operations. e.g. Each SELECT is a set based operation. If you have to traverse a linked list you need to perform a large number of select operations. If you have an Order column in the table then you just need one SELECT statement with an ORDER BY clause.
Remember also, that a linked list is really just an implementation detail of one way to store a set of data. It is useful in certain scenarios. I used linked lists many-a-time when I programmed in C++, but I also used arrays when the need arose.
To get around the insert and delete problem while keeping an Order column, the Order column could increment by 2. An insert would put the new row in with an odd number (first set-based operation). Immediately after, if you are using SQL Server 2005, you could renumber the Order column using the row-numbering feature and multiplying the result by 2 to get the new value (second set-based operation). Delete operations don't actually have to re-order anything, as the Order column is just for ordering and nothing else. Insert operations now trade off some of the speed they once had (because large tables will still take some time) for much faster iteration.
As it stands the WHILE loop suggestion I gave before will have (on anything more than a handful of rows) unacceptable iteration performance when getting the data out. (Large numbers of small set operations)
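To make the renumbering step concrete, here is a sketch in Python with SQLite (hypothetical table; in SQL Server 2005 the read-then-write below could be one set-based UPDATE using ROW_NUMBER()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items (Id INTEGER PRIMARY KEY, Ord INTEGER, Text TEXT)")
conn.executemany("INSERT INTO Items VALUES (?, ?, ?)",
                 [(1, 2, "a"), (2, 4, "c"), (3, 6, "d")])

# Insert between Ord 2 and 4 by giving the new row an odd Ord value...
conn.execute("INSERT INTO Items VALUES (4, 3, 'b')")

# ...then renumber everything back to even values (2, 4, 6, ...).
ids_in_order = [i for (i,) in conn.execute("SELECT Id FROM Items ORDER BY Ord")]
conn.executemany("UPDATE Items SET Ord = ? WHERE Id = ?",
                 [(2 * n, i) for n, i in enumerate(ids_in_order, start=1)])

ordered = [t for (t,) in conn.execute("SELECT Text FROM Items ORDER BY Ord")]
```

Reading out the list is now a single ORDER BY, and the even-numbered gaps are ready for the next odd-valued insert.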
|
Thanks again for your input Colin,
I guess when you reach a situation like this it's probable that the model isn't quite right and needs a rethink. In my experience, everything fits nicely into place when done properly, and this doesn't sit well!
What I was trying to achieve was a generic note facility, where users could add notes to different things. eg. in a database I might have tables Car, Book, Person, Tree representing the objects I'm storing data about. In each of these tables I wanted a 'NoteId' column which could point to the first of a list of notes which could be added and removed at will by users. This way the same note table could be used by different objects.
I suppose the other more traditional way of doing this is via use of one or more intermediate tables which map one-to-many objectIds to Notes eg. CarNotes, BookNotes, PersonNotes, TreeNotes, and I guess this is good because it allows FK constraints to be implemented, but I just wanted to avoid these extra tables. They're almost 'clutter'.
What's the weather like in Scotland? It's been a bit stormy down in London over the past few days.
Regards,
Rob Philpott.
|
From what you describe I don't think you need intermediate tables. A compound primary key on the note table might suffice, assuming you don't have one note that needs to be shared among multiple other tables.
The MainId would be what other tables use to link to the notes - so it uniquely defines a group of notes. The SecondaryId is just to uniquely identify each note within the group.
Perhaps that might work. If your system is going to be busy and there may be a scenario where two processes want to create a new group of notes simultaneously, then I'd recommend a GUID for the MainId: GUIDs won't collide, whereas you cannot use an Identity (it would cause problems when you wanted to put a second note in an existing group), and you can't use MAX(MainId) because two processes might try the same thing at the same time.
Your other option is intermediate tables. I don't really think they clutter up the place, but if you think they do, then you could always prefix their names with something to keep them out of the way.
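A sketch of that compound-key design in Python with SQLite (table and column names are illustrative):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
# MainId identifies a group of notes (what Car, Book, etc. would store);
# SecondaryId distinguishes the notes within a group. Together they form
# the compound primary key.
conn.execute("""
    CREATE TABLE Notes (
        MainId TEXT NOT NULL,
        SecondaryId INTEGER NOT NULL,
        Body TEXT,
        PRIMARY KEY (MainId, SecondaryId)
    )
""")

group = str(uuid.uuid4())  # GUIDs are vanishingly unlikely to collide
conn.execute("INSERT INTO Notes VALUES (?, 1, 'first note')", (group,))
conn.execute("INSERT INTO Notes VALUES (?, 2, 'second note')", (group,))

notes = [b for (b,) in conn.execute(
    "SELECT Body FROM Notes WHERE MainId = ? ORDER BY SecondaryId", (group,))]
```

Any owning table just stores the group's MainId, so one Notes table serves Car, Book, Person, and Tree alike.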