You could try something like this:
SELECT a.pk, b.pk
FROM FirstDatabase.dbo.TableName AS a
FULL OUTER JOIN SecondDatabase.dbo.TableName AS b ON a.pk = b.pk
WHERE a.pk IS NULL
OR b.pk IS NULL
pk = primary key. If you have a compound key, then you will need to include all the columns that make up the primary key.
The results of the query should (I haven't tested it) return any rows that exist in one database but not in the other.
If you want to return all rows that have differences then you might want to add to the WHERE clause:
OR a.column1 <> b.column1
OR a.column2 <> b.column2 ...and so on for each of the columns.
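Putting both checks together, a full comparison query might look like the sketch below (the table and column names are placeholders for your own schema; note also that `NULL <> NULL` evaluates to UNKNOWN, so nullable columns need extra handling):

```sql
-- Hypothetical schema: adjust the table and column names to match your own.
SELECT a.pk, b.pk
FROM FirstDatabase.dbo.TableName AS a
FULL OUTER JOIN SecondDatabase.dbo.TableName AS b
    ON a.pk = b.pk
WHERE a.pk IS NULL           -- row exists only in SecondDatabase
   OR b.pk IS NULL           -- row exists only in FirstDatabase
   OR a.column1 <> b.column1 -- row exists in both, but the values differ
   OR a.column2 <> b.column2
```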
ColinMackay.net
Scottish Developers are looking for speakers for user group sessions over the next few months. Do you want to know more?
Sorry I should have mentioned the two DB's are actually running on 2 different servers
That's okay. The naming convention extends to servers. Just add in the server name like this:
ServerName.DatabaseName.SchemaName.TableName
SchemaName is dbo, unless you've set it up otherwise.
You will also have to link the two servers together. You might find this useful: MSDN: Configuring Linked Servers[^]
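As a sketch (the server names here are hypothetical), once the remote server is linked, the earlier comparison query just gains an extra server prefix:

```sql
-- Register the remote server once (names are placeholders):
EXEC sp_addlinkedserver @server = N'SecondServer', @srvproduct = N'SQL Server';

-- Then use a four-part name for the remote table; the local one stays as-is:
SELECT a.pk, b.pk
FROM FirstDatabase.dbo.TableName AS a
FULL OUTER JOIN SecondServer.SecondDatabase.dbo.TableName AS b
    ON a.pk = b.pk
WHERE a.pk IS NULL
   OR b.pk IS NULL
```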
ColinMackay.net
Scottish Developers are looking for speakers for user group sessions over the next few months. Do you want to know more?
I'm having a performance issue with this ASP.NET app. The SQL Server log contains MANY, MANY entries which resemble the following:
SQL Server log: Login succeeded for user 'APPL_ACCOUNT'. Connection: Non-Trusted.
I'm talking thousands in one morning, and I suspect that this is the cause of the performance degradation. I thought that when you call
conn.Open();
ADO.NET draws an existing connection from the ADO.NET connection pool, and you don't "login" again? Is this assumption right? I ran a very, very simple test:
using System;
using System.Data;
using System.Data.SqlClient;

class PoolTest
{
    static void Main(string[] args)
    {
        String s_conn = "Data Source=127.0.0.1;Initial Catalog=pubs;User Id=sa;Password=secret;Max Pool Size=80;Min Pool Size=30;";
        IDbConnection oconn;
        Int32 i;

        oconn = new SqlConnection(s_conn);
        for (i = 0; i < 100; i++)
        {
            oconn.Open();
            oconn.Close();
        }
    }
}
This did NOT generate a bunch of "Login succeeded" entries in my SQL Server log. In fact, the surprise was that there was NOT even ONE "Login succeeded" registered.
Btw, I'm using NHibernate... and the way NHibernate opens an ISession (which corresponds to an IDbConnection) is as follows:
Dim _nhibernate_conn_factory As ISessionFactory = BuildFactory() 'This is time consuming so we only create it once...

Public Function GetWarehouseNHibernateSession() As ISession Implements IWarehouseConnectionManager.GetWarehouseNHibernateSession
    Dim conn As ISession
    Dim maxRetry, retryFreq As Int32

    Try
        conn = _nhibernate_conn_factory.OpenSession()
    Catch ex As System.Data.SqlClient.SqlException
        'handle the exception
    Catch ex As Exception
        'handle the exception
    End Try

    Return conn
End Function
My finding is that for each OpenSession there's a corresponding entry in the SQL Server log:
"SQL Server log: Login succeeded for user 'APPL_ACCOUNT'. Connection: Non-Trusted."
Is this normal? On one of my pages there are 35 OpenSession calls - it seems like this is what's slowing down the application. Any advice? Thanks!
If you want to look deeper into NHibernate's code, look here
ADO.NET connection pooling REF: http://www.15seconds.com/issue/040830.htm
NHibernate REF: http://nhibernate.sourceforge.net/NHibernateEg/NHibernateEg.Tutorial1A.html
I just finished reading about NHibernate a couple of days ago (I've only played with it for a while, haven't used it myself).
From my understanding, you need to use and keep just one session object for each user (and each page) that is logged in. All the samples I saw use only one session; for updating, the sample either loads from the database again before updating, or stores the session in the ASP.NET Session variable for later use.
Apologies if it's a bit unclear. I'm a bit tired after a weekend full of wedding (not mine) and moving out.
Edbert
Sydney, Australia
"A day without sunshine is like, you know, night."
Hey thanks. Just one question first. SessionImpl implements ISerializable. Do you think that it will be compatible with NLB (Network Load Balancing)+StateServer if I cache NHibernate.ISession/SessImpl in HttpSession?
I reckon it can if you use SQL Server for the HttpSession state, but if you use in-proc session with multiple servers I'm not sure.
You don't actually need to cache the NHibernate.ISession itself.
I saw a different implementation which re-queries the database for the previous state before updating it with data from the user, and then commits to the database (it was a sample of how to use NHibernate with ASP.Net; I don't like that sample because it has to re-query the database just for that).
There is a good BugTracker[^] project using ASP.Net 2.0 and NHibernate that you might want to look at for a more real-life sample of using NHibernate.
Edbert
Sydney, Australia
I was thinking of caching NHibernate.ISession/SessionImpl in HttpSession on session start, then closing it when the session ends. But I want to make sure SessionImpl is compatible with NLB+StateServer before I make the changes. Alternatively, I could cache it as a page member variable - though that's messier and more cluttered.
When you call
conn.Open();
or
sess.Open();
and the connection/session comes from the ADO.NET connection pool, a "login" will be executed against your SQL Server instance. Now, depending on the audit level configured on your particular SQL Server instance (None, Failure only, Success only, All): if the audit level is "All", then every successful login is logged in your SQL Server log with a message which resembles "Login succeeded..."
We're having some performance problems with our database, where we are having to scan tables in order to find data created by a particular user. "Users" are just stored in a table with name, password, etc. Some of our stored procedures require that we find data belonging to a particular user.
Our database guy suggested a possible fix: each user will use SQL authentication to connect to SQL Server, passing his own user name and password in the connection string. The indexed view will then look only at that user's data; it will be a view on the user's data only. He says this can be done because the view shows data only for that user (identified by the connection string).
To me, this sounds like a hack, but not being a big database guy, I don't really know if it's feasible or a good solution. Any thoughts from you guys?
Tech, life, family, faith: Give me a visit.
I'm currently blogging about: Connor's Christmas Spectacular!
Judah Himango
The pro is that the view will return only the user's data because, I'm guessing, it will be defined as something like this:
CREATE VIEW UserData AS SELECT * FROM MyTable WHERE [User] = USER
However, I don't see where the performance gain is in this, because it will still be doing the same work as before (it's just hidden).
Passing the user name and password in the connection string means that your application will lose some of its ability to use connection pooling. (If this is a thick client windows application with only one user at a time using it then this isn't really a problem, however if you have a web application with many users accessing it then losing the ability to effectively use connection pooling could introduce performance problems - However, I've never done any testing on that so I don't know. I'm flagging it as it may be a concern and something to look in to).
From your description of the problem, I'd say that you have a problem with indexes rather than anything else. Indexing is also the easiest solution, because it won't require views to be created or code to be changed to use dynamic connection strings and so on.
Look at what columns are being referenced most often in the where clause (and it sounds like it will be something like the UserId column) and index it. If you are using SQL Server 2000 there is an index tuning wizard you may want to look at.
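As an illustration (all the object names here are made up), that can be as simple as:

```sql
-- Hypothetical names: an index on the column most WHERE clauses filter by.
CREATE INDEX IX_MyTable_UserId ON MyTable (UserId);

-- On SQL Server 2005+, a covering index can also INCLUDE the columns the
-- query selects, so lookups never have to touch the base table:
CREATE INDEX IX_MyTable_UserId_Covering
    ON MyTable (UserId) INCLUDE (Column1, Column2);
```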
Does this help?
ColinMackay.net
Scottish Developers are looking for speakers for user group sessions over the next few months. Do you want to know more?
Yeah, I think it does help. I believe the performance gain comes from the fact that the view is created based on the current SQL user - something about how that works.
For indexing, there are certain tables we cannot index because they have TEXT columns. I understand that SQL 2005 now has varchar(MAX), which is indexable and can store something like 2 billion characters... we will have to look into that more, though.
Thanks Colin.
Tech, life, family, faith: Give me a visit.
I'm currently blogging about: Connor's Christmas Spectacular!
Judah Himango
I doubt creating a per-user connection string will be of much help.
You might want to have a look at SQL 2005 partitioning[^] instead.
From what I have learned/been told (I haven't tested this myself, but there are performance statistics in the article), partitioning segments your data for better performance - you can even store the data on different hard disks.
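As a sketch of what that looks like in SQL Server 2005 (all names and boundary values below are hypothetical):

```sql
-- Hypothetical example: partition a table by ranges of UserId.
CREATE PARTITION FUNCTION pfUserRange (int)
AS RANGE LEFT FOR VALUES (1000, 2000, 3000);

-- Map every partition to a filegroup (all to PRIMARY here, but each
-- partition could go to a filegroup on a different disk):
CREATE PARTITION SCHEME psUserRange
AS PARTITION pfUserRange ALL TO ([PRIMARY]);

-- Create the table on the partition scheme, keyed by UserId:
CREATE TABLE UserData
(
    UserId  int          NOT NULL,
    Payload varchar(100) NULL
) ON psUserRange (UserId);
```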
Edbert
Sydney, Australia
According to our DB guy, it isn't that the per-user connection itself gives good perf; rather, it allows you to create indexed views showing data only for that user...
Thanks for the partitioning pointer, I'll have a look at that.
There are two tables, HP_AccountName and T_Values. HP_AccountName contains all the main accounts; T_Values contains all operations on the accounts. Sample tables follow.
HP_AccountName:
HP_Level  HP_No  HP_AccountName
1         1      Main1Account
2         10     Main1SubAccount1
3         100    SafeBoxes
2         12     Main1SubAccount2
3         120    Customers
1         2      Main2Account
2         20     Main2SubAccount1
3         200    Renders

T_Values (an account can be written on different lines):
MFD_AccountNo  MFD_Total
1200254        10,000
1002865        15,000
1009431        20,000
2005454         0,500
2000024         5,000
1205471        35,000
1205471         0,600
1205471         0,400
2006300        48,500
All I want is a result set like:
HP_No  HP_AccountName          TOTAL
1      Main1Account            81,000
10       Main1SubAccount1      35,000
100        SafeBoxes           35,000
12       Main1SubAccount2      46,000
120        Customers           46,000
2      Main2Account            54,000
20       Main2SubAccount1      54,000
200        Renders             54,000
I wrote something like this, but I couldn't do exactly what I want:
SELECT SPACE(HP_Level * 3) + HP_AccountName,
       (SELECT SUM(MFD_Total) FROM T_VALUES WHERE MFD_AccountNo LIKE HP_No + '%')
FROM T_ACCOUNTS
WHERE HP_Level <= 1
ORDER BY HP_No
-- modified at 7:22 Thursday 16th February, 2006
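For what it's worth, here is an untested sketch of one way to get the rolled-up totals. It assumes, as the sample data suggests, that every MFD_AccountNo starts with the HP_No of each of its ancestor accounts, that the numeric columns need casting before LIKE can be used, and that the HP_Level filter is dropped so sub-accounts appear too:

```sql
SELECT a.HP_No,
       SPACE(a.HP_Level * 3) + a.HP_AccountName AS HP_AccountName,
       (SELECT SUM(v.MFD_Total)
        FROM T_VALUES v
        WHERE CAST(v.MFD_AccountNo AS varchar(20))
              LIKE CAST(a.HP_No AS varchar(10)) + '%') AS TOTAL
FROM T_ACCOUNTS a
-- ordering by the string form keeps children listed under their parent
-- (1, 10, 100, 12, 120, 2, 20, 200):
ORDER BY CAST(a.HP_No AS varchar(10))
```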
You might want to put your table examples inside <pre></pre> blocks. That way it will be readable.
ColinMackay.net
"Man who stand on hill with mouth open will wait long time for roast duck to drop in." -- Confucius
"If a man empties his purse into his head, no man can take it away from him, for an investment in knowledge pays the best interest." -- Joseph E. O'Donnell
Hi.
Does anyone know of a good freeware tool to simply draw UML diagrams for relational databases?
TIA
Hi,
try searching Google for Poseidon for UML. Although I'm not a UML expert, I use this application and it looks great. It has a free Community Edition.
Best regards,
Juan Pedro Pérez
Juan Pedro Pérez wrote: I use this application and it looks great
I agree with Juan. I've been using the Community Edition for the past few years during my graduate studies and it works very nicely. You may also want to check out Argouml even though it is Java based.
Hope this helps,
Paul
Thanks guys.
Having sniffed around I also found VisioModeler at Microsoft which is free as an unsupported product. First impressions look promising.
Hello,
I'm new to ADO.NET and I'm working on my first little application. I feel I'm missing something really important about this architecture, and I'm messing things up.
Let's say I have an Access database with a Table A. Table A has an AutoNumber PK, so I don't have to worry about it. Up to this point, great. I managed to configure my DataAdapter with commands so that I don't have to worry about the PK, since Access does this work for me.
Next, I create a Table B that has the aforementioned Table A PK as an FK. I duplicate these two tables in a DataSet, create a new row in A, and then intend to create rows in B related to this new A row. And here is where all hell broke loose.
As this is a single-user, non-concurrent app, I decided to control the A PK directly. During DataSet construction, I read the highest PK from the database and create, in the DataSet's Table A, a PK field with AutoIncrement set to True and its seed fixed to that maximum read from the DB. Then I generate every new DataSet Table A row with a controlled PK, so I'm able to control its value and generate related rows in DataSet Table B. Although I suspect this is a clumsy solution, for now it works.
But the question readily arises: what if my app should be multiuser? How can I "predict" new values of an AutoNumber PK controlled by the DB, so that I can relate rows to other tables off-line in my DataSet and then merge my transactions with those of other users without PK conflicts?
Please drop me a line. I suspect I'm missing something fundamental, and any indication could change all my messy mental schemes about ADO.NET.
Thanks in advance and greetings from Spain,
Juan Pedro Pérez
Hi Juan. I wish my Spanish were as good as your English.
My initial question is: why are you manipulating rows in an in-memory DataSet rather than just INSERTing a row into TableA in the database, then INSERTing related rows into TableB? Upon inserting the row into Table A, you can retrieve the newly created AutoNumber by issuing a SELECT @@Identity.
Here's a (very raw, but complete) example of what I mean (in C#):
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.OleDb" %>
<%@ Page Language="C#" %>
<script runat="server">
void btnSubmit_Click(object o, EventArgs e)
{
    const string kCONNECT_STRING =
        "Provider=Microsoft.Jet.OLEDB.4.0; "
        + @"Data Source=c:\inetpub\wwwroot\tests\test.mdb";
    const string kINSERT_STATEMENT =
        "INSERT INTO [MyTable] ([E-Name], [E-Email]) "
        + " VALUES (@pName, @pEmail) ";
    OleDbConnection con = null;
    OleDbCommand cmd = null;
    OleDbParameter pName = null;
    OleDbParameter pEmail = null;
    int insertedID = -1;
    try
    {
        con = new OleDbConnection(kCONNECT_STRING);
        con.Open();
        cmd = new OleDbCommand(kINSERT_STATEMENT, con);
        pName = new OleDbParameter("@pName", OleDbType.VarChar);
        pName.Value = tbName.Text;
        cmd.Parameters.Add(pName);
        pEmail = new OleDbParameter("@pEmail", OleDbType.VarChar);
        pEmail.Value = tbEmail.Text;
        cmd.Parameters.Add(pEmail);
        cmd.ExecuteNonQuery();
        // Retrieve the AutoNumber assigned to the row we just inserted.
        cmd.CommandText = "SELECT @@Identity";
        cmd.CommandType = CommandType.Text;
        cmd.Parameters.Clear();
        insertedID = (int)cmd.ExecuteScalar();
        Response.Write(
            string.Format("Record has been inserted and assigned ID# {0}.",
                insertedID));
    }
    catch (Exception x)
    {
        Response.Write("There has been an error. ");
        Response.Write(x.Message);
    }
    finally
    {
        if (cmd != null) cmd.Dispose();
        if (con != null) con.Dispose();
    }
}
</script>
<html>
<head>
</head>
<body>
<form runat="server">
<h3>Insert into Access</h3>
<table>
<tr>
<td>Name</td>
<td><asp:TextBox id="tbName" runat="server" /></td>
</tr>
<tr>
<td>Email</td>
<td><asp:TextBox id="tbEmail" runat="server" /></td>
</tr>
</table>
<asp:Button id="btnSubmit" runat="server" text="Insert"
onclick="btnSubmit_Click" />
</form>
</body>
</html>
I see. I think I was trying to do things "too off-line". I read from the very start that a disconnected philosophy lies behind ADO.NET, but it is clear that some tasks are very difficult to accomplish that way. I was trying to make all the changes off-line and then run a bunch of DataAdapter.Update() calls to commit them all. In this case, when tables rely on the PKs of other tables, the INSERTs should be done immediately to obtain a solid PK, shouldn't they?
Thanks for your help,
Juan Pedro Pérez
I would say that ADO.NET supports a disconnected DataSet - disconnected in that it is not specifically tied to any brand of relational database (and in fact can be created entirely in memory if desired).
But ADO.NET does support a consistent model of connecting to any brand of relational database (that has an ADO.NET provider) to either query for data, or modify data.
A common operation, for example, is to use connection-specific objects to issue a SELECT statement to a specific database. For SQL Server, these objects would be SqlConnection, SqlCommand, and SqlDataAdapter. For an Access file, these objects would be OleDbConnection, OleDbCommand, and OleDbDataAdapter. The DataAdapter object is then used to .Fill a disconnected DataSet with the SELECT results. The disconnected DataSet may then be used throughout your application as necessary.
This is particularly useful when designing N-tier applications, where you want to separate database-specific code (into a Data tier) from your presentation code (.aspx pages). For example, your Data tier (think: custom class) could have a method that encapsulates the code necessary to connect to an Access database and query for a list of employees. This method could return this data as a DataSet object. The presentation code (your .aspx page) could then call this method and databind the resulting DataSet object to, for example, a GridView .
Now imagine that you want to replace your backend database, upgrading from Access to SQL Server. You would need to change the database-specific code in your Data tier, but you would not have to change your presentation layer at all. As long as your method for retrieving employees from SQL Server returns the disconnected DataSet , which is what your presentation layer expects, you don't need to make changes to the presentation code.
This is certainly an over-simplification of a discussion on N-tier architecture and Data tiers, but hopefully it gives you a sense of why the disconnected DataSet object in ADO.NET is useful (but also why you'll still use Database-specific code in ADO.NET too).
Got it. Thanks for your clear explanation. The N-tier deployment you described is what I was trying to implement, with little success. I find the learning curve of ADO.NET a lot steeper than that of conventional ADO, but I know it's worth the time. A lot more hours of programming and everything will be OK.
Greetings,
Juan Pedro Pérez