Indra PR wrote: the transfer is so damn slow
Operations done over a linked server may be performed very differently from those executed locally. You can use the execution plan to investigate what is being done and why it is slow. So you should investigate each problem statement (multi-database joins in particular are often problematic).
The idea in optimizing multi-server statements is to transfer the minimal amount of data over the linked server, so in many cases the SQL may have to be rewritten.
Indra PR wrote: Are there any better ways to access database from multi-server in one application
A linked server is intended exactly for this. Of course there are other techniques, such as replication, but in those scenarios the data is typically not up to date, or you cannot update the data on the other end (stand-by solutions etc.).
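To make this concrete, here is a minimal sketch of how a linked server is defined and queried in T-SQL; the server name REMOTESRV, the host remotehost, and the database/table names are all made up for illustration:

```sql
-- Define a linked server pointing at another SQL Server instance
-- (REMOTESRV, remotehost, SomeDb and RemoteTable are hypothetical names).
EXEC sp_addlinkedserver
    @server     = 'REMOTESRV',
    @srvproduct = '',
    @provider   = 'SQLOLEDB',
    @datasrc    = 'remotehost';

-- Map local logins to a remote login (here: use the caller's own credentials).
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'REMOTESRV',
    @useself    = 'true';

-- Query the remote table with a four-part name.
SELECT *
FROM REMOTESRV.SomeDb.dbo.RemoteTable;
```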
Mika Wendelius wrote: especially multi database joins are often problematic
I've read something about this before; it is said that performance will be better if we first create a temporary table from the table in the other database, and then join the temporary table with the second table. Is that right?
Mika Wendelius wrote: Linked server is intended for this. Of course there are other techniques such as replication etc, but the data is not typically up to date in those scenarios or you cannot update the data on the other end (stand-by solution etc)
If so, then it is right for me to use a linked server, and I don't have any other choice here; the only option is to optimize the query. Isn't it? Correct me if I'm wrong.
- No Signature Available -
Indra PR wrote: I've read something about this before, it is said that the performance will be better if we create a temporary table from the table in other database first.
That can be part of the solution, but not the solution itself. The main point is how much data you transfer through the link. For example, if you have a query like
select ...
from linkedservertable
join localtable on ...
where ...
SQL Server may have to transfer the whole linked server table to this server before the join or row elimination can be done. If the same query can be rewritten in the form
select ...
from localtable
(possibly joined to linkedservertable)
where joiningcolumn in (select keycolumns
                        from linkedservertable
                        where restrictive conditions...)
the performance may be very different, since the elimination may be done at the linked server. This is a very simple example, but hopefully it points out the idea. Temporary tables can be used for that exact purpose if the operation cannot be rewritten otherwise.
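As a sketch of the temporary-table variant mentioned above (all object names are hypothetical), the restrictive part of the query can be pushed to the remote server with OPENQUERY, so that only the matching rows travel over the link:

```sql
-- Run the restrictive part remotely; only its result set crosses the link.
SELECT keycolumn, somecolumn
INTO #remote_rows
FROM OPENQUERY(REMOTESRV,
    'SELECT keycolumn, somecolumn
     FROM SomeDb.dbo.RemoteTable
     WHERE restrictivecolumn = 42');

-- Join locally against the (small) temporary table.
SELECT l.*, r.somecolumn
FROM LocalTable AS l
JOIN #remote_rows AS r ON r.keycolumn = l.joiningcolumn;

DROP TABLE #remote_rows;
```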
Indra PR wrote: then it is right for me to use linked-server, and I don't have any other choice of this
Based on your description I would say that linked server is a good way to handle your situation.
One more thing: if you insert/update/delete data on both servers in the same transaction, you are forced to use distributed transactions. MS DTC takes care of this, but it's good to know that it adds extra overhead to transaction handling, which in turn makes the operation a bit slower. However, if you want the transaction to be ACID, that's the correct way to do it.
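A minimal sketch of such a distributed transaction (server, database, and table names are invented); MS DTC must be running on both machines for this to commit:

```sql
BEGIN DISTRIBUTED TRANSACTION;

-- Local modification.
UPDATE dbo.Orders
SET status = 'shipped'
WHERE orderid = 1001;

-- Remote modification through the linked server; DTC enlists the
-- remote server in the same transaction.
UPDATE REMOTESRV.SomeDb.dbo.Orders
SET status = 'shipped'
WHERE orderid = 1001;

COMMIT TRANSACTION;
```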
Dear all, I have some problems with SQL Server 2000 that I want to ask about. I am using the database over an OLEDB connection called from VB.NET. The application I am making is ERP software.
1. Sometimes, and it happens often in the application, there is an error saying "Connection is busy".
2. Sometimes the application also shows a dialog box with a connection error saying "Cannot create a new transaction because capacity was exceeded".
I don't know how to solve them. I've tried to look for answers on Google, but the only answer I found is that I should update the service pack to the latest version. I've checked my database, though, and it is already on SP4. As for the application itself, some people said that maybe I forgot to close the connection, but I've made only one function to fill a DataTable, and I've put Connection.Close() at the end, after every transaction. Is there a better answer for these?
- No Signature Available -
Check your connection pooling settings; you may be hitting the limit. Also check that you close connections properly using the Close method after you have used them, and that you end transactions gracefully.
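For reference, OLE DB session pooling is normally controlled from the connection string rather than on the server; the server and database names below are placeholders, and the exact keywords depend on the provider:

```
Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;OLE DB Services=-1
    (all OLE DB services enabled, including session pooling)

Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;OLE DB Services=-4
    (pooling and auto-enlistment disabled, useful for comparison when debugging)
```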
I still don't know about connection pooling, Mika. Is it possible that the problem was caused by a lot of users of the application accessing the database at the same time? Can you give me a short description of that and what I should do to check it, or maybe a reference link? Thanks in advance.
- No Signature Available -
Indra PR wrote: I still don't know about the connection pooling Mika
Connection pooling is quite well described here: OLE DB, ODBC, and Oracle Connection Pooling [^].
Indra PR wrote: Is it possible that the problem was caused by the fact that a lot of users of the application were accessing the database at the same time
If you have a lot of simultaneous users (thousands of them), or if you have limited the maximum number of connections, that's possible. You can check the maximum connection count using:
SELECT @@MAX_CONNECTIONS
You can find the maximum capacity specifications here: http://msdn.microsoft.com/en-us/library/aa933149.aspx[^]
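If it helps, the administrator-configured limit (as opposed to the theoretical maximum) can also be checked; this is standard T-SQL on SQL Server 2000:

```sql
-- Theoretical maximum number of simultaneous connections for this instance.
SELECT @@MAX_CONNECTIONS;

-- The configured limit; 0 means "dynamic, up to the maximum".
EXEC sp_configure 'user connections';
```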
Indra PR wrote: I've put the Connection.Close()
Although you have put a close request after every command, does your program make sure that Connection.Close() is executed every time?
I mean, if there is an error during the transaction, is the Close() method still called?
Try code like this:
try
{
    Connection.Open();
    // execute your commands here
}
catch (Exception ex)
{
    // handle or log the error
}
finally
{
    // runs every time, even after an exception
    Connection.Close();
}
Two things you should check:
1. When your application starts giving the reported error, how many users are connected to your server at that instant, and to that particular database?
2. How many requests from the application are running in the connection pool?
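On SQL Server 2000, point 1 can be answered from the sysprocesses system table; a sketch with no assumptions beyond a standard installation:

```sql
-- Number of open connections per database at this moment.
SELECT DB_NAME(dbid) AS database_name,
       COUNT(*)      AS connection_count
FROM master.dbo.sysprocesses
WHERE dbid > 0
GROUP BY dbid
ORDER BY connection_count DESC;
```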
Yep, my code is exactly the same as yours. I still don't understand the pool, though. Earlier Mika also said something about connection pooling. What is that? I've read the reference but still don't understand it; I only know that connection pooling is a feature in ADODB to minimize the cost of opening connections. Do we have to set it up first, or what?
- No Signature Available -
What is the edition of your SQL Server?
SQL Server 2000 Enterprise Edition
- No Signature Available -
Actually, I have a database table with four fields: rnkid, rnkartid, rnkuserid, and rnkmarks. I want to sum all the marks that have the same rnkartid. Please tell me how.
If I understood you correctly, your query could be something like:
select rnkartid, sum(rnkmarks)
from yourtable
group by rnkartid
See SUM[^] and GROUP BY[^] for more details.
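With some invented sample rows (the table name yourtable is a placeholder), the query and its result would look like this; the grouping column is included in the select list so the sums can be told apart:

```sql
-- Hypothetical sample data:
-- rnkid  rnkartid  rnkuserid  rnkmarks
--   1       1         10         10
--   2       1         11         20
--   3       2         10          5

SELECT rnkartid, SUM(rnkmarks) AS total_marks
FROM yourtable
GROUP BY rnkartid;

-- Result:
-- rnkartid  total_marks
--    1          30
--    2           5
```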
Hi,
I'm new to T-SQL programming in SQL Server with a C++ application.
Please provide some examples.
With Regards
Mahesh
Hi,
I need to transfer data from Excel sheets to SQL Server, but the thing is that the source Excel file has several sheets, each containing data,
and I need to move the entire contents (all the data present in all sheets of the Excel file) into a single table in SQL Server.
Likewise I have many xls files (each with many sheets).
For example:
excel file 1:
-> sheet 1
-> sheet 2
-> sheet 3
excel file 2:
-> sheet 1
-> sheet 2
-> sheet 3
excel file 3:
-> sheet 1
-> sheet 2
-> sheet 3
Now I need to get the data from all of the files and insert it into a single SQL Server table in an SSIS package.
Please help me with a solution.
thanks
AFAIK you would have to do this sheet by sheet. For more info: Excel Source[^]
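Outside of SSIS, the same sheet-by-sheet idea can be sketched in T-SQL with OPENROWSET and the Jet provider; the file path, sheet name, and column names below are assumptions, and ad hoc distributed queries must be allowed on the server:

```sql
-- Import one worksheet of one workbook into the target table.
-- Repeat (or generate) one such statement per sheet and per file.
INSERT INTO dbo.TargetTable (col1, col2)
SELECT col1, col2
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\data\file1.xls;HDR=YES',
                'SELECT col1, col2 FROM [Sheet1$]');
```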
Are you kidding or serious?
Any suggestions, ideas, or 'constructive criticism' are always welcome.
"There's no such thing as a stupid question, only stupid people." - Mr. Garrison
I am using MySQL version 5.1 on each of the networked PCs; there are 7 in all. How do I back up all the databases on server 1 and append-restore the backup on server 2 so it gets all the changes from the first server? I will do this with 7 servers in progression so they loop every 15 minutes. I am using Windows XP Professional on each of the 7 servers, and they all have the same databases. Each server would have all the new records in the tables from the previous server. I have tried using replication on them all, but it does not update when a record is added or changed on any of the 7 servers. When records are added or changed on any of the 7 servers, the other 6 must be updated with the change or addition. I appreciate any suggestions, help, or topology changes.
Regards
BC
BobClarkSQL wrote: How do I Backup all the Databases on Server 1 and Append Restore the Backup on Server 2 so it gets all the changes from the First Server, I will do this with 7 Servers in Progression so they Loop Every 15 Minutes
I'm not sure what you're trying to achieve with this. If you constantly restore modifications using backups from one database to another, you'll have a stand-by server. However, you're unable to modify anything on the stand-by (or on the 7 stand-bys in this case). Is this really what you want?
BobClarkSQL wrote: I have tried using Replication on them all but it does not update whena recors is added or changed on any of the 7 Servers
AFAIK MySQL doesn't support multi-master replication (in other words, merge replication). The only thing that comes close is clustering[^], but that's intended for distributed scenarios.
modified on Thursday, January 22, 2009 9:39 AM
Yes. I need a solution that does not require any licensing or cost. The information has to be updated automatically with some frequency (say every 5 to 15 minutes), so that all 7 stand-alone servers have the same information, including new records and record changes, in all of their databases, which are identical in nature. It can be accomplished with any free tool or package that works with MySQL 5.1. They all communicate with each other; I granted each of them access for replication and set up the relay and binary log files as replication requires, but none of the servers gets updated when new records are entered or changed on any server. A backup on any server takes less than a minute. If it can be restored on the next server in the chain, and vice versa, it should be OK, as long as it can be deployed automatically and staggered so there are no write conflicts.
Regards
BC
I'm sorry, but I don't get it. Consider for example the following timeline:
Server A    Server B
--------    --------
insert 1       |
   |        insert 2
   |           |
Backup 1       |
   |        insert 3
   |           |
   |        Restore 1
When Backup 1 is restored in Restore 1, inserts 2 and 3 are lost.
Data loss is not critical. If the migration is run every 5 minutes, they all synchronize with each other fine. Each server is in a stand-alone world; the other servers' information is not critical as long as it gets there eventually.
Regards
BC
So I guess you have several databases on each machine.
Anyhow, the backup is taken using the Backup[^] command and then restored using the Restore[^] command. You can schedule these using, for example, the Windows Task Scheduler and a command script.
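A rough sketch of such a command script for MySQL (the host names, credentials, and paths are all made up; note that restoring a dump drops and recreates the tables, which is exactly how the inserts discussed above get lost):

```bat
REM Dump every database on server A, then load the dump into server B.
REM Schedule this with the Windows Task Scheduler every 5-15 minutes.
mysqldump --host=serverA --user=backupuser --password=secret --all-databases > C:\dumps\serverA.sql
mysql --host=serverB --user=backupuser --password=secret < C:\dumps\serverA.sql
```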
The backup must be done automatically on server A and then restored immediately afterwards on server B, until all 7 servers have the previous server's information. When server B completes its restore from server A, it must start a backup and continue the sequence chain until all servers are updated automatically. There are 10 databases on each server, identical in nature.
Regards
BC