|
Actually, the servers are using a SCSI controller with four 10k RPM drives. I'd have to check the auto-commit tip; I never thought about that before. I can't do the bulk copies, unfortunately, since I query the data during the day.
|
|
|
|
|
StevenWalsh wrote: I can't do the bulk copies, unfortunately, since I query the data during the day.
You can if you load the data into a staging database and use a linked database with a cross-database view on it. Then do the bulk insert in the quiet periods.
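A rough sketch of that setup (all database, table, and path names here are hypothetical): the bulk load goes into the staging database during the quiet window, while daytime queries in the main database go through a cross-database view, so the application never needs to know where the data actually lives.

```sql
-- Hypothetical names throughout; SQL Server 2000 syntax.

-- Quiet period: bulk load into the staging database.
USE StagingDB
BULK INSERT dbo.BigTable
    FROM '\\fileserver\loads\bigtable.dat'
    WITH (TABLOCK, BATCHSIZE = 100000)
GO

-- Main database: daytime queries hit a cross-database view.
USE MainDB
GO
CREATE VIEW dbo.BigTable AS
    SELECT * FROM StagingDB.dbo.BigTable
GO
```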
|
|
|
|
|
Are any of the indexes on your large tables clustered?
If so, you may have a lot of page splitting going on, which can really slow things down when inserting 7m rows.
Do you have the transaction log on a separate drive?
Is the database on a separate drive from the application?
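If clustered-index page splits do turn out to be the problem, one common mitigation is rebuilding the index with a lower fill factor, so each page keeps free space for incoming rows. A sketch with hypothetical table and index names (SQL Server 2000 syntax):

```sql
-- Rebuild the clustered index leaving 20% free space on each page,
-- giving inserts some room before a page split is forced.
CREATE CLUSTERED INDEX IX_BigTable_Key
    ON dbo.BigTable (KeyCol)
    WITH FILLFACTOR = 80, DROP_EXISTING
```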
Continuous effort - not strength or intelligence - is the key to unlocking our potential.(Winston Churchill)
|
|
|
|
|
Depending on the DB used and how you set it up, it may have to grow itself to be able to store that data. For example, if you originally set up the DB at 3GB and it's using all 3GB (or has grown bigger), then every insert may have to wait while the database finds another contiguous section of disk to put the new data on.
On SQL Server 2000 you can set the initial size and the growth rate, as either a percentage or a fixed amount. We have a 1TB RAID array, so we can let it grow 1GB at a time, which lets it work for a while without needing new space. We also created the DB several times larger than it was when we built the server, so it wouldn't need to grow for a long time. Contiguous disk space is your friend for data-heavy applications, because if you can keep a table (or tables) together, it reduces seek time. Also, disabling the log file, or giving it a large amount of space up front, could help too: if it's logging the inserts, that file will get pretty big and have to keep finding more space for itself.
I've greatly sped up our DB server and other databases by forcing it to clear the log file and giving it quite a bit of space, since we regularly upload (insert, delete, update) over 500 million rows across a handful of tables. We upload them to a temp database on the main server, then run compares to determine what needs to be added, removed, or changed, and then run those queries. So we aren't adding 500 million rows every few months, but we do have to work with that much data. The inserts are usually only 20 or 30 million rows, and they run fairly fast (20 to 30 minutes at most, including the initial upload done with bcp) on the new machine: a top-end Dell 5U with 15k RPM SATA2 drives. It's sweet.
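For reference, the initial size and growth increment described above can be set on an existing database roughly like this (hypothetical database and logical file names; the sizes are just examples):

```sql
-- Pre-size the data and log files and grow in fixed 1GB steps,
-- instead of the default percentage growth.
ALTER DATABASE BigDB
    MODIFY FILE (NAME = BigDB_Data, SIZE = 50000MB, FILEGROWTH = 1000MB)

ALTER DATABASE BigDB
    MODIFY FILE (NAME = BigDB_Log, SIZE = 10000MB, FILEGROWTH = 1000MB)
```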
Hopefully this was somewhat helpful.
Please remember to rate helpful or unhelpful answers, it lets us and people reading the forums know if our answers are any good.
|
|
|
|
|
Actually, that helps quite a bit. Thanks for the advice.
|
|
|
|
|
Not a problem. The disadvantage is that if you wanted to make it all contiguous, you'd have to drop the DB, defrag the HDD, then recreate the DB larger than it was before. That would require a decent amount of downtime (a few hours, depending on your server) and enough storage somewhere to back up your current data so it could be restored.
Also, on mass inserts I've found it useful to drop the indices and recreate them after the insert, if you can do it all at once. You should see a noticeable performance improvement if you try that. Doing a lot of inserts and updates against an index can (sometimes) badly fragment it, slowing down performance for any action on that table, or anything related to that table.
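The drop-and-recreate pattern looks roughly like this (index, table, and column names are hypothetical; SQL Server 2000 syntax):

```sql
-- Drop the nonclustered index before the mass insert...
DROP INDEX BigTable.IX_BigTable_CustomerId

-- ...run the bulk load here (bcp / BULK INSERT / INSERT...SELECT)...

-- ...then rebuild it once, which is usually far cheaper than
-- maintaining it row by row during the insert.
CREATE INDEX IX_BigTable_CustomerId
    ON dbo.BigTable (CustomerId)
```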
Please remember to rate helpful or unhelpful answers, it lets us and people reading the forums know if our answers are any good.
|
|
|
|
|
I have an Access database table with four columns: Day, Month, Year-1, and Year-2 (e.g. 22-July-2008 is stored in the database as 07, 22, 20, 08). I have added a new Date column in a newly created DataSet which concatenates these four fields and shows them as a single date field. But the problem now is that I can't work out how to link this newly created dataset to my Crystal Report.
Thanking you in anticipation.
My C#.Net code is as shown below:
private void button1_Click(object sender, System.EventArgs e)
{
    try
    {
        myCon.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\temp\as400.mdb";
        myCon.Open();
        string sqlQuery = "Select [CLNT#], CNAME, [CSS#], CADMHH, CADMYY, CADMMM, CADMDD, CSEX from SAMFILE_CLIENTP";
        myAdp = new OleDbDataAdapter(sqlQuery, myCon);
        PIN_DataSet PDS = new PIN_DataSet();
        PDS.Clear();
        myAdp.Fill(PDS, "SAMFILE_CLIENTP");
        DataSet newPDS = PDS.Copy();
        DataColumn dCol = newPDS.Tables[0].Columns.Add("DT", typeof(String));
        for (Int32 i = 0; i < newPDS.Tables[0].Rows.Count; i++)
        {
            DataRow row = newPDS.Tables[0].Rows[i];
            row["DT"] = row["CADMMM"].ToString() + "/" + row["CADMDD"].ToString() + "/" + row["CADMHH"].ToString() + row["CADMYY"].ToString();
        }
        PIN_Crystal_Report myReport = new PIN_Crystal_Report();
        myReport.SetDataSource(PDS);
        crystalReportViewer1.ReportSource = myReport;
    }
    catch (Exception obj)
    {
        MessageBox.Show(obj.Message, "X'eption");
    }
}
|
|
|
|
|
Hi,
If you have already linked the PDS dataset to Crystal, don't create a new dataset. Instead, add a new DataTable to the existing PDS dataset and then add rows to it. After that, you will see two tables on the Crystal side.
Hope this helps,
Mika
|
|
|
|
|
Thank you Mika, I did it the other way round. After adding the new columns to the dataset, I updated the dataset schema using WriteXmlSchema(); this in fact updates the old schema and shows the newly added column in the Field Explorer section of the Crystal Report.
The dataset schema was updated at its actual location:
PDS.WriteXmlSchema(@"C:\VS_Projects\PIN_DataSet.xsd");
Also, I didn't add the new dataset; I used the old one instead.
Thanks again for all the help extended.
|
|
|
|
|
You're welcome
Mika
|
|
|
|
|
I have an Access database with several fields; one of the fields is named "Client#", in the table Client_Table.
When I use this SELECT statement:
"Select Client#, ClientName, ClientGender from Client_Table"
it displays the error: "Syntax error in date in query expression 'Client#'". But when I leave out "Client#", it displays the other records properly.
Thanks in anticipation.
|
|
|
|
|
Is the column name actually "Client#"?
If that's the case, try this...
SELECT [Client#], ClientName, ClientGender FROM Client_Table
Harvey Saayman - South Africa
Junior Developer
.Net, C#, SQL
you.suck = (you.passion != Programming)
|
|
|
|
|
Many many thanks for that... it worked... thanks.
|
|
|
|
|
It's a pleasure
Harvey Saayman - South Africa
Junior Developer
.Net, C#, SQL
you.suck = (you.passion != Programming)
|
|
|
|
|
hello,
My application depends on a SQL Server database. I want to deploy my project on the client machine without deploying SQL Server Express Edition, because it takes up disk space and sometimes causes problems during its deployment (mainly if the user doesn't have any experience). Is there any way to overcome this situation?
thanks
Dad
|
|
|
|
|
Have a centrally installed SQL Server that the client applications connect to.
|
|
|
|
|
As Colin said,
you need a SQL Server where your data will be centrally accessed. SQL Server is not like Access; you cannot just install SQL Server on each machine.
Do you understand?
Vuyiswa Maseko,
Sorrow is Better than Laughter, it may Sadden your Face, but It sharpens your Understanding
VB.NET/SQL7/2000/2005
http://vuyiswamb.007ihost.com
http://Ecadre.007ihost.com
vuyiswam@tshwane.gov.za
|
|
|
|
|
Hi all,
I'm using two databases, db1 and db2. Inserts and updates are done in db1, and the data is uploaded once a day. If I insert or update records on another day, it should fetch only those records and apply them to the db2 database. I'm using SQL Server 2000 and ASP.NET 2.0. How can I achieve this?
Regards
Kanna
|
|
|
|
|
Hi,
I currently have a site in ASP.NET connecting to SQL Server 2005. What we have picked up is that there are a lot of processes with the same process ID, i.e. 64 appears 7 times in the Activity Monitor, half of which are either sleeping or suspended; the other 3 are runnable.
How is this happening? Is it possible to have more than one process with the same ID? A lot of these processes have a wait type of CXPACKET.
Just want to state that I'm no SQL guru, so it's a process of head bumping at this stage.
Thanks in advance
No matter how long he who laughs last laughs, he who laughs first has a head start!
|
|
|
|
|
Hi,
You are seeing parallel execution of statements which is basically a good thing.
However, it can also be a problem (depending on the application), in which case you should reduce the degree of parallelism or even disable it. Usually if this is the case, you also see very high (~100%) CPU load, and the CPU actually becomes the bottleneck.
Mika
|
|
|
|
|
The problem at this stage is that it is slowing the database to a snail's pace, which is pulling the site down completely.
Is there a way I can disable parallelism on only one database, or do I have to do it for the entire SQL Server?
No matter how long he who laughs last laughs, he who laughs first has a head start!
|
|
|
|
|
Unfortunately you have to configure parallelism for the whole SQL Server. However, you can configure parallel query so that it doesn't kick in so often:
1. Set 'cost threshold for parallelism' to a higher value. If the value is 5, it means that an operation whose estimated serial cost exceeds 5 becomes a candidate for a parallel plan. You can increase this value so that only 'long' operations use parallelism (say, set the threshold to 30 or whatever is suitable).
2. Don't let the database use all processors. If you have 4 processors, you can set 'max degree of parallelism' to 2, which means at most 2 processors are used per query. The value 0 means all processors may be used.
To list your configuration, use either Management Studio, or T-SQL by enabling advanced options and then listing all options:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure;
GO
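Following the two steps above, the actual changes can also be made with sp_configure (the values 30 and 2 are just the example figures from this thread; pick whatever suits your workload):

```sql
-- Raise the cost threshold so only expensive plans go parallel,
-- and cap parallel queries at 2 processors.
sp_configure 'cost threshold for parallelism', 30
GO
sp_configure 'max degree of parallelism', 2
GO
RECONFIGURE
GO
```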
Hope that this helps,
Mika
|
|
|
|
|
Thanks a mil,
I'll give it a bash and hopefully it sorts the problem
No matter how long he who laughs last laughs, he who laughs first has a head start!
|
|
|
|
|
You're welcome
Mika
|
|
|
|
|
I am using SQL Server 2000 on the back end and C# .NET on the front end. The server is connected to more than 30 users.
On users' computers there are files in different formats, such as MS Word, Excel, Visio, and PDF, some more than 4 MB in size.
I want to view these files through the database, correct them, and resend them to the users through the database over the LAN.
How can I do all this without sharing any directory?
Best Regards for your kind Help
|
|
|
|