|
Thanks for your answer.
I will try to explain more. I'm developing a financial application (quotes, markets, charts, financial transactions, portfolio...). It is not a web app but a client-server application, and many users will use it at the same time.
The data are very important and can change at any moment (prices, for example). For getting and setting them from the database I'm thinking about using DataSets, but I know that they work in disconnected mode, so perhaps there will be problems when updating data. So, are DataSets the best way? Can I use flags in the database to check whether the data has been changed in the meantime, or should I do something else?
I hope it's clearer now.
Thanks again, and sorry for my poor English.
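The "flag" idea is usually implemented with a timestamp (rowversion) column that SQL Server bumps automatically on every write; an update then only succeeds if the row is unchanged since it was read. A minimal sketch, assuming a hypothetical Quotes table:

```sql
-- Hypothetical table: SQL Server updates the TIMESTAMP column on every write
CREATE TABLE Quotes (
    QuoteId INT PRIMARY KEY,
    Price   MONEY NOT NULL,
    RowVer  TIMESTAMP
)

-- Optimistic update: only succeeds if nobody changed the row since we read it
UPDATE Quotes
SET    Price = @NewPrice
WHERE  QuoteId = @QuoteId AND RowVer = @RowVerWeRead

IF @@ROWCOUNT = 0
    RAISERROR('Row was modified by another user - reload and retry.', 16, 1)
```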
-- modified at 5:49 Wednesday 18th October, 2006
|
|
|
|
|
DataSets use an optimistic concurrency scheme, which means changes to the underlying tables are checked for before the database is updated; I seem to remember that an event is fired so that you can decide what to do in those cases.
It might be worth thinking about whether you can restructure the way data is handled so that it works in a more disconnected mode. It will save resources on the server for one thing, it fits with the .NET way of doing things, and it is likely to be more future-proof (whatever that means).
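For reference, a conflicting update surfaces as a DBConcurrencyException (and the adapter's RowUpdated event also fires). A minimal sketch, with a hypothetical Quotes table and connection string:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ConcurrencyDemo
{
    static void Main()
    {
        string connectionString = "..."; // hypothetical

        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT QuoteId, Price FROM Quotes", connectionString);
        new SqlCommandBuilder(adapter); // generates optimistic-concurrency UPDATE/DELETE

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Quotes");

        // ... the user edits ds.Tables["Quotes"] while disconnected ...

        try
        {
            adapter.Update(ds, "Quotes");
        }
        catch (DBConcurrencyException ex)
        {
            // Someone else changed this row since we filled the DataSet.
            // ex.Row holds the offending DataRow; reload it and let the user decide.
            Console.WriteLine("Conflict on row: " + ex.Row[0]);
        }
    }
}
```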
Sig cops got me...
|
|
|
|
|
How can I make a DataSet transparently remoted, proxied, or called by reference?
I have some DataSets that live in the main AppDomain of the application (it is huge, so I want to connect to it rather than load it each time), but I want to be able to move them to a separate server: in one mode the DataSet would sit in a separate app, and in the other it would be used in the main AppDomain.
The usual way to do this is to have the object inherit from MarshalByRefObject, but DataSet is already marshalled by value, so the simple approach doesn't work.
So the question is: can I create a transparent proxy for my DataSet, custom or otherwise?
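You can't change DataSet's by-value marshalling, but a common workaround is to wrap it in a MarshalByRefObject that stays on the server and exposes only the operations you need, so the whole DataSet is never serialized across. A rough sketch (class and member names are made up):

```csharp
using System;
using System.Data;

// Lives on the server; clients get a transparent proxy to this object,
// which .NET remoting creates for any MarshalByRefObject.
public class DataSetHost : MarshalByRefObject
{
    private readonly DataSet _ds = new DataSet();

    // Return only the slice the client asked for; the DataTable
    // itself is serialized by value.
    public DataTable GetTable(string name)
    {
        return _ds.Tables[name].Copy();
    }

    // Merge the client's changes back into the server-side DataSet.
    public void ApplyChanges(string name, DataTable changes)
    {
        _ds.Tables[name].Merge(changes);
    }
}
```

In the "same AppDomain" mode you simply call the same class directly; only the remoted mode pays the serialization cost, and only per table.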
Thank you
|
|
|
|
|
Hello. Is it possible to use SQL Server 2000 Notification Services together with a .NET 2.0 client? Thanks in advance.
|
|
|
|
|
Hi
How can I save a DateTime from my C# program into a SQL Server datetime database column?
Shahzad
|
|
|
|
|
Hi,
directly in the database via a SQL statement, or into the DataSet?
With a SQL statement you need to format the DateTime into a string (in the format the database expects, e.g. 2006-06-18) and place that string into the SQL statement.
|
|
|
|
|
I'm using the following stored procedure, but SET DATEFORMAT is not working.
Any idea?
ALTER PROCEDURE [dbo].[sp_acc_Ins_Vouchers]
(
@voucherNo char(4),
@voucherDate datetime,
@voucherTypeCode char(4),
@voucherKey char(17),
@chequeNumber varchar(20)=null,
@chequeDate datetime=null,
@Posted bit=1,
@Cancelled bit=0,
@createdOn datetime=null,
@createdBy char(5)=null,
@comp_id int,
@Year_id int
)
AS
SET DATEFORMAT ymd
INSERT INTO tbl_acc_Vouchers(voucherNo, voucherDate, voucherTypeCode,
voucherKey,chequeNumber,chequeDate,Posted,Cancelled,createdOn,createdBy,
comp_id,Year_id)
VALUES (@voucherNo, @voucherDate, @voucherTypeCode,
@voucherKey,@chequeNumber,@chequeDate,@Posted,@Cancelled,@createdOn,@createdBy,
@comp_id,@Year_id)
|
|
|
|
|
You should use a SqlParameter object and define a matching parameter in your query text.
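A minimal sketch of the parameterized approach (table, column and connection string are hypothetical); passing the DateTime as a typed parameter avoids the string-formatting and DATEFORMAT issues entirely:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class InsertDateDemo
{
    static void Main()
    {
        string connectionString = "..."; // hypothetical

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO MyTable (CreatedOn) VALUES (@createdOn)", conn))
        {
            // The DateTime value goes across as a typed parameter,
            // so no string conversion ever takes place.
            cmd.Parameters.Add("@createdOn", SqlDbType.DateTime).Value = DateTime.Now;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```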
|
|
|
|
|
Let SQL Server convert the string for you:
update mytable set mydate = '10/19/2006' where myid = xxx
|
|
|
|
|
SELECT SO1.name AS Tab, SC1.name AS Col, SO2.name AS RefTab, SC2.name AS RefCol
FROM dbo.sysforeignkeys FK
INNER JOIN dbo.syscolumns SC1 ON FK.fkeyid = SC1.id AND FK.fkey = SC1.colid
INNER JOIN dbo.syscolumns SC2 ON FK.rkeyid = SC2.id AND FK.rkey = SC2.colid
INNER JOIN dbo.sysobjects SO1 ON FK.fkeyid = SO1.id
INNER JOIN dbo.sysobjects SO2 ON FK.rkeyid = SO2.id
WHERE SO2.Name = 'tblcountry'
Rahul
|
|
|
|
|
Hi Rahul,
You could also do this via Enterprise Manager by right-clicking the table and viewing its dependencies. This assumes that you have sufficient database privileges to perform this action.
Have a great day.
Thanks and Regards,
Aby
|
|
|
|
|
Hi Aby,
It was nice to see your reply.
But its limitation is that it only gives the objects that depend on that table, whereas with the query above we also get the table name, column name, referenced table and referenced column name.
With Warm Regards
Rahul Joshi
|
|
|
|
|
I have about 10 trade data tables and I need to retrieve information based on input parameters. Each table has about 3-4 million rows.
The tables have columns like Commodity, Unit, Quantity, Value, Month, Country.
A typical query I use to select data is "Select top 10 commodity, sum(value), sum(quantity), column4, column5, column6 from table where month=xx and country=xxxx".
Column4 = (column2)/(total sum of value), column5 = (column3)/(total sum of quantity), and column6 = column5/column4.
It takes about 3-4 minutes for the query to complete, which is a long time, especially since I need to pull this information from a web page.
I wanted to know if there is an alternative way to pull the data from the server.
I mean, can I write a script that creates a table for each input combination, i.e. month x country (12 x 228), saved with a naming convention, so that from the web I can just pull the table whose name maps to the input parameters instead of running any queries at runtime?
OR
Can I write a script that creates an HTML file for each table for all input combinations and saves them?
OR
Is there any other solution?
|
|
|
|
|
Run the query through SQL Profiler and optimize. 3 to 4 minutes is way too long for such a simple query. Just a guess:
CREATE INDEX MonthCountry_IDX ON MyTradeTable (Month, Country)
I am assuming you have fewer unique [Month] values than unique [Country] values.
I would do pre-processing as a last resort. It creates on-going maintenance issues.
|
|
|
|
|
The actual query looks like this:
The Column7 computation is the killer, since column7 = column5/column4, and the column4 and column5 subqueries each run twice here. I don't know if I can save the column4 and column5 values in variables and then compute column7 from just those variables; that would save half of the time. Can it be done?
Use test
Go
Declare @Country Int, @Month Int
Set @Country=5310
Set @Month=12
select top 10 a.commodity, sum(a.all_qy1_mo) as Quantity, sum(a.all_val_mo) as [Value],
CASE WHEN (select sum(b.all_qy1_mo) from [2005exp] b where a.commodity=b.commodity) <> 0
     THEN (sum(a.all_qy1_mo) / (select sum(b.all_qy1_mo) from [2005exp] b where a.commodity=b.commodity)) * 100.00
     ELSE NULL END as [Column4],
CASE WHEN (select sum(b.all_val_mo) from [2005exp] b where a.commodity=b.commodity) <> 0
     THEN (sum(a.all_val_mo) / (select sum(b.all_val_mo) from [2005exp] b where a.commodity=b.commodity)) * 100.00
     ELSE NULL END as [Column5],
CASE WHEN sum(a.all_qy1_mo) <> 0
     THEN sum(a.all_val_mo) / sum(a.all_qy1_mo)
     ELSE NULL END as [Average Price],
CASE WHEN (select sum(b.all_qy1_mo) from [2005exp] b where a.commodity=b.commodity) <> 0
     THEN (sum(a.all_val_mo) / (select sum(b.all_val_mo) from [2005exp] b where a.commodity=b.commodity))
        / (sum(a.all_qy1_mo) / (select sum(b.all_qy1_mo) from [2005exp] b where a.commodity=b.commodity))
     ELSE NULL END as [Column7]
from [2005exp] a inner join concord c on a.commodity=c.commodity
WHERE a.cty_code=@Country and a.stat_month=@Month
GROUP BY a.commodity, c.descrip_1, c.quantity_1
order by [Value] desc
|
|
|
|
|
Since you are only looking for the top 10 values, I'd try breaking the SQL statement into smaller chunks.
select top 10 a.commodity,sum(a.all_val_mo) as [Value]
into #tmpCommodity
from [2005exp] a inner join concord c on a.commodity=c.commodity
WHERE a.cty_code=@Country and a.stat_month=@Month
GROUP BY a.commodity, c.descrip_1, c.quantity_1
order by [Value] desc
Then, join your larger SQL to the #tmpCommodity table and get rid of the top 10 (since you already have that). This way, the complex computed columns will not have to calculate for every commodity, but only the 10 you already have determined to be the top values.
--EricDV Sig---------
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them.
- Laurence J. Peters
|
|
|
|
|
It has to go back to the original table (and not just the top 10 values), since it has to calculate sum(a.all_val_mo) without the condition a.cty_code=@Country. The column actually calculates each commodity's percentage share compared to all countries.
|
|
|
|
|
miamikk wrote: It has to go back to original table, since it has to calculate the sum(a.all_val_mo) without the condition a.cty_code=@country.
That could also be put in a separate query to speed things up. But no sense belaboring the point, it looks like you and Michael Potter have got it figured out.
--EricDV Sig---------
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them.
- Laurence J. Peters
|
|
|
|
|
My statement about using a profiler still stands. You have multiple correlated sub-queries which also need to be profiled. You may find that processing it in multiple queries will be faster.
Hopefully you have referential integrity on the [commodity] keys in the [2005exp] and [concord] tables; that will take care of the necessary keys there. If not, you need to add indexes on the [commodity] columns in both tables.
Here is an in-memory temporary table that you can use to remove all of the correlated sub-queries that may be slowing down your query. Just add an additional inner join to your original query.
DECLARE @tbl TABLE
(
commodity INTEGER NOT NULL,
all_qy1_mo_commodity_total INTEGER NULL,
all_val_mo_commodity_total INTEGER NULL
)
INSERT INTO @tbl
(
commodity,
all_qy1_mo_commodity_total,
all_val_mo_commodity_total
)
SELECT
commodity,
SUM(all_qy1_mo),
SUM(all_val_mo)
FROM
[2005exp]
GROUP BY
commodity
I am sure it contains some typos but, the final query would look something like this:
select top 10 a.commodity, sum(a.all_qy1_mo) as Quantity, sum(a.all_val_mo) as [Value],
CASE WHEN t.all_qy1_mo_commodity_total <> 0
THEN (sum(a.all_qy1_mo)/(t.all_qy1_mo_commodity_total))*100.00
ELSE NULL END as [Column4],
CASE WHEN t.all_val_mo_commodity_total<>0
THEN (sum(a.all_val_mo)/(t.all_val_mo_commodity_total))*100.00
ELSE NULL END as [Column5],
CASE WHEN sum(a.all_qy1_mo)<> 0 THEN
sum(a.all_val_mo)/sum(a.all_qy1_mo) ELSE NULL END as [Average Price],
CASE WHEN t.all_qy1_mo_commodity_total<>0 THEN
(((sum(a.all_val_mo)/(t.all_val_mo_commodity_total)))/((sum(a.all_qy1_mo)/
(t.all_qy1_mo_commodity_total)))) ELSE NULL END as [Column7]
from [2005exp] a inner join concord c on a.commodity=c.commodity
inner join @tbl t on a.commodity = t.commodity
WHERE a.cty_code=@Country and a.stat_month=@Month
GROUP BY a.commodity, c.descrip_1, c.quantity_1
order by [Value] desc
Notice that it is a lot cleaner, and the query engine doesn't have to repeat the subqueries multiple times for each row. You will still need an index on [Month], [Country] in the [2005exp] table.
|
|
|
|
|
After adding an index on the commodity and country columns, the execution time has been reduced from 3-4 minutes to 10-15 seconds.
I tried the memory table, but I get an error "Column '@tbl.all_qy1_mo_commodity_total' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause".
|
|
|
|
|
My bad - that is what happens when you code without testing. Yes, the resulting columns in the THEN clauses would have to be in the aggregates.
Remove the INNER JOIN for the @tbl. That takes us back to correlated sub-queries, but on a much smaller table, so I still think it will be much faster. Your first CASE would look like:
CASE
WHEN ((SELECT all_qy1_mo_commodity_total FROM @tbl WHERE commodity = a.commodity) <> 0)
THEN (sum(a.all_qy1_mo)/
(SELECT all_qy1_mo_commodity_total FROM @tbl WHERE commodity = a.commodity))*100.00
ELSE NULL
END as [Column4]
When you created the index on the [country] column, did you include the [month] column in the same index? It should go from most restrictive to least restrictive to make sure that the query engine examines the least amount of index keys/rows as necessary.
There should be no reason why you can't get this query down to a second or so.
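Following that rule of thumb, with 228 countries but only 12 months the country column is the more restrictive one and should lead the index; a sketch against the table from the query above:

```sql
-- Most selective column first: 228 distinct countries vs. 12 months
CREATE INDEX CountryMonth_IDX ON [2005exp] (cty_code, stat_month)
```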
|
|
|
|
|
My database server is too slow for no apparent reason. I guess it is because the log file has become too large. I am looking for a stored procedure that clears the log file. Can someone please tell me its name?
|
|
|
|
|
If you're talking about your transaction log, the only supported way of clearing it is to back it up. Space taken by committed transactions in the log will then be marked as reusable. The log will then reuse that space.
You should perhaps consider which Recovery Model you are using. The Full recovery model is generally recommended for any production system - it permits you to recover to the point of failure if the database files become corrupted or unavailable but you have a full backup and the transaction log is still OK. It also permits you to roll back to any point in time. The Simple recovery model only allows you to restore a full backup; you cannot make transaction log backups.
To reduce the size of the files, use the Shrink Database feature. You will probably need to select the 'Move pages to beginning of file' option to actually have much effect, since the pages that are most likely to be in use straight after doing this will be those at the end of the file.
All that said, a large transaction log does not generally cause a slowdown - SQL Server simply writes to the end of it, expanding the file if necessary. If the disk is starting to get full, the OS may be struggling to find free space for it to expand to. Best practice on a production server is for the transaction log to live on a drive (preferably a RAID 1 [mirrored] array of drives, for redundancy) separately from everything else. This prevents the drive heads from having to seek away from the current write position in the transaction log - log accesses are predominantly sequential-write except when a rollback occurs.
Another common reason for slowdowns as the amount of data increases is simply that your queries and indexes are poorly designed. Most likely one or more tables is having to be scanned - read from beginning to end - rather than using an index to find the rows required. This operation scales more-or-less linearly with the size of the table, getting progressively slower as you add more data. You should always have a representative amount of data in your database when performing pre-production tests, to ensure you catch these problems early.
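For reference, the back-up-then-shrink sequence described above can be sketched like this (database name, logical file name and backup path are placeholders):

```sql
-- Back up the transaction log so the space used by committed
-- transactions is marked reusable
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.bak'

-- Then physically shrink the log file if you need the disk space back
-- (second argument is the target size in MB)
DBCC SHRINKFILE (MyDatabase_log, 100)
```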
|
|
|
|
|
Hello,
I have to call a stored procedure using C#. The procedure takes a couple of parameters and returns an out parameter as a cursor.
I take this cursor and show it in a DataGrid directly, for example, which works fine so far. But now I have to make a selection from this cursor, meaning that I want to run a select statement on it, choosing specific fields in a specific order.
Please, can someone tell me how to achieve this?
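One way to get select-like filtering and ordering on the client, assuming the cursor's rows have already been filled into a DataTable via a data adapter, is a DataView; the column names here are made up:

```csharp
using System.Data;

// dt holds the rows returned through the cursor out parameter
DataTable dt = GetCursorRows(); // hypothetical helper that fills the DataTable

DataView view = new DataView(dt);
view.RowFilter = "Amount > 100";             // acts like a WHERE clause
view.Sort = "CustomerName ASC, Amount DESC"; // acts like an ORDER BY

// Bind the filtered, sorted view instead of the raw table
myDataGrid.DataSource = view;
```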
Best regards
Trollpower
|
|
|
|
|
Access has had some concurrency problems when used with web applications. You are probably better off with SQL Express.
|
|
|
|
|