Hi
How can I save a DateTime from my C# program into a SQL Server (datetime) database column?
Shahzad
|
Hi,
do you mean directly in the database via a SQL statement, or into a DataSet? With a SQL statement you need to format the DateTime into a string in the format the database expects (e.g. 2006-06-18) and place that string into the SQL statement.
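For example, from C# (a minimal sketch; the table and column names are invented):
// Format the DateTime unambiguously before embedding it in the SQL text.
DateTime value = DateTime.Now;
string sql = "UPDATE myTable SET myDate = '"
           + value.ToString("yyyy-MM-dd HH:mm:ss")
           + "' WHERE myId = 1";
A parameterized command (see the SqlParameter suggestion further down) avoids the formatting question entirely.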
|
I'm using the following stored procedure, but SET DATEFORMAT is not working. Any ideas?
ALTER PROCEDURE [dbo].[sp_acc_Ins_Vouchers]
(
@voucherNo char(4),
@voucherDate datetime,
@voucherTypeCode char(4),
@voucherKey char(17),
@chequeNumber varchar(20)=null,
@chequeDate datetime=null,
@Posted bit=1,
@Cancelled bit=0,
@createdOn datetime=null,
@createdBy char(5)=null,
@comp_id int,
@Year_id int
)
AS
SET DATEFORMAT ymd
INSERT INTO tbl_acc_Vouchers (voucherNo, voucherDate, voucherTypeCode,
    voucherKey, chequeNumber, chequeDate, Posted, Cancelled, createdOn, createdBy,
    comp_id, Year_id)
VALUES (@voucherNo, @voucherDate, @voucherTypeCode,
    @voucherKey, @chequeNumber, @chequeDate, @Posted, @Cancelled, @createdOn, @createdBy,
    @comp_id, @Year_id)
|
You should use a SqlParameter object and define a parameter in your query text to match.
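For example (a sketch against the sp_acc_Ins_Vouchers procedure above; the connection string is invented and only a few of the parameters are shown):
using System;
using System.Data;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=SSPI"))
using (SqlCommand cmd = new SqlCommand("dbo.sp_acc_Ins_Vouchers", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@voucherNo", SqlDbType.Char, 4).Value = "0001";
    // The DateTime travels as a typed value, so SET DATEFORMAT never comes into play.
    cmd.Parameters.Add("@voucherDate", SqlDbType.DateTime).Value = DateTime.Today;
    // ... add the remaining parameters the same way ...
    conn.Open();
    cmd.ExecuteNonQuery();
}
Because the value is passed as a datetime rather than as text inside the SQL string, no date-format parsing happens at all.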
|
Let SQL Server convert the string for you:
update mytable set mydate = '10/19/2006' where myid = xxx
|
SELECT SO1.name AS Tab, SC1.name AS Col, SO2.name AS RefTab, SC2.name AS RefCol
FROM dbo.sysforeignkeys FK
INNER JOIN dbo.syscolumns SC1 ON FK.fkeyid = SC1.id AND FK.fkey = SC1.colid
INNER JOIN dbo.syscolumns SC2 ON FK.rkeyid = SC2.id AND FK.rkey = SC2.colid
INNER JOIN dbo.sysobjects SO1 ON FK.fkeyid = SO1.id
INNER JOIN dbo.sysobjects SO2 ON FK.rkeyid = SO2.id
WHERE SO2.Name = 'tblcountry'
Rahul
|
Hi Rahul,
You could also do this via Enterprise Manager by right clicking the table and viewing dependencies. This is under the assumption that you have sufficient database privileges to execute this action.
Have a great day.
Thanks and Regards,
Aby
|
Hi Aby
It was nice to see your reply, but its limitation is that it only gives the objects that depend on that table. With the query above we also get the table name, column name, referenced table, and referenced column.
With Warm Regards
Rahul Joshi
|
I have trade data tables (about 10) and I need to retrieve information based on input parameters. Each table has about 3-4 million rows.
The tables have columns like Commodity, Unit, Quantity, Value, Month, Country.
A typical query I use to select data is "Select top 10 commodity, sum(value), sum(quantity), column4, column5, column6 from table where month=xx and country=xxxx".
Column4 = (column2)/(total sum of value), column5 = (column3)/(total sum of quantity), and column6 = column5/column4.
It takes about 3-4 minutes for the query to complete, which is far too long, especially since I need to pull this information from a web page.
I wanted to know if there is an alternate way to pull the data from the server.
I mean, can I write a script that creates tables for all the input combinations, i.e. month x country (12x228), saved with a naming convention, so that from the web I can just pull the table whose name maps to the input parameters instead of running any queries at runtime?
OR
Can I write a script that creates an HTML file for each of those input combinations and saves them?
OR
Is there any other solution?
|
Run the query through SQL Profiler and optimize. 3 to 4 minutes is way too long for such a simple query. Just a guess:
CREATE INDEX
MonthCountry_IDX
ON
MyTradeTable
(
Month,
Country
)
I am assuming you have fewer unique [Month] values than unique [Country] values.
I would do pre-processing as a last resort. It creates on-going maintenance issues.
|
The actual query looks like the one below.
The Column7 computation is the killer, since column7 = column5/column4, so the column4 and column5 subqueries each run twice here. I don't know if I can save the column4 and column5 values in variables and then compute column7 from just those variables; that would save half the time. Can it be done?
Use test
Go
Declare @Country Int, @Month Int
Set @Country = 5310
Set @Month = 12
select top 10 a.commodity, sum(a.all_qy1_mo) as Quantity, sum(a.all_val_mo) as [Value],
    CASE WHEN (select Sum(b.all_qy1_mo) from [2005exp] b where a.commodity = b.commodity) <> 0
        THEN (sum(a.all_qy1_mo) / (select Sum(b.all_qy1_mo) from [2005exp] b where a.commodity = b.commodity)) * 100.00
        ELSE NULL END as [Column4],
    CASE WHEN (select sum(b.all_val_mo) from [2005exp] b where a.commodity = b.commodity) <> 0
        THEN (sum(a.all_val_mo) / (select sum(b.all_val_mo) from [2005exp] b where a.commodity = b.commodity)) * 100.00
        ELSE NULL END as [Column5],
    CASE WHEN sum(a.all_qy1_mo) <> 0
        THEN sum(a.all_val_mo) / sum(a.all_qy1_mo)
        ELSE NULL END as [Average Price],
    CASE WHEN (select Sum(b.all_qy1_mo) from [2005exp] b where a.commodity = b.commodity) <> 0
        THEN (sum(a.all_val_mo) / (select sum(b.all_val_mo) from [2005exp] b where a.commodity = b.commodity))
           / (sum(a.all_qy1_mo) / (select Sum(b.all_qy1_mo) from [2005exp] b where a.commodity = b.commodity))
        ELSE NULL END as [Column7]
from [2005exp] a inner join concord c on a.commodity = c.commodity
WHERE a.cty_code = @Country and a.stat_month = @Month
GROUP BY a.commodity, c.descrip_1, c.quantity_1
order by [Value] desc
|
Since you are only looking for the top 10 values, I'd try breaking the SQL statement into smaller chunks.
select top 10 a.commodity, sum(a.all_val_mo) as [Value]
into #tmpCommodity
from [2005exp] a inner join concord c on a.commodity=c.commodity
WHERE a.cty_code=@Country and a.stat_month=@Month
GROUP BY a.commodity, c.descrip_1, c.quantity_1
order by [Value] desc
Then join your larger SQL to the #tmpCommodity table and drop the top 10 (since you already have it). This way the complex computed columns will not have to be calculated for every commodity, but only for the 10 you have already determined to be the top values.
|
It has to go back to the original table (and not just the top 10 values), since it has to calculate sum(a.all_val_mo) without the condition a.cty_code=@Country. The column actually calculates their percentage share compared to all countries.
|
miamikk wrote: It has to go back to the original table, since it has to calculate sum(a.all_val_mo) without the condition a.cty_code=@Country.
That could also be put in a separate query to speed things up. But no sense belaboring the point, it looks like you and Michael Potter have got it figured out.
|
My statement about using a profiler still stands. You have multiple correlated sub-queries which also need to be profiled. You may find that processing it as multiple queries will be faster.
Hopefully you have referential integrity on the [commodity] keys in the [2005exp] and [concord] tables; that will take care of the necessary keys there. If not, you need to add indexes on the [commodity] columns in both tables.
Here is an in-memory temporary table that you can use to remove all of the correlated sub-queries that may be slowing down your query. Just add an additional inner join to your original query.
DECLARE @tbl TABLE
(
commodity INTEGER NOT NULL,
all_qy1_mo_commodity_total INTEGER NULL,
all_val_mo_commodity_total INTEGER NULL
)
INSERT INTO @tbl
(
commodity,
all_qy1_mo_commodity_total,
all_val_mo_commodity_total
)
SELECT
commodity,
SUM(all_qy1_mo),
SUM(all_val_mo)
FROM
[2005exp]
GROUP BY
commodity
I am sure it contains some typos, but the final query would look something like this:
select top 10 a.commodity, sum(a.all_qy1_mo) as Quantity, sum(a.all_val_mo) as [Value],
    CASE WHEN t.all_qy1_mo_commodity_total <> 0
        THEN (sum(a.all_qy1_mo) / t.all_qy1_mo_commodity_total) * 100.00
        ELSE NULL END as [Column4],
    CASE WHEN t.all_val_mo_commodity_total <> 0
        THEN (sum(a.all_val_mo) / t.all_val_mo_commodity_total) * 100.00
        ELSE NULL END as [Column5],
    CASE WHEN sum(a.all_qy1_mo) <> 0
        THEN sum(a.all_val_mo) / sum(a.all_qy1_mo)
        ELSE NULL END as [Average Price],
    CASE WHEN t.all_qy1_mo_commodity_total <> 0
        THEN (sum(a.all_val_mo) / t.all_val_mo_commodity_total)
           / (sum(a.all_qy1_mo) / t.all_qy1_mo_commodity_total)
        ELSE NULL END as [Column7]
from [2005exp] a
    inner join concord c on a.commodity = c.commodity
    inner join @tbl t on a.commodity = t.commodity
WHERE a.cty_code = @Country and a.stat_month = @Month
GROUP BY a.commodity, c.descrip_1, c.quantity_1
order by [Value] desc
Notice that it is a lot cleaner and the query engine doesn't have to repeat the subqueries multiple times for each row. You will still need an index on [stat_month], [cty_code] in the [2005exp] table.
|
After adding an index on the commodity and country columns, the execution time has come down from 3-4 minutes to 10-15 seconds.
I tried the in-memory table, but I get the error "Column '@tbl.all_qy1_mo_commodity_total' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause".
|
My bad - that is what happens when you code without testing. Yes, the resulting columns in the THEN clauses would have to be in the aggregates.
Remove the INNER JOIN to @tbl and go back to the correlated sub-queries, but against a much smaller table. I still think it will be much faster. Your first CASE would look like:
CASE
    WHEN (SELECT all_qy1_mo_commodity_total FROM @tbl WHERE commodity = a.commodity) <> 0
    THEN (sum(a.all_qy1_mo) /
        (SELECT all_qy1_mo_commodity_total FROM @tbl WHERE commodity = a.commodity)) * 100.00
    ELSE NULL
END as [Column4]
When you created the index on the [country] column, did you include the [month] column in the same index? The columns should go from most restrictive to least restrictive, so that the query engine examines as few index keys/rows as possible.
There should be no reason why you can't get this query down to a second or so.
|
My database server is too slow for no apparent reason. I suspect it is because the LOG file has become too large. I am looking for a stored procedure that clears the LOG file; can someone please tell me its name?
|
If you're talking about your transaction log, the only supported way of clearing it is to back it up. Space taken by committed transactions will then be marked as reusable, and the log will reuse that space.
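For example, from C# (a sketch; the database name and backup path are invented):
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection("Server=myServer;Database=master;Integrated Security=SSPI"))
using (SqlCommand cmd = new SqlCommand(
    "BACKUP LOG myDatabase TO DISK = 'D:\\Backups\\myDatabase_log.bak'", conn))
{
    cmd.CommandTimeout = 0;  // log backups can take a while on a large log
    conn.Open();
    cmd.ExecuteNonQuery();   // committed log space is now marked reusable
}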
You should perhaps consider which Recovery Model you are using. The Full recovery model is generally recommended for any production system - it permits you to recover to the point of failure if the database files become corrupted or unavailable, provided you have a full backup and the transaction log is still OK. It also permits you to restore to any point in time. The Simple recovery model only allows you to restore a full backup; you cannot make transaction log backups.
To reduce the size of the files, use the Shrink Database feature. You will probably need to select the 'Move pages to beginning of file' option to actually have much effect, since the pages that are most likely to be in use straight after doing this will be those at the end of the file.
All that said, a large transaction log does not generally cause a slowdown - SQL Server simply writes to the end of it, expanding the file if necessary. If the disk is starting to get full, the OS may be struggling to find free space for it to expand to. Best practice on a production server is for the transaction log to live on a drive (preferably a RAID 1 [mirrored] array of drives, for redundancy) separately from everything else. This prevents the drive heads from having to seek away from the current write position in the transaction log - log accesses are predominantly sequential-write except when a rollback occurs.
Another common reason for slowdowns as the amount of data increases is simply that your queries and indexes are poorly designed. Most likely one or more tables is having to be scanned - read from beginning to end - rather than using an index to find the rows required. This operation scales more-or-less linearly with the size of the table, getting progressively slower as you add more data. You should always have a representative amount of data in your database when performing pre-production tests, to ensure you catch these problems early.
|
Hello,
I have to call a stored procedure using C#. The procedure takes a couple of parameters and returns an out parameter as a cursor.
I take this cursor and show it in a DataGrid directly, which runs fine so far (roughly as in the sketch below). But now I have to make a selection on this cursor, meaning I want to run a select statement against it, choosing specific fields in a specific order.
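What I do at the moment looks roughly like this (a sketch; the procedure, parameter, and grid names are invented):
using System.Data;
using System.Data.SqlClient;

DataTable table = new DataTable();
using (SqlConnection conn = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=SSPI"))
using (SqlDataAdapter adapter = new SqlDataAdapter("dbo.myProcedure", conn))
{
    adapter.SelectCommand.CommandType = CommandType.StoredProcedure;
    adapter.SelectCommand.Parameters.Add("@someParam", SqlDbType.Int).Value = 1;
    adapter.Fill(table);  // the cursor's rows end up in the DataTable
}
myGrid.DataSource = table;  // shown in the grid directly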
Please, can someone tell me how to achieve this?
Best regards
Trollpower
|
Access has had some concurrency problems when used with web applications. You are probably better off with SQL Express.
|
I would also recommend SQL Express. Access is probably not too small, but it's more difficult to port and it's not overly extensible. I don't care for Access and I try not to use it when I can help it.
Steve Hanson
Tasen Software
New Hampshire (NH) Website Design & Software Development
|
I'm developing a network app for storing login info for internet access. It's a fairly small app, so I'm wondering: is SQL Express overkill, or is Access too small... if you understand what I'm saying!
|
If you use SQL Express, you can scale up to a full server edition without much effort if you need to. If you use Access, it would take more work to migrate.
|
I'll probably never scale up to a server edition; I can't see the app growing that big.