|
In our first couple of runs we have identified two separate problem statements. The trouble is that this system is wiped each run, and the size of the data passed through changes, so we probably have not found them all. We've also not been down all the various processing paths.
So I was wondering: in converting to 2008, will you always find statements that no longer run efficiently, so you just have to find and fix them all? Or is there something we might have set wrong?
If everyone says it's the former, then fine, I can plan for that; I just want to make sure before saying we need to make more changes.
|
|
|
|
|
Hmmm ...
Let's look at this differently. Do you have an example of a SQL statement which took longer to execute under 2008, and have you found a solution? (For example, rebuilding an index.)
Are you running the SQL 2008 server at some "Compatibility Level" which is causing it to work inefficiently?
For example, run
select compatibility_level from sys.databases where name=db_name()
and you should get 100 for SQL Server 2008.
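If it comes back as 80 or 90 instead, the level can be raised; a minimal sketch, assuming a database named MyDb (the name is a placeholder):

```sql
-- Raise the database to the SQL Server 2008 native compatibility level
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;
```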
I've recently upgraded an application from SQL 2005 to SQL 2008 and things seem to be just fine with the queries, etc.
I took a full backup from SQL 2005, created a brand new database on our SQL 2008 server and restored the backup. Very straightforward. No complaints. (The application which accesses the database was also upgraded, but I have no control over that because it is an application which we purchased.)
If you need more detailed help, send me a private message and I can point you to a SQL Server professional.
You can continue to post your questions here, I will do the best I can.
|
|
|
|
|
The compatibility level is set to 100. We've not tried 80...
We do seem to have resolved the issue: the server now has 8 GB rather than 4 GB. The 2000 server had 4 GB.
One query that took hours on 2008, but came down to seconds once we added the extra memory, was:
update outMortgageAsset
set
    OpeningBalance = z.OpeningBalance
    , ClosingBalance = z.ClosingBalance
    , AvgBalance = z.AvgBalance
from outMortgageAsset b
join
(
    select
        GroupID = a.GroupID
        , MonthDate = k.MonthDate
        , OpeningBalance = sum(k.OpeningBalance)
        , ClosingBalance = sum(k.ClosingBalance)
        , AvgBalance = sum(k.AvgBalance)
    from EIRGroupAccountsToBeTotaled a
    join
    (
        select
            monthdate = b.ForecastMonth
            , accountid = b.AccountID
            , OpeningBalance = isnull(b.CurrentBalance, 0.0)
            , ClosingBalance = isnull(c.CurrentBalance, 0.0)
            , AvgBalance = (isnull(c.CurrentBalance, 0.0)
                + isnull(b.CurrentBalance, 0.0)) / 2.00000000
        from EIRAccountBalancePerPeriod b
        left join EIRAccountBalancePerPeriod c
            on b.ForecastMonth = c.ForecastMonth - 1
            and c.AccountID = b.AccountID
        union all
        select
            ForecastMonth - 1
            , AccountID
            , OpeningBalance = 0
            , ClosingBalance = isnull(CurrentBalance, 0.0)
            , AvgBalance = (CurrentBalance) / 2.00000000
        from EIRAccountBalancePerPeriod a
        where ForecastMonth =
        (
            select min(ForecastMonth)
            from EIRAccountBalancePerPeriod z
            where z.AccountID = a.AccountID
        )
    ) k
        on a.accountid = k.accountid
    group by a.GroupID, k.MonthDate
) z
    on b.GroupID = z.GroupID
    and fm = z.MonthDate
The row counts for the tables:
outMortgageAsset: 1148
EIRGroupAccountsToBeTotaled: 1499860, Clustered Index on GroupID, AccountID
EIRAccountBalancePerPeriod: 14347829, Non-Clustered on MonthDate, ForecastMonth, AccountID
|
|
|
|
|
Interesting.
So after you added the additional memory, the query executed faster?
What OS is the SQL 2008 Server running vs the SQL 2000 Server? (64 vs 32 bit?)
I'm thinking that the SQL 2008 server OS was struggling with memory, causing poor performance.
Does the system seem to be operating better with the 8 GB of memory? It may have been just a memory issue. (It seems like Microsoft OSes and applications need more and more memory with each new release. But memory is cheap these days...)
Hope things are better.
|
|
|
|
|
Both are 64-bit!
The system's sitting at 93% physical memory used, with SQL taking ~6 GB for its private working set.
Unfortunately, we just ran a slightly larger dataset through, and now another, earlier statement is showing the same signs: 50% CPU, but nothing else seems to be happening.
|
|
|
|
|
|
Grr, not getting very far.
Copied the DB back to our dev server, which hasn't had the 8 GB upgrade, so it's still 4 GB.
Ran the same dataset through: the dev one finished in a reasonable time, the other is still going and has now slowed down on yet another part of the process...
|
|
|
|
|
Might have found something: a high number of CXPACKET waits. We've set MAXDOP to 1 to see if we now get consistent performance; if we're lucky, it will be consistent performance that matches the old server...
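For anyone following along, something like this shows the waits and caps the parallelism (the DMV and the sp_configure option are standard in SQL 2005 and later):

```sql
-- Top wait types accumulated since the instance last restarted
SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Cap the degree of parallelism server-wide at 1
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```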
|
|
|
|
|
Hi,
I don't know why the MySQL timestamp changed. It's the 3rd today, but it shows the 4th when I try SELECT NOW() or SELECT SYSDATE().
How can I fix it?
|
|
|
|
|
IMO, MySQL does not keep its own clock: NOW() and SYSDATE() come from the server's system time, adjusted by the time_zone setting. Check your system's date/time and time zone.
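A quick way to see where the offset comes from, using standard MySQL variables and functions:

```sql
-- Compare local "now" with UTC and inspect the time-zone settings
SELECT NOW(), UTC_TIMESTAMP(), @@global.time_zone, @@session.time_zone;

-- If only the session zone is wrong, it can be overridden
SET time_zone = '+00:00';
```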
|
|
|
|
|
I don't have a connection string for MS Access 2010 on Windows 7 64-bit.
I get an error on "provider=Microsoft.Jet.OLEDB.4.0":
The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.
|
|
|
|
|
See here[^].
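In short: the Jet provider is 32-bit only, so a 64-bit process cannot load it. The usual fix is the ACE provider that ships with the Access Database Engine redistributable; a sketch (the file path is a placeholder):

```
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\path\to\database.accdb;
```

The same provider also opens older .mdb files; alternatively, the application can be compiled as x86 so Jet remains usable.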
One of these days I'm going to think of a really clever signature.
|
|
|
|
|
We have a web application using SQL Server 2008 R2 that contains time-tracking data, and another company has expressed interest in using it. We initially set up the database so that every end-user table has a foreign key identifying which company a row belongs to. Is there a good way to back up/restore the data for a specific company across the tables, so that if the new company says tomorrow that they want their data restored to yesterday, it can be restored without impacting all the other companies' data? I'm trying to avoid writing some form of stored procedure that selects and then has to insert/update/delete to put the data back. Thanks for any insight.
Dennis Daugherty
|
|
|
|
|
Could you simply use bcp[^] to export and import the data? In your case, when exporting, you would give bcp a custom query to run.
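A sketch of what that could look like from the command line (the server, database, table and column names are made up for illustration):

```
rem Export one company's rows using a query
bcp "SELECT * FROM TimeDb.dbo.TimeEntries WHERE CompanyID = 42" queryout company42.dat -n -S MYSERVER -T

rem Load the file back in later, keeping the original identity values (-E)
bcp TimeDb.dbo.TimeEntries in company42.dat -n -E -S MYSERVER -T
```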
|
|
|
|
|
I guess I could if I turned identity insert on and then just delete all for a company and then import ... I'll think on that one ...
|
|
|
|
|
Or, if you have foreign keys between several tables, you could turn IDENTITY_INSERT on during the load to ensure that the keys remain the same as they originally were.
Another mechanism you could consider is SSIS, where you could break the import/export into proper blocks, define a workflow and, if necessary, add logic.
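A minimal sketch of the IDENTITY_INSERT approach (the table, column and staging-database names are illustrative):

```sql
-- Allow explicit values in the identity column while reloading
SET IDENTITY_INSERT dbo.TimeEntries ON;

INSERT INTO dbo.TimeEntries (TimeEntryID, CompanyID, HoursWorked)
SELECT TimeEntryID, CompanyID, HoursWorked
FROM StagingDb.dbo.TimeEntries
WHERE CompanyID = 42;

SET IDENTITY_INSERT dbo.TimeEntries OFF;
```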
|
|
|
|
|
Is it possible to supply a separate environment for their needs? Or are they really trying to reset their production environment back to a prior date?
We often have a department wanting a restore of a previous date for reporting/investigation reasons. This does not need to impact the production environment, as they get another database environment created specifically for their needs.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
This is more for a restore after they do something dumb... like "I just deleted all the invoices for customer A when I meant B; can you get them back?" My answer would be: I can put your data back to 3am last night; you'll lose everything you did today, but your invoices will be back.
|
|
|
|
|
Ugh, nasty, especially when you mix other clients into the data. I would suggest this could only be fixed by a custom script; charge them a fortune as a lesson and hope like hell you can deliver.
I would then implement a logging/audit strategy on the database, or separate each client into their own environment. If you screw up another client's data because of this issue, your whole business case just went out the window!
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi
I am trying to execute this query, but it's giving me this error:
"Cannot find either column "D" or the user-defined function or aggregate "D.bonus", or the name is ambiguous."
Query:
MERGE INTO bonuses D
USING bonuses1 E
ON D.employee_id = S.employee_id)
WHEN MATCHED THEN
UPDATE SET
D.bonus = E.bonus
WHEN NOT MATCHED THEN INSERT (D.employee_id, D.bonus)
VALUES ES.employee_id, E.salary*0.1)
WHERE (E.salary <= 8000);
UPDATE bonuses
Set bonus=10000
FROM bonuses D
JOIN cbonuses1 E
ON D.employee_id = S.employee_id)
Any idea??
modified 1-Oct-12 7:56am.
|
|
|
|
|
UPDATE bonuses
Set bonus=10000
FROM bonuses D
JOIN cbonuses1 E
ON D.employee_id = S.employee_id)
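The alias S is not defined in either statement; the source tables are aliased E, and there are stray closing parentheses. A corrected sketch, keeping the original table names (the salary filter is moved into the NOT MATCHED clause, since MERGE takes no trailing WHERE):

```sql
MERGE INTO bonuses D
USING bonuses1 E
    ON D.employee_id = E.employee_id
WHEN MATCHED THEN
    UPDATE SET D.bonus = E.bonus
WHEN NOT MATCHED AND E.salary <= 8000 THEN
    INSERT (employee_id, bonus)
    VALUES (E.employee_id, E.salary * 0.1);

UPDATE D
SET bonus = 10000
FROM bonuses D
JOIN cbonuses1 E
    ON D.employee_id = E.employee_id;
```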
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
How do I recover deleted rows from a SQL 2000 database using a period parameter,
e.g. from 01/01/2012 - 20/05/2012,
displaying all records deleted in this period?
|
|
|
|
|
Restore a backup. AFAIK, there's no alternative to backups.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
There is no in-built mechanism to restore or read deleted rows in SQL Server 2000. You would have to implement logical deletes instead of physical deletes to achieve that (for example, an IsDeleted (bit) column in every table).
As Eddy suggested, the only other way is to work on a restored copy of the database.
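A sketch of the logical-delete idea (the table is hypothetical, and a DeletedOn column is added so the period filter works):

```sql
-- Add a soft-delete flag and a deletion timestamp
ALTER TABLE dbo.Orders ADD
    IsDeleted bit NOT NULL DEFAULT 0,
    DeletedOn datetime NULL;

-- "Delete" a row logically instead of physically
UPDATE dbo.Orders
SET IsDeleted = 1, DeletedOn = GETDATE()
WHERE OrderID = 7;

-- Show everything deleted in a period
SELECT *
FROM dbo.Orders
WHERE IsDeleted = 1
  AND DeletedOn BETWEEN '20120101' AND '20120520';
```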
|
|
|
|
|
Hi World!
What is the Windows run command (environment variable) for "SQLDataRoot" or "SQLPath"?
For example:
%ProgramFiles% ---> opens Program Files.
I want something that opens:
c:\Program Files\SqlServer 2005 for DWebPro\MSSQL.1\MSSQL
In other words, I need this path to give to my setup so it can copy the database to the data path of the installed instance.
I know the instance name, but I cannot give the exact path, because if SQL Server was already installed on that machine there would be an "MSSQL.1", and mine would change to:
c:\Program Files\SqlServer 2005 for DWebPro\MSSQL.2\MSSQL
So if I could give it something like %MSSQL$% it would solve the problem.
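There is no environment variable for this, but SQL Server records the path in the registry. A sketch (keys as laid out by SQL Server 2005; MYINSTANCE and MSSQL.1 are placeholders to substitute):

```
rem Map the instance name to its instance ID (MSSQL.1, MSSQL.2, ...)
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL" /v MYINSTANCE

rem Then read that instance's data root, using the ID found above
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup" /v SQLDataRoot
```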
|
|
|
|