|
True, for money values floats/doubles are just no good. That's why, back in the 60s, IBM built a 'decimal' machine as opposed to the traditional 'binary' machine, the IBM 7070. Eventually they added an entire packed-decimal arithmetic package to the IBM 360 line so that financial applications written in COBOL would work with proper precision.
|
|
|
|
|
That was a BCD system wasn't it?
Google turns up a lot of hits on that numbering system too.
|
|
|
|
|
The 7070 used a two-out-of-five code for its decimal representation; the 360 used BCD for packed decimal.
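For anyone curious how the two encodings differ, here is a small sketch. The BCD nibble is the System/360 packed-decimal digit encoding; for two-out-of-five I've used the classic 0-1-2-4-7 weight assignment (the 7070's exact code table may have differed, so treat the table as illustrative):

```cpp
#include <cstdint>

// One decimal digit as a 4-bit BCD nibble; System/360 packed decimal
// stores two such digits per byte, with a sign nibble at the end.
uint8_t toBCD(int digit) { return static_cast<uint8_t>(digit & 0x0F); }

// Two-out-of-five code with 0-1-2-4-7 weights (bit0=0, bit1=1, bit2=2,
// bit3=4, bit4=7). Every code word has exactly two bits set, so any
// single-bit error is detectable by a simple bit count in hardware.
static const uint8_t TWO_OF_FIVE[10] = {
    0b11000, // 0 (special case: encoded as weights 4+7)
    0b00011, // 1 = 0+1
    0b00101, // 2 = 0+2
    0b00110, // 3 = 1+2
    0b01001, // 4 = 0+4
    0b01010, // 5 = 1+4
    0b01100, // 6 = 2+4
    0b10001, // 7 = 0+7
    0b10010, // 8 = 1+7
    0b10100, // 9 = 2+7
};

// A word is valid iff exactly two of its five bits are set.
bool validTwoOfFive(uint8_t w) {
    int bits = 0;
    for (int i = 0; i < 5; ++i) bits += (w >> i) & 1;
    return bits == 2;
}
```

The self-checking property is the point: flipping any one bit of a valid word leaves it with one or three bits set, which `validTwoOfFive` rejects.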
|
|
|
|
|
Windows 2008 Server, Access 2000 and 2010 (fails on both), Visual Studio 2010
Long story short, I'm trying to use the following query:
SELECT PARAM.param INTO CHKPARAM FROM PARAM WHERE ( ( PARAM.param = 'Y' ) )
When I execute this in my code it works about 15% of the time. If I add a 3.5 second sleep after the execution of this query before the "read" it seems to work about 95% of the time. The process looks something like this:
1. SELECT INTO chkparam (it should create the table and add 1 record to the table)
2. select from the table looking for the record
However, the SELECT INTO doesn't always populate the table with the Y. Yes, I verified that the table "param" does have a Y. My best guess is that the query isn't completing before the next statement executes. Is there some way to force completion of the query/transaction? I was always under the impression that control wasn't passed back to the calling procedure until the function had completed, but it seems like I'm getting control back before the function has finished.
This is part of our test network, on our live older system we have been using DAO with the same query (obviously different implementation) for 10+ years with no problems. We found that using DAO on our new network slowed down the queries (plus you can't use it with anything new), thus the change to ODBC.
This is the code in question (in the live version some of this is replaced by variables, but we have hardcoded it for testing until it works):
strSQL = _T("SELECT PARAM.param INTO CHKPARAM FROM PARAM WHERE ( ( PARAM.param = 'Y' ) ) ");
try
{
    cdw.ExecuteSQL(strSQL);
    Sleep(3500);
    CString xstrSQL = _T("SELECT TOP 1 * FROM CHKPARAM");
    CRecordset getinfoonly(&cdw);
    getinfoonly.Open(CRecordset::snapshot, xstrSQL);
    if (!getinfoonly.IsEOF())
    {
        CString tstring1;
        short indx = 0;
        getinfoonly.GetFieldValue(indx, tstring1);
    }
    else
        getinfoonly.Close();
}
catch (CDBException* e)
{
    AfxMessageBox(_T("Given SQL Expression \n") + strSQL + e->m_strError);
    ok = FALSE;
}
|
|
|
|
|
MacRaider4 wrote: cdw.ExecuteSQL(strSQL);
How long is this statement taking to complete? Would SetQueryTimeout() be of any help?
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"Show me a community that obeys the Ten Commandments and I'll show you a less crowded prison system." - Anonymous
|
|
|
|
|
Well, as mentioned, if we put in a 3.5 second sleep, that seems to give it enough time. Granted, that seems a bit excessive for creating a table with one row and one column holding a single character. I'll try SetQueryTimeout, though, to see if that makes any difference.
|
|
|
|
|
OK, we changed the timeout from 45 seconds to 1 second... and it worked 2 of 3 times. After the failure on the 3rd attempt we stopped, as it failed (it did not insert the Y, and there was no timeout error).
|
|
|
|
|
I'm not an SQL expert, but in theory, and based on my experience, you want to write SQL commands that do everything you need in one execute, and either assume it worked or write more SQL to return a reply.
So a conditional SQL Command based on certain parameters being met could look something like below.
IF (SELECT PARAM.param INTO CHKPARAM FROM PARAM WHERE PARAM.param = 'Y')
BEGIN
SELECT TOP 1 * FROM CHKPARAM
END
You should never have to sleep or wait for an answer; the underlying SQL client inside the ODBC wrapper should take care of the timing (and of socket/pipe and SQL Server latency) in producing your result. Are you destroying the cdw object before creating it again? Are you modeling your SQL commands in something like EMS SQL Manager to test how long they take to process against known-good program code?
This is why I asked in a previous post of mine what others are using to talk to SQL Server, and went with the gut-wrenching torture of using the native SQL client sqlclin10 to talk to the server. I had really bad luck using ODBC back in 2002 and will never use it again.
|
|
|
|
|
This portion of the application is using an Access database not SQL Server (just wanted to be sure I'm clear about that). As far as I can tell the cdw object is not being destroyed before it's being created again, but I'll double check with our senior programmer.
It may have got lost in the original post, but this does work fine with ADO on our live network (Win Server 2000, Access 2000 and Visual Studio 6), this has happened with the move to 2008, Access 2000 and 2010 and VS 2010.
|
|
|
|
|
MacRaider4 wrote: with the move to 2008, Access 2000 and 2010 and VS 2010.
I don't have an answer, just thoughts about the process.
I know it was an Access database, quite clear, but over time, now in 2012, they have sort of become the same to me, with very slight differences from the client code's perspective.
I was still thinking about your issue while eating lunch, and two things stick in my mind: the first call (Line 1), and then the call below. I guess the code below was a fragment and is never sent as a request. It must have been a test command to check for sanity.
CString xstrSQL = _T("SELECT TOP 1 * FROM CHKPARAM");
The other is the execution process. They're all kind of the same, but yours was different:
Make Connection Object
Make and set SQL Command Object
Make Reader and Execute SQL Command
Wait and Read Results
Close Reader, Connection Objects
Destroy Objects
Overall, all I can think of, since it fails on both Access 2000 and 2010, is that VS2010 called up a more modern version of ODBC; your current code sent the proper credentials, established a valid session, and made the execute request, but the response is getting lost, stuck, or never sent back to the client at all, or the reader object was not able to load the response because it's still working on the previous one.
The Microsoft world has changed a lot since the days of VS6, Access 2000, and Server 2000. Technologies sort of have to match up, like in your original working project.
|
|
|
|
|
Basically what has happened is that the general functionality of the program has gone to hell (for lack of a better term). 95% or so of our business is run off this application, which basically processes queries. (I'm still fairly new here and new to C++, so my explanation will be a bit simple, but it should give you an idea.)
Everything is based on a query. So you start with Query1, which normally retrieves data from our SQL Server database and stores the results in a temp (Access) table. There is generally another "check" query making sure that you got results (looking for a row count greater than 0), and then you may or may not perform another query based on that, and so forth. So yes, this is part of a test we wrote into the larger application because of the issue we were having. That is why there is the SELECT INTO and then the SELECT FROM right after it: the SELECT FROM is checking to see if the table was populated, as this application doesn't have a way to check the table for a value (i.e., it can't check param for the Y, thus the SELECT INTO CHKPARAM where param.param = 'Y'). It may be a bit confusing, and I'm sure I didn't explain it the best, as I'm trying to keep this short. But I'll be happy to try to answer any questions until our senior guy gets in.
|
|
|
|
|
Here is some information that I ran across for something else. I know you're using Access, but the principles should be the same. It could have something to do with ANSI characters in the older code you guys wrote. I wrote my SQL commands in Unicode first, and it worked fine.
http://msdn.microsoft.com/en-us/library/ms811006.aspx[^]
Driver and SQL Server Versions
The following table shows which versions of the Microsoft SQL Server ODBC driver shipped with recent versions and service packs (SP) of Microsoft SQL Server. It also lists the operating system versions under which the drivers are certified to run and the versions of SQL Server against which they are certified to work.
Newer drivers recognize the capabilities of older databases and adjust to work with the features that exist in the older server. For example, if a user connects a version 2.65 driver to a version 4.21a server, the driver does not attempt to use ANSI or other options that did not exist in SQL Server 4.21a. Conversely, older drivers do not use the features available in newer servers.
For example, if a version 2.50 driver connects to a version 6.5 server, the driver has no code to use any new features or options introduced in the 6.5 server.
|
|
|
|
|
Spent some time over the weekend making some tweaks, and this seems to be it... I got it working with a trial version of Access 2010 (it seems to be 100% of the time) and about 95% with 2000. We plan to go with 2010 anyway, so I guess it's "fixed". Thank you for your help!
|
|
|
|
|
That's great to hear. You're on a solid path now. Perhaps one more tweak and you'll get Access 2000 working perfectly as well.
|
|
|
|
|
Hello,
I am looking into a full-screen text scrolling app, and found this at code project: [^]
It is simple and fits my need. The only problem is that when I maximize it, its scrolling speed is way too slow (even when I take off the image and the delay is set to 1 ms).
Can anyone give me some hints or pointers (sample code, maybe) on how I can make it faster?
Thanks in advance.
|
|
|
|
|
Looks like it draws all the text for each redraw. That will take a lot of time if you have a lot of text to draw.
Not sure how to speed that up without a major reworking of the control, though. Ideally, you would render the text to an offscreen bitmap (or several bitmaps) only when the text or the size changed; then you'd copy that bitmap to the screen on each redraw.
|
|
|
|
|
Well, I think it does the complete redraw every time because it's a scrolling text control... which, from the looks of it, means the text automatically scrolls through the screen (i.e. moving text).
|
|
|
|
|
But if the text isn't changing and the layout isn't changing, then there's a performance boost to be gained by drawing the text only once onto an in-memory bitmap and then BitBlt-ing that bitmap for redraws. Blitting a cached bitmap of dimensions X,Y is faster than re-rendering the text and then drawing the same X,Y bitmap.
|
|
|
|
|
|
OK guys, thanks for the pointers. Let me try that approach.
|
|
|
|
|
How to make it faster:
In the file ScrollerCtrl.cpp, set CScrollerCtrl::nSCROLL_DELAY to 1 and CScrollerCtrl::nSCROLL_PAUSE to 1.
That should speed it up a bit.
const int CScrollerCtrl::nSCROLL_DELAY = 1;
const int CScrollerCtrl::nSCROLL_PAUSE = 1;
const int CScrollerCtrl::nMARGIN = 5;
const int CScrollerCtrl::nFONT_SIZE = 12;
const int CScrollerCtrl::nFONT_WEIGHT = FW_SEMIBOLD;
const char* CScrollerCtrl::szFONT_NAME = "Comic Sans MS";
|
|
|
|
|
Add MemDC to your project.
That way your text drawing is done in memory, and then the entire screen is repainted in a flash.
A sample can be found here:
Flicker Free Drawing In MFC[^]
|
|
|
|
|
I am developing a dialog based application using VC++/MFC.
The application is required to do following processing:
1. Read input files that have a Header (one per file), Metadata sections (multiple in a single file), and Data Sections (multiple in a single file).
2. Based on one data sample per x seconds, generate the missing Data Section records. E.g., if a Data Section has records for times T1, T5, T7, and T10, and the data sample rate is 1, then the output file should contain generated records for T2, T3, T4, T6, T8, and T9 along with the existing records.
3. All Header section info is to be written to one INI-format text file. All Data Section records are to be written to a separate binary file.
I am evaluating solution designs for it, considering below points:
A. Where should I use threads? One for reading, one for processing, and one for writing?
B. How should I read the input file? Read everything and keep the contents in memory for processing, or read it block by block (to handle large files as well)?
C. How should I do the processing? On the fly (read -> process -> write), or read the contents completely, then process them, and then write?
Put simply, I have to read a file -> process the data read -> and finally write it out in a different file format.
Please give your valuable comments / design approaches for it.
|
|
|
|
|
A. Don't use threads where you don't need them. Depending on the size of these files, it doesn't sound like there's a reason to break the processing up into multiple threads. The only other reason I can think of to use threads is if there's a large number of files and you can process them simultaneously; in that case, look at creating a thread pool.
B. Again, it depends on the size of the file, if they're small text files, this doesn't really matter, if they are large binary files (or images, videos) then it might be beneficial to only read chunks at a time instead of loading the whole thing. I've had to process files > 4GB in size in which case loading to memory is just plain ol' impractical (or impossible depending on memory).
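The chunked-read option in B can be sketched with a small loop over a fixed-size buffer, so the same code handles a 4 GB input without ever holding more than one chunk in memory. The 64 KiB chunk size is an arbitrary illustrative choice, not a recommendation from the thread:

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

// Read a file block by block instead of slurping it whole. Each chunk is
// handed to a caller-supplied processing step; returns total bytes read.
template <typename Fn>
std::size_t readInChunks(const char* path, Fn process,
                         std::size_t chunkSize = 64 * 1024) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> buf(chunkSize);
    std::size_t total = 0;
    while (in) {
        in.read(buf.data(), static_cast<std::streamsize>(buf.size()));
        std::streamsize got = in.gcount();   // may be short on the last chunk
        if (got <= 0) break;
        process(buf.data(), static_cast<std::size_t>(got));
        total += static_cast<std::size_t>(got);
    }
    return total;
}
```

Peak memory stays at one chunk regardless of file size, which is exactly the trade the answer describes for large binary inputs.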
C. Once again, it's going to depend on file size and how much processing there is to do.
You really don't provide enough information to give a good recommendation, would have to know:
0. File size range.
1. Amount of processing involved.
2. Type of processing involved.
|
|
|
|
|
I agree that the requirements as stated don't seem to make threads necessary.
I'll add, though, that processing requirements can make threads beneficial. If the processing depends on a resource that might see long delays, such as a remote data source, an SQL query, etc., threads can keep data moving while one file is taking a while to process. Note that this can require implementing additional queues if the files must be delivered in some sort of relational order.
In this case, where a file must be processed every X number of seconds, it doesn't sound like the added effort of introducing threads is worth it.
I would entertain the concept of adding threads if...
1. The process can't finish a file before the next file enters the queue.
2. Another process needs the files to be processed faster because of some time sensitive constraint.
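If either of those conditions did hold, the usual shape is a producer-consumer queue between the reader and the processor, so the reader keeps feeding work while processing catches up. A minimal single-producer/single-consumer sketch (work items are plain ints standing in for files; all names are illustrative):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Thread-safe queue: the producer pushes work items, the consumer blocks
// in pop() until one is available.
class WorkQueue {
public:
    void push(int item) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(item); }
        cv_.notify_one();
    }
    int pop() {  // blocks until an item arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        int v = q_.front(); q_.pop();
        return v;
    }
private:
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

// Reader thread produces `count` items; the calling thread consumes and
// "processes" them (here, just sums). Returns the sum as a sanity check.
int runPipeline(int count) {
    WorkQueue q;
    std::thread producer([&] {
        for (int i = 1; i <= count; ++i) q.push(i);
    });
    int sum = 0;
    for (int i = 0; i < count; ++i) sum += q.pop();
    producer.join();
    return sum;
}
```

Preserving delivery order across multiple consumers would need the extra sequencing queues the post mentions; with a single consumer, FIFO order falls out for free.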
|
|
|
|
|