|
Basically, the general functionality of the program has gone to hell (for lack of a better term). About 95% of our business runs off this application, which basically processes queries. (I'm still fairly new here and new to C++, so my explanation will be a bit simple, but it should give you an idea.)
Everything is based on a query. You start with Query1, which normally retrieves data from our SQL Server database and stores the results in a temp (Access) table. There is generally another "check" query making sure that you got results (looking for a row count greater than 0), and then you may or may not perform another query based on that, and so forth. So yes, this is part of a test we wrote into the larger application because of the issue we were having. That is why there is the SELECT INTO and then the SELECT FROM right after it; the SELECT FROM is checking to see whether the table was populated, as this application doesn't have a way to check the table for a value (i.e. it can't check param for the Y, thus the SELECT INTO CHKPARAM where param.param = y). It may be a bit confusing, and I'm sure I didn't explain it the best, as I'm trying to keep this short. But I'll be happy to try to answer any questions until our senior guy gets in.
|
|
|
|
|
Here is some information that I ran across for something else. I know you're using Access, but the principles should be the same. It could have something to do with ANSI characters in the older code you guys wrote. I wrote my SQL commands in Unicode first, and it worked fine.
http://msdn.microsoft.com/en-us/library/ms811006.aspx[^]
Driver and SQL Server Versions
The following table shows which versions of the Microsoft SQL Server ODBC driver shipped with recent versions and service packs (SP) of Microsoft SQL Server. It also lists the operating system versions under which the drivers are certified to run and the versions of SQL Server against which they are certified to work.
Newer drivers recognize the capabilities of older databases and adjust to work with the features that exist in the older server. For example, if a user connects a version 2.65 driver to a version 4.21a server, the driver does not attempt to use ANSI or other options that did not exist in SQL Server 4.21a. Conversely, older drivers do not use the features available in newer servers.
For example, if a version 2.50 driver connects to a version 6.5 server, the driver has no code to use any new features or options introduced in the 6.5 server.
|
|
|
|
|
Spent some time over the weekend making some tweaks, and this seems to be it... I got it working with a trial version of Access 2010 (seems to be 100% of the time) and about 95% with 2000. We plan to go with 2010 anyway, so I guess it's "fixed". Thank you for your help!
|
|
|
|
|
That's great to hear. You're on a solid path now. Perhaps one more tweak and you'll get Access 2000 working perfectly as well.
|
|
|
|
|
Hello,
I am looking into a full-screen text scrolling app, and found this at code project: [^]
It is simple and fits my need. The only problem is that when I maximize it, its scrolling speed is way too slow (even when I take off the image and the delay is set to 1 ms).
Can anyone give me some hints or pointers (sample code, maybe) on how I can make it faster?
Thanks in advance.
|
|
|
|
|
Looks like it draws all the text on each redraw; that will take a lot of time if you have a lot of text to draw.
Not sure how to speed that up without a major reworking of the control, though. Ideally, you would render the text to an offscreen bitmap (or several bitmaps) only when the text or the size changed. Then you'd copy that bitmap to the screen on each redraw.
|
|
|
|
|
Well, I think it does the complete redraw every time because it's a scrolling text control... which, from the looks of it, means the text automatically scrolls across the screen (i.e. moving text).
|
|
|
|
|
But if the text isn't changing and the layout isn't changing, then there's a performance boost to be gained by drawing the text only once onto an in-memory bitmap and then BitBlt'ing that bitmap for redraws. Blitting an already-rendered bitmap of dimensions X,Y is faster than rendering the text and then drawing a bitmap of X,Y on every frame.
|
|
|
|
|
|
OK guys, thanks for the pointers. Let me try that approach.
|
|
|
|
|
How to make it faster:
In the file ScrollerCtrl.cpp, set CScrollerCtrl::nSCROLL_DELAY to 1 and CScrollerCtrl::nSCROLL_PAUSE to 1.
That should speed it up a bit.
const int CScrollerCtrl::nSCROLL_DELAY = 1;
const int CScrollerCtrl::nSCROLL_PAUSE = 1;
const int CScrollerCtrl::nMARGIN = 5;
const int CScrollerCtrl::nFONT_SIZE = 12;
const int CScrollerCtrl::nFONT_WEIGHT = FW_SEMIBOLD;
const char* CScrollerCtrl::szFONT_NAME = "Comic Sans MS";
|
|
|
|
|
Add MemDC to your project.
That way your text drawing is done in memory, and then the entire screen is repainted in a flash.
A sample can be found here:
Flicker Free Drawing In MFC[^]
|
|
|
|
|
I am developing a dialog based application using VC++/MFC.
The application is required to do following processing:
1. Read input files that have a Header (one per file), Metadata sections (multiple in a single file), and Data Sections (multiple in a single file).
2. Based on one data sample per x seconds, generate the missing Data Section records. E.g., if the file's Data Section has records for times T1, T5, T7, and T10, and the data sample rate is 1, then the output file should contain generated records for T2, T3, T4, and T6 along with the existing records.
3. All Header section info is required to be written to one INI file in text format. All Data Section records are required to be written to a separate binary file.
I am evaluating solution designs for it, considering below points:
A. Where to use threads? One for reading, one for processing, and one for writing?
B. How to read the input file? Read all file contents and keep them in memory for processing? Or read it block by block (to handle large files as well)?
C. How to do the processing? On the fly, right after reading -> process -> write? Or read the contents completely, then process them, and then write.
Put simply, I have to read a file -> process the data read -> and finally write it out in the different file formats.
Please give your valuable comments / design approaches for it.
|
|
|
|
|
A. Don't use threads where you don't need them. Depending on the size of these files, it doesn't sound like there's a reason to break the processing up into multiple threads. The only other reason I can think of to break this up into threads is if there's a large number of files and you can process them simultaneously; in that case, look at creating a thread pool.
B. Again, it depends on the size of the file, if they're small text files, this doesn't really matter, if they are large binary files (or images, videos) then it might be beneficial to only read chunks at a time instead of loading the whole thing. I've had to process files > 4GB in size in which case loading to memory is just plain ol' impractical (or impossible depending on memory).
C. Once again, it's going to depend on file size and how much processing there is to do.
You really don't provide enough information to give a good recommendation; we'd have to know:
0. File size range.
1. Amount of processing involved.
2. Type of processing involved.
|
|
|
|
|
I agree, that the requirements as stated don't seem to make threads necessary.
I'll add, though, that processing requirements can make threads beneficial. If the processing depends on a resource that might see long delays, such as a remote data source, an SQL query, etc., then threads can keep data moving if one file is taking a while to process. Note that this can require you to implement additional queues if the files must be delivered in some sort of relative order.
In this case, where a file must be processed every X number of seconds, it doesn't sound like the added effort of introducing threads is worth it.
I would entertain the concept of adding threads if...
1. The process can't finish a file before the next file enters the queue.
2. Another process needs the files to be processed faster because of some time sensitive constraint.
|
|
|
|
|
If you need the data only once, and all sequentially, then don't store in memory what you don't have to store there. Use a streaming approach: get some, do some, store some.
And don't create lots of threads if you don't have to: a main thread for the GUI and app management, and one, or a few, for performing the boring, long-running tasks is all that makes sense to me. Separating read/process/write when you don't have to is a recipe for disaster, as you would need to synchronize all those threads and the data they share. A better way would be to have, say, two threads, each dealing with half of the files. Having lots of similar threads is not a good idea either, as they would contend for the same resources, such as disk I/O bandwidth.
|
|
|
|
|
Hello All,
I am writing an application to get some zipped files from a remote location [over WAN but within the domain]. The file data is huge [around 1.5 GB]. I need to unzip the contents of the file to a temporary location and then read an unzipped database to get required information.
Currently what I do is this:
1. I have written the code using FindFirstFile/FindNextFile.
2. Download the files to a temporary location.
3. Unzip the contents of the file.
4. Read the extracted database file.
5. Display the result.
But this process is slow and communicating over the network is slow as well. I understand that the download speed will be a major factor in getting the files faster, but is there any other tweak that I can follow to make this work in a better way?
P.S. I cannot do this on an on-demand basis, as the data structure in the zipped file doesn't support it and is not likely to be changed.
You talk about Being HUMAN. I have it in my name
AnsHUMAN
|
|
|
|
|
How about unzipping "on the fly": as you receive the file from the network, start unzipping it right away, keeping pace as it downloads?
> The problem with computers is that they do what you tell them to do and not what you want them to do. <
> If it doesn't matter, it's antimatter.<
|
|
|
|
|
Is there more than one zip file? Do you have to get all the zip files and then extract them to a single large file (as in, a large file was split into several small ones by a compression utility)?
"Real men drive manual transmission" - Rajesh.
|
|
|
|
|
Affirmative! There is more than one zip file I need to extract, but not to a single large file. All zips have to be downloaded and unzipped individually.
You talk about Being HUMAN. I have it in my name
AnsHUMAN
|
|
|
|
|
So you can use something like the classic producer-consumer design. A thread keeps downloading zip files and another thread will simultaneously extract the downloaded zip files (and perhaps another thread will process the extracted files, depending on how fast all these things happen).
"Real men drive manual transmission" - Rajesh.
modified 7-Dec-11 2:04am.
|
|
|
|
|
Rajesh R Subramanian wrote: producer-consumer design
Can you provide some good links for the above Design pattern with Code Samples?
|
|
|
|
|
|
I think the producer-consumer example is the way to go, but maybe a version of it using asynchronous workers would help the most. What I have in mind would keep the threading complexity to a minimum while still achieving very good performance.
If you are using Qt, then my answer would be slightly different, but in short I would do it like this.
I'm assuming I understood you correctly and that you do not have access to C++11 concurrency library features like std::future or std::async. I am also assuming that it is OK for the zip files to be unzipped in a non-deterministic order.
1. A thread, maybe the main thread, tells a worker to start downloading one or several zip files.
2. As soon as a zip file is downloaded, it is handed over to another worker that unzips the file to the desired location.
3. When a zip file is downloaded and unzipped then this is communicated back to the original calling thread
4. Communication between "participants" is done solely through message queues. No shared memory except the queue, that way data sharing and thread synchronization issues are minimal.
5. The "design pattern" I have in mind for this is the active object. There is plenty of material if you google for it. My favourite is Herb Sutter's take on it.
I wrote my version of the active object, which should be easy to use, maybe after making minimal changes to use whatever thread library you use.
Another example, although more of the fire-and-forget nature than your scenario, is the asynchronous logger g2log that I recently wrote about. Check in the code how the messages are passed from the g2logworker to its internal thread and you will see how easy it is.
modified 7-Dec-11 2:55am.
|
|
|
|
|
Sounds like you need a client/server approach. Since a majority of your time is spent downloading, and only for the sake of opening the file, leave the file on the server and simply send the server a request to unzip the file and extract the necessary information from it.
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"Show me a community that obeys the Ten Commandments and I'll show you a less crowded prison system." - Anonymous
|
|
|
|