|
Well, as mentioned, if we put in a 3.5-second sleep, that seems to give it enough time. Granted, that seems a bit excessive for creating a table with one row and one column holding a single one-character value. I'll try SetQueryTimeout, though, to see if that makes any difference.
|
|
|
|
|
OK, we changed the timeout from 45 seconds to 1 second... and it worked 2 out of 3 times. We stopped after the failure on the third attempt (it did not insert the Y, and there was no timeout error).
|
|
|
|
|
I'm not an SQL expert, but in theory, and based on my experience, you want to write SQL commands that do everything you need in one command execute, and either just assume it worked or write more SQL to return a reply.
So a conditional SQL command based on certain parameters being met could look something like the sketch below.
IF EXISTS (SELECT 1 FROM PARAM WHERE PARAM.param = 'Y')
BEGIN
SELECT PARAM.param INTO CHKPARAM FROM PARAM WHERE PARAM.param = 'Y'
SELECT TOP 1 * FROM CHKPARAM
END
You should never have to sleep or wait for an answer; the underlying SQL client inside the ODBC wrapper should take care of the timing, the socket/pipe, and the SQL Server latency in producing your result. Are you destroying the cdw object before creating it again? Are you modeling your SQL commands in something like EMS SQL Manager to test how long they take to process against known-good program code?
This is why I asked in a previous post of mine what others are using to talk to SQL Server, and why I went with the gut-wrenching torture of using the native SQL client (sqlncli10) to talk to the server. I had really bad luck using ODBC back in 2002 and will never use it again.
|
|
|
|
|
This portion of the application is using an Access database, not SQL Server (just wanted to be sure I'm clear about that). As far as I can tell, the cdw object is not being destroyed before it's created again, but I'll double-check with our senior programmer.
It may have gotten lost in the original post, but this does work fine with ADO on our live network (Windows Server 2000, Access 2000, and Visual Studio 6); the problem appeared with the move to Server 2008, Access 2000/2010, and VS 2010.
|
|
|
|
|
MacRaider4 wrote: with the move to 2008, Access 2000 and 2010 and VS 2010.
I don't have an answer, just thoughts about the process.
I know it's an Access database, quite clear, but by now in 2012 they have sort of become the same to me, with only slight differences from the client-code perspective.
I was still thinking about your issue while eating lunch, and two things stick in my mind: the first call (Line 1), and then the next call below. I guess the code below was a fragment and is never sent as a request. Must have been a test command to check for sanity.
CString xstrSQL = _T("SELECT TOP 1 * FROM CHKPARAM");
The other is the execution process. They're all kind of the same, but yours was different:
Make Connection Object
Make and set SQL Command Object
Make Reader and Execute SQL Command
Wait and Read Results
Close Reader, Connection Objects
Destroy Objects
Overall, all I can think of, since it fails on both Access 2000 and 2010, is that VS2010 pulled in a more modern version of ODBC, and that your current code sent the proper credentials and established a valid session, made the execute request, and the response was returned but is getting lost, getting stuck, or not being sent back to the client at all; or the reader object was not able to load the response because it's still working on the previous one.
The Microsoft world has changed a lot since the days of VS6, Access 2000, and Server 2000. Technologies sort of have to match up, like in your original working project.
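One way to make that create/use/destroy ordering hard to get wrong is to lean on scoping (RAII). A hedged sketch of the lifecycle in the list above, using stand-in Connection/Reader types (all names here are hypothetical; the real code would use the ODBC/MFC classes):

```cpp
#include <string>
#include <vector>

// Records the lifecycle events so the ordering can be observed.
struct Trace { std::vector<std::string> events; };

// Stand-in for a connection object (e.g. CDatabase in MFC).
struct Connection {
    Trace& t;
    explicit Connection(Trace& tr) : t(tr) { t.events.push_back("open connection"); }
    ~Connection() { t.events.push_back("close connection"); }
};

// Stand-in for a reader/recordset that executes a command on construction.
struct Reader {
    Trace& t;
    Reader(Connection&, const std::string& sql, Trace& tr) : t(tr) {
        t.events.push_back("execute: " + sql);
    }
    ~Reader() { t.events.push_back("close reader"); }
};

// Scope-based ordering guarantees the reader is destroyed before the
// connection, and everything is destroyed before the next attempt starts.
void runOnce(Trace& t) {
    Connection conn(t);
    Reader reader(conn, "SELECT TOP 1 * FROM CHKPARAM", t);
    t.events.push_back("read results");
}   // reader closed, then connection closed, automatically
```

Calling runOnce twice produces two complete open/execute/read/close cycles with no overlap, which is the property the step list above describes and what the "destroying the cdw object before creating it again" question is getting at.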
|
|
|
|
|
Basically, what has happened is that the general functionality of the program has gone to hell (for lack of a better term). 95% or so of our business runs off this application, which basically processes queries. (I'm still fairly new here and new to C++, so my explanation will be a bit simple, but it should give you an idea.)
Everything is based on a query. You start with Query1, which normally retrieves data from our SQL Server database and stores the results in a temp (Access) table. There is generally another "check" query making sure that you got results (looking for a row count greater than 0); then you may or may not perform another query based on that, and so forth.
So yes, this is part of a test we wrote into the larger application because of the issue we were having. That is why there is the SELECT INTO and then the SELECT FROM right after it: the SELECT FROM is checking whether the table was populated, as this application doesn't have a way to check the table for a value (i.e., it can't check param for the Y, thus the SELECT INTO CHKPARAM WHERE param.param = 'Y'). It may be a bit confusing, and I'm sure I didn't explain it the best, as I'm trying to keep this short. But I'll be happy to try to answer any questions until our senior guy gets in.
|
|
|
|
|
Here is some information that I ran across for something else. I know you're using Access, but the principles should be the same. It could have something to do with ANSI characters in the older code you guys wrote. I wrote my SQL commands in Unicode first, and it worked fine.
http://msdn.microsoft.com/en-us/library/ms811006.aspx[^]
Driver and SQL Server Versions
The following table shows which versions of the Microsoft SQL Server ODBC driver shipped with recent versions and service packs (SP) of Microsoft SQL Server. It also lists the operating system versions under which the drivers are certified to run and the versions of SQL Server against which they are certified to work.
Newer drivers recognize the capabilities of older databases and adjust to work with the features that exist in the older server. For example, if a user connects a version 2.65 driver to a version 4.21a server, the driver does not attempt to use ANSI or other options that did not exist in SQL Server 4.21a. Conversely, older drivers do not use the features available in newer servers.
For example, if a version 2.50 driver connects to a version 6.5 server, the driver has no code to use any new features or options introduced in the 6.5 server.
|
|
|
|
|
Spent some time over the weekend making some tweaks, and this seems to be it... I got it working with a trial version of Access 2010 (100% of the time, it seems) and about 95% with 2000. We plan to go with 2010 anyway, so I guess it's "fixed". Thank you for your help!
|
|
|
|
|
That's great to hear. You're on a solid path now. Perhaps one more tweak and you'll get Access 2000 working perfectly as well.
|
|
|
|
|
Hello,
I am looking into a full-screen text scrolling app, and found this at code project: [^]
It is simple and fits my need. The only problem is that when I maximize it, its scrolling speed is way too slow (even when I take off the image and the delay is set to 1 ms).
Can anyone give me some hints or pointers (sample code, maybe) on how I can make it faster?
Thanks in advance.
|
|
|
|
|
Looks like it draws all the text on each redraw. That will take a lot of time if you have a lot of text to draw.
Not sure how to speed that up without a major reworking of the control, though. Ideally, you would render the text to an offscreen bitmap (or several bitmaps) only when the text or the size changes; then you'd copy that bitmap to the screen on each redraw.
|
|
|
|
|
Well, I think it does a complete redraw every time because it's a scrolling text control... which, from the looks of it, means the text automatically scrolls across the screen (i.e., moving text).
|
|
|
|
|
But if the text isn't changing and the layout isn't changing, then there's a performance boost to be gained by drawing the text only once onto an in-memory bitmap and then BitBlt-ing that bitmap for redraws. Blitting a bitmap of dimensions X,Y is faster than re-rendering the text and then drawing a bitmap of X,Y.
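A minimal sketch of that caching idea, with a plain C++ stand-in for the bitmap so the render count can be observed (in real MFC code this would be a memory CDC/CBitmap pair and DrawText/BitBlt; all names here are illustrative):

```cpp
#include <string>
#include <utility>
#include <vector>

// Caches the "rendered" text and counts how often the expensive
// rendering step actually runs.
struct TextCache {
    std::vector<std::string> lines;   // source text
    std::vector<std::string> bitmap;  // stand-in for the offscreen bitmap
    int renderCount = 0;
    bool dirty = true;

    void setText(std::vector<std::string> l) { lines = std::move(l); dirty = true; }

    // Expensive step: only runs when the text (or size) changed.
    void renderIfNeeded() {
        if (!dirty) return;
        bitmap = lines;   // stand-in for rendering text into a memory DC
        ++renderCount;
        dirty = false;
    }

    // Cheap step: runs on every redraw / scroll tick (stand-in for BitBlt).
    const std::vector<std::string>& blit() {
        renderIfNeeded();
        return bitmap;
    }
};
```

Scrolling then only changes the source offset of the blit on each tick; the expensive text rendering happens once per text or size change, no matter how many redraws occur.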
|
|
|
|
|
|
OK guys, thanks for the pointers. Let me try that approach.
|
|
|
|
|
How to make it faster:
In the file ScrollerCtrl.cpp, set CScrollerCtrl::nSCROLL_DELAY to 1 and CScrollerCtrl::nSCROLL_PAUSE to 1.
That should speed it up a bit.
const int CScrollerCtrl::nSCROLL_DELAY = 1;
const int CScrollerCtrl::nSCROLL_PAUSE = 1;
const int CScrollerCtrl::nMARGIN = 5;
const int CScrollerCtrl::nFONT_SIZE = 12;
const int CScrollerCtrl::nFONT_WEIGHT = FW_SEMIBOLD;
const char* CScrollerCtrl::szFONT_NAME = "Comic Sans MS";
|
|
|
|
|
Add MemDC to your project.
That way your text drawing is done in memory, and then the entire screen is repainted in a flash.
A sample can be found here:
Flicker Free Drawing In MFC[^]
|
|
|
|
|
I am developing a dialog-based application using VC++/MFC.
The application is required to do the following processing:
1. Read input files that have a Header (one per file), Metadata sections (multiple in a single file), and Data Sections (multiple in a single file).
2. Based on one data sample per x seconds, generate the missing Data Section records. E.g., the file's Data Section has records for times T1, T5, T7 and T10, and the data sample rate is 1; the output file should then contain generated records for T2, T3, T4 and T6 along with the existing records.
3. All Header section info is to be written to one INI-format text file. All Data Section records are to be written to a separate binary file.
I am evaluating solution designs for this, considering the points below:
A. Where to use threads? One for reading, one for processing, and one for writing?
B. How to read the input file? Read all the file contents and keep them in memory for processing, or read block by block (to handle large files as well)?
C. How to do the processing? On the fly, right after reading (read -> process -> write), or read the contents completely, then process, then write?
Put simply, I have to read a file -> process the data read -> and finally write it out in a different file format.
Please give your valuable comments / design approaches.
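Point 2 above is essentially gap filling over a time series. A hedged C++ sketch of just that step, assuming integer timestamps and a simple repeat-last-value fill (the actual record layout and fill rule depend on your file format):

```cpp
#include <vector>

// Minimal stand-in for one Data Section record.
struct Record { int time; double value; };

// Given records sorted by time and a sample rate (seconds between samples),
// insert a record for every missing timestamp, repeating the previous value.
std::vector<Record> fillGaps(const std::vector<Record>& in, int rate) {
    std::vector<Record> out;
    for (const Record& r : in) {
        if (!out.empty()) {
            for (int t = out.back().time + rate; t < r.time; t += rate)
                out.push_back({t, out.back().value});  // generated record
        }
        out.push_back(r);  // existing record
    }
    return out;
}
```

With samples at T1, T5, T7 and T10 and a rate of 1, this generates T2, T3, T4 and T6 along with the existing records; the same rule also covers the T7-T10 gap (T8, T9), which the question's example doesn't list but presumably needs.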
|
|
|
|
|
A. Don't use threads where you don't need them. Depending on the size of these files, it doesn't sound like there's a reason to break the processing up into multiple threads. The only other reason I can think of to split this into threads is if there's a large number of files and you can process them simultaneously; then look at creating a thread pool.
B. Again, it depends on the size of the files. If they're small text files, this doesn't really matter; if they are large binary files (or images, videos), then it might be beneficial to read only chunks at a time instead of loading the whole thing. I've had to process files > 4 GB in size, in which case loading into memory is just plain ol' impractical (or impossible, depending on memory).
C. Once again, it's going to depend on file size and how much processing there is to do.
You really don't provide enough information to give a good recommendation; we'd have to know:
0. File size range.
1. Amount of processing involved.
2. Type of processing involved.
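On point B, a small sketch of block-by-block reading with standard iostreams, handing each chunk to a callback as it arrives so peak memory stays at one block regardless of file size (the block size here is an arbitrary choice):

```cpp
#include <cstddef>
#include <fstream>
#include <functional>
#include <string>
#include <vector>

// Read a file in fixed-size blocks, invoking onBlock for each chunk.
// Returns the total number of bytes read.
std::size_t readInBlocks(const std::string& path, std::size_t blockSize,
                         const std::function<void(const char*, std::size_t)>& onBlock) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> buf(blockSize);
    std::size_t total = 0;
    // The final read may be partial; gcount() reports the bytes actually read.
    while (in.read(buf.data(), static_cast<std::streamsize>(buf.size())) || in.gcount() > 0) {
        std::size_t got = static_cast<std::size_t>(in.gcount());
        onBlock(buf.data(), got);
        total += got;
    }
    return total;
}
```

The processing (and writing of the INI/binary outputs) can then happen inside the callback, which is the "on the fly" option from point C.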
|
|
|
|
|
I agree that the requirements as stated don't seem to make threads necessary.
I'll add, though, that processing requirements can make threads beneficial. If the processing depends on a resource that might see long delays, such as a remote data source, an SQL query, etc., then threads can keep data moving if one file takes a while to process. Note that this can require implementing additional queues if the files must be delivered in some sort of relational order.
In this case, where a file must be processed every X number of seconds, it doesn't sound like the added effort of introducing threads is worth it.
I would entertain the concept of adding threads if...
1. The process can't finish a file before the next file enters the queue.
2. Another process needs the files to be processed faster because of some time sensitive constraint.
|
|
|
|
|
If you need it only once, and all sequentially, then don't store in memory what you don't have to store there. Use a streaming approach: get some, do some, store some.
And don't create lots of threads if you don't have to: a main thread for the GUI and app management, and one, or a few, for performing the boring, long-running tasks is all that makes sense to me. Separating read/process/write when you don't have to is a recipe for disaster, as you would need to synchronize all these threads and the data they share. The better way would be to have, say, two threads, each dealing with half of the files. Having lots of similar threads is not a good idea either, as they would contend for the same resources, such as disk I/O bandwidth.
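A minimal sketch of that "two threads, each with half of the files" split using std::thread, with the real read/process/write work replaced by a stand-in function; no synchronization of the work itself is needed because the two halves don't overlap:

```cpp
#include <atomic>
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

// Stand-in for the real read -> process -> write work on one file.
std::atomic<int> processedCount{0};
void processFile(const std::string& /*path*/) { ++processedCount; }

// Split the file list in half and let each thread work through its share.
void processAll(const std::vector<std::string>& files) {
    std::size_t mid = files.size() / 2;
    auto work = [&](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) processFile(files[i]);
    };
    std::thread t1(work, 0, mid);
    std::thread t2(work, mid, files.size());
    t1.join();
    t2.join();
}
```

Each thread owns a disjoint slice of the file list; only the shared counter is atomic here, and in real code each thread would likewise write to its own output files to avoid contention.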
|
|
|
|
|
Hello All,
I am writing an application to get some zipped files from a remote location [over WAN but within the domain]. The file data is huge [around 1.5 GB]. I need to unzip the contents of the file to a temporary location and then read an unzipped database to get required information.
Currently what I do is this:
1. I have written the code using FindFirstFile/FindNextFile.
2. Download the files to a temporary location.
3. Unzip the contents of the file.
4. Read the extracted database file.
5. Display the result.
But this process is slow, and communicating over the network is slow as well. I understand that the download speed will be a major factor in getting the files faster, but is there any other tweak I can apply to make this work better?
P.S. I cannot do this on an on-demand basis, as the data structure in the zipped file doesn't support it and is not likely to change.
You talk about Being HUMAN. I have it in my name
AnsHUMAN
|
|
|
|
|
How about unzipping "on the fly", so that as you receive the file from the network you start unzipping it right away, keeping pace with the download?
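A hedged sketch of that overlap as a producer/consumer pipeline: one thread "downloads" chunks into a queue while another consumes them immediately. Real code would feed each chunk into the zip library's streaming decompression API; here the consumer just concatenates, and all names are illustrative:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

// Single-producer/single-consumer queue: the downloader pushes chunks,
// the unzipper pops them as soon as they arrive.
class ChunkQueue {
    std::queue<std::string> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
public:
    void push(std::string chunk) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(chunk)); }
        cv.notify_one();
    }
    void finish() {   // producer signals there will be no more chunks
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
    }
    bool pop(std::string& out) {  // returns false once drained and finished
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty() || done; });
        if (q.empty()) return false;
        out = std::move(q.front());
        q.pop();
        return true;
    }
};

// Stand-in for the unzipper: consume chunks as they arrive.
std::string consumeAll(ChunkQueue& q) {
    std::string result, chunk;
    while (q.pop(chunk)) result += chunk;
    return result;
}
```

The download and the unzip then overlap in time instead of running back to back, which is where the saving comes from; the total work is unchanged.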
> The problem with computers is that they do what you tell them to do and not what you want them to do. <
> If it doesn't matter, it's antimatter.<
|
|
|
|
|
Is there more than one zip file? Do you have to get all the zip files and then extract them to a single large file (as when a large file was split into several small ones by a compression utility)?
"Real men drive manual transmission" - Rajesh.
|
|
|
|
|
Affirmative! There is more than one zip file I need to extract, but not to a single large file. All zips have to be downloaded and unzipped individually.
You talk about Being HUMAN. I have it in my name
AnsHUMAN
|
|
|
|
|