|
The first thing you need to do is check for leaked GDI objects.
-c
When history comes, it always takes you by surprise.
|
|
|
|
|
I have a dialog that owns a tree control.
I want to catch the middle-button-down message in the dialog class.
How can I do this?
I can catch the message in the tree control, but I don't want that.
Does anyone have an idea what I can do about this problem?
Thanks
Orcun Colak
|
|
|
|
|
This should work. Handle the PreTranslateMessage() function in your dialog:

BOOL CMyDlg::PreTranslateMessage(MSG* pMsg)
{
    if (pMsg->message == WM_MBUTTONDOWN)
        AfxMessageBox("Middle Button Pressed");

    return CDialog::PreTranslateMessage(pMsg);
}
Rob
Whoever said nothing's impossible never tried slamming a revolving door!
|
|
|
|
|
Hi! I am working on a function that searches a huge text file (28,000+ lines) for one line of text. Currently I read through every single line to find the number I'm looking for, and reading/loading the file takes about 25-30 seconds on a 500 MHz machine.
How can I buffer the whole file faster? Notepad only takes half a second to open the file; how can I do that? Is there any way to jump to a specific line in a text file, so that I can start halfway through and move up or down depending on whether the number I'm looking for is higher or lower? (The numbers are in order, but not sequential.)
Here is what I have now; obviously the line-by-line approach won't work too well:

while (inFile.getline(szBuffer2, sizeof(szBuffer2) - 1))
{
    WndProgress.StepIt();
    WndProgress.PeekAndPump();
    if (WndProgress.Cancelled())
    {
        AfxMessageBox("Progress Error!");
        WndProgress.DestroyWindow();
        return FALSE;
    }
    strTemp = szBuffer2;
    if (strTemp.Left(1) != ".")
    {
        CNumberAndName* pEntry = new CNumberAndName();
        pEntry->m_strRoutingNumber = strTemp.Left(9);
        pEntry->m_strBankName = strTemp.Mid(11, 36);
        paBankArray->Add(pEntry);
    }
}

Also, is there an easy way to tell how many lines long the file is?
Thanks a bunch for any suggestions!
still a newb.. cut me some slack :P
-dz
|
|
|
|
|
I suspect that Notepad is in fact opening the file as a memory-mapped file, i.e. the file is addressed as if it were in memory, but it stays on disk.
If you're reading with iostreams, you can seek to a point and start reading from there, but the offset is in characters, not lines. The only way to really speed up the search is to build an index.
Christian
NO MATTER HOW MUCH BIG IS THE WORD SIZE ,THE DATA MUCT BE TRANSPORTED INTO THE CPU. - Vinod Sharma
Anonymous wrote:
OK. I read a c++ book. Or...a bit of it anyway. I'm sick of that evil looking console window.
I think you are a good candidate for Visual Basic. - Nemanja Trifunovic
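The index idea can be sketched in standard C++ (hypothetical names; assumes, as the poster says, that the leading numbers are sorted): one pass records where each line starts, after which a lookup is a binary search with a single seek per probe.

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Build an index of the stream offset at which each line starts.
// One linear pass; afterwards any line can be reached with one seek.
std::vector<std::streampos> BuildLineIndex(std::istream& in)
{
    std::vector<std::streampos> offsets;
    std::string line;
    std::streampos pos = in.tellg();
    while (std::getline(in, line))
    {
        offsets.push_back(pos);
        pos = in.tellg();
    }
    return offsets;
}

// Binary-search for a line starting with `key`, assuming the lines
// are sorted by their leading key (in order, not sequential).
bool FindLine(std::istream& in, const std::vector<std::streampos>& idx,
              const std::string& key, std::string& found)
{
    std::size_t lo = 0, hi = idx.size();
    while (lo < hi)
    {
        std::size_t mid = lo + (hi - lo) / 2;
        in.clear();               // clear any eof state from earlier reads
        in.seekg(idx[mid]);
        std::getline(in, found);
        std::string lineKey = found.substr(0, key.size());
        if (lineKey == key) return true;
        if (lineKey < key) lo = mid + 1; else hi = mid;
    }
    return false;
}
```

For 28,000 lines this is about 15 probes per lookup instead of a full scan; the index itself is built once per file.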
|
|
|
|
|
I am using ifstream because it is all I am currently familiar with; I am happy to learn any quicker methods. Is there any way I can handle the file like Notepad does? Once I have all of my lines loaded into memory the search is very, very quick; the whole problem is loading the file into memory (which isn't a requirement, it's just how I'm currently doing it, as I delete all the objects once I find the text anyway).
still a newb.. cut me some slack :P
-dz
|
|
|
|
|
I don't understand why this is taking so long. I'd get rid of the progress-bar updates. Also see if you can increase the size of the buffer ifstream uses. Why do you need to copy the string into a CString? Just operate on szBuffer2 directly (e.g. with strchr()).
As others have said, memory-mapped files are great for this, but using streams should also be more than fast enough; something (or several things) is fundamentally wrong with your implementation.
Also look at profiling the code, which will show you where the bottlenecks are. Look at GlowCode: www.glowcode.com
In ED (see sig) loading a 30K-line file is instantaneous, as is finding something at the end of such a file.
Neville Franks, Author of ED for Windows. www.getsoft.com
Make money with our new Affiliate program
|
|
|
|
|
Map the file into memory and use good guesses about where a certain line may be. Then make your way from there to where you need to read. That way you should only have to read a small part of the file (hopefully).
|
|
|
|
|
Can you give me some keywords on where to start mapping the file into memory? I don't know how to do that. I'm searching CP for 'memory file' and other things, but I'm not having any success. Is there a keyword associated with mapping a file to memory?
Thanks!
still a newb.. cut me some slack :P
-dz
|
|
|
|
|
Bare bones:

HANDLE hFile = ::CreateFile(Filename, GENERIC_READ, FILE_SHARE_READ, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
HANDLE hMap = ::CreateFileMapping(hFile, 0, PAGE_READONLY, 0, 0, 0);
const char* pFile = (const char*)::MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);

// ... use pFile here; the whole file is addressable as one char array ...

::UnmapViewOfFile(pFile);
::CloseHandle(hMap);
::CloseHandle(hFile);
|
|
|
|
|
First, don't use strTemp; it is a total waste of time. All of those Left() and Mid() calls create temporary objects that just aren't needed.

if (szBuffer2[0] != '.')
{
    CNumberAndName* pEntry = new CNumberAndName();
    szBuffer2[9] = 0;
    pEntry->m_strRoutingNumber = szBuffer2;
    szBuffer2[11 + 36] = 0;
    pEntry->m_strBankName = &szBuffer2[11];
    paBankArray->Add(pEntry);
}
Tim Smith
I'm going to patent thought. I have yet to see any prior art.
|
|
|
|
|
Thanks a bunch. It seems I need to learn how char arrays work; I always use CStrings because they are easy for me to use. Thanks for the code; now all I have to do is figure out what you are doing with the char arrays.
Thanks again for everyone's help!
still a newb.. cut me some slack :P
-dz
|
|
|
|
|
The "szBuffer2[0] != '.'" test just checks whether the first character is a '.'.
The szBuffer2[9] = 0; splits the string into two parts. The first part starts at szBuffer2[0] and is 9 characters long. The second part starts at szBuffer2[10] and contains the rest of the string. This lets you copy the first part of the string directly into your destination.
The szBuffer2[11 + 36] = 0; does the same thing, but now we have three strings.
What this does is remove the need to allocate a new buffer to hold the string, copy the string into that new buffer, and then extract elements from it piece by piece. By operating directly on the original string, we avoid a fair amount of overhead.
Tim Smith
I'm going to patent thought. I have yet to see any prior art.
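The in-place split described above can be seen in a minimal standalone form (plain char buffer; the 9- and 36-character field widths follow the original post, and the function name is hypothetical):

```cpp
#include <cstring>

// Split "RRRRRRRRR  NAME..." in place: terminate the 9-char routing
// number, then terminate the 36-char name field starting at index 11.
// No temporary buffers are allocated; both pointers alias the original.
void SplitRecord(char* line, const char** routing, const char** name)
{
    line[9] = '\0';        // first field: characters 0..8
    *routing = line;
    line[11 + 36] = '\0';  // second field: characters 11..46
    *name = &line[11];
}
```

The caller must ensure the buffer is at least 48 characters long before writing the terminators, which getline into a fixed-size szBuffer2 already guarantees for well-formed records.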
|
|
|
|
|
Have a look for grep.c on the net; it is a UNIX tool. I did a quick test on one of my files with it: searching 545,000 lines for a non-existent string took 8 seconds on my 450 MHz dual Celeron. Oh, I forgot to mention the file is c. 100 MB.
If I have seen further it is by standing on the shoulders of Giants. - Isaac Newton 1676
|
|
|
|
|
Do you know how to implement the CBaseControlWindow interface in IGraphBuilder? Please and thank you.
|
|
|
|
|
I was experimenting with file I/O times and discovered some peculiar results that I do not understand; I thought maybe somebody could explain them to me.
I was testing the hypothesis that for file I/O it is better to use a bigger buffer than a smaller one (I was thinking that the fastest way to perform large file I/O was to read a disk sector at a time). To test this I performed the following test.
Test:
Copy a file on one drive to a file on another drive. To perform the copy I used Visual C++ 6.0 on a box running Win2K; I wrote a simple console app that just used fread() and fwrite() to and from the files on the different drives.
Results:
The results were shocking to me. I ran the code in Debug mode first and discovered that, on my hardware with a 120 MB file copy, I obtained the following times (in seconds) for each read/write buffer size:
1 byte = 128(s)
2 bytes = 112(s)
4 bytes = 104(s)
8 bytes = 100(s)
16 bytes = 99(s)
...
512K bytes = 99(s)
So there appeared to be a threshold at around 8 bytes, above which buffer size made no difference.
When I re-ran the code in Release mode with the optimizer set to maximize speed, the results were much different:
1 byte = 120(s)
2 bytes = 120(s)
...
512K bytes = 120(s)
The "plateau" region went away and the curve flattened out. However, the large-buffer times went up! This makes no sense to me.
Does anybody out there know what is going on here? I would appreciate any enlightenment.
Thanks!
|
|
|
|
|
The first problem is that you are using fread/fwrite, which do their own internal buffering on top of yours. To better test raw performance, use ReadFile and WriteFile.
Tim Smith
I'm going to patent thought. I have yet to see any prior art.
|
|
|
|
|
I am having major thread problems in my application.
I am trying to set up our program so the threads don't run into each other (access the same file at the same time) and so they don't wait on each other forever.
I have been working on this for 2 weeks now and am nearly suicidal because it isn't working. I need a solution to replace what I have now. Please read everything below and respond if you are able to help out.
This is in a Doc/View structure. Basically, I need to know where to create the threads, what I should use for synchronization, where to wait, and where to set the threads/events/etc. as signalled. Thank you so much for your help!!
--Dan
/\/\/\
Right now, we have a thread for Action-S, for Action-A, and for Action-C.
Action-S accesses a file in memory and then updates a physical file.
Action-A accesses the same file in memory and updates a local file.
Action-C accesses the same physical file as Action-S and then locks down the main thread until a process is finished (the lockdown works fine; I don't need to worry about that).
All Action-S's and Action-A's only care about the actions for a particular file name. If the file "xyz" is open, Action-A for "xyz" does not care about what is happening to file "abc" (such as an Action-A for "abc").
For one file only, please refer to this diagram...
L P
\ / \
A\ S/ C\
\ / \
M W
"A" - Action-A (performs M then L)
"S" - Action-S (performs M then P)
"C" - Action-C (performs P then W)
"L" - Local File
"M" - File in Memory
"P" - Physical File
"W" - Process to Wait On.
"\" - a Thread
"/" - a Thread
|
|
|
|
|
The diagram again...
L P
\ / \
A\ S/ C\
\ / \
M W
|
|
|
|
|
Why do you need a separate thread for each of your actions? If I did not miss anything, for the same file no asynchronous actions are possible; all the threads have to wait for each other. An alternative solution would be to start one thread per file and, inside that thread, call a function for each action.
thread1
----"abc"------
| fL fP |
| \ / \ |
| fA\ fS/ fC\ |
| \ / \ |
| fM fW |
---------------
thread2
----"xyz"------
| fL fP |
| \ / \ |
| fA\ fS/ fC\ |
| \ / \ |
| fM fW |
---------------
thread3
----"bla"------
| fL fP |
| \ / \ |
| fA\ fS/ fC\ |
| \ / \ |
| fM fW |
---------------
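The one-thread-per-file layout above can be sketched with standard C++ threads (the fS/fA/fC bodies here are hypothetical stubs that just log; the real ones would do the file work). Because each file has exactly one thread, its three actions run in order and need no locking against each other:

```cpp
#include <string>
#include <thread>
#include <vector>

// Per-file worker: fS, fA and fC run strictly in sequence,
// so within one file there is nothing to synchronize.
void ProcessFile(const std::string& name, std::vector<std::string>* log)
{
    log->push_back(name + ":fS");  // memory file -> physical file
    log->push_back(name + ":fA");  // memory file -> local file
    log->push_back(name + ":fC");  // physical file -> wait on process
}

void RunAll()
{
    std::vector<std::string> logAbc, logXyz;
    std::thread tAbc(ProcessFile, std::string("abc"), &logAbc);  // one thread
    std::thread tXyz(ProcessFile, std::string("xyz"), &logXyz);  // per file
    tAbc.join();
    tXyz.join();
}
```

Different files still run concurrently; only the actions for the same file are serialized, which matches the requirement that Action-A for "xyz" never cares about "abc".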
|
|
|
|
|
Action-S and Action-A only wait for each other when accessing the file in memory. After that (if there are only these two threads), they can go about their business independently.
|
|
|
|
|
Then use something like this:

HANDLE hEvent = CreateEvent(0, TRUE, TRUE, "abc"); // note that we use the file name to name the event

// just in case
if (WaitForSingleObject(hEvent, INFINITE) == WAIT_OBJECT_0 && ResetEvent(hEvent)) // claim the event on your behalf
{
    // ... do your stuff ...
    SetEvent(hEvent); // let somebody else continue
}
|
|
|
|
|
If you're just going to lock access, you really want a mutex there, though:

HANDLE hMutex = CreateMutex(0, FALSE, Filename);
WaitForSingleObject(hMutex, INFINITE);
// ... protected work ...
ReleaseMutex(hMutex);
|
|
|
|
|
Unless you are sharing the lock between multiple processes, you want to use a critical section, not a mutex. Critical sections are much faster.
Neville Franks, Author of ED for Windows. www.getsoft.com
Make money with our new Affiliate program
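For reference, a critical section is an in-process lock; in standard C++ the per-file serialization discussed here can be sketched with one std::mutex per file name (class and method names are hypothetical):

```cpp
#include <map>
#include <mutex>
#include <string>

// One in-process lock per file name, analogous to keeping a
// CRITICAL_SECTION per file instead of a (slower) kernel mutex.
class FileLocks
{
public:
    std::mutex& For(const std::string& filename)
    {
        std::lock_guard<std::mutex> g(m_mapLock);  // protect the map itself
        return m_locks[filename];                  // created on first use
    }
private:
    std::mutex m_mapLock;
    std::map<std::string, std::mutex> m_locks;
};
```

A thread doing Action-S on "xyz" would take std::lock_guard<std::mutex> g(locks.For("xyz")); work on "abc" is never blocked, because each file name maps to its own lock.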
|
|
|
|
|
Yeah, completely correct. As further overkill, the example used a named object. I don't know if it really matters; it's more that I am reluctant to create named objects when an unnamed one will do, since I always prefer to limit my exposure.
(Oh yeah, which I did not do in that example.)
|
|
|
|