I'm not seeing a deadlock situation in the pseudo code...
If you run it in the debugger and break, are both threads sitting in the Wait function?
If so, why?
Mark Salsbery
Microsoft MVP - Visual C++
Here is what's happening in debug mode:
snapshot(exposure)
{
    set camera exposure
    start snapshot thread if not running
    SetEvent(getnewframe);
    ::WaitForSingleObject(waitfornewframe, INFINITE);  -- set breakpoint(1)
    display the frame
}
...
camthread
{
    do
    {
        dummy variable  -- set breakpoint(2)
        ::WaitForSingleObject(getnewframe, INFINITE);
        take a 20ms snapshot
        SetEvent(waitfornewframe);
    } while (camrunning)
}
When I hit the snapshot button, the program stops at breakpoint(1), then goes to breakpoint(2), and that's it; from there it does nothing, while the C++ interface shows the program as being in [run] mode. From the definitions of SetEvent and WaitForSingleObject, I too don't see anything wrong here.
Do they actually need to be in two different threads? I don't think that's the case. Interesting!
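For what it's worth, the handshake in the pseudocode can be sketched with portable C++11 primitives (illustrative only; the Win32 auto-reset events in the real code behave like the one-shot flag below):

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// One-shot flag imitating a Win32 auto-reset event.
struct Event {
    std::mutex m;
    std::condition_variable cv;
    bool signaled = false;
    void set() {
        { std::lock_guard<std::mutex> l(m); signaled = true; }
        cv.notify_one();
    }
    void wait() {  // blocks like WaitForSingleObject(..., INFINITE)
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [&] { return signaled; });
        signaled = false;  // auto-reset
    }
};

Event getnewframe, waitfornewframe;
std::atomic<bool> camrunning{true};
std::atomic<int> framesTaken{0};

void camthread() {
    while (camrunning) {
        getnewframe.wait();     // block until snapshot() asks for a frame
        if (!camrunning) break;
        ++framesTaken;          // stands in for "take a 20ms snapshot"
        waitfornewframe.set();  // hand the frame back
    }
}

void snapshot() {
    getnewframe.set();          // ask the camera thread for a frame
    waitfornewframe.wait();     // block until it's ready, then display it
}
```

If both sides really do sit in their waits forever, then one of the two set() calls never ran; with auto-reset events the usual suspects are a signal consumed by an earlier stray wait, or both halves of the handshake ending up on the same thread, so the wait can never be satisfied.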
PKNT
Hmm, the only reason I can see for that happening is if "take a 20ms snapshot" never returns, so the waitfornewframe event never gets set.
Mark Salsbery
Microsoft MVP - Visual C++
I even tried that by commenting it out, but it does the same thing. I'll keep trying different ways and see. Do you think I may need to define those event handles in CMutex objects?
PKNT
Kiran Satish wrote: Do you think I may need to define those event handles in CMutex objects
No, that's a different type of synchronization object, meant for mutual exclusion.
Mark Salsbery
Microsoft MVP - Visual C++
Does it matter how I am defining my thread? I am using the DWORD WINAPI camclass::camthread(LPVOID) form.
PKNT
How are you creating your threads?
If you're using MFC and using MFC class objects in your threads, you should be using AfxBeginThread(), and the prototype for the thread proc is
UINT __cdecl ThreadProc(LPVOID pParam);
If you're not using MFC but using CRT functions anywhere in your threads (including new/delete!), then you should be using _beginthread() or _beginthreadex(), and the prototype for the thread proc is
unsigned __stdcall ThreadProc(void *lpParameter);
For a pure C/C++ thread on Windows which doesn't use MFC or the CRT, you can use ::CreateThread(), and the prototype for the thread proc is
DWORD WINAPI ThreadProc(LPVOID lpParameter);
For MFC and/or CRT, make sure you're linking to the multithread version of those libraries.
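As a hedged sketch (Win32-specific, reusing the post's camclass name; RunCameraLoop is a hypothetical member standing in for the do/while loop), the _beginthreadex variant looks like this. Note that a non-static member function such as camclass::camthread cannot be used directly as a thread proc, because the hidden this parameter changes the signature:

```cpp
#include <windows.h>
#include <process.h>

class camclass;  // from the post; definition assumed elsewhere

// A static or free function matches the required signature; the object
// pointer travels through the LPVOID parameter instead of 'this'.
unsigned __stdcall CamThreadProc(void* pParam)
{
    camclass* pCam = static_cast<camclass*>(pParam);
    // pCam->RunCameraLoop();  // hypothetical member running the do/while loop
    return 0;
}

HANDLE StartCamThread(camclass* pCam)
{
    unsigned threadId = 0;
    return reinterpret_cast<HANDLE>(
        _beginthreadex(NULL, 0, CamThreadProc, pCam, 0, &threadId));
}
```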
Mark Salsbery
Microsoft MVP - Visual C++
I got it running. Unfortunately it doesn't work when I do it the way we discussed earlier, but when I run it in two different threads it works like it should. I am not sure why it behaved like that before, but it doesn't work when I have it split between a function and a thread.
PKNT
I always thought it was two threads!
Mark Salsbery
Microsoft MVP - Visual C++
Sorry about that confusion... maybe I wasn't clear enough. Anyway, now I know how to use events in an efficient way. Thanks for your help and patience on this.
PKNT
Kiran Satish wrote: maybe I wasn't clear enough
I think I just assumed
Glad you got it working!
Cheers,
mark
Mark Salsbery
Microsoft MVP - Visual C++
I am back with another issue in this application: saving frames in real time without dropping any and in the same order. I tried using BipBuffer[^]. This is how I implemented it: the actual thread keeps running and signals an event to the 'SaveImage' thread every ~20 msec, where it appends the image data to the BipBuffer memory, while another thread runs all the time saving images from the BipBuffer until the BipBuffer memory is NULL. I am allocating 64x(512x512) bytes of memory to the BipBuffer. While running like this, the closed loop (main thread) starts off at 50Hz and after a while (like 6-7 sec, depending on the size of the BipBuffer memory) it slows down to around 39Hz and then slowly goes back to 50Hz.
Is there any other way I can save all the images in real time at 50Hz without dropping the actual frames coming in from the camera (i.e. I have ~17 msec to save each frame)?
-thanks
PKNT
Kiran Satish wrote: I am allocating 64x(512x512) bytes of memory to the BipBuffer
Are you sure that's a big enough buffer? If the data is 24bpp RGB, you may want 64x(512x512x3).
Which thread is slowing down? Are you locking something (or staying in a critical section) too long?
Mark
Mark Salsbery
Microsoft MVP - Visual C++
My images are only 8bpp (0-255) and I use an unsigned char pointer for the image data. I am not sure which thread is slowing down, and as far as I know I don't have any critical sections. Here is the order:
BipBuffer ImageBuffer;
ImageBuffer.AllocateBuffer(128 * m_FrameBufferSize);  // m_FrameBufferSize = 512x512

-Main Thread (camera thread)-  // this controls the processing thread
while (camthread)
    take image (~20.5 msec)
    copy image data into shared buffer
    set event to processing thread

-Processing thread-
while (processing thread)
    wait for event from camera
    copy image data from shared buffer to processing buffer
    if (saveimage)
        copy image data from shared buffer to buffer for saving image
        set event to save image thread
    do the processing on image data in processing buffer
    update displays for results

-Save Image thread-
BYTE* pData;
wait for save image event
pData = ImageBuffer.Reserve(m_FrameBufferSize, iResult);
memcpy(pData, m_pImageBuffer, iResult);
ImageBuffer.Commit(iResult);  // iResult will be equal to m_FrameBufferSize

-Store Image thread-  // this thread runs continuously until BipBuffer == NULL
while ((pData = ImageBuffer.GetContiguousBlock(iResult)) != NULL) {
    create filename
    write image data to a file (I tried writing in two ways, as TIFF and as a binary file, to see if there would be any difference in speed)
    ImageBuffer.DecommitBlock(iResult);
}
I think I might just use a standard circular buffer of my own rather than BipBuffer, as I see there is something going on when using it. When I run the program loop for 15 sec, the number of images being stored increases from one run to another. I hope I am clear. If I comment out writing the image data to a file in the store image thread, everything runs fine at the desired speed.
-thanks
PKNT
Kiran Satish wrote: I think I might just use a standard circular buffer of my own rather than BipBuffer, as I see there is something going on when using it.
I think that's a good place to start. I use simple queues (FIFO) with good results.
Use preallocated buffers, and design them so they can work with the queue with as little blocking synchronization as possible.
Also try to eliminate copying the frame data so many times (~13MB/sec for each!) - it looks like every thread does a copy. If any threads can share the same copy, that will be much more efficient.
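A minimal sketch of that suggestion (assuming 512x512 8bpp frames; the names are illustrative): frame buffers are preallocated once and recycled between a free queue and a full queue, so the camera thread never allocates and each frame is copied only once:

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <queue>
#include <vector>

const size_t kFrameSize = 512 * 512;  // one 8bpp frame

// Preallocated frame buffers recycled through two FIFO queues; the only
// blocking is a short lock around a pointer push/pop.
class FramePool {
    std::mutex m;
    std::queue<std::vector<unsigned char>*> freeQ, fullQ;
    std::vector<std::vector<unsigned char>> storage;
public:
    explicit FramePool(size_t n)
        : storage(n, std::vector<unsigned char>(kFrameSize)) {
        for (auto& f : storage) freeQ.push(&f);  // all buffers start free
    }
    std::vector<unsigned char>* AcquireFree() {      // camera thread
        std::lock_guard<std::mutex> l(m);
        if (freeQ.empty()) return nullptr;           // pool exhausted: disk can't keep up
        auto* f = freeQ.front(); freeQ.pop(); return f;
    }
    void SubmitFull(std::vector<unsigned char>* f) { // camera -> writer
        std::lock_guard<std::mutex> l(m); fullQ.push(f);
    }
    std::vector<unsigned char>* TakeFull() {         // writer thread
        std::lock_guard<std::mutex> l(m);
        if (fullQ.empty()) return nullptr;
        auto* f = fullQ.front(); fullQ.pop(); return f;
    }
    void Recycle(std::vector<unsigned char>* f) {    // writer -> pool
        std::lock_guard<std::mutex> l(m); freeQ.push(f);
    }
};
```

The writer thread loops on TakeFull() (or waits on an event), writes the frame out, then Recycle()s the buffer; if AcquireFree() returns nullptr, the camera thread knows immediately that the disk is falling behind, instead of silently overwriting frames.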
Mark Salsbery
Microsoft MVP - Visual C++
I did some more tests and came to the conclusion that writing data to the HDD is the only thing slowing down the loop, and randomly at that. Sometimes it doesn't and sometimes it does, very random, but most of the time it slows down and then recovers, no matter whether it's a FIFO structure or a fixed memory allocation. I did take out the save image thread and copy the image data directly into the FIFO buffer within the processing thread (this is because I want to keep all the images in sequence without dropping any, irrespective of the speed of the writing thread), while using only one store image thread where the actual writing to the HDD takes place from the FIFO. Again, if I comment out just the one line that writes data to the HDD, everything works fine. Just for info, I have two 7.2k rpm SATA drives in RAID1.
The only solution I can think of for now is to save the images in a big block of memory and write them to the HDD once the loop is stopped. Of course this is not at all an efficient way to go, and we can't run the loop forever; there would be a limit of ~22 sec of continuous run.
PKNT
If I'm reading this correctly, and you've got two threads running, you might need to add some critical sections or events around your use of the BipBuffer; as the code stands it's not thread-safe.
Also, I did a test with 5 sec: the camera is indeed getting all the frames (5000/20.5 = ~243) consistently while using BipBuffer and saving the images, but for the first run I get ~206 saved, the second run ~213, the third run ~224, etc., which I think means the frame data in the buffer is being overwritten, though the difference in the numbers doesn't explain anything to me. There should be a pretty simple way to write data to the HDD at 50 frames/sec (at 512x512 bytes/frame), which I don't think is much when you consider real-time applications.
PKNT
Hey Friends,
I need a list of running applications' main windows, as displayed in Task Manager's Applications tab.
I tried using EnumWindows and enumerated all windows.
However, the callback function is returning more than I need.
Given below is the code that I tried.
BOOL CALLBACK WndProc(HWND cr_hWnd, LPARAM cr_lParam)
{
    if (::IsWindow(cr_hWnd))
    {
        if (::GetParent(cr_hWnd) == 0)
        {
            char vl_sText[256];
            ::GetWindowText(cr_hWnd, vl_sText, sizeof(vl_sText));
            TRACE("%s\n", vl_sText);
        }
    }
    return TRUE;
}
void CEnumWndDlg::OnButton1()
{
    ::EnumWindows(WndProc, 0);
}
Need Help.
Regards
I wrote a TCP client class that reads a known number of bytes. I stepped through the server code to make sure that the server is sending the correct number of bytes, dumping the contents of the transmit buffer to the screen. The client seems to receive the data fine, as recv() returns the same number of bytes sent by the server, but the contents of the receive buffer are empty. Here's the client code fragment for the read:
WSAEVENT hDataRcvdEvent = WSACreateEvent();
if (WSAEventSelect(sServer, hDataRcvdEvent, FD_READ) == SOCKET_ERROR) {
    errCode = WSAGetLastError();
    cerr << "Unable to create event object (WSAEventSelect): " << errCode << endl;
}
else {
    do {
        if (WSAWaitForMultipleEvents(1, &hDataRcvdEvent, FALSE, SERVER_TIMEOUT, FALSE) == WSA_WAIT_EVENT_0) {
            bytesRcvd = recv(sServer, rxBufPtr, bytesLeft, 0);
            bytesLeft -= bytesRcvd;
            rxBufPtr += bytesRcvd;
            if (bytesRcvd > 0) {
                totalRead += bytesRcvd;
                cout << "\nReceived " << bytesRcvd << endl;
            }
        }
    } while (bytesRcvd > 0 && bytesExpected > bytesRcvd);
}
What's really bothering me is that this TCP client class worked just fine in different code that did the SAME thing, the only difference being what it does with the RECEIVED data.
Another strange thing: when receiving large text data, the first recv() returns an empty buffer, but the second call to recv() returns the data that should have been returned in the first read. However, when receiving a small (< 32 byte) data packet, a subsequent read returns an empty receive buffer.
Member 4273454 wrote: while (bytesRcvd > 0 && bytesExpected > bytesRcvd);
Shouldn't that be something like:
while (bytesRcvd > 0 && bytesExpected > totalRead);
?
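A sketch of the loop with that fix applied (the socket read is abstracted behind a function pointer so the byte accounting can be checked without a live socket; readFn stands in for recv(), and the names are illustrative):

```cpp
#include <cassert>
#include <cstring>

// Signature standing in for recv(socket, buf, len, 0).
typedef int (*ReadFn)(char* buf, int maxLen);

// Keep reading until bytesExpected bytes have arrived, the peer closes
// (readFn returns 0), or an error occurs (readFn returns < 0). The loop
// condition compares against the running total, not the last chunk size.
int ReadExactly(ReadFn readFn, char* dst, int bytesExpected)
{
    int totalRead = 0;
    while (totalRead < bytesExpected) {
        int n = readFn(dst + totalRead, bytesExpected - totalRead);
        if (n <= 0) break;   // 0 = connection closed, < 0 = error
        totalRead += n;
    }
    return totalRead;
}
```

Comparing against `bytesRcvd` (the last chunk) instead of `totalRead` makes the loop exit as soon as one chunk happens to exceed `bytesExpected`'s remainder check, which matches the "second recv() returns the first read's data" symptom when the message arrives in pieces.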
Using Visual C++ Service Pack 6 on Windows 2000. When my automation client calls COleDispatchDriver::ReleaseDispatch(), thus releasing the server, the server receives a WM_CLOSE message. If there are other clients still connected, the server, if visible, is supposed to display a message to that effect and offer to close. If invisible, the server simply doesn't close. How does the server know which clients, if any, are still attached?
Thanks,
GF
garyflet wrote: How does the server know which clients, if any, are still attached?
I may be misinterpreting your question, but this is done by reference counting.
The server will in general not know anything about its clients, it only knows that it has one or more clients attached to it.
garyflet wrote: When my automation client calls COleDispatchDriver::ReleaseDispatch(), thus releasing the server, the server receives a WM_CLOSE message. If there are other clients still connected, the server, if visible, is supposed to display a message to that effect and offer to close.
From your question I assume you're talking about an application that runs as an out-of-process COM server. I wouldn't expect the server to close, or even the WM_CLOSE message being sent, unless the reference count reaches zero. Popping a dialogue asking the user whether to shut down the server or not seems a bit odd since there shouldn't be any clients left.
garyflet wrote: If invisible, the server simply doesn't close.
Don't quite get what you mean by this, but when a server shuts down and calls ::CoUninitialize it tries to dispatch all pending COM calls in a message loop. A server may hang inside this loop if the server hasn't released all COM servers it may have created.
Your problem may be related to this, or it may be popping an invisible dialogue box waiting for user input.
I think you need to explain your problem a bit further and what you're trying to do.
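Reference counting in its simplest form can be sketched like this (illustrative only, not MFC's actual implementation; in MFC, CCmdTarget keeps the count in its m_dwRef member):

```cpp
#include <cassert>

// Minimal sketch of COM-style reference counting: the server never learns
// *who* is attached, only that the count of attached clients is nonzero.
class RefCounted {
    long m_ref;
    bool m_shutDown;
public:
    RefCounted() : m_ref(0), m_shutDown(false) {}
    long AddRef()  { return ++m_ref; }
    long Release() {
        long r = --m_ref;
        if (r == 0) m_shutDown = true;  // only now may the server close
        return r;
    }
    bool shutDown() const { return m_shutDown; }
};
```

If the server posts itself WM_CLOSE on any Release() rather than only when the count reaches zero, it will try to shut down while clients are still attached, which is the symptom described above.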
"It's supposed to be hard, otherwise anybody could do it!" - selfquote "High speed never compensates for wrong direction!" - unknown
Thanks very much for your reply.
Roger Stoltz wrote: I may be misinterpreting your question, but this is done by reference counting.
The server will in general not know anything about its clients, it only knows that it has one or more clients attached to it.
That's exactly my question. I don't know how to do reference counting! My MDI server has a CCmdTarget derived class, CAutoApp. I noted that a CAutoApp gets created with every new client connection, so I put in an array to capture the LPUNKNOWN returned from GetInterface(&IID_IUnknown). However, I still don't know how to use that to tell me whether a client has disconnected.
Roger Stoltz wrote: From your question I assume you're talking about an application that runs as an out-of-process COM server. I wouldn't expect the server to close, or even the WM_CLOSE message being sent, unless the reference count reaches zero. Popping a dialogue asking the user whether to shut down the server or not seems a bit odd since there shouldn't be any clients left.
After any client calls ReleaseDispatch(), my server gets a WM_CLOSE message whether or not there are any clients left. I wouldn't even have to do a reference count if this were not the case. Maybe that's the real question: why do I get a WM_CLOSE message even when there are clients left? I'm not sure how to investigate that.
garyflet wrote: My MDI server has a CCmdTarget derived class: CAutoApp. I noted that CAutoApp gets created with every new client connection
Well, I consider this strange behaviour, because it means that there can be only one client per CAutoApp instance. The reference counting seems to be put aside.
If you override OnFinalRelease in your CCmdTarget derived class and put a breakpoint there, I assume it will get hit when any of the clients "disconnects", but for different CAutoApp instances. I assume that the WM_CLOSE message will be sent from the same call chain.
I suspect you're using the CCmdTarget derived class in a way it wasn't intended.
When a new client "connects" the reference count should be increased for the same object, i.e. the m_dwRef member of your CAutoApp object should be increased by one. But this apparently does not happen, instead you're creating a new instance of the class which sounds strange.
garyflet wrote: After any client calls ReleaseDispatch(), my server gets a WM_CLOSE message whether or not there are any clients left.
I would expect that since the reference count for the instance reaches zero. When the one and only client to the CAutoApp object "disconnects" the object will be destroyed.
garyflet wrote: why do I get a WM_CLOSE message even when there are clients left?
That's the thing: there are no clients left for that CAutoApp instance.
I suspect a design flaw here, but I cannot tell since I don't have enough information. It might require the complete source code and dig into it.
It sounds like the CAutoApp class should be a singleton, since it appears to control the lifetime of the entire application, but you've created multiple instances of it. When one instance's reference count reaches zero, it wants to close the application, hence the WM_CLOSE message.
My best tip is to re-evaluate your design.
"It's supposed to be hard, otherwise anybody could do it!" - selfquote "High speed never compensates for wrong direction!" - unknown