|
I think you should look into EnumChildWindows and EnumWindows (look on MSDN or here on CP).
Keeping handles around for a long time is not wise. If you must keep handles, keep them as local as possible.
Hope this helps.
"If I don't see you in this world, I'll see you in the next one... and don't be late." ~ Jimi Hendrix
|
|
|
|
|
Hello,
You can use a std::map<std::string, HANDLE> as an associative array to store the window handles...
Listboxes can store a 32-bit value with each item you insert; MFC's CListBox provides SetItemData() for this purpose...
Multiply it by infinity and take it beyond eternity and you'll still have no idea about what I'm talking about.
|
|
|
|
|
Thank you for your comments; I'll look into your advice!
|
|
|
|
|
I have a dialog with an edit box on it. The dialog is named IDD_OPENGLDIALOG_DIALOG and the edit control is named IDC_EDIT1. I get a 'constant' error when I try to compile the following code. I have tried many variations and I still get this error. Please help!
CString holder1,holder2,holder3,holder4,holder5,holder6,holder7;
holder1 += IDD_OPENGLDIALOG_DIALOG.IDC_EDIT1;
Thanks,
Chris
|
|
|
|
|
IDD_OPENGLDIALOG_DIALOG is not an object but an integer. If you look in your resource.h, you'll see lines such as:
<font color=#0000FF>#define</font> IDD_OPENGLDIALOG_DIALOG <font color=#000080>101</font>
so you cannot apply the . operator to it.
Instead, do this:
CString strEditText;
CEdit* pEdit = (CEdit*)GetDlgItem(IDC_EDIT1);
pEdit->GetWindowText(strEditText);
I also advise you to rename your edit control (IDC_EDIT1) to a more meaningful name...
TOXCCT >>> GEII power
|
|
|
|
|
I have a BS_PUSHBUTTON inside a BS_GROUPBOX. When I click the pushbutton, my WndProc doesn't seem to receive the message about the button being clicked.
Any help on getting the button click message to WndProc would be greatly appreciated.
|
|
|
|
|
Hi,
I have never used CArchive, but a friend said it was easy to do. After a frustrating Sunday, I must defer to you.
I have a CStringArray declared as: CStringArray m_sAstr;
and I wish to store its entire contents in a binary CArchive.
Foo::LoadSave()
{
int nSize;
// Create archive ...
bool bReading = TRUE; // ... for writing
CArchive* ar = NULL;
CFile* pFile = new CFile();
ASSERT (pFile != NULL);
if (!pFile->Open ("foo.txt", CFile::modeReadWrite | CFile::shareExclusive)){
return;
}
try
{
pFile->SeekToBegin();
UINT uMode = (bReading ? CArchive::load : CArchive::store);
ar = new CArchive (pFile, uMode);
ASSERT (ar != NULL);
}
catch (CException* pException)
{
return;
}
if(ar->IsStoring())
{
nSize = m_sAstr.GetSize();
for(int i = 0;i<=nSize;i++){
ar->Write(m_sAsrt[i]);
}else if(ar->IsLoading())
{
}
I am not sure what I am doing wrong. A small sample, please.
}
"Naked we come and bruised we go."
- James Douglas Morrison
Best Wishes,
ez_way
|
|
|
|
|
BaldwinMartin wrote:
for(int i = 0;i<=nSize;i++){
You are overrunning the array: valid indices run from 0 to nSize-1!
Prefer this:
<font color=#0000FF>for</font> (<font color=#0000FF>int</font> i = 0; i <font color=#FF0000><</font> nSize; i++) {
Note also the typo (m_sAsrt vs. m_sAstr), and that CArchive::Write() takes a buffer pointer and a byte count; to archive a CString, use the << operator instead.
TOXCCT >>> GEII power
|
|
|
|
|
Hi
And thank you.
I understand what you said is true, but my question was referring to CArchive.
Is there a known issue with storing asym. data?
Is there a problem with using a CList or CArray?
I do thank you, and hope that you can help with these issues.
"Naked we come and bruised we go."
- James Douglas Morrison
Best Wishes,
ez_way
|
|
|
|
|
|
First add two new Unicode configurations (for debug and release) to the project (Build->Configurations->Add). Next, add the definition /D "_UNICODE" to the preprocessor definitions of your Unicode configurations (Project->Settings->Preprocessor definitions).
Regards,
Andrzej Markowski
|
|
|
|
|
|
Hi all,
I am reading a file which contains a 128-byte header followed by the actual pixel data. The bit depth is 12 bits, with the data type given as LSB_MSB (I think this means the first byte contains the 8 LSBs and the next byte contains the remaining 4 MSBs... am I right?). So the image data has a 12-bit depth with a range of 4096 grey levels. I am trying to convert this into a 24-bit BMP file, i.e. 8 bits per R, G, B channel; since this is a greyscale image, R, G and B will have the same value, which leaves us with 256 grey levels instead of 4096. How can I do this conversion? Or should I ignore the 4 MSBs, keep only the 8 LSBs, and build an RGB image from that? Please help me out; it's quite urgent.
Any suggestions or ideas are appreciated. I have posted about this several times over the last month, but I don't know whether I have been clear. Since this is a site full of experts, I think at least one of you may know the answer.
thanks in advance,
Suman
|
|
|
|
|
Hi,
Suman Niranjan wrote:
... or should I ignore the 4 MSBs, keep only the 8 LSBs, and build an RGB image from that?
No; if you ignore the highest 4 bits, you lose data.
A conversion algorithm for this is:
<br />
#define MAX_SRC 4096<br />
#define MAX_DEST 256<br />
<br />
unsigned int src = GetNextSrcValue();<br />
float nvalue = (float)src / MAX_SRC;<br />
unsigned int dest = (unsigned int)(nvalue * MAX_DEST);<br />
<br />
|
|
|
|
|
I think you could do the conversion more efficiently with a right shift (float calculations are really slow, unless the compiler optimises them out!)
The algorithm would be:
unsigned short original_12Bits = GetNextSrcValue();<br />
unsigned short new_8bits = original_12Bits >> 4;<br />
unsigned long RGB_32Bits = new_8bits | ((unsigned long)new_8bits << 8) | ((unsigned long)new_8bits << 16);<br />
displayThisPixel(lineNumber, pixelInLine, RGB_32Bits);<br />
(The parentheses matter: << binds less tightly than +, so writing new_8bits + new_8bits << 8 would not do what you expect.)
Academically, there may be a bit of a complication in that Red, Green, and Blue are not weighted the same in terms of intensity, i.e. R, G, B = 128, 0, 0 should not give the same monochrome intensity as R, G, B = 0, 128, 0 or R, G, B = 0, 0, 128, but I don't think that this matters in your application.
I wrote an application which converts 8-bit monochrome (from a frame grabber card) to 16-bit RGB (which has 5 bits each of R, G, B, and 1 unused bit) using an algorithm similar to the one above, and it displays just fine on a display set to 16-bits per pixel.
|
|
|
|
|
Hi,
Thanks for your suggestion. I have done this conversion using a LUT (4096 to 256); will this produce a nice image? First I read the data byte by byte and looked at it, and it seems to be as I said: the LSBs are in the first byte, and in the second byte the upper nibble is zero and the lower nibble holds the remaining four bits. What I did is read both bytes at a time and add the value of the first byte to the value computed from the four lower bits of the second byte (Bit0*256 + Bit1*512 + Bit2*1024 + Bit3*2048), which gives values in the range 0-4095; then I used the LUT to get the corresponding grey level in the 0-255 range. Can you tell me whether I am right? I will also try your approach and let you know the results.
thanks,
Suman
|
|
|
|
|
I'd guess that you are probably right that LSB_MSB means the first byte has the 8 LS bits and the next byte has the 4 unused bits (presumably the high nibble) and the 4 MS bits (the low nibble).
Don't just go ahead without checking, though! Have a look at a sample data file to see if this assumption is valid; the weirdos who created this file format may have decided to left-justify the 12 bits in a 16-bit word, so the lowest-order 4 bits are zero.
This may sound unlikely, but there are technical reasons for this alternative approach, in fact this is done on ARINC-429 avionic serial interfaces.
|
|
|
|
|
Hi Norman,
Thanks for your suggestion. I have done this conversion using a LUT (4096 to 256); will this produce a nice image? First I read the data byte by byte and looked at it, and it seems to be as I said: the LSBs are in the first byte, and in the second byte the upper nibble is zero and the lower nibble holds the remaining four bits. What I did is read both bytes at a time and add the value of the first byte to the value computed from the four lower bits of the second byte (Bit0*256 + Bit1*512 + Bit2*1024 + Bit3*2048), which gives values in the range 0-4095; then I used the LUT to get the corresponding grey level in the 0-255 range. Can you tell me whether I am right?
thanks,
Suman
|
|
|
|
|
Interesting - I would not have thought of using a look-up table, though I doubt it will be faster than simply shifting right by 4.
I assume that your look-up table looks something like this:
InValue OutValue<br />
4095 255<br />
4094 255<br />
. . . . <br />
4080 255<br />
4079 254<br />
. . . .<br />
16 1<br />
15 0<br />
. . . .<br />
0 0
Your approach should work, as you have described it. The only change I would suggest is that you don't need to treat each bit in the MS nibble individually.
I would do something like:
LSByte = getNextByteFromFile();<br />
MSByte = getNextByteFromFile();<br />
<br />
totalValue = (int)LSByte | (((int)MSByte & 0x0F) << 8);<br />
<br />
newValue = LookUpTable[totalValue];<br />
<br />
RGB24 = newValue | (newValue << 8) | (newValue << 16);
(The parentheses matter, since << binds less tightly than +, and masking MSByte with 0x0F keeps totalValue inside the 4096-entry table even if the unused nibble contains stray bits.)
I think that this should give you a monochrome display which is really good. In my application I grab monochrome video frames from a camera at 8 bits per pixel, then I convert to RGB16, using a similar technique to above. This only leaves 5 bits per colour, and since R, G, B are the same, this means that I only end up with 32 different shades! The image which is displayed looks pretty good, although you can sometimes see the different grey levels. Your application will have 256 different shades of grey, which should give an excellent monochrome image!
|
|
|
|
|
Hi
I am programming a simple movie player with DirectShow now. Everything seems OK, the only thing I haven't been able to do is to change the movie resolution on the fly (for example, I want user to be able to specify, when the movie is running, whether they want to zoom in or zoom out, etc.) Could someone point me in the right direction on how to do this?
Thanks!
|
|
|
|
|
Hello:
I have this interesting problem...
I have to use a C library that contains a function defined as follows:
int func(char * cpData);
I have been using a structure, actually a nested structure, as follows:
struct A
{
char cString;
char cAnotherstring;
}sA;
struct B
{
int iIndex;
float fData[MAXDATA];
}sB;
struct C
{
struct A nA;
struct B nB[8];
}sStream;
I can call the function with no problems
using func((char *) &sStream);
I wanted to use a vector for the float fData,
so that struct B becomes:
struct B
{
int iIndex;
vector< float> fData;
} sB;
When I call the function I get garbage data.
I do take care of reserving the data size;
in fact, I can see that the correct data is there just prior to calling
this legacy function.
Is there a way I can convert somehow to a C array which the function understands,
while still keeping the advantages of the vector?
Your speedy reply will be very much appreciated.
|
|
|
|
|
Hi,
Change call to this:
func((char*)&sStream.nA)
|
|
|
|
|
Sorry but the whole structure has to be sent.
|
|
|
|
|
And how do you want to use the C++ vector class in a C library?
|
|
|
|
|
Hello,
The function takes a character array as an argument, and treating your struct that way only works while the struct is a flat block of memory; a std::vector member breaks that, because the vector holds a pointer to heap storage rather than an inline array, so the library reads garbage where it expects the floats.
A better way is to do your own data conversion prior to passing the data to the function.
Multiply it by infinity and take it beyond eternity and you'll still have no idea about what I'm talking about.
|
|
|
|