|
Hi All,
I need to catch all keyboard events, and I need to intercept some of the keys.
So I am using a Windows hook to do it.
But for some reason, when I call "SetWindowsHookEx" I get a wrong return code and my hook does not work.
This is my code ( just the relevant part ):
LRESULT CALLBACK KeyboardProc(int code,
                              WPARAM wParam,
                              LPARAM lParam)
{
    LRESULT lRetCode = 0;

    return lRetCode;
}

LRESULT SetHook()
{
    HINSTANCE hinstance = NULL;
    HHOOK hookHandle = NULL;
    LRESULT lRetCode = 0;

    hookHandle = SetWindowsHookEx(WH_KEYBOARD, KeyboardProc, hinstance, 0);

    return lRetCode;
}
<br />
What did I do wrong?
Thanks for any help.
|
|
|
|
|
A couple of things, according to the SetWindowsHookEx [^] article in MSDN:
1. As a rule, your hook function must be defined in a DLL. It's not clear from your code that you've done that.
2. You are passing an hinstance value of NULL, which probably isn't correct. You need to pass the instance handle for the DLL, which is a parameter to your DllMain[^] function.
Software Zen: delete this;
|
|
|
|
|
1. Thanks for the help.
2. From what I understand, the hook can live in the application code and not only in a separate DLL - am I wrong? And in that case (not in a DLL), what should be passed as the hinstance?
|
|
|
|
|
For most cases, the hook function must be in a separate DLL. The only case where it can be in an EXE is when you are hooking only a specific thread within your own process. For that to be the case, you must pass the thread ID as the last argument to SetWindowsHookEx. In that situation, I believe you pass NULL for the instance handle.
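To make the thread-local case concrete, here is a minimal Windows-only sketch (an assumption-laden illustration, not a tested implementation); note it also chains to CallNextHookEx, which the original code omits:

```cpp
#include <windows.h>

// Windows-only sketch: hooking the *current thread's* keyboard input,
// which is the one case where the hook procedure may live in the EXE
// and the module handle may be NULL.
LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code >= 0) {
        // Inspect wParam (virtual-key code) / lParam (key state) here;
        // return a nonzero value to swallow the keystroke.
    }
    // Always chain to the next hook in the chain for events you don't handle.
    return CallNextHookEx(NULL, code, wParam, lParam);
}

HHOOK SetThreadKeyboardHook()
{
    // NULL module handle + current thread ID => thread-local hook in the EXE.
    return SetWindowsHookEx(WH_KEYBOARD, KeyboardProc,
                            NULL, GetCurrentThreadId());
}
```

If SetWindowsHookEx returns NULL, call GetLastError() to find out why.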
Software Zen: delete this;
|
|
|
|
|
Hi,
where can I find the New ATL Object (RadBytes Category) in VS2003? I can't find it in the wizard.
Thanks
|
|
|
|
|
|
I am trying to add an item to the combobox, using the code below...
....
HWND hComboBox = CreateWindow(WC_COMBOBOXEX, L"", WS_BORDER | WS_CHILD | CBS_DROPDOWNLIST, 160, 70, 160, 23, hWnd, NULL, hInst, 0);
COMBOBOXEXITEM item;
memset(&item, 0, sizeof(COMBOBOXEXITEM));
item.iItem = 0;
item.mask = CBEIF_TEXT;
item.pszText = TEXT("Test");
item.cchTextMax = 4;
SendMessage(hComboBox, CBEM_INSERTITEM, 0, (LPARAM)&item);
....
But the item is not added (SendMessage returns 0). What am I doing wrong?
|
|
|
|
|
A return of 0 means the item was inserted at the first position (index 0).
A return of -1 would indicate failure.
You don't have to set item.cchTextMax when setting text; it is only needed when retrieving text.
|
|
|
|
|
|
This is my first time implementing a plugin interface for an application, and I am having trouble deciding the best way to communicate between my plugin DLL and the host application.
I've read most of the articles here, and I can't come to any conclusion. My plugin DLL has specific functions that are exported. When my host wants to invoke something in the DLL, it calls the exported function, located via GetProcAddress. When my DLL wants to communicate with the host, it sends a registered message to the host application and uses WPARAM and LPARAM for the data. This system works, but it is cumbersome to send more than two pieces of information at a time, and I'm worried about memory corruption between the DLL and the host.
Some of the articles suggest (but don't necessarily show how to) giving the host a similar interface that can be called from the dll.
What is the "best" approach?
Thanks
modified 12-Jul-20 21:01pm.
|
|
|
|
|
Since you are using DLL's for your plugins, you can pass the address of a callback function in your host to the DLL. The DLL can then call that function when it needs to tell the host something. When you pass the callback function to the DLL, you'll also want to pass an identifying value to the DLL at the same time. The DLL, when it calls the callback function, passes the identifying value in addition to any other arguments required. The host can then use the identifying value to determine which plugin called the callback function.
This is similar to the Windows API approach for a wide variety of things. This is a lot simpler and more flexible than the registered message approach.
Software Zen: delete this;
|
|
|
|
|
Great, thanks, I hadn't even considered that route.
modified 12-Jul-20 21:01pm.
|
|
|
|
|
Hi,
I have another question about Winsock. How do I manage sending data if I have, e.g., TCHARs on my client side? Do I have to convert my TCHAR strings on the client side to plain char strings, send them, and then convert them back to TCHARs on the server side? Or is there a better way to do this?
For information: I am writing a little protocol, so there are also integers in my send buffer... maybe this complicates things a little, I don't really know.
|
|
|
|
|
Sockets know nothing about the data you are sending. It's all just bytes at the socket level.
The only trick is to make sure you calculate BYTE counts and not TCHAR counts.
In other words, don't use _tcslen()... use _tcslen() * sizeof(TCHAR) (of course, add 1 to _tcslen() for the NULL terminator if that's being sent).
|
|
|
|
|
Hey, thanks. I've now done this at the sending client to copy the TCHARs to my send buffer:
memcpy(pPointerToBuffer, MyString, wcslen(MyString) * sizeof(TCHAR));
And at the server side I do something like this:
const int n = 10; // assuming we know that this Unicode string has a length of 10
TCHAR ReceivedString[n];
memcpy(ReceivedString, pPointerToBuffer, sizeof(TCHAR) * n);
...
This works and I think it's correct. Or is there a better way to do this?
Again thank you
|
|
|
|
|
You need to make sure that your client and server both agree on what a TCHAR is. They both need to be compiled for UNICODE, or they both need to be compiled for ANSI.
Software Zen: delete this;
|
|
|
|
|
In addition to Gary Wheeler's response..
This line
memcpy(pPointerToBuffer,MyString,wcslen(MyString)*sizeof(TCHAR))
IMO could be confusing. wcslen() is for wchar_t types only, but TCHAR is generic and its type
depends on whether UNICODE is defined or not. For code that could potentially be compiled with
OR without UNICODE then I'd recommend using all generics:
// All generics
memcpy(pPointerToBuffer,MyString,_tcslen(MyString)*sizeof(TCHAR))
For code that will always be UNICODE compiled, use:
// All wchar_t
memcpy(pPointerToBuffer,MyString,wcslen(MyString)*sizeof(wchar_t))
Whatever is best for your situation - the point is consistency (it makes it easier to prevent or track down bugs later).
As stated by Mr Wheeler, both ends will need to know the character type. That can be exchanged in
initial handshake between peers or it could be assumed (always the same). Both are fine - it's
a matter of what you need.
Along the same lines, you mentioned you have int or DWORD values you send as well. If you know
the CPU on both ends will always store "int"s in little-endian format then you can just send
them as a byte stream (byte count = sizeof(int)). If there's a chance the peers will use
different byte ordering then you'll want to convert them to network byte order (or whatever
scheme you choose to use) so that both ends will always be able to put an int back together
in the proper order.
All that aside, your code is fine for Unicode strings.
|
|
|
|
|
Hey, thanks a lot. Now it's all clear to me. (sry for late reply)
|
|
|
|
|
Greetings
I'm building a website that has an automated engine to display pictures. There is a menu that lets you choose the event. The directories are organized like this:
- root
-- Weddings
--- WBlaBla1
---- Pic1
---- Pic2
---- PicN
--- WBlaBla2
---- Pic1
---- Pic2
---- PicN
--- SomeOtherTypeOfEventWithTheSameStructureHasPreviously
-- SomethingElse
The purpose is to sell the pictures, so it will have a shopping cart. The problem is that I have the pictures in very high resolution, and I want to prevent users from accessing the pictures in this format (i.e. 1807x1772). I know that PHP can handle this (i.e. generate thumbnails), but the thing is that, over time, there will be more and more events, and this consumes space on the server, and the space is rented, of course.
So I had the idea of writing a C++ application that converts ALL the pictures in a directory to a low-resolution version AND adds a watermark; that way the sysadmin could convert the pictures before uploading them to the server. What I would like to ask is how to do this in C++ - I mean, not how to find the pictures and that sort of thing, but how to lower the resolution and add the watermark. The pictures are all in JPG format.
Best regards
hint_54
|
|
|
|
|
The easiest way I know of (on Windows) is to use GDI+ to load and save the images. A simple stretch-blit to a smaller, proportional size is all that's necessary... load, resize, and save.
|
|
|
|
|
I'll take a look thx
hint_54
|
|
|
|
|
Here ya go... a holiday present.
(I ripped the GetEncoderClsid() right out of the GDI+ docs.)
int GetEncoderClsid(const WCHAR* format, CLSID* pClsid)
{
    UINT num = 0;
    UINT size = 0;
    Gdiplus::ImageCodecInfo* pImageCodecInfo = NULL;

    Gdiplus::GetImageEncodersSize(&num, &size);
    if (size == 0)
        return -1;

    pImageCodecInfo = (Gdiplus::ImageCodecInfo*)malloc(size);
    if (pImageCodecInfo == NULL)
        return -1;

    Gdiplus::GetImageEncoders(num, size, pImageCodecInfo);
    for (UINT j = 0; j < num; ++j)
    {
        if (wcscmp(pImageCodecInfo[j].MimeType, format) == 0)
        {
            *pClsid = pImageCodecInfo[j].Clsid;
            free(pImageCodecInfo);
            return j;
        }
    }
    free(pImageCodecInfo);
    return -1;
}
...
Gdiplus::Bitmap SrcBitmap(L"D:\\Source\\Images\\sony-cybershot.jpg", FALSE);
Gdiplus::Bitmap DstBitmap(320, 240, SrcBitmap.GetPixelFormat());
Gdiplus::Graphics DstGraphics(&DstBitmap);
DstGraphics.DrawImage(&SrcBitmap, 0, 0, 320, 240);

CLSID jpgClsid;
GetEncoderClsid(L"image/jpeg", &jpgClsid);
DstBitmap.Save(L"D:\\Source\\Images\\sony-cybershot_test.jpg", &jpgClsid, NULL);
|
|
|
|
|
Djii thx !!
hint_54
|
|
|
|
|
I downloaded the Apache Portable Runtime for Windows. It included a VC6 solution and project definitions. I loaded the solution with Visual Studio 2005 and it successfully converted the solution and project to the new format. I can compile the code with the converted solution.
I decided to create a new solution from scratch. But when I compile my solution it finds the wrong include file. In my configuration I specify an include search path of:
".\include; .\include\arch; .\include\arch\win32; .\include\arch\unix"
The include in question, "apr_arch_file_io.h", exists in both the "win32" and "unix" directories, but the compiler loads the one in the "unix" directory.
I do not understand why the converted solution works and mine does not. If I remove the include from the "unix" directory, then my solution compiles cleanly.
Any thoughts as to what configuration option I have to set for my solution to work? I have looked at the converted project definitions but nothing obvious stands out.
Thanks for any help
James Johnson
|
|
|
|
|
One thought only: look at the preprocessor directives for the converted project and in the code. When cross-platform code is involved you normally insert '#ifdef' statements in the code to specify which platform you are compiling for.
INTP
"Program testing can be used to show the presence of bugs, but never to show their absence." - Edsger Dijkstra
|
|
|
|
|