|
As a fairly new coder, I appreciate your insight! Thank you very much!
|
|
|
|
|
To continue on this: how do I now terminate this task, so that I can send a new task in its place?
|
|
|
|
|
Blake Miller wrote:
if( GetAsyncKeyState(VK_CONTROL) & 0x8000 )
Can you tell me where I can find a listing of these masks?
Even the 0x40000000? All I can determine is that 0x8000 is for the control keys, and that 0x40000000 limits me to the key-up transition.
Obviously, I need to change this to key down, as the task completes before the message box currently displays. Removing this mask helped me determine this, with a return value of -1...
Last question...
What reference material would you recommend reading for this type of hook information?
Jeff
|
|
|
|
|
You need to read the MSDN documentation for each function VERY carefully.
The GetAsyncKeyState function determines whether a key is up or down at the time the function is called, and whether the key was pressed after a previous call to GetAsyncKeyState. ... If the most significant bit is set, the key is down, and if the least significant bit is set, the key was pressed after the previous call to GetAsyncKeyState.
KeyboardProc Function
lParam
[in] Specifies the repeat count, scan code, extended-key flag, context code, previous key-state flag, and transition-state flag. For more information about the lParam parameter, see Keystroke Message Flags. The bits are laid out as follows.
Bits 0-15: the repeat count. The value is the number of times the keystroke is repeated as a result of the user holding down the key.
Bits 16-23: the scan code. The value depends on the OEM.
Bit 24: the extended-key flag. The value is 1 if the key is an extended key, such as a function key or a key on the numeric keypad; otherwise, it is 0.
Bits 25-28: reserved.
Bit 29: the context code. The value is 1 if the ALT key is down; otherwise, it is 0.
Bit 30: the previous key state. The value is 1 if the key is down before the message is sent; it is 0 if the key is up.
Bit 31: the transition state. The value is 0 if the key is being pressed and 1 if it is being released.
0x40000000 = 0100 0000 0000 0000 0000 0000 0000 0000 in binary.
Which means you are testing if bit 30 is set.
0x8000 = 1000 0000 0000 0000 in binary.
Which means you are testing if bit 15 is set.
|
|
|
|
|
I want to use NtCreateFile in my application, but I can't get a simple program calling NtCreateFile to compile with Visual Studio. I installed the DDK but don't know how to get started with a simple example of using NtCreateFile. Can someone help, or give pointers to introductory tutorials on this topic? Thanks.
|
|
|
|
|
NtCreateFile is an undocumented API and should not be required for standard Win32 applications, which can simply use CreateFile; they end up at the same native API.
Now, if you are writing a driver, the API that is documented in the DDK header files is ZwCreateFile. This encapsulates the overhead of creating an IRP_MJ_CREATE IRP and handling it, for ease of use within your driver. Of course, the downside of using such an API is that it is not flexible and can only be used at PASSIVE_LEVEL IRQL.
ZwCreateFile[^]
|
|
|
|
|
Suppose a program is doing some processing and the CPU usage shoots up to 99% (by design, not by error). How do I make it use only 50% of the CPU? I have seen a program somewhere that always shows 50% in Task Manager. Any idea how to achieve this behaviour? Thanks.
|
|
|
|
|
1. You can reduce the priority of your application or of specific worker threads.
The SetThreadPriority function sets the priority value for the specified thread. This value, together with the priority class of the thread's process, determines the thread's base priority level.
Why not just let the OS figure it out? Although I realize that an ill-behaved foreground application really can seem to take over your computer!
|
|
|
|
|
I want to obtain the theme colour of the border of a disabled button. I try:
GetThemeColor(hTheme, BP_PUSHBUTTON, PBS_DISABLED, TMT_BORDERCOLOR, &rgb );
but the returned colour is darker than the colour produced by the DrawThemeBackground call.
I've tried various other property IDs but to no avail.
Does anyone know how to obtain the correct theme colour?
|
|
|
|
|
RazorBridge wrote:
I've tried various other property IDs but to no avail.
I don't know for sure, but have you tried these in particular?
TMT_EDGELIGHTCOLOR
TMT_EDGEHIGHLIGHTCOLOR
TMT_EDGESHADOWCOLOR
TMT_EDGEDKSHADOWCOLOR
TMT_EDGEFILLCOLOR
--
jlr
http://jlamas.blogspot.com/[^]
|
|
|
|
|
Yes, tried them all but no luck.
|
|
|
|
|
Target OS = WinXP/SP2
Using VC6
I'm trying to modify Paul DiLascia's TaskKeyHook DLL as follows:
LRESULT CALLBACK MyTaskKeyHookLL(int nCode, WPARAM wp, LPARAM lp)
{
    KBDLLHOOKSTRUCT *pkh = (KBDLLHOOKSTRUCT *) lp;

    if (nCode == HC_ACTION)
    {
        BOOL bCtrlKeyDown = GetAsyncKeyState(VK_CONTROL)>>((sizeof(SHORT) * 8) - 1);
        BOOL bDelKeyDown  = (pkh->vkCode == VK_DELETE);
        BOOL bAltKeyDown  = (pkh->flags & LLKHF_ALTDOWN);

        if ( (pkh->vkCode==VK_ESCAPE && bCtrlKeyDown)
          || (pkh->vkCode==VK_TAB && bAltKeyDown)
          || (pkh->vkCode==VK_ESCAPE && bAltKeyDown)
          || (pkh->vkCode==VK_LWIN || pkh->vkCode==VK_RWIN)
          || (bCtrlKeyDown && bAltKeyDown && bDelKeyDown))
        {
            if (g_bBeep && (wp == WM_SYSKEYDOWN || wp == WM_KEYDOWN))
            {
                MessageBeep(0);
            }
            return 1;
        }
    }
    return CallNextHookEx(g_hHookKbdLL, nCode, wp, lp);
}
My addition is the last comparison in the if statement:
||(bCtrlKeyDown && bAltKeyDown && bDelKeyDown)
This does not have the desired effect of simply eating the keystroke. Why not, and how do I get where I want to go if a global hook isn't going to work?
------- sig starts
"I've heard some drivers saying, 'We're going too fast here...'. If you're not here to race, go the hell home - don't come here and grumble about going too fast. Why don't you tie a kerosene rag around your ankles so the ants won't climb up and eat your candy ass..." - Dale Earnhardt
"...the staggering layers of obscenity in your statement make it a work of art on so many levels." - Jason Jystad, 10/26/2001
|
|
|
|
|
I always thought the whole point of the Ctrl-Alt-Del sequence was that it *couldn't* be intercepted by application programs, but was under the sole control of the OS, so that it could be trusted. I'm sure you're not, but if you could catch this combination, what's to stop you from putting up a dialog box just like a standard logon box and stealing passwords?
|
|
|
|
|
I didn't ask why it's done, I asked how to get around it.
I'm building a "kiosk"-style app that will be put on over 100 laptops, and I don't want the users to be able to exit our app at all. The app already goes full-screen (no titlebar, menu, toolbar, or statusbar), can't be minimized, and requires a password to shut it down.
I've also disabled any way to change tasks using normal avenues (alt-tab, etc).
Now, I want to keep them from shutting down or restarting the box.
Is there a way to do this, or not?
|
|
|
|
|
I did a quick search; I remembered that the GINA DLL, which is responsible for the logon process, might also be used when doing a log-off.
One thing that came up was:
The purpose of a GINA DLL is to provide customizable user identification and authentication procedures. The default GINA does this by delegating SAS event monitoring to Winlogon, which receives and processes CTRL+ALT+DEL secure attention sequences (SASs). A custom GINA is responsible for setting itself up to receive SAS events (other than the default CTRL+ALT+DEL SAS event), and notifying Winlogon when SAS events occur. Winlogon will evaluate its state to determine what is required to process the custom GINA's SAS. This processing usually includes calls to the GINA's SAS processing functions.
Maybe this can lead you to something useful.
Maximilien Lincourt
Your Head A Splode - Strong Bad
|
|
|
|
|
|
|
There are three ways:
1. Inject into winlogon and handle the WM_HOTKEY.
2. Write a GINA to handle the SAS.
3. Write a keyboard filter driver.
|
|
|
|
|
Anonymous wrote:
3. Write a keyboard filter driver.
To my knowledge, the Ctrl+Alt+Del key combination is not sent to the keyboard driver. Otherwise it would be too easy to circumvent.
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
|
|
|
|
|
Hi,
I have a char*. How do I convert it to Unicode big-endian in unmanaged C++?
Thanks
|
|
|
|
|
Converting to Unicode and converting to big-endian are two completely separate things.
To convert to Unicode, use mbstowcs from the C runtime library or the Windows API call MultiByteToWideChar.
Do you really need it in big-endian? Are you sending the data over TCP or something? Anyway, to convert to big-endian, loop over each character in the Unicode string buffer and either call htons from the Winsock library (wide chars on Windows are 16-bit, so htons rather than htonl) or manually swap the bytes yourself like:
wchar_t bigendchar = ((littleendwchar & 0xFF) << 8) | ((littleendwchar & 0xFF00) >> 8);
|
|
|
|
|
Actually I want to convert the string pointed to by a char* to big-endian encoding; it has nothing to do with TCP/IP.
Like the following managed C++ code, but working in unmanaged C++:
#include "stdafx.h"
#using <mscorlib.dll>
using namespace System;
using namespace System::Text;
int main()
{
String* unicodeString = S"This string contains the Unicode character 中";
// Create two different encodings.
Encoding * unicode = Encoding::Unicode;
Encoding * bigendian = Encoding::BigEndianUnicode;
// Convert the string into a byte array.
Byte unicodeBytes[] = unicode -> GetBytes(unicodeString);
// Perform the conversion from one encoding to the other.
Byte bigendianBytes[] = Encoding::Convert(unicode, bigendian, unicodeBytes);
// Convert the new byte array into a char array and then into a string.
// This is a slightly different approach to converting, to illustrate
// the use of GetCharCount/GetChars.
Char bigendianChars[] = new Char[bigendian ->GetCharCount(bigendianBytes, 0, bigendianBytes -> Length)];
bigendian -> GetChars(bigendianBytes, 0, bigendianBytes->Length, bigendianChars, 0);
String* bigendianString = new String(bigendianChars);
// Display the strings created before and after the conversion.
Console::WriteLine(S"Original String*: {0}", unicodeString);
Console::WriteLine(S"bigendian converted String*: {0}", bigendianString);
}
|
|
|
|
|
That code isn't actually converting from a char* (ANSI string) to big-endian. Rather, it's converting a String object (which internally holds a Unicode string encoded as UTF-16) to another String object, which will again hold a UTF-16 Unicode string, but this time big-endian.
From end to end, it's just swapping pairs of bytes.
The entire process it's doing is as follows:
1. Get the internal buffer of the original string as a byte array
2. Convert the byte array to big-endian (swapping each pair of bytes)
3. Copy the byte array into an array of Unicode UTF-16 chars
4. Create a new String object from the array of Unicode UTF-16 big endian chars
So, an equivalent in unmanaged C++ would be converting a wide char string to big-endian. I'm not aware of any standard C++ or Win32 API function that can be used to swap pairs of bytes, but something like the function below should work:
void ConvertToBigEndian(const wchar_t* pw, wchar_t* pwBuffer, int nBufLen)
{
int i = 0;
for( ;i < nBufLen && pw[i] != 0; i++)
{
const char* p = (const char*)&pw[i];
char* q = (char*) &pwBuffer[i];
q[0] = p[1];
q[1] = p[0];
}
if (i < nBufLen)
pwBuffer[i] = 0;
else
pwBuffer[nBufLen-1] = 0;
}
Besides that, if you really need to start from a char* (MBCS/ANSI string), first convert it to wide chars (Unicode UTF-16) using MultiByteToWideChar.
Hope that helps,
--
jlr
http://jlamas.blogspot.com/[^]
|
|
|
|
|
Thanks, everybody.
I have constructed a method:
char* UnicodeCharToBigEndianConverter(char* message)
{
wchar_t input[2000];
wchar_t output[2000];
MultiByteToWideChar(
CP_ACP, // code page
MB_COMPOSITE, // character-type options
message, // string to map
-1, // number of bytes in string
input, // wide-character buffer
strlen(message)+2 // size of buffer
);
// escape character for unicode
output[0]='\xFE\xFF';
MessageBoxW(NULL,input,L"Input",0);
for(int i=1;i<=wcslen(input);i++)
{
output[i] = ((input[i-1] & 0xFF) << 8) | ((input[i-1] & 0xFF00) >> 8);
}
wcscat(output, (wchar_t*)"\x00\x00");
return (char*) output;
}
It works fine if the char* that I get from my application is a string of all Unicode characters, e.g. all Chinese characters. However, what if a user inputs a string with some Chinese characters and some ASCII characters, e.g. A-Z, a-z?
Can the whole string be converted to big-endian using:
output[i] = ((input[i-1] & 0xFF) << 8) | ((input[i-1] & 0xFF00) >> 8);
Or do we need to handle those ASCII characters?
|
|
|
|
|
I hope you don't mind my comments.
scchan1984 wrote:
wchar_t input[2000];
wchar_t output[2000];
MultiByteToWideChar(
CP_ACP, // code page
MB_COMPOSITE, // character-type options
message, // string to map
-1, // number of bytes in string
input, // wide-character buffer
strlen(message)+2 // size of buffer
);
Those magic 2000s don't look good. If you are going to impose a maximum length on the strings you support, you should at least check that the string you receive isn't longer than what you support.
The last parameter to MultiByteToWideChar is wrong. There you are expected to pass the size of your buffer in wide chars, which in this case is 2000. strlen() returns the char count of the string (not counting the null terminator), so, for example, if you receive a message of 2100 Chinese characters, strlen will return 4200. MultiByteToWideChar will then try to write 2100 wide chars into a buffer that can only hold 2000, based on the erroneous information you gave it (you would be telling it that your buffer has space for 4202 wide chars instead of the actual 2000). The most likely result is a crash of your application.
You should call MultiByteToWideChar first with 0 as the buffer size. That will return the required size for the buffer, in wide chars. With that info, you can allocate a buffer on the heap, and then call MultiByteToWideChar again to do the conversion.
scchan1984 wrote:
// escape character for unicode
output[0]='\xFE\xFF';
You defined both buffers with the same size, but if output will hold a byte order marker, then it should be at least one wide char bigger, or you might not have enough space.
scchan1984 wrote:
for(int i=1;i<=wcslen(input);i++)
You are calling wcslen in each iteration, making it traverse the entire string searching for the null terminator. It would make more sense to call wcslen once outside the loop, store its value in a variable, and use the variable in the loop condition.
Then again, you don't even need to call wcslen, as the length of input is what MultiByteToWideChar returns; you'd just need to receive it in a variable.
scchan1984 wrote:
output[i] = ((input[i-1] & 0xFF) << 8) | ((input[i-1] & 0xFF00) >> 8);
This seems to be a more readable version of the byte swap I wrote in my previous post, right?
scchan1984 wrote:
wcscat(output, (wchar_t*)"\x00\x00");
Here you are again traversing the entire string just to add a terminator at the end. You already know in advance that the length of output will be exactly one wide char more (the byte order marker) than the length of input, which in turn was returned by MultiByteToWideChar. So you could simply write:
int nInputLen = MultiByteToWideChar(...);
.
.
.
output[nInputLen+1] = 0;
scchan1984 wrote:
return (char*) output;
Why are you returning a pointer to a wide char string as if it were a char string? It doesn't make sense. The return type should be wchar_t*, and you shouldn't be casting it to anything else.
Besides, you are returning a pointer to a variable allocated on the stack in the context of the function. As soon as the function returns, that variable will be destroyed!
scchan1984 wrote:
It works fine
I think you haven't done enough testing.
scchan1984 wrote:
However, what if a user input a string with some chinese characters and some ASCII characters, e.g. A-Z, a-z?
That wouldn't be a problem. For example, in an MBCS string, which is what your function is supposed to receive as the char* message, an 'a' will occupy just one byte, while a Chinese character will probably use a lead byte followed by a second byte (3 bytes total, plus one additional byte for the null terminator = 4 bytes). After using MultiByteToWideChar to convert it to Unicode, you'll have a wide char (that's 2 bytes) for the 'a' and a second wide char for the Chinese character (total = 2 wide chars plus a null wide char as terminator = 3 wide chars = 6 bytes). Then you would swap the bytes in each of those 3 wide chars while copying them to the output buffer, and everything would be fine.
scchan1984 wrote:
Or we need to handle those ASCII characters?
No need for special handling. Each ASCII char will be represented by two bytes (one wide char), one with the original value and the other with a value of zero. Those 2 bytes will be swapped like every other wide char in the buffer.
--
jlr
http://jlamas.blogspot.com/[^]
|
|
|
|
|