|
I don't think you can directly execute a DLL.
But you can use rundll32 to execute functions in a DLL - though only some functions [don't know why this is so].
I tried this and it failed:
rundll32 user32.dll,MessageBoxA 0,"hello","hi",0
But this worked:
rundll32 user32.dll,ExitWindowsEx 2,0
Nish
|
|
|
|
|
hmmm
Even rundll32 user32.dll,ExitWindowsEx 2,0 failed, but I remember using it long ago. Back then I was on Win 95; perhaps it won't work on 2000.
Nish
|
|
|
|
|
Doesn't it depend on how you define an executable? DLLs are definitely executed ("Put them up against the wall!"), but only in the context of an exe. So if executable means stand-alone program, then no. But if it means "contains code that is executable", then yes.
Using rundll32.exe to execute the code in the DLL is IMHO no different from using your own app to execute it. So whether you can execute the DLL using rundll32.exe does not determine executability (I bet no such word exists!), again IMHO.
Cheers
Steen.
"To claim that computer games influence children is ridiculous. If Pac-Man had influenced children born in the '80s, we would see a lot of youngsters running around in dark rooms eating pills while listening to monotonous music"
|
|
|
|
|
Rundll32.exe requires functions to have a particular signature:
void CALLBACK EntryPoint(
    HWND hwnd,        // handle to owner window
    HINSTANCE hinst,  // instance handle for the DLL
    LPTSTR lpCmdLine, // string the DLL will parse
    int nCmdShow      // show state
);
You can find this in the MSDN.
I don't think it's possible to call an arbitrary function in a DLL in general. You have to use an EXE that knows the arguments the function expects and can convert them from command-line form to the form required by the function.
Gary R. Wheeler
|
|
|
|
|
I read Colin's post as a philosophical question of whether you could categorize a DLL as "an executable" in its own right. Of course, you're right about the specifics of using rundll32.exe, but that was not my point. I don't think the statement "my DLL can be called using rundll32.exe, therefore it's an executable" is valid; in this sense rundll32.exe is just another application using a DLL.
And of course, you have to know the signature of the exported function you want to call; otherwise you'll get stack problems. That's one of the reasons they invented COM: to give you runtime linking (OK, that's a little simplistic
Cheers
Steen.
"To claim that computer games influence children is ridiculous. If Pac-Man had influenced children born in the '80s, we would see a lot of youngsters running around in dark rooms eating pills while listening to monotonous music"
|
|
|
|
|
Does anyone know of a simple function like that?
I have tried using both the WNetGetUniversalName and WNetGetConnection APIs,
but with no success. Some code would be appreciated.
Thanks,
Miroslav Rajcic
|
|
|
|
|
I've found this article on a page, but I have to pay to see it. Does anybody know where I can find this article for free, or does anybody know how to do this?
|
|
|
|
|
|
Hi,
I'm trying to use CHoverButton (published on this site) in a Netscape plugin.
The plugin is in DLL format, but it doesn't work. Finally, I figured out
that the problem lies with bitmap loading. Somehow the
CHoverButton::LoadBitmap function does not load / can't find the bitmap.
I changed the code to the simplest version, as follows:
mybitmap.LoadBitmap(IDB_BITMAP1); // mybitmap is a CBitmap object
To my surprise, the function returns 0 (failure). I've imported the bitmap
into the resource editor as IDB_BITMAP1, and resource.h clearly
shows the entry for it:
#define IDB_BITMAP1 105
Does it have anything to do with DLL vs. application? I tried to load the
same bitmap the same way in a normal MFC application and it works.
How can I tell whether the bitmap resource has been linked into the DLL itself?
Is there any way to verify it through Project Settings?
Your help is appreciated.
|
|
|
|
|
My project consists of 3 services and a set of COM objects. The COM objects each host a pool of worker threads. Each object has three interfaces, one for each service.
The COM objects and the services all update the same table in a SQL Server DB. The multi-threaded objects post a message to a single thread to do their updates.
Do I need a synchronization device, such as CMutex, to keep these from stepping on each other? If so, please give me a specific recommendation.
Thanks for the help,
Bill
|
|
|
|
|
It is just a question of how many threads (considering the whole project) access the same resource (in this case, the DB). If it's more than one, then you're better off having the resource protected by a named (interprocess) mutex.
My recommendation is that you stay away from the MFC wrappers around synchronization objects, as they're counterintuitive and in some cases broken. Native Win32 mutexes are not so hard to use directly. Here's a starter for a simple class that opens a named mutex (creating it if needed), locks on it, and releases the lock on destruction. Usage is simple: just define a CMutexAccesor object at the beginning of each critical section. Use some unique string as lpstrMutexName (like your project name followed by a lot of garbage digits, or a GUID if you're into those). Warning: I've adapted this from some code of mine in Spanish, so some typos are likely to have been introduced.
class CMutexAccesor
{
public:
    CMutexAccesor(LPCTSTR lpstrMutexName){
        // Open (or create) the named mutex, then try to acquire it.
        res=(mutex=CreateMutex(NULL,FALSE,lpstrMutexName))!=NULL;
        if(res){
            res=WaitForSingleObject(mutex,4000)==WAIT_OBJECT_0;
            if(!res)CloseHandle(mutex);
        }
    }
    ~CMutexAccesor(){
        // Release the lock and the handle when the object goes out of scope.
        if(res){
            ReleaseMutex(mutex);
            CloseHandle(mutex);
        }
    }
    BOOL res; // TRUE if the lock was actually acquired
private:
    HANDLE mutex;
};
Hope this helps.
Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
|
|
|
|
|
Thanks for the help. I have one more question. Here are two samples of what I believe you are telling me. I believe the first one is correct, but would find it wonderful if the second one would also work. That would let me simply override the Update method of my CRecordset objects.
In the first case I am protecting the entire DB operation. There are literally hundreds of updates in the project. All are unprotected.
class CTester(prsSomeTable)
{
    UpdateDB()
    {
        CMutexAccessor *pMut;
        pMut = new CMutexAccessor();
        prs->Edit();
        prs->x = 2;
        prs->Update();
        delete pMut;
    }
};
In the second case I am only protecting the Update. Is this sufficient? If I can get away with protecting only the Update operation itself, I can reduce the problem to about a dozen changes by overriding the Update method of my recordset classes.
class CTester(prsSomeTable)
{
    UpdateDB()
    {
        CMutexAccessor *pMut;
        prs->Edit();
        prs->x = 2;
        pMut = new CMutexAccessor();
        prs->Update();
        delete pMut;
    }
};
Thanks for the help,
Bill
|
|
|
|
|
I couldn't tell from the little information you provide. What you have to determine is whether the operation is trying to change a shared resource. So: is that prs private to each thread? Is Edit() a read-only operation? If both answers are yes, I guess it is safe to leave those operations outside the mutex-protected zone. If in doubt, wrap it all up within the mutex and measure performance; maybe it is OK for your purposes.
Another point is that CMutexAccessor should not be used the way you do, but this other way:
UpdateDB()
{
    CMutexAccessor mut("MYONEANDONLYNAMENOONECAMETOFIGUREOUTBEFORE");
    prs->Edit();
    prs->x = 2;
    prs->Update();
}
I.e., construct it on the stack, not with new. This way you stay confident that the destructor will be called even if your function exits in unexpected fashions (for instance, if one of the operations throws an exception). To protect more or less code inside the function, just move the CMutexAccessor mut... line up or down.
Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
|
|
|
|
|
Thanks again. Sorry I forgot to type in the mutex name.
Here's what I actually came up with (sample usage is at the bottom).
In stdafx.h:
extern CRequests * g_prsRequests;
In the main program module:
CRequests * g_prsRequests;
In the program entry point:
g_prsRequests = new CRequests();
Here is the modified CRequests class. What do you think?
class CRequests : public CRecordset
{
public:
    CRequests(CDatabase* pDatabase = NULL);
    DECLARE_DYNAMIC(CRequests)
    long m_ID;
    CString m_SERVICE_TYPE;
    // clipped for brevity
    CMutexAccesor *pMutex;
#define DBMUTEX "AFX_REQUESTS_3B2738B5_A172_11D5_9FCB_00B0D081C96F"
public:
    virtual CString GetDefaultConnect();
    virtual CString GetDefaultSQL();
    virtual void DoFieldExchange(CFieldExchange* pFX);
    virtual void AddNew()
    {
        try
        {
            pMutex = new CMutexAccesor(DBMUTEX);
            CRecordset::AddNew();
        }
        catch (CDBException *e)
        {
            if (pMutex)
            {
                delete pMutex;
                pMutex = NULL;
            }
            throw e;
        }
        if (pMutex) delete pMutex;
    }
    virtual void Edit()
    {
        try
        {
            pMutex = new CMutexAccesor(DBMUTEX);
            CRecordset::Edit();
        }
        catch (CDBException *e)
        {
            if (pMutex)
            {
                delete pMutex;
                pMutex = NULL;
            }
            throw e;
        }
        if (pMutex) delete pMutex;
    }
    virtual BOOL Update()
    {
        BOOL bRet;
        try
        {
            bRet = CRecordset::Update();
        }
        catch (CDBException *e)
        {
            delete pMutex;
            throw e;
        }
        delete pMutex;
        return bRet;
    }
    void AbandonUpdate()
    {
        if (pMutex) delete pMutex;
    }
};
In the .cpp file I set pMutex = NULL in the constructor and delete it in the destructor.
I'm using a pointer variable so I can create and destroy the mutex for each update operation.
e.g.
try {
    g_prsRequests->Edit();   // mutex created
    g_prsRequests->m_ID = 1;
    g_prsRequests->Update(); // mutex destroyed
}
catch (CException *e)
{
    // ...blah blah
    // in the case of an actual DB exception, the mutex is already destroyed...
    g_prsRequests->AbandonUpdate(); // mutex destroyed
}
Thanks for the help,
Bill
|
|
|
|
|
No no no no no. You've got it wrong. Do not keep a CMutexAccesor pointer as a member of CRequests. Do not. This try/catch labyrinth is something you would regret having to maintain in the long term.
Instead, have a local CMutexAccesor variable at the beginning of each protected zone, like this:
virtual void AddNew()
{
    CMutexAccesor mut(DBMUTEX);
    CRecordset::AddNew();
}
virtual void Edit()
{
    CMutexAccesor mut(DBMUTEX);
    CRecordset::Edit();
}
And so forth. That simple. This scheme ensures exclusive access among all sections of code adorned this way, no matter which object, method, thread, or process the piece of code belongs to. What identifies the section (and I guess this is where your misunderstanding stems from) is not the particular CMutexAccesor object used to perform the locking, but only the name of the mutex (DBMUTEX in your example). Also, do not worry about mutex-protected methods calling other mutex-protected methods: it'll all work like a breeze (your approach is a maintenance nightmare in this respect).
Hope I made myself clear. Do't hesitate to ask for more help, if needed.
Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
|
|
|
|
|
Got it. Thanks. You are right, I was thinking that the object held the lock, not the name. I really appreciate your time on this. You have likely saved me many hours of frustration.
Thanks for the help,
Bill
|
|
|
|
|
In order to set my CEditView to read-only, I modified the CREATESTRUCT passed to PreCreateWindow like so:
BOOL CLogView::PreCreateWindow(CREATESTRUCT& cs)
{
    cs.dwExStyle = ES_READONLY;
    return CEditView::PreCreateWindow(cs);
}
But this doesn't result in the view having read-only status when the application is loaded. Neither does calling ModifyStyleEx during OnInitialUpdate with the relevant style.
I could just override input events to stop the user forcing the view into focus, but that isn't the logical or correct way to go about it.
Any ideas?
Simon
Hey, it looks like you're writing a letter!
|
|
|
|
|
try
GetEditCtrl().SetReadOnly(TRUE);
---
It may be that your sole purpose in life is simply to serve as a warning to others.
|
|
|
|
|
Hi!
A dynamically loaded DLL I wrote receives a pointer to a class that I
pass to it from the client app. A certain method in that class (let's
call it cClass::AllocMem()) is called inside the DLL to allocate some memory.
The "problem" is that the allocated memory is stored on the heap
of the DLL, as opposed to the heap of the client app, which is
where I'd like it to reside. Is there any way for me to still be
able to call this AllocMem() method inside the DLL and have the
memory allocated on the client app's heap?
This is just a plain vanilla Win32 application and DLL. I know in
MFC there's the AFX_MANAGE_STATE() macro to help out with these things,
but it is unavailable to me.
Does anybody have any ideas?
Thanks a bunch!
Steve The Plant
|
|
|
|
|
Have you tried declaring cClass::AllocMem() as virtual?
Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
|
|
|
|
|
Well, I tried it and it worked! But I don't understand why.
Does it have anything to do with the class's virtual table
when you pass it to the DLL?
Steve The Plant
|
|
|
|
|
Well, actually it's magic
Now seriously, what's happening here is this:
Both your client app and your DLL have access to cClass.h and cClass.cpp, and both modules subsequently compile and link their own copies of cClass::AllocMem(). Now, when you pass a pointer to a cClass object created by the app to the DLL, and the DLL code executes pcClass->AllocMem(), what it is actually doing is invoking its own version of cClass::AllocMem() against pcClass: a crash promptly ensues when the client tries to free the allocated memory, for the reasons you already diagnosed.
Now, if you make cClass::AllocMem() virtual, then the DLL does not execute any local version of the method, but instead uses pcClass to locate its associated virtual table and AllocMem() implementation, both of which reside in the app (which originally created the object). With things arranged like this, you can even remove cClass.cpp from the DLL build and things will still work, as long as the DLL limits itself to handling cClass pointers passed from the app and all relevant methods are virtualized. This is also a good thing to have in the light of OO, as it defines an interface contract between the app and the DLL and allows you to change the implementation code without the DLL knowing or having to be recompiled.
Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
|
|
|
|
|
Wait a second! According to Richter there is no such thing as a local DLL heap in Win32. The DLL uses the heap of the client process. So unless the DLL in question creates its own heap using HeapCreate, the OP's problem cannot be caused by allocation from different heaps.
Cheers
Steen.
"To claim that computer games influence children is ridiculous. If Pac-Man had influenced children born in the '80s, we would see a lot of youngsters running around in dark rooms eating pills while listening to monotonous music"
|
|
|
|
|
All right, all right. The picture is larger than I showed, but, taken literally, Richter is wrong.
What is causing these problems with allocating memory on one side and deallocating on the other is the C runtime library (CRT) used by each of the parts. As you know, the CRT comes in six flavors:
- (Release) Single-Threaded
- (Release) Multithreaded
- (Release) Multithreaded DLL
- Debug Single-Threaded
- Debug Multithreaded
- Debug Multithreaded DLL
Let's begin with the debug/release aspect. In debug mode, standard C/C++ functions like malloc() and new store, alongside each requested memory block, additional info about the block that can later be used to identify out-of-bounds errors and the like. This info is stored contiguously with the block delivered to the app, so that when free() or delete is invoked, these functions know where the info sits relative to the pointer being deallocated. Now, if your app uses a debug CRT and the DLL a release one, and the DLL malloc()s some memory and the app tries to free() it, a crash will follow, as the debug CRT cannot find that additional info I talked about. Other scenarios involving debug/release mixing follow analogous patterns.
Now for the DLL/not-DLL option. If you choose not DLL, then the CRT is linked as a static library into your app (or DLL). The CRT maintains a private heap (yes, it does) called _crtheap, which is created upon initialization of the app (or the DLL). So, even if you use the exact same version of the CRT in static mode for both your app and your DLL, your final executable will end up with two _crtheaps, accessed respectively from the app's CRT code and the DLL's CRT code. As before, allocating on one side and deallocating on the other leads to catastrophe.
Enough is enough. It is simple to check that all combinations of CRTs exhibit this problem, except when both the app and the DLL use the same CRT version and that CRT is linked as a DLL. Then only one _crtheap exists, shared by all folks in the program, and, all other aspects being equal (debug/release, multithreaded/single-threaded), things run smoothly.
The moral of the story: expect problems when passing around the responsibility of freeing a CRT-allocated chunk of memory, unless you are 100% sure that all parts involved (app, DLLs) use the same CRT version and that it is linked dynamically.
Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo
|
|
|
|
|
Wow! You really do know what you're talking about, don't you? I stand corrected; I bow my head before you.
I did a search on MSDN on the subject (yeah, I know, I should have done it before), and article Q190799 does a pretty good job explaining things (at least after I had read your post first
I will immediately check whether my current multi-DLL application project is linked to the MT DLL version!
Cheers
Steen.
"To claim that computer games influence children is ridiculous. If Pac-Man had influenced children born in the '80s, we would see a lot of youngsters running around in dark rooms eating pills while listening to monotonous music"
|
|
|
|
|