|
Hi,
I'm not sure if this is possible, or whether it's been answered before, but I've reached meltdown trying to guess the right search phrase to get information on this.
What I have is:
- an existing COM DLL + idl etc which is installed in location A
What I'd like to do is:
write a new version of the DLL supporting the same interfaces as A so it is a complete replacement for A. However, I actually want to use the original DLL to do all the work.
All the new DLL is for is to journal calls into, and events out of, the original, and forward responses back to the client application. It may eventually multiplex information elsewhere, but that's the future.
Any help would be greatly appreciated. I've been staring at MSDN too long to actually read the COM API docs clearly.
Thanks.
Kev
|
|
|
|
|
You can import the old IDL file in your new IDL file to get the interface information from it. Please see the MIDL statement import.
Note that the compiled DLL will not depend on the old DLL. You can opt to install only the new DLL on the target system.
Another way to do it is to #import (a VC++ feature) the DLL and its associated type library into your C++ code.
A third way to do it is to grab the header file generated by compiling the IDL file belonging to the old DLL, and #include it in your new sources.
Personally, I use import in the IDL file. Please note that by doing so, the corresponding header file of the imported IDL file must also be accessible to the C++ compiler. At work I have include files in one directory and IDL files in another (you can have them in one directory if you're not as fussy about ordering as I am). I have added those two directories to the include search path in the IDE. That way MIDL finds the IDL file, and the C++ compiler finds the corresponding header file.
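For the #import route, a single line in the C++ source pulls the interface definitions straight out of the old DLL's embedded type library (the path below is a placeholder for wherever the old DLL lives):

```cpp
// VC++ extension: reads the type library embedded in the DLL and generates
// smart-pointer wrapper headers (.tlh/.tli) at compile time.
// The path here is a placeholder -- substitute the old DLL's real location.
#import "C:\\LocationA\\OriginalServer.dll" no_namespace named_guids
```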
--
Arigato gozaimashita!
|
|
|
|
|
Hi,
thanks for replying - all useful information.
However, I don't think I made my question clear enough.
What I am trying to achieve is to install my binary-compatible version of an existing DLL onto a pre-existing system containing the original DLL. My new DLL would register itself and be the DLL loaded by clients of the original DLL.
My DLL, because it knows there is an existing DLL in a known location, would be able to use the original to provide the clients with identical functionality.
If I were using plain DLLs, I'd simply install the new version so that it was found earlier in the path, and load the original DLL using LoadLibrary(). Since I know exactly what the interfaces are, I could do the same with the COM DLL. However, I'm looking for an "official" COM way of short-circuiting the registry lookup.
Kev
|
|
|
|
|
Aha! My foggy brain finally seems to be kicking into gear again. I believe the answer is to:
1) Call CoLoadLibrary() to load the specific DLL into memory
2) Call GetProcAddress() to get the original's DllGetClassObject() function.
3) Call DllGetClassObject() to get the original DLL's class-factory, then off-we-go!
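In code, the three steps look roughly like this (the DLL path and CLSID are placeholders; error handling kept minimal):

```cpp
// Load the original (wrapped) DLL directly and obtain its class factory,
// bypassing the registry lookup entirely. The wrapper's own class factory
// can then delegate CreateInstance() to this one, journaling each call.
#include <windows.h>
#include <objbase.h>

typedef HRESULT (STDAPICALLTYPE *PFN_DLLGETCLASSOBJECT)(REFCLSID, REFIID, LPVOID*);

HRESULT GetOriginalClassFactory(LPCWSTR pszDllPath,     // location A of the old DLL
                                REFCLSID rclsid,        // CLSID both DLLs implement
                                IClassFactory** ppCF)
{
    *ppCF = NULL;

    // 1) Load the specific DLL; CoLoadLibrary keeps COM informed of the
    //    module (pair with CoFreeLibrary at shutdown).
    HINSTANCE hOrig = CoLoadLibrary(const_cast<LPOLESTR>(pszDllPath), TRUE);
    if (hOrig == NULL)
        return HRESULT_FROM_WIN32(GetLastError());

    // 2) Find the original's exported DllGetClassObject().
    PFN_DLLGETCLASSOBJECT pfnGCO = (PFN_DLLGETCLASSOBJECT)
        GetProcAddress(hOrig, "DllGetClassObject");
    if (pfnGCO == NULL)
        return E_FAIL;

    // 3) Ask the original DLL for its class factory -- then off we go.
    return pfnGCO(rclsid, IID_IClassFactory, (void**)ppCF);
}
```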
Pretty obvious really. Oh well
I'll try this out later on today when shipping-hell lets me.
Kev
|
|
|
|
|
I need to take an opinion on whether our approach is right (described below):
Problem
We have an ATL Windows service which also exposes one COM class - say MainServer. MainServer interacts with other internal COM objects and a VB ActiveX control. A client application can instantiate the MainServer component and fire methods on it. The VB ActiveX control needs a form container, so we are using an ATL dialog (invisible, modeless) to host it, and fire methods on the dialog.
Output
When the service is tested through the Control Panel GUI, it works perfectly, and we can fire methods on the ActiveX control through its container in ServiceWinMain(), Run() and other methods.
But when tested with the client application, which instantiates MainServer and fires methods, we get an unspecified exception.
My questions are:
1. Is it possible to host ATL dialog like I mentioned in a Windows service?
2. What is the best way to make this application thread safe?
3. Is this approach right?
|
|
|
|
|
Hi,
What is the difference between an ATL DLL, an ATL Service and an ATL EXE, and where is each used?
I want to use them in an ASP.NET page,
so that every client can use it.
What I want it to do is connect to a specific client IP and socket, and send and receive data.
Or should I use a Web Service in ASP.NET?
Which one is better?
Thanks,
Regards.
|
|
|
|
|
zahid_ash wrote:
Difference between ATL DLL, ATL Service and ATL EXE
A DLL server is in-proc. That means the code inside the DLL runs within the same process space as the client. This means faster execution, at the cost of reduced reliability: if the server crashes, it takes the entire client process down with it.
zahid_ash wrote:
ATL EXE
An EXE server is out-of-proc. That means it runs in its own process, separate from the client. Calls require marshalling and are thus slower than with a DLL server. However, if the server crashes, the client may still survive (provided it inspects and acts appropriately on the HRESULTs returned by the COM methods).
zahid_ash wrote:
ATL Service
It works much like an ATL EXE, but is started by the system at boot time rather than at first client activation.
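From the client's side, the distinction shows up only in the CLSCTX flag passed to CoCreateInstance() (CLSID_MyServer below is a placeholder for a real class ID):

```cpp
// The in-proc vs. out-of-proc distinction as the client sees it.
#include <windows.h>
#include <objbase.h>

// Placeholder CLSID -- substitute the server's real one.
const CLSID CLSID_MyServer =
    { 0x0, 0x0, 0x0, { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1 } };

void CreateServer()
{
    IUnknown* pUnk = NULL;

    // ATL DLL: loaded into the client's own address space.
    // For an ATL EXE or ATL Service, pass CLSCTX_LOCAL_SERVER instead,
    // and calls will be marshalled to the separate server process.
    HRESULT hr = CoCreateInstance(CLSID_MyServer, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IUnknown, (void**)&pUnk);
    if (SUCCEEDED(hr))
        pUnk->Release();
}
```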
--
Arigato gozaimashita!
|
|
|
|
|
I've developed a Windows service in VC 7, using ATL.
I'm using a .NET installer to install it.
The .NET installer uses the command "service full path" -service to install the service.
I've observed that when I install using this installer and then try to access an interface of the service,
I get an automation error: "object doesn't support automation or QueryInterface failed".
Interestingly, if I unregister the same service from the same location and register it using the same command, there is no problem.
What could the problem be?
thanks,
Prasad
|
|
|
|
|
Hi, I'm implementing an interface method of a COM object that runs in a local EXE server. One of the arguments is a SAFEARRAY (packed in a VARIANT). I found that if I destroyed the SAFEARRAY after using it, the server EXE would crash mysteriously after running for a while. But if I left the SAFEARRAY alone, everything was OK. So should I destroy the SAFEARRAY or not? Will the underlying RPC do the cleanup work for me?
There is also the twin question: after a client fills a SAFEARRAY argument and then calls the method, does it need to destroy the SAFEARRAY?
Thanks a lot.
|
|
|
|
|
If your server allocates it, the client (that uses it) should call VariantClear on the VARIANT parameter received from the call to the server interface once it has finished with it.
|
|
|
|
|
The situation is:
STDMETHODIMP CMyServer::AddKeywords(VARIANT keywords)
{
    if (keywords.vt == (VT_ARRAY | VT_BSTR))
    {
        CComSafeArray<BSTR> sa;
        sa.Attach(keywords.parray);
        ...
        sa.Destroy(); // or should this be sa.Detach()?
    }
    ...
}
// client code
...
CComSafeArray<BSTR> sa(m_nKeywords);
...
CComVariant vKeywords(sa.Detach());
m_pServer->AddKeywords(vKeywords);
...
I suddenly had an idea: since COM is location transparent, I can pretend the server is an in-proc server (although in fact it's a local server). In that case, since client and server interact through direct pointers, the SAFEARRAY should only be destroyed once (in the client). Is my understanding right? Thanks.
|
|
|
|
|
timtanbin wrote:
Is my understanding right?
For most cases, yes.
When marshalling occurs, COM takes care of any copying/deleting necessary.
If the SAFEARRAY is an [in] parameter, you just detach the CComSafeArray from it.
If it's [in, out], you can drop the old one and create a new one. In that case, you need to delete the old one.
"we are here to help each other get through this thing, whatever it is" - Vonnegut Jr.
sighist || Agile Programming | doxygen
|
|
|
|
|
Hello,
I am using a third-party folder view control which has a method GetSelectedFolder(). The method returns a VARIANT (VT_DISPATCH).
So:
VARIANT varFolder = m_fvFolder.GetSelectedFolder();
if(varFolder.vt == VT_DISPATCH)
{
}
Thanks a lot in advance!
Rgds,
Nirav Doshi
* Don't wish it was easier, wish you were better! *
|
|
|
|
|
If it's returning a VT_DISPATCH, the contained value is a reference to an IDispatch object. Not knowing how your third-party folder view control works, I'd suggest looking up GetSelectedFolder() in the docs. Chances are that the object is a "dual" object (vtable and dispatch interfaces) which implements IFolder or something similar.
You can acquire the IDispatch interface using the pdispVal member of VARIANT. Don't forget to VariantClear() it when you are done or you will leak an object! For less error prone code, take a look at ATL's wrapper class CComVariant .
Similarly, if a VARIANT contains a VT_UNKNOWN, it holds a reference to an IUnknown* object, which you'll have to query for a usable interface. It is accessed through the punkVal member.
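A minimal sketch of the advice above: take the IDispatch* out of the VARIANT via pdispVal, hold your own reference, and clear the VARIANT when done (the handler name is illustrative):

```cpp
#include <windows.h>
#include <atlbase.h>   // CComPtr

void OnFolderSelected(VARIANT& varFolder)
{
    if (varFolder.vt == VT_DISPATCH && varFolder.pdispVal != NULL)
    {
        // CComPtr AddRefs here, so clearing the VARIANT below is safe.
        CComPtr<IDispatch> spDisp = varFolder.pdispVal;

        // ... call Invoke() on spDisp, or QueryInterface() for more ...
    }
    ::VariantClear(&varFolder);   // releases the contained object reference
}
```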
--
...Coca Cola, sometimes war...
|
|
|
|
|
Jörgen, Thanks a lot for your reply!
Jörgen Sigvardsson wrote:
Not knowing how your third party folder view control works, I'd suggest looking up GetSelectedFolder() in the docs.
The doc only explains this with reference to VB Samples! - So no use!
Jörgen Sigvardsson wrote:
Chances are that the object is a "dual" object (vtable and dispatch interfaces) which implements IFolder or something similar.
Doesn't seem to be that either!
Jörgen Sigvardsson wrote:
You can acquire the IDispatch interface using the pdispVal member of VARIANT
Is it something like:
IDispatch *pIDisp = varFolder.pdispVal;
Had tried this earlier, but what further?
Thanks,
Rgds,
Nirav Doshi
* Don't wish it was easier, wish you were better! *
|
|
|
|
|
Nirav Doshi wrote:
Had tried this earlier, but what further?
I can't help you with that, I'm afraid. You need a description of the interface(s) which the returned object exposes.
Nirav Doshi wrote:
The doc only explains this with reference to VB Samples!
That may be of good use. You could access the object through its dispatch interface with a dispatch driver such as this one[^]. Then you'd have to do something like:
XYDispDriver disp;
disp.Attach(varFolder.pdispVal);
::VariantClear(&varFolder);
VARIANT* var = disp.GetProperty(_T("PropertyName"));
UseVariant(*var);
var = disp.InvokeMethod(_T("MethodName"), arg1, arg2);
UseVariant(*var);
Please read the article for more information on how the XYDispDriver works, as I'm not the author of it.
And also look at the VB samples to learn about the properties and methods which can be used. Happy coding!
--
...Coca Cola, sometimes war...
|
|
|
|
|
Hello Jörgen,
Thank you very much for your reply!
I will explore this more! At least now I have some pointers to start with...
Rgds,
Nirav
* Don't wish it was easier, wish you were better! *
|
|
|
|
|
I write custom ActiveX controls for an application that has been slow to support ActiveX completely. In fact, only the recent release of the software claims to support the ability to modify and save ActiveX control properties during design time. So, I plopped my custom controls on a form, but no properties.
The application developers informed me that the design tools only look for properties by scanning the "methods" section of a control's type library rather than the "properties" section. Of course, my properties are all listed under "properties", because VC++ 6.0 does this automatically when you add properties via the Class Wizard.
Example: here's what my property looks like:
dispinterface _DMyControl
{
properties:
[id(1)] BSTR Caption;
methods:
[id(DISPID_ABOUTBOX)] void AboutBox();
};
But, here's what the application would prefer:
dispinterface _DMyControl
{
properties:
methods:
[id(0x00000001), propget] BSTR Caption();
[id(0x00000001), propput] void Caption([in] BSTR lpszNewValue);
[id(DISPID_ABOUTBOX)] void AboutBox();
};
My question is: does anyone have information or an opinion on which approach is preferable? I ask because the application developers refuse to support straight "properties" on the grounds that so few controls use this paradigm. I want to know if there is a reason I should conform to using purely methods, or if I should push the company to support properties.
Kevin Fournier
SRP Computer Solutions, Inc.
|
|
|
|
|
KFournier wrote:
does anyone have information or an opinion on which approach is preferable?
From a VB perspective, properties are nicer syntactically. From a C++ perspective, there's no difference really. Properties will be accessed as if they were functions anyway.
One significant advantage of using properties that I can think of right now is that you can automate persistence rather easily. I'm mainly thinking of IPropertyBag/IPersistPropertyBag implementations. All you have to do then is iterate through the members of an interface and persist them one by one as variants. Of course, it becomes harder if the property is indexed.
Personally, I implement properties because I keep scripting in mind, as I host a scripting engine as well. It's also beneficial for me when I write the scripts, as I can use the "property syntax".
--
...Coca Cola, sometimes war...
|
|
|
|
|
By the way, why are you using pure dispinterfaces if you access the objects from C++? You could do yourself a favor by making them dual instead. Then there's no need for dispatch drivers and the like. Then you just QueryInterface() and call methods as you go.
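A sketch of what the QueryInterface() route looks like with a dual interface. IFolder here is hypothetical, with a placeholder IID and member; in real code it would come from the MIDL-generated header:

```cpp
#include <windows.h>

// Hypothetical dual interface as the MIDL-generated header would declare it.
struct __declspec(uuid("00000000-0000-0000-0000-000000000001"))   // placeholder IID
IFolder : public IDispatch
{
    virtual HRESULT STDMETHODCALLTYPE get_Name(BSTR* pName) = 0;  // hypothetical member
};

HRESULT UseFolder(IDispatch* pDisp)
{
    IFolder* pFolder = NULL;
    HRESULT hr = pDisp->QueryInterface(__uuidof(IFolder), (void**)&pFolder);
    if (FAILED(hr))
        return hr;

    BSTR bstrName = NULL;
    hr = pFolder->get_Name(&bstrName);   // direct vtable call, no Invoke() plumbing
    if (SUCCEEDED(hr))
        ::SysFreeString(bstrName);

    pFolder->Release();
    return hr;
}
```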
--
...Coca Cola, sometimes war...
|
|
|
|
|
Thanks for your input. I'm glad to hear such a strong argument for using properties.
Jörgen Sigvardsson wrote:
By the way, why are you using pure dispinterfaces if you access the objects from C++?
I'm using pure dispinterfaces simply because I'm using MFC to develop my controls and the VC++ 6.0 Class Wizard uses them. I'm just taking advantage of a little automation.
|
|
|
|
|
I have an out-of-process EXE server which has a one-to-one connection with a client. The client starts the EXE server via CoCreateInstance() in the usual way, obtaining an IMyInterface pointer. In normal execution, when the client process closes, Release() is called on the IMyInterface pointer, which terminates the server.
Unfortunately, there are some instabilities in the client executable over which I have no control. If the client crashes, which it does particularly during testing, then the normal IMyInterface Release() is not called, and thus the server does not get released and stays executing.
What I would like is to be able to test whether the client is still running. My problem is that the wizard-generated event sink code does not return E_FAIL or a similar error code if the connection fails. I know there are several ways I could address this problem, each with its own disadvantages. I would really appreciate it if anyone could offer any help, or advise me whether I am missing something obvious.
FB
|
|
|
|
|
Fixed by these changes:
A) Added a Test method to the event sink, which is empty at the client end.
B) When the server tries to close itself, it calls the above Test method. If an RPC error results, it is assumed that the client has died.
C) Server shutdown is firstly achieved by calling CFrameWnd::OnClose() (the server is an MFC frame window). If the client has died, AfxPostQuitMessage(0) is called immediately afterwards to terminate the process.
Effectively the call to AfxPostQuitMessage does an AfxOleUnlockApp() so that the server can quit gracefully. I discovered this by debugging the MFC code.
The only bad thing about this approach was the need to add another event method to the sink, and to hardwire a call to this in the server. Does anyone have a better approach to this?
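Steps A and B sketch out roughly like this. IMyEvents stands in for the real wizard-generated sink interface, and the set of RPC failure codes checked is illustrative rather than exhaustive:

```cpp
#include <windows.h>

// Hypothetical stand-in for the wizard-generated event sink interface.
struct IMyEvents : public IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE Test() = 0;   // empty at the client end
};

bool IsClientAlive(IMyEvents* pSink)
{
    // The call does nothing at the client; only the health of the
    // RPC channel matters.
    HRESULT hr = pSink->Test();

    // RPC-level failures indicate the client process has gone away.
    return hr != RPC_E_DISCONNECTED
        && hr != HRESULT_FROM_WIN32(RPC_S_SERVER_UNAVAILABLE)
        && hr != HRESULT_FROM_WIN32(RPC_S_CALL_FAILED);
}
```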
|
|
|
|
|
Hi, I would like to have some valuable suggestions from you people.
Let me explain my problem scenario.
I have an XML file which serves the clients as a database. The situation is that this file could be accessed simultaneously by different clients, so maintaining the state of the file becomes a serious concern. What we have thought of is to make an out-of-process COM EXE that exposes 2 interfaces to the clients, through which they can extract records. As 2 clients can call an interface at the same time, we have used a critical section in each function call. In this way all the calls will be serialized.
The second approach is to make an in-process DLL and use some synchronization objects to handle the simultaneous calls.
Now what I want to know is which is the better approach, keeping performance, maintainability and robustness in mind.
What I think is that either way, since there is a single database file, the calls need to be serialized at some level, regardless of whether they come from the same process or different processes. Also, with the in-process DLL we would need a mutex for synchronization (the DLL is loaded into several client processes, so a cross-process kernel object is required), which is heavyweight compared to the critical section that suffices inside the single out-of-process EXE.
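Whichever packaging wins, the serialization itself has the same shape: every record operation takes one lock before touching the file. A minimal sketch follows; std::mutex stands in here so the example is self-contained, while in the real server it would be a CRITICAL_SECTION (out-of-process EXE) or a named kernel mutex (cross-process in-proc DLL), and the map stands in for the XML file:

```cpp
#include <map>
#include <mutex>
#include <string>

// All access to the shared "database" funnels through one lock, so
// concurrent client calls are serialized no matter which client made them.
class RecordStore
{
    std::mutex m_lock;   // CRITICAL_SECTION / named mutex in the real server
    std::map<std::string, std::string> m_records;   // stand-in for the XML file

public:
    void PutRecord(const std::string& key, const std::string& value)
    {
        std::lock_guard<std::mutex> guard(m_lock);   // serialize writers
        m_records[key] = value;
        // ... write-through to the XML file would happen here ...
    }

    bool GetRecord(const std::string& key, std::string& value)
    {
        std::lock_guard<std::mutex> guard(m_lock);   // serialize readers too
        std::map<std::string, std::string>::const_iterator it = m_records.find(key);
        if (it == m_records.end())
            return false;
        value = it->second;
        return true;
    }
};
```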
Please provide your suggestions as soon as possible, as I have to face my boss tomorrow with the design issues.
Thanks
|
|
|
|
|
Personally I would choose the in-process solution, not only because out-of-process calls are quite slow (even mutex overhead is nothing compared to them), but also because you can run into problems with security settings on the specific machine.
But anyway it seems that it's a bit late for my suggestions
|
|
|
|