|
I am working on a migration project converting COM components written in VC++ to either Managed VC++ or C#.
Can anyone suggest how I should proceed with this?
The client wants the development team to use a migration tool to do the conversion and then rectify minor errors and make modifications afterwards.
I tried opening the VC++ project in MS Visual Studio .NET, but when I tried to build it, the build failed with multiple errors in the header files.
Please advise, as I need to propose a solution as soon as possible.
|
|
|
|
|
If you plan on supporting the legacy COM clients, you'll need to understand COM interop in .NET. This means following COM guidelines, like versioning interface names, not using auto-generated class interfaces (despite the fact that most Microsoft examples do), etc.
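For example, here's a minimal sketch of what those guidelines look like when exposing a .NET class to COM (the interface name and GUID below are made up for illustration):

using System;
using System.Runtime.InteropServices;

// Give the interface its own explicit, versioned identity so legacy
// COM clients bind to a stable contract.
[ComVisible(true)]
[Guid("5C6F9A10-2F2B-4E5D-9C41-8B2A3D4E5F60")]
public interface IOrderProcessor
{
    void Process(string orderId);
}

// Suppress the auto-generated class interface and expose only the
// explicit interface above.
[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
public class OrderProcessor : IOrderProcessor
{
    public void Process(string orderId) { /* ... */ }
}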
Start by reading Interoperating with Unmanaged Code[^] in the .NET Framework SDK. There are lots of books out there on this, too.
If you don't care about supporting legacy clients, then it's not really a problem. Design your object model anew, taking advantage of the rich features in the .NET FCL (and third-party libraries if you like), like runtime serialization, XML serialization, .NET Remoting, etc. What you need depends on your requirements, so no one in this forum could really help there (a forum is a good place for specific questions).
As for the header errors in VS.NET, all I can tell you is to go through and resolve them. Make sure you installed the Platform SDK when you installed VS.NET, or download it from http://msdn.microsoft.com/platformsdk[^], and be sure to register the paths with VS.NET (though I usually go back and put them all in environment variables so I can easily use command-line tools; VS.NET uses the env vars by default).
If you have questions about specific errors when trying to compile your old VC++ projects in VS.NET, I recommend you ask in the VC++ forum since this forum is for C# specifically.
Microsoft MVP, Visual C#
My Articles
|
|
|
|
|
Hi,
I've been looking at using IO Completion Ports (through Socket.BeginXxx) within my network application (instead of using a thread for each client connection), but my findings are discouraging.
I was under the impression that IO Completion Ports improve performance and are a scalable solution for client/server connections. Because of this, I was expecting any benchmark between using IO Completion Ports and using a thread for each client connection to come out in favour of IO Completion Ports. My tests, however, indicate otherwise.
My tests are very simple: the client sends a message and the server receives it. No other action takes place; it is as simple as that. Here is the interesting part of the code:
Server
private void BytesReceived(IAsyncResult ar) {
    SocketAndBuffer socketAndBuffer = (SocketAndBuffer) ar.AsyncState;
    int noOfBytesReceived = socketAndBuffer.Socket.EndReceive(ar);
    if (noOfBytesReceived != 0) {
        socketAndBuffer.Socket.BeginReceive(socketAndBuffer.Buffer, 0,
            socketAndBuffer.Buffer.Length, SocketFlags.None,
            new AsyncCallback(this.BytesReceived), socketAndBuffer);
    }
}
Client
while (true) {
    socket.Send(bytesToSend, 0, bytesToSend.Length, SocketFlags.None);
    Thread.Sleep(10);
}
I ran two variations of the test, one with the code as it appears above and the other (more brute force) with the Thread.Sleep(10) commented out. Below are the results:
10ms wait between sends, using completion ports, 40 connections
----------------------------------------------------------------
Processor Queue Length: 42.700 average, 44.000 max
Processor Time: 0.501 average, 5.000 max
Context Switches/sec: 16394.070 average, 16486.271 max
10ms wait between sends, using threads, 40 connections
------------------------------------------------------
Processor Queue Length: 42.817 average, 46.000 max
Processor Time: 0.167 average, 1.000 max
Context Switches/sec: 16343.139 average, 16475.683 max
No wait between sends, using completion ports, 1 connection
-----------------------------------------------------------
Processor Queue Length: 6.098 average, 9.000 max
Processor Time: 100 average, 100 max
Context Switches/sec: 13842.128 average, 14126.729 max
No wait between sends, using completion ports, 10 connections
-------------------------------------------------------------
Processor Queue Length: 6.475 average, 11.000 max
Processor Time: 100 average, 100 max
Context Switches/sec: 48305.052 average, 49553.108 max
No wait between sends, using threads, 1 connection
--------------------------------------------------
Processor Queue Length: 6.525 average, 11.000 max
Processor Time: 100 average, 100 max
Context Switches/sec: 52214.525 average, 53155.988 max
No wait between sends, using threads, 10 connections
----------------------------------------------------
couldn't gather statistics, CPU was getting hammered
In the case of a 10 ms wait between sends and 40 connections, using threads instead of completion ports results in one third the average CPU utilization and one fifth the peak CPU utilization, with context switches and processor queue length on par.
In the case of no wait between sends and 1 connection, using threads results in about 8% more context switches, with everything else on par.
It was not possible to gather statistics for 10 connections using threads and no waiting between sends, but I think that's because all the threads run at a priority level of Normal and hog the CPU, whereas the completion ports have a threshold. No matter how many connections I created to the server using completion ports, CPU utilization would remain the same; it just had more requests queuing up.
What are your thoughts on this?
Thank you in advance
Angelos Petropoulos
|
|
|
|
|
IOCPs will give you better performance and scalability. I don't see any problems with the results of your tests. In fact, I'd say the tests with no send wait prove that IOCPs are performing better. The tests with a send wait aren't stressing the server app enough to fully realize the benefits of IOCPs. Keep incrementally increasing the number of connections during the test as well. Hundreds or even thousands of connections on a sufficiently fast box aren't out of the question.
Also, there's no need to create a new AsyncCallback every time. Just create one and reuse it. I don't think it will make a dramatic difference, but every little bit helps.
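Something like this sketch, reusing your SocketAndBuffer type (the field and constructor shown here are just for illustration):

private readonly AsyncCallback receiveCallback;

public Server() {
    // Create the delegate once instead of allocating one per receive.
    this.receiveCallback = new AsyncCallback(this.BytesReceived);
}

private void BytesReceived(IAsyncResult ar) {
    SocketAndBuffer sab = (SocketAndBuffer) ar.AsyncState;
    int noOfBytesReceived = sab.Socket.EndReceive(ar);
    if (noOfBytesReceived != 0) {
        sab.Socket.BeginReceive(sab.Buffer, 0, sab.Buffer.Length,
            SocketFlags.None, this.receiveCallback, sab);
    }
}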
|
|
|
|
|
Well, in the case of no waits between sends, I think the only reason IOCPs are working better is because they have a queue and a threshold. All that happens is that requests keep queuing up and queuing up. I believe this to be true because there was no difference in CPU utilization between 1 and 10 client connections.
In the threaded server, though, all the threads have a priority of Normal and no threshold, which means that if you ask them to work hard enough to hog the whole system, they will. I can see this as a clear advantage of IOCP (having the threshold, etc.), but I can't see any evidence of better performance when using IOCP.
I just do not understand why individual threads would work more efficiently (less CPU utilization and the same number of context switches per second) in the scenarios of 40 or even 100 client connections. Is there some sort of dependence between the benefits of using IOCP and the number of connections? By that I mean, does the number of connections have to be above a certain number for the benefits of IOCP to be apparent? In that case, since my application won't deal with more than 100 connections, I am better off using individual threads for each connection.
In any case, I will take your advice and change the line that creates a new callback every time (oops). I will also run some more tests. If I find something interesting I will post my results here.
Thank you for your reply; not many people want to touch this subject. I've posted in the newsgroups and in other forums as well, and you are only the second person to reply. Much appreciated.
Regards,
Angelos
|
|
|
|
|
Angelos Petropoulos wrote:
Is there some sort of dependence between the benefits of using IOCP and the number of connections? By that I mean, does the number of connections have to be above a certain number for the benefits of IOCP to be apparent?
Well, not really. But the thread pool does contain up to 25 threads*, so the difference in performance between threads and IOCPs would be marginal with 25 connections, although I think you should still be better off with IOCPs. As the number of connections increases you should also see a bigger difference in performance favoring IOCPs.
Angelos Petropoulos wrote:
In that case, since my application won't deal with more than 100 connections, I am better off using individual threads for each connection.
Yeah, this is probably true, since using asynchronous operations is more complex. There are more synchronization issues to worry about, more testing, etc. But then again, having 100 threads inside a process isn't the norm either.
Angelos Petropoulos wrote:
Thank you for your reply; not many people want to touch this subject. I've posted in the newsgroups and in other forums as well, and you are only the second person to reply. Much appreciated.
Yeah, I just found your thread in microsoft.public.dotnet.framework.performance. Please report back here if you discover anything. You're right, though; there aren't very many people who even know what we're talking about.
*Actually there are 25 per processor.
|
|
|
|
|
Actually, and I only found this out myself recently, worker threads and IOCP threads are different. The .NET ThreadPool has both, 25 of the former and 1000 of the latter. You can tell just by looking at the signature of the following method found in the ThreadPool class:
public static void GetAvailableThreads(out int workerThreads, out int completionPortThreads);
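For example, a quick way to see how many of each are available at runtime (a sketch; the numbers printed depend on your Framework version and current load):

using System;
using System.Threading;

class PoolCheck {
    static void Main() {
        int workerThreads, completionPortThreads;
        ThreadPool.GetAvailableThreads(out workerThreads,
            out completionPortThreads);
        // Reports how many of each kind of pool thread are free.
        Console.WriteLine("worker: {0}, IOCP: {1}",
            workerThreads, completionPortThreads);
    }
}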
I will be doing some more tests soon, because I want to get to the bottom of this once and for all. I will post my findings here. Shouldn't take long ...
Regards,
Angelos Petropoulos
|
|
|
|
|
Angelos Petropoulos wrote:
The .NET ThreadPool has both, 25 of the former and 1000 of the latter.
Now that you mention it, I remember reading something about this. So do those 1000 threads really exist in the process, or are they simply logical abstractions of a thread, similar to fibers? I have a hard time believing that there could be up to 1000 real threads in a process.
|
|
|
|
|
Hmmm, to be honest I don't know. I am a bit confused about the whole concurrency side of the thread pool. The documentation mentions that only one thread is active at a time (per CPU) and the rest are sleeping, unless the active thread is put to sleep as well. If that is the case, what are the chances of 999 threads being put to sleep? I'm going to have to read up some more ...
|
|
|
|
|
|
Well, I ran some more tests, this time making sure everything was in order and that the environment was a good simulation of a real-world application.
I coded a client and a server app. I ran the client app on a machine on the LAN and the server app on my machine. Below are the test results:
Using threads, 100 clients, 50ms wait between sends
CPU Time: 4 avg, 10 max
Context Switches/sec: 814 avg, 928 max
Using IOCP, 100 clients, 50ms wait between sends
CPU Time: 11 avg, 16 max
Context Switches/sec: 238 avg, 407 max
Using threads, 500 clients, 50ms wait between sends
CPU Time: 27 avg, 41 max
Context Switches/sec: 3983 avg, 4200 max
Using IOCP, 500 clients, 50ms wait between sends
CPU Time: 59 avg, 69 max
Context Switches/sec: 587 avg, 908 max
Using threads, 1000 clients, 50ms wait between sends
CPU Time: 36 avg, 46 max
Context Switches/sec: 6083 avg, 6227 max
Using IOCP, 1000 clients, 50ms wait between sends
CPU Time: 74 avg, 91 max
Context Switches/sec: 205 avg, 302 max
Well, to be honest I'm not sure what to make of the results. Context switches seem to benefit substantially from the use of IOCP, but what good is that when the CPU utilization is twice as high or more? Since context switches are CPU-intensive and using IOCP lowers their number, where is all the extra CPU utilization coming from? Is it just that IOCP is not properly implemented within the .NET Framework?
Another interesting find has been the private static boolean property UseOverlappedIO found in the Socket class. You can get to it through reflection, and if you do, you will find that its value is always set to false. I found these two usenet messages related to the matter, but no conclusion was reached ...
Thread #1[^]
Thread #2[^]
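For reference, here's a sketch of how to get at that property through reflection (it assumes the private static property really is named UseOverlappedIO, as described above):

using System;
using System.Net.Sockets;
using System.Reflection;

class OverlappedCheck {
    static void Main() {
        // Look up the non-public static property on Socket.
        PropertyInfo pi = typeof(Socket).GetProperty("UseOverlappedIO",
            BindingFlags.Static | BindingFlags.NonPublic);
        if (pi != null)
            Console.WriteLine("UseOverlappedIO = {0}",
                pi.GetValue(null, null));
        else
            Console.WriteLine("Property not found in this Framework version.");
    }
}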
I just don't understand why using IOCP would involve such an increase in CPU utilization. I'm lost; I guess I'll stick to having hundreds of threads within my application ...
Regards,
Angelos Petropoulos
|
|
|
|
|
Your results definitely seem odd. Okay, try this. Taking the same code you used in these tests, add a counter variable to count the number of sends (or even total bytes) sent by the client. Do something similar on the server. That way you are counting how much work the server and client are completing. Rerun the tests for a fixed amount of time and compare CPU time, context switches, processor queue length, and the new measurement of total sends that the client sent and the server received. I'm wondering if you are going to find that although IOCPs are using more CPU, they are performing a lot more work. Perhaps the thread version is using less CPU because the threads are spending more time in a wait state.
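On the server side that could look something like this sketch (reusing your SocketAndBuffer type and a cached callback; the counter field is just for illustration, and Interlocked lives in System.Threading):

private int totalReceives; // one increment per completed receive

private void BytesReceived(IAsyncResult ar) {
    SocketAndBuffer sab = (SocketAndBuffer) ar.AsyncState;
    int noOfBytesReceived = sab.Socket.EndReceive(ar);
    if (noOfBytesReceived != 0) {
        // Interlocked keeps the count accurate across pool threads.
        Interlocked.Increment(ref this.totalReceives);
        sab.Socket.BeginReceive(sab.Buffer, 0, sab.Buffer.Length,
            SocketFlags.None, this.receiveCallback, sab);
    }
}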
If you think about it, this makes sense. Remember, performance is how fast something executes and scalability is how well that something is able to utilize the available resources. I'm thinking that you may be seeing the scalability advantage of IOCPs in that they are able to utilize a lot more of the CPU because they are spending less time in a wait state.
There has to be some logical explanation for the results you posted here, because I know IOCPs are better than using threads.
|
|
|
|
|
I thought of the same thing late last night, so I ran a test just as you describe it: a counter that counts how many times data was read from the socket. The result was that both servers did exactly the same amount of work. This ties in with the fact that CPU utilization never reached 100%, and therefore the CPU was never the bottleneck.
It is obvious that the .NET Framework does actually use something more sophisticated than an ordinary thread pool, as the context switches are much lower than when using ordinary threads. This confuses me even more, because fewer context switches mean less CPU utilization, yet the IOCP solution had double the CPU utilization of the threaded server. This means that the IOCP implementation is not just twice as slow at processing data; it is even slower than that, since it is already saving CPU through the reduced number of context switches.
I just came across an implementation of an IOCP threadpool in C#. I haven't had a chance to read it thoroughly, but it might have some interesting insight into the .NET thread pool and IOCP.
Continuum Technologies[^]
Regards,
Angelos Petropoulos
|
|
|
|
|
I just finished reading the articles on Continuum Technologies, and from what I can see there is no new information there that we didn't already know. The articles are very thorough, but some of the code is a bit questionable.
At least there is a nice example of how to implement your own completion ports by calling the API directly.
Regards,
Angelos Petropoulos
|
|
|
|
|
I have reviewed both your thread and the article on implementing IOCP in C# via Win32. I have developed a native IOCP server in C++ and have tested it thoroughly (at Microsoft), confirming that there is no contest between the thread-per-client model and the IOCP model (under load, of course).
My question for both of you is: how did you implement a true IOCP server without using the Win32 API? The article indicates the following:
"IOCP thread support has not been made available to C# developers through the “System.Threading” namespace. We need to access the Win32 API calls from the Kernel32.dll."
Being fairly new to C# and recently tasked with solving a scalability problem with a C# thread-per-client server, I am interested to know how you created an IOCP server in C# without using the API. Or are you using the API? Can you explain? The code snippet at the beginning of this thread does not illustrate the use of IOCP, so I need some further clarification.
Also, have you heard whether there will be a way to create an IOCP C# server with version 2.0 of the Framework? It is available now as a beta, but I have not looked into it yet; have either of you two inquired about this?
Thanks very much.
James
|
|
|
|
|
Here is the code:
Process p = Process.GetProcessById(processId);
...
p.Close();
p.Dispose();
I call both Close() and Dispose() to release the resources. Is this correct? What is the difference between them?
And in this code:
Process[] processes = Process.GetProcessesByName(processName);
How do I release the resources properly?
|
|
|
|
|
Looking at the IL (Intermediate Language) for the Process class reveals that calling Dispose explicitly calls Close. If you don't call either and the destructor calls Dispose, then the process handle is freed but other native resources are not (memory leak!).
Calling Close is sufficient, but it never hurts to call Dispose either.
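For the GetProcessesByName case, a sketch (the process name is just an example) is to dispose of each element when you're done with it:

using System.Diagnostics;

Process[] processes = Process.GetProcessesByName("notepad");
foreach (Process p in processes)
{
    // ... use p ...
    p.Dispose(); // Dispose calls Close internally, as noted above
}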
Microsoft MVP, Visual C#
My Articles
|
|
|
|
|
Thanks!
Looking at the IL (Intermediate Language) for the Process class reveals that calling Dispose explicitly calls Close
Where can I find help on the IL?
|
|
|
|
|
It's not "help" - it's the intermediate language contained in the modules that are embedded in assemblies - the very heart of the Common Language Infrastructure (CLI) that makes .NET possible.
You can use the IL Disassembler (ildasm.exe) that ships with the .NET Framework SDK, in the SDK's bin directory. Documentation about the IL instruction codes can be found in the .NET Framework SDK.
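For example, to dump an assembly's IL to a text file (the assembly name here is made up), run this from the SDK command prompt:
ildasm MyAssembly.exe /out=MyAssembly.il
Running ildasm.exe with no arguments opens the interactive tree view instead.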
Microsoft MVP, Visual C#
My Articles
|
|
|
|
|
Here's a tip: use a using block to make it both easier and safer to release disposable resources:
using (Process p = Process.GetProcessById(processId))
{
    ...
}
No need to worry about calling Dispose (or Close).
Regards,
Alvaro
Give a man a fish, he owes you one fish. Teach a man to fish, you give up your monopoly on fisheries.
|
|
|
|
|
Hi!
I am trying to generate a Crystal Report with an untyped DataSet.
When I used a typed DataSet, the Crystal Report wizard showed this DataSet.
But my DataSet is provided by a function and is untyped (so it is not shown in the CR wizard).
If anyone has a link or an answer ....
Thanks a lot
Alex
(Sorry for my bad english :p)
|
|
|
|
|
If you're using typed DataSets, just make sure your methods return that Type. You can also design your report to use fields from an untyped DataSet, but you must make sure that the untyped DataSet has the same tables and field names/types.
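For example, here's a sketch of an untyped DataSet whose schema matches a report designed against an "Order" table (all names here are hypothetical, and report stands for a CrystalDecisions ReportDocument):

using System.Data;

DataSet ds = new DataSet("Orders");
DataTable orders = ds.Tables.Add("Order");
orders.Columns.Add("OrderID", typeof(int));
orders.Columns.Add("CustomerName", typeof(string));
orders.Rows.Add(new object[] { 1, "Contoso" });
report.SetDataSource(ds); // push the matching data into the report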
Microsoft MVP, Visual C#
My Articles
|
|
|
|
|
I started with Crystal Reports from this article:
Crystal Reports
Anyway, look through the Internet; there's a lot of information out there.
http://aspalliance.com/articleViewer.aspx?aId=265&pId=2
xedom developers team
|
|
|
|
|
.
|
|
|
|
|
If you're looking for samples, the best thing to do is try searching. The following Google search turned up many good projects that you could use: http://www.google.com/search?q=C%23+FTP[^].
As far as telnet goes, there really isn't a protocol. "Telnet" simply describes a socket connection with a known character encoding. For example, I can telnet to my mail server on port 25 and type SMTP commands directly, or telnet to my web server on port 80 and make requests. Telnet is just a bare-bones TCP socket connection; the protocol is whatever protocol the daemon you're connecting to uses, like HTTP, SMTP, etc. See the TcpClient class in the .NET Framework SDK.
For what is traditionally thought of as telnet, you need to echo input and output. Using the TcpClient, get the NetworkStream using TcpClient.GetStream. You use this to send commands and receive output. If you expect this to be all text, instantiate a StreamReader and a StreamWriter to read and write from the NetworkStream.
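For instance, a minimal sketch of that kind of session against an SMTP server (the host name is hypothetical, and error handling is omitted):

using System;
using System.IO;
using System.Net.Sockets;

class TelnetStyleSession {
    static void Main() {
        TcpClient client = new TcpClient("mail.example.com", 25);
        NetworkStream stream = client.GetStream();
        StreamReader reader = new StreamReader(stream);
        StreamWriter writer = new StreamWriter(stream);
        writer.AutoFlush = true; // push each command immediately

        Console.WriteLine(reader.ReadLine()); // server greeting, "220 ..."
        writer.WriteLine("HELO client.example.com");
        Console.WriteLine(reader.ReadLine()); // "250 ..." on success
        writer.WriteLine("QUIT");
        client.Close();
    }
}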
Microsoft MVP, Visual C#
My Articles
|
|
|
|
|