|
I've heard that it's easy to view the IL code of a compiled application. How is that possible? Obviously you don't need to decompile or disassemble it, since it's already IL, just in byte-code form, right? So how can you get it back into text form, so that you can actually view the IL code in a text editor?
Regards, Desmond
|
|
|
|
|
ILDASM[^] will do this for you. It is part of the .NET Framework SDK[^].
Anger is the most impotent of passions. It effects nothing it goes about, and hurts the one who is possessed by it more than the one against whom it is directed.
Carl Sandburg
|
|
|
|
|
Thanks. That did it.
How does it work (how could I theoretically write my own ildasm.exe)? I mean, the compiled exe should already be IL (just in byte-code), right? So what does ildasm actually do? Does it just convert byte-code to text somehow, or what?
Regards, Desmond
|
|
|
|
|
Assemblies contain IL, which is similar to assembly language in syntax. So it should be fairly easy to write a tool like ildasm; all you need to know is the structure of a .NET executable and the byte encodings of the various IL instructions.
Regards
Senthil
My Blog
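For the curious, the "all you need to know" part can be glimpsed from managed code itself: reflection will hand you a method body's raw IL bytes, which is exactly the input a disassembler decodes into mnemonics. A minimal sketch (note: GetILAsByteArray is an API from a later framework version than the one discussed in this thread, used here only to illustrate what a disassembler starts from):

```csharp
using System;
using System.Reflection;

class IlDump
{
    static int Add(int a, int b) { return a + b; }

    static void Main()
    {
        // Fetch the raw IL of Add's method body; a disassembler like
        // ildasm maps each opcode byte here to its textual mnemonic.
        MethodInfo mi = typeof(IlDump).GetMethod(
            "Add", BindingFlags.NonPublic | BindingFlags.Static);
        byte[] il = mi.GetMethodBody().GetILAsByteArray();
        Console.WriteLine(BitConverter.ToString(il));
    }
}
```

The byte values you see correspond one-to-one to the opcodes ildasm prints as text.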
|
|
|
|
|
Hello,
I have some problems with user controls that I made.
Lately these controls have a bad habit of disappearing when I open the form in the designer.
It seems that Visual Studio messes up the generated code.
Is there a bug fix or a work around for this issue?
|
|
|
|
|
Hi all
I have a custom usercontrol on my Windows Form "frmsett.cs".
The InitializeComponent of "frmsett.cs" initialises the usercontrol.
This usercontrol is derived from a base component class.
The usercontrol displays printer information.
We push the default printer's information into the usercontrol through the Pinfo() method (a function in the base component class).
Pinfo() invokes getinfo() (another function in the base component class) using a MethodInvoker and an AsyncCallback.
My "basecomponent" class definition starts with
[ClassInterface(ClassInterfaceType.None), ComVisible(false)]
The application was developed on, and works perfectly with, .NET Framework 1.1.
Then I installed .NET Framework 1.1 SP1, and here the problem begins: when the application executes frmsett.ShowDialog() (only on the first call), it hangs for several minutes (5-15) before showing the dialog "frmsett". On the second call it displays the screen immediately.
Has anyone come across this type of issue?
Is there any problem with AsyncCallback and MethodInvoker in .NET Framework 1.1 SP1?
What is the reason for this problem in .NET Framework 1.1 SP1?
Thanks a lot for any tips !!!!
Regards
Krishnan
If u can Dream... U can do it
|
|
|
|
|
I am looking for a component in .NET to send a fax from within an ASP.NET page.
I don't want to use a web service, nor to send the fax over the Web.
All I need is a third-party component that I can use in my ASP.NET application to drive a fax modem installed on the server to fax an HTML page.
Thanks
Arvind
|
|
|
|
|
hi,
In my current project I have to search an HTML file for certain keywords, such as 'overrule'. For this search I am using a regular expression.
I am able to find the word 'overrule', but the search fails when the word appears as '[overrule'.
How can I handle that? Can anybody help me?
The code is:
Key_Word="overrul"
ObjRe.Pattern = Key_Word & "[a-z]"
Set ObjMatches = ObjRe.Execute(FileMatter)
lCnt = ObjMatches.Count
Thanks in advance
Syed
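The snippet above uses the VBScript RegExp object, and the root cause is that '[' is a regular-expression metacharacter, so it must be escaped before it goes into the pattern. A sketch of the same fix in C# (Regex.Escape does the escaping; the strings here are just illustrative):

```csharp
using System;
using System.Text.RegularExpressions;

class EscapeDemo
{
    static void Main()
    {
        string keyword = "[overrule";            // raw keyword containing '['
        string pattern = Regex.Escape(keyword);  // becomes "\[overrule"
        // Without Escape, "[overrule" is an unterminated character class,
        // so the pattern fails to parse instead of matching the literal text.
        Console.WriteLine(Regex.IsMatch("the court may [overrule it", pattern)); // True
    }
}
```

In VBScript there is no built-in Escape, so the equivalent is to backslash-escape the metacharacters in Key_Word before building the pattern.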
|
|
|
|
|
I have built an application in which I use file names from a directory.
To get these file names I used the following code from MSDN. This code requires mscorlib.dll to be included in the application:
#using <mscorlib.dll>
using namespace System;
using namespace System::IO;
using namespace System::Collections;

String* extension;
// Create a reference to the directory (note the escaped backslash).
DirectoryInfo* di = new DirectoryInfo("C:\\Sachin");
// Create an array representing the files in that directory.
FileInfo* fi[] = di->GetFiles();
// Write the names of the files to nest.lst.
StreamWriter* sw = new StreamWriter("nest.lst");
IEnumerator* myEnum = fi->GetEnumerator();
while (myEnum->MoveNext())
{
FileInfo* fiTemp = __try_cast<FileInfo*>(myEnum->Current);
extension = Path::GetExtension(fiTemp->Name);
// fprintf's %s cannot print a managed String*, so use the managed writer.
sw->WriteLine(String::Concat(fiTemp->Name, S"\t", extension));
}
sw->Close();
This code compiles with #using <mscorlib.dll> and the /clr option, and it works fine on my machine.
But I fail to run this application on another machine that doesn't have Visual Studio installed, as I don't know how to specify the path for this DLL at run time.
thanks
sachin
|
|
|
|
|
|
Thank you Colin, now I am able to do this.
|
|
|
|
|
And make sure that you compile to "Release", not to "Debug".
Why a sin?
|
|
|
|
|
:( I have a problem with a Font and I couldn't resolve it by myself. Could you please give me some help with this?
My problem is how to compress a Font (changing the characters' width while the height remains unchanged). I can do it in VB6, but in VB.NET I couldn't. Here is my code:
<StructLayout(LayoutKind.Sequential, CharSet:=CharSet.Auto)> Public _
Class LOGFONT
Public lfHeight As Integer
Public lfWidth As Integer
Public lfEscapement As Integer
Public lfOrientation As Integer
Public lfWeight As Integer
Public lfItalic As Byte
Public lfUnderline As Byte
Public lfStrikeOut As Byte
Public lfCharSet As Byte
Public lfOutPrecision As Byte
Public lfClipPrecision As Byte
Public lfQuality As Byte
Public lfPitchAndFamily As Byte
<MarshalAs(UnmanagedType.ByValTStr, SizeConst:=32)> _
Public lfFaceName As String
End Class
Public Shared Function CreateLogFont() As LOGFONT
Dim tmpfont As Font
Dim tmpLogFont As New LOGFONT
tmpfont = New System.Drawing.Font("Courier New", 27.0F)
tmpfont.ToLogFont(tmpLogFont)
tmpLogFont.lfHeight = 15
tmpLogFont.lfWidth = 10
Return tmpLogFont
End Function
After creating tmpLogFont, i create new font for my TextBox
Public Sub New()
MyBase.New()
Dim tmpfont As Font = Font.FromLogFont(CreateLogFont())
Me.Font = tmpfont
End Sub
The only thing that changes is the font height; the font width
(lfWidth) remains 0!
Is there any error in my code?
I tried again using the API function CreateFontIndirect, and got
the same result!
Could you give me an example of compressing fonts?
Thank you very much
Best regards.
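Not an answer to the LOGFONT question itself, but worth noting: in GDI+ a similar condensed-text effect can be faked by scaling the drawing surface horizontally before drawing the string. A sketch in C# (the control class and the 0.6 factor are made up for illustration):

```csharp
using System.Drawing;
using System.Windows.Forms;

// Sketch: narrow the glyphs to 60% width while keeping full height
// by applying a horizontal ScaleTransform before DrawString.
class CondensedLabel : Control
{
    protected override void OnPaint(PaintEventArgs e)
    {
        e.Graphics.ScaleTransform(0.6f, 1.0f);
        e.Graphics.DrawString(Text, Font, Brushes.Black, 0f, 0f);
        // Note: the transform also scales the x coordinate, so any
        // layout positions must be divided by the scale factor.
    }
}
```

This avoids LOGFONT entirely, at the cost of the scaled coordinate system inside OnPaint.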
|
|
|
|
|
Here's something I wish to learn about how Windows OS interacts with .NET:
Every Windows Forms Control can handle one or more mouse actions: left click, double click, etc. For example, .NET enables this by generating a Click event for the relevant control every time the mouse is left-clicked.
What I wish to understand is how .NET knows which Control to generate the Click event for. In other words, if I click on a particular button, the OS recognizes the point on the screen where the click occurs, then translates that information and sends it to .NET, which identifies which control the click corresponds to. That, at least, I believe is the chronology.
But I'm not sure how this translation between the OS and .NET takes place. Any help is greatly appreciated.
Thanks..
Sarabjit.
|
|
|
|
|
I think it is done the way normal Windows apps do it: through a window procedure that is called from a message loop. The window procedure simply calls the delegates registered for that event. The WinForms button is a normal Windows button wrapped in managed code; there are no native .NET UI controls, AFAIK.
Regards
Senthil
|
|
|
|
|
Thanks for the help.. Though I'm still not completely clear:
"The window procedure simply calls the delegates registered for that event"
As per my impression, the delegates that call the respective event handlers whenever an event for a Control is raised are invoked inside the Control itself. Thus, to "simply call the delegates," the window procedure should first be able to identify which Control's delegates it is going to call.
Thus, either Windows directly makes a call to this Control, or it simply passes the event to the active process, which (by virtue of routines that were added by .NET when the program was compiled) then makes a call to this Control (specifically, to the OnEventName() method within the definition of this control). In either case, there needs to be a way to identify which Control (button, scrollbar etc.) the specific click corresponds to.
In other words, a literal translation between the co-ordinates of the pointer when the mouse was clicked and the Control the pointer was above at that time takes place. It is this translation that I need to get at the root of.
"The Winforms button is a normal Windows button wrapped in managed code. There are no native .NET UI controls"
I think you're right. I guess .NET's only purpose, like that of any runtime environment, is to successfully compile and debug the code, adding any extra routines that it might need to. A few examples of such actions which are relevant to our context here and which .NET takes care of directly are:
1. Delegate definitions - A delegate declaration is sufficient to define a delegate class. The declaration supplies the signature of the delegate, and the common language runtime provides the complete implementation.
2. Event Wiring - A designer such as Visual Studio .NET automatically completes the event wiring by generating code which is necessary for the purpose.
3. Event Hooks - When the compiler encounters an event keyword (such as public event AlarmEventHandler Alarm;), it creates a private delegate member and two public methods, add_Alarm and remove_Alarm. These methods are event hooks that allow delegates to be combined with or removed from the event delegate. The details are hidden from the programmer.
(The above are taken from the MSDN Library: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconevents.asp)
Once the self-executable/dll is ready, .NET I believe is out of the picture.
Thanks for the help anyways..
|
|
|
|
|
My guess:
The ability to perform this information transfer and detect the specific Control whose delegates need to be invoked in response to a particular event is NOT provided by the operating system, but by the process, i.e. the application itself.
However, the routines that achieve this are not written by the application programmer. They are instead part of the libraries the programmer includes in his project, and so they are compiled into the application by the respective compiler (the .NET compiler in this case). After all, it is the environment's job to provide the I/O interface to the application programmer.
AFAIK..
Any thoughts're welcome..
Cheers..
|
|
|
|
|
If you've done MFC programming, you would have a clearer picture. Almost every widget you see in a Windows application is a window in itself: a button is a window, a textbox is a window, and so on. Every window has a window procedure which is invoked by the OS (once you register it, of course). The OS takes care of figuring out which input goes to which window and calls the appropriate WndProc. For example, for a button class:
LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
    case WM_LBUTTONUP:   // a stock button actually reports clicks via BN_CLICKED in WM_COMMAND
        // Code for invoking the delegate
        break;
    }
    return DefWindowProc(hWnd, message, wParam, lParam);
}
If it is not handled by a window, it passes it to the parent window and so on. So you can have a single WndProc in the topmost window handle all messages for you.
So it's Windows that takes care of it, not .NET. That's one difference between Java's Swing and .NET: Swing does all the painting and event handling by itself.
"Once the self-executable/dll is ready, .NET I believe is out of the picture."
Not really: it's at runtime that the .NET Framework and the CLR play important roles such as garbage collection and code access security.
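One way to watch this routing from managed code (a sketch, not a claim about WinForms internals): every Windows Forms Control exposes its window procedure via an overridable WndProc, so you can log the raw messages before the base class turns them into managed events. The constant below is from winuser.h; for a stock button the Click event actually comes from a reflected WM_COMMAND/BN_CLICKED, so treat this as illustrative:

```csharp
using System;
using System.Windows.Forms;

// Sketch: intercept raw window messages on a Button-derived control.
class TracingButton : Button
{
    const int WM_LBUTTONUP = 0x0202;   // from winuser.h

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_LBUTTONUP)
            Console.WriteLine("WM_LBUTTONUP on " + Name);
        base.WndProc(ref m);   // let WinForms raise the managed events as usual
    }
}
```

Dropping a TracingButton on a form and clicking it shows the OS-dispatched messages arriving before the managed Click handler runs.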
Regards
Senthil
|
|
|
|
|
|
Hi...
When I try to deserialize a stream of data into an object, it throws the exception "no top object". Any help?
It's not reproducible every time.
The code(relevant portion) goes like this:
public virtual ObjectGraph DeSerialize(MemoryStream memoryStream)
{
    BinaryFormatter formatter = new BinaryFormatter();
    try
    {
        // Rewind the stream before deserializing.
        memoryStream.Seek(0, SeekOrigin.Begin);
        ObjectGraph objectGraph = (ObjectGraph)formatter.Deserialize(memoryStream);
        return objectGraph;
    }
    catch
    {
        throw;   // note: a bare catch { throw; } adds nothing and can be removed
    }
}
Any help would be greatly appreciated.
Thanks in advance.
Ganesh.
|
|
|
|
|
Hi,
I am interested in comments / experience people have with writing a TCP server in .NET.
We currently have an existing server, written in native C++. It communicates with our own browser client using a proprietary protocol and services hundreds to thousands of concurrent connections. The server uses blocking sockets on multiple threads to obtain reasonable performance.
We are investigating a move from native code to .NET code.
I am not sure which language, but at this stage C# or Managed C++ seem to be the best options.
I know that we could do the clients in .NET.
However, I am unsure about the suitability of .NET for the server,
in particular with respect to CPU usage.
|
|
|
|
|
It very much depends on how you write your code. Algorithmic complexity normally outweighs the exact instructions executed. Given the gap between processor speed and memory speed, you'll normally get better performance by being efficient with memory. While your code may be JIT-compiled, it still executes as native code once compiled, so the language overhead is rarely the dominant cost.
I note that you're using blocking sockets. If you use a large number of threads which are mostly blocked, you'll end up thrashing the CPU cache and losing cache locality. For a really high-performance server, consider switching to asynchronous I/O.
Using asynchronous I/O in unmanaged C++ can be quite a pain. Using .NET, it's relatively easy: System.Net.Sockets.Socket provides a BeginReceive method (NetworkStream has the equivalent BeginRead) which will call a callback function on a pooled thread when data is available. This allows the server to use a smaller pool of threads to handle the requests.
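To illustrate that pattern (a sketch only, with error handling omitted and the port number made up), an echo server built on the asynchronous socket calls looks roughly like this; no thread blocks per connection, and the callbacks run on pool threads:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class AsyncEchoServer
{
    const int BufSize = 4096;

    static void Main()
    {
        Socket listener = new Socket(AddressFamily.InterNetwork,
                                     SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 9000)); // port is arbitrary
        listener.Listen(100);
        listener.BeginAccept(OnAccept, listener);  // returns immediately
        Console.ReadLine();                        // keep the process alive
    }

    static void OnAccept(IAsyncResult ar)
    {
        Socket listener = (Socket)ar.AsyncState;
        Socket client = listener.EndAccept(ar);
        listener.BeginAccept(OnAccept, listener);  // post the next accept

        byte[] buf = new byte[BufSize];
        client.BeginReceive(buf, 0, BufSize, SocketFlags.None,
                            OnReceive, new object[] { client, buf });
    }

    static void OnReceive(IAsyncResult ar)
    {
        object[] state = (object[])ar.AsyncState;
        Socket client = (Socket)state[0];
        byte[] buf = (byte[])state[1];

        int read = client.EndReceive(ar);
        if (read == 0) { client.Close(); return; } // peer closed the connection

        client.Send(buf, 0, read, SocketFlags.None); // echo the data back
        client.BeginReceive(buf, 0, BufSize, SocketFlags.None,
                            OnReceive, state);       // post the next receive
    }
}
```

The thread pool sizes itself to the machine, so the same code scales from tens to thousands of connections without a thread per client.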
It's best to create a number of static buffers for your I/O. That way, they rapidly end up in generation 2 and stay there, so you avoid the problem of medium-life objects causing a lot of GC overhead (see Mid-Life Crisis[^]).
At present I'd suggest not using Managed C++. The syntax is changing for VS2005, and while the existing syntax will be supported, it won't be enhanced.
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
My understanding is that since the TCP/IP stack runs in kernel mode, threads blocked on TCP/IP calls do not cause excessive thread-management issues (e.g. thrashing) for the OS. I have stress-tested our server with over a thousand connections, all continuously sending dummy data, on a standard P4 1.6 workstation. The test ran for approximately two hours. The CPU utilisation and memory usage (as reported by the Windows Task Manager) did not appear to be excessive.
I can see some benefits in having a single thread poll select() and then offload the work onto a thread from a fixed (or controlled) size thread pool. However, I seem to remember hearing of issues with select() when it is called with large numbers of sockets.
In any case, I can move our existing code over to using select() without too much difficulty. I assume that is roughly what BeginRead does, and that it can be used in a non-GUI app.
However, my main question still is:
"Is .net efficient enough to use as the basis for a concurrent tcp server which must be able to handle many hundreds of concurrent connections."
|
|
|
|
|
When you receive data on a connection where a thread is blocked on the socket, the kernel puts that thread on the runnable list. The kernel tries to balance the demand for CPU resources by switching between threads in a runnable state. At any given time there can only be one thread actually running on each logical CPU. The OS switches to a different thread if the running thread blocks, if the thread's quantum (time permitted to run) expires, or if another thread with a higher priority becomes runnable.
In the design where you have one thread per client, you can get a situation where multiple clients send packets simultaneously, unblocking all the threads for those clients. The OS then ends up switching between them. Context switches are not free - they take time. They also generally cause processor cache misses as each thread refers to data on its stack. A processor cache miss is not reflected in the performance counters, but it does take time.
This is time that could have been spent servicing requests.
Windows also offers asynchronous I/O. With asynchronous I/O, threads can be performing useful operations while the I/Os are pending. If the socket is not associated with a completion port[^], the I/O is associated with a particular thread.
I/O completion ports allow I/O to be associated with a pool of threads, so that any thread blocked on the completion port can handle a completed I/O. Windows performs some special tricks with threads associated with a completion port:
- Threads are released from the completion port in reverse order of blocking - that is, the last thread to block is the first thread released. This helps prevent cache thrashing.
- Windows only releases as many threads as can be handled by the CPUs in the system (or as many as specified, if a number of concurrent threads is specified when the completion port is created).
- When a thread blocks on some other operation, Windows releases another thread from the pool waiting on the completion port, if there is one and there is work for it to do.
In this way, Windows keeps all the CPUs busy.
The Framework does all the work behind the scenes to associate asynchronous I/O operations with a pool of worker threads, and manages the thread pool for you. You don't have to consider whether an asynchronous I/O completed synchronously (this can happen) as the Framework calls your callback function in either case.
This is rather different from select, which is a polling function: a transition from user to kernel mode is required for select to determine whether any data is waiting. It also has the problem that you can only specify a given maximum number of sockets in an fd_set (adjustable by defining the FD_SETSIZE macro).
Microsoft implements many of its network servers in this fashion - including IIS before version 6.0 (version 6.0 introduces the HTTP.SYS driver which performs much of the HTTP protocol in kernel mode to save kernel->user transitions).
Given the scale of your testing, you might not see much difference. I recently implemented a UDP-based client emulator in C#, using BeginSend/BeginReceive, to test against our application server. I found that it was using 20 threads to emulate 1000 clients with no think time. The server software, written in VB6 and mostly single-threaded, was unable to keep up with that rate of requests.
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
Thanks for the comments; they have been quite informative.
I understand how thread management works and the overheads associated with it. My understanding is that having a thread blocked on a kernel object (e.g. a mutex) reduces the thread-management overhead for that thread. As the TCP/IP stack is in kernel mode, I have assumed that blocking on a Winsock call is like blocking on a mutex wait.
In any case, I/O completion ports seem to offer better performance, at the cost of using Windows-specific code. In fact, I had just started reading about IOCPs and I intend to investigate them further. Does anyone know of any good code examples (unmanaged and/or .NET)?
I assume that BeginRead uses IOCPs.
Regarding our existing server: it uses one thread per socket connection. My tests involved running client emulators on 10 PCs, each emulating 100 clients, each client repeatedly sending data to the server. The PCs were connected on a 100 Mb/s LAN.
Regarding .NET and IOCPs:
Can .NET code that uses IOCPs (i.e. BeginRead) be ported to Unix via Mono?
|
|
|
|
|