|
For that specific question, "is the JIT compiler smart enough to resolve those repeated Ldarg_0s into register accesses?", you'll find the answer with magnitudes less effort (compared to learning the inner workings of a JIT compiler) by compiling and linking the code in a tiny test program, loading it into VS and displaying the disassembly.
Another remark: JITting is essentially code generation - including peephole optimization. Code generation is inherently CPU dependent. x86, x64 and ARM require significantly different code generators. If they are developed by the same team, you can expect them to have a similar overall structure, but the actual code generation may be quite different - because the CPUs are different. Significant parts may have been created by different people, each of them an expert on one specific CPU. Maybe you'll see one optimization on ARM that you do not see on x86, or the other way around. Maybe an optimization that you expected to see was omitted because it didn't give any speed increase at all on that specific processor (remember that JITting is done for one specific CPU, e.g. utilizing the instruction set extensions available on the specific chip where the jitter is running).
If I could spare the time, it sure would be fascinating to dig into the entire jitter for ARM, say, to learn how many of all the tricks in the book they have implemented. I guess it would be more or less the entire book, but not necessarily for all the latest instruction set extensions. Learning and fully understanding the entire ARM JITter would be a major task, though, way beyond finding out if one specific peephole optimization is applied on one specific CPU chip in one specific context.
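For example, a tiny harness along the following lines is enough. (This is just a sketch; the class name, method name and message are placeholders mirroring the kind of snippet under discussion.)

```csharp
// Minimal test program for inspecting JIT output in Visual Studio.
// Build the Release configuration, run under the debugger, then open
// Debug | Windows | Disassembly (Ctrl-Alt-D). To see optimized rather
// than debuggable code, untick "Suppress JIT optimization on module
// load" under Tools | Options | Debugging | General.
using System;

class C
{
    char current;

    public void SetA(char c)
    {
        this.current = c;
        if ((this.current >= 'A' && this.current <= 'Z') ||
            (this.current >= 'a' && this.current <= 'z'))
        {
            Console.WriteLine("Argument is an alphabetic character");
        }
    }

    static void Main()
    {
        var obj = new C();
        obj.SetA('q');   // set a breakpoint here, then switch to disassembly
    }
}
```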
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Quote:
For that specific question, "is the JIT compiler smart enough to resolve those repeated Ldarg_0s into register accesses?", you'll find the answer with magnitudes less effort (compared to learning the inner workings of a JIT compiler) by compiling and linking the code in a tiny test program, loading it into VS and displaying the disassembly.
I must be a dunce, because I can't get it to disassemble in release.
Quote: Another remark: JITting is essentially code generation - including peephole optimization. Code generation is inherently CPU dependent. x86, x64 and ARM require significantly different code generators.
I would still expect them all to use registers (assuming the architecture supports it) if one does. Or if not, they will eventually. Looking at the x86 code gives me baseline information I can use to determine the code it produces on most machines, and some insight into how their code generation works generally. Yes they are different, but the performance priorities Microsoft assigns to them won't be. If the x86 JITter uses registers, the ARM one does too, and if it doesn't, it will get there.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Your first step is to see the disassembly in VS in the debug version. If a peephole optimization is applied in the debug version, you can be sure that it is applied in the release version as well!
Debugging does not by definition imply that the generated code is different from the release version. Sometimes you 'voluntarily' turn off some optimizations while debugging, because e.g. single stepping can be somewhat messy if code has been moved around. (Then we are talking about more advanced, not peephole, optimization.)
The 'Debug' and 'Release' configuration names are arbitrary names. You can create configurations with any other name, and set configuration options for your project for any of your configurations: Project|<name> Properties|Build|Advanced sets the amount of debug information generated. In principle you could set Debug information: None for the Debug configuration, and Debug information: Full for the Release configuration, but most developers would find that rather non-intuitive.
In any case, debug information is external to the executable code. E.g. to insert a breakpoint into release code, the debugger finds (from the debug info) the address where you want the BP, stuffs away the original instruction at that address and inserts a BP instruction, which it catches at run time. When the user commands 'go on', it re-inserts the original, stuffed away instruction, pulls the program counter one instruction back and restarts the target process. If the BP is meant to be persistent (not one of the volatile kind like 'run to cursor'), when continuing, the debugger will first execute a single target instruction (the one inserted), re-insert the BP instruction for the next round, and then set the target process running.
Well, this is one way of doing it. It requires write access to code memory. Many CPUs, typically embedded ones, must handle debugging of read only code (for the sake of this discussion, consider code in flash to be read only). They may have a few registers where the debugger can load code addresses that are continuously compared to the instruction pointer. If equal, a debug interrupt is generated. The CPU may have e.g. 4 such registers, so you can only have 4 BPs active at any one time, but they can be set in any release code.
For both (and other) alternatives: If you do not have debug information for the code, then you will have to know the binary address yourself. Nothing keeps you from generating debug information for release code. If you have debug info available and open the disassembly window (Ctrl-Alt-D or Debug|Windows|Disassembly in VS2022) after starting the program, you see the generated code.
An alternative is to generate a plain .exe file and use an external debugger. Note, however, that this file can be moved to any PC of the same CPU family, but that CPU may be lacking some instruction set extensions. The code generator cannot assume that any extension is available, so the code may be less optimized than JITted code, which is tailor made for the local CPU.
In the pre-JITting days, at least some compilers generated startup code for checking extension availability at run time. Before user code was run, it might be patched by the startup routine to use whatever extension was available. This of course required code and a table of all locations requiring patch-up, and startup was slower (but it made the program faster, once it was running). I do not know if dotNet compilers still do this when generating binary stand-alone executables.
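As a side note, current .NET takes yet another approach: the hardware-intrinsics `IsSupported` properties are treated as JIT-time constants, so the untaken branch is eliminated from the native code entirely - no startup patching needed. A sketch (assuming a runtime with `System.Runtime.Intrinsics` support; the method is made up for illustration):

```csharp
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class SumHelper
{
    // The JIT evaluates Sse2.IsSupported while compiling this method,
    // so only one of the two paths survives in the native code.
    public static int Sum(ReadOnlySpan<int> values)
    {
        int total = 0;
        int i = 0;
        if (Sse2.IsSupported)
        {
            var acc = Vector128<int>.Zero;
            for (; i <= values.Length - 4; i += 4)
            {
                // Combine four ints into a vector and add lane-wise.
                acc = Sse2.Add(acc, Vector128.Create(
                    values[i], values[i + 1], values[i + 2], values[i + 3]));
            }
            total = acc.GetElement(0) + acc.GetElement(1)
                  + acc.GetElement(2) + acc.GetElement(3);
        }
        // Scalar tail (or the whole loop, if SSE2 is unavailable).
        for (; i < values.Length; i++) total += values[i];
        return total;
    }
}
```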
|
|
|
|
|
I know how debug information and symbol mapping and such work.
What I don't know is if Microsoft does some magic to the JITter in debug to make it produce different code.
So I can't rely on that method.
|
|
|
|
|
If I select code generation for 'Any CPU', the main compiler will generate an IL assembly, which is processed by the JITter when the assembly is run for the first time. At the moment, I am running a 32 bit CLR, and that jitter generates exactly the same binary code as the 'x86' CPU option. I'd be very surprised if they were different. I'd be very surprised if there were two different x86 code generators. The linkers do completely different jobs, but not the code generators.
I do not understand where MS could do some magic that is not visible in the generated code.
|
|
|
|
|
That seems to be assuming more than I am usually comfortable with when it comes to MS. I've worked at Microsoft and with Microsoft code enough to expect the unexpected deep in the bowels of their frameworks. You should have seen me wrestle with some of the less-often-used typelib generation functions in oleaut32.dll. I was working there at the time, and nobody could answer me about what the heck they were doing.
If they made the JITter produce different code for debug builds than release, it would be totally on brand for them, is what I'm saying, even if it's not intuitive. You can't put anything past these people. You really can't.
And I know those are just names, but the Debug configuration generates debug symbols and such. Does the jitter, for example, do something different if a PDB is present? Or is there some other magic, signaled by the linker dropping some flag in the binary's metadata? Probably not. But "probably" is doing a lot of heavy lifting there.
|
|
|
|
|
I put that code snippet of yours into a tiny test program. Here is the disassembly window in VS2022 for x86 code:
class c {
char current;
public void setA(char c) {
this.current = c;
00EF0DF0 push ebp
00EF0DF1 mov ebp,esp
00EF0DF3 mov word ptr [ecx+4],dx
if ((this.current >= 'A' && this.current <= 'Z') ||
(this.current >= 'a' && this.current <= 'z')) {
00EF0DF7 movzx eax,word ptr [ecx+4]
00EF0DFB cmp eax,41h
00EF0DFE jl Test.c.setA(Char)+015h (0EF0E05h)
00EF0E00 cmp eax,5Ah
00EF0E03 jle Test.c.setA(Char)+01Fh (0EF0E0Fh)
00EF0E05 cmp eax,61h
00EF0E08 jl Test.c.setA(Char)+02Ah (0EF0E1Ah)
00EF0E0A cmp eax,7Ah
00EF0E0D jg Test.c.setA(Char)+02Ah (0EF0E1Ah)
Console.WriteLine("Argument is an alphabetic character");
00EF0E0F mov ecx,dword ptr ds:[3BB24A0h]
00EF0E15 call System.Console.WriteLine(System.String) (65A637B8h)
00EF0E1A pop ebp
00EF0E1B ret

And for x64 code:
class c {
char current;
public void setA(char c) {
this.current = c;
00007FFE45A90EE0 sub rsp,28h
00007FFE45A90EE4 mov word ptr [rcx+8],dx
if ((this.current >= 'A' && this.current <= 'Z') ||
(this.current >= 'a' && this.current <= 'z')) {
00007FFE45A90EE8 movzx ecx,word ptr [rcx+8]
00007FFE45A90EEC cmp ecx,41h
00007FFE45A90EEF jl Test.c.setA(Char)+016h (07FFE45A90EF6h)
00007FFE45A90EF1 cmp ecx,5Ah
00007FFE45A90EF4 jle Test.c.setA(Char)+020h (07FFE45A90F00h)
00007FFE45A90EF6 cmp ecx,61h
00007FFE45A90EF9 jl Test.c.setA(Char)+032h (07FFE45A90F12h)
00007FFE45A90EFB cmp ecx,7Ah
00007FFE45A90EFE jg Test.c.setA(Char)+032h (07FFE45A90F12h)
Console.WriteLine("Argument is an alphabetic character");
00007FFE45A90F00 mov rcx,1FE90003938h
00007FFE45A90F0A mov rcx,qword ptr [rcx]
00007FFE45A90F0D call System.Console.WriteLine(System.String) (07FFEA3F80DB0h)
00007FFE45A90F12 nop
00007FFE45A90F13 add rsp,28h
00007FFE45A90F17 ret

On both architectures, the code is identical with default settings for the Debug and Release configurations. I'd be surprised if it wasn't, and I'd be surprised if - as you seemed to fear - the base register was reloaded for each of the four tests.
I do not have any ARM Windows PC available (but I'd sure like to...), so I can't tell which code is generated. I'd be similarly surprised if ARM code loads the same base address four times.
|
|
|
|
|
Yeah, the output is what I was hoping for, and sort of expecting.
That answers one question, so thank you.
|
|
|
|
|
Could this be a possible workaround to avoid those repeated Ldarg_0s?
var cur = this.current;
if( cur >= 'A' && cur <= 'Z' || cur >= 'a' && cur <= 'z') {
}
|
|
|
|
|
Not in the instance I'm using it in without a rework. I'd have to change the structure of the code, which is made more complicated by the fact that it's a CodeDOM tree instead of real code.
Before I do that, I want to make sure I'm not (A) doing something for nothing, and more importantly (B) introducing clutter or extra overhead in an attempt to optimize.
I've included a chunk of the state machine runner code, which should illustrate the issue, I hope.
int p;
int l;
int c;
ch = -1;
this.capture.Clear();
if ((this.current == -2)) {
this.Advance();
}
p = this.position;
l = this.line;
c = this.column;
if (((((this.current >= 9)
&& (this.current <= 10))
|| (this.current == 13))
|| (this.current == 32))) {
this.Advance();
goto q1;
}
if ((((((((((this.current >= 65)
&& (this.current <= 90))
|| (this.current == 95))
|| (this.current == 104))
|| ((this.current >= 106)
&& (this.current <= 107)))
|| (this.current == 109))
|| (this.current == 113))
|| (this.current == 120))
|| (this.current == 122))) {
this.Advance();
goto q2;
}
if ((this.current == 97)) {
this.Advance();
goto q3;
}
if ((this.current == 98)) {
this.Advance();
goto q22;
}
q1:
if (((((this.current >= 9)
&& (this.current <= 10))
|| (this.current == 13))
|| (this.current == 32))) {
this.Advance();
goto q1;
}
return FAMatch.Create(2, this.capture.ToString(), p, l, c);
q2:
if ((((((this.current >= 48)
&& (this.current <= 57))
|| ((this.current >= 65)
&& (this.current <= 90)))
|| (this.current == 95))
|| ((this.current >= 97)
&& (this.current <= 122)))) {
this.Advance();
goto q2;
}
return FAMatch.Create(0, this.capture.ToString(), p, l, c);
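If the cached-local idea were applied to generated code like this, each state would read the field once on entry. A hand-written sketch of the q2 state (not actual generator output) - note that reloading at each state entry keeps the semantics intact, since Advance() changes this.current only before a goto re-enters a state:

```csharp
q2:
    int cur = this.current;   // one field load per state entry
    if ((cur >= 48 && cur <= 57)
        || (cur >= 65 && cur <= 90)
        || cur == 95
        || (cur >= 97 && cur <= 122))
    {
        this.Advance();
        goto q2;              // re-entering the state re-reads the field
    }
    return FAMatch.Create(0, this.capture.ToString(), p, l, c);
```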
|
|
|
|
|
Don't expect to see any optimizations in the MSIL code, even in the Release configuration. They are done by the JIT compiler, and may be more effective there, since the exact CPU type is known at runtime.
You may try to look at the optimized native assembly code instead, but that is a difficult task, since there is a huge distance from the source C# and MSIL to the machine-language instructions.
|
|
|
|
|
I'm aware of that. I am generating MSIL instructions using Reflection Emit as part of my project.
The other part generates source code. I would like to ensure that this source code generates IL that will then be optimized appropriately by the JITter. If not, I will generate the source code differently, but my interest is in the post-jitted code. Not the IL.
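For the Reflection.Emit side, the same caching decision can be made when emitting the IL: load the field once into a local, then branch on ldloc instead of repeated ldarg.0/ldfld pairs. A sketch using DynamicMethod (the Demo type and the 48..57 range test are made up for illustration):

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

class Demo
{
    public int Current;

    public static Func<Demo, bool> BuildIsDigit()
    {
        var dm = new DynamicMethod("IsDigit", typeof(bool), new[] { typeof(Demo) });
        var il = dm.GetILGenerator();
        var fld = typeof(Demo).GetField(nameof(Current));
        var cur = il.DeclareLocal(typeof(int));

        // Load this.Current once into a local...
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldfld, fld);
        il.Emit(OpCodes.Stloc, cur);

        // ...then test 48 <= cur <= 57 using only ldloc.
        var fail = il.DefineLabel();
        var done = il.DefineLabel();
        il.Emit(OpCodes.Ldloc, cur);
        il.Emit(OpCodes.Ldc_I4, 48);
        il.Emit(OpCodes.Blt, fail);      // cur < '0' -> not a digit
        il.Emit(OpCodes.Ldloc, cur);
        il.Emit(OpCodes.Ldc_I4, 57);
        il.Emit(OpCodes.Bgt, fail);      // cur > '9' -> not a digit
        il.Emit(OpCodes.Ldc_I4_1);
        il.Emit(OpCodes.Br, done);
        il.MarkLabel(fail);
        il.Emit(OpCodes.Ldc_I4_0);
        il.MarkLabel(done);
        il.Emit(OpCodes.Ret);

        return (Func<Demo, bool>)dm.CreateDelegate(typeof(Func<Demo, bool>));
    }
}
```

Calling BuildIsDigit() once and invoking the returned delegate, e.g. on a Demo with Current = 53, exercises the emitted method.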
|
|
|
|
|
Quote: Running the code through a debugger and dropping to assembly. The only way I can do that reliably is with debug info, which may change how the JITter drops native instructions. I can't rely on it.
Probably, the answer is here: Do PDB Files Affect Performance?
Generally, the answer is: no. Debugging information is just an additional file, which helps the debugger match the native instructions to the source code - if implemented correctly, of course. The article is written by John Robbins.
|
|
|
|
|
I think that's about unmanaged code, and not the JITter
|
|
|
|
|
Well, buzzwords like .NET, VB .NET, C#, JIT compiler, ILDASM are used in this article only by accident. You are right.
|
|
|
|
|
I am tired and I read the first bit of it. Sorry. It's 3am here and I shouldn't be awake.
|
|
|
|
|
Wouldn't the Roslyn compiler stuff be a good place to look? It's open source afaik.
|
|
|
|
|
Probably not, since at best it uses Emit facilities and has nothing to do with the final JITter output.
|
|
|
|
|
The JIT (as well as the rest of the runtime) is also open source - there's an optimizer.cpp in that directory, which might be of interest.
Also in that directory is a file (viewing-jit-dumps.md ) which talks about looking at disassembly, and also mentions a Visual Studio plugin, Disasmo, that simplifies this process.
[Edit]Another option - use Godbolt - it supports C#![/Edit]
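There's also an environment-variable route on newer runtimes: DOTNET_JitDisasm makes the JIT print its native output for matching methods (this assumes .NET 7 or later; older runtimes used the COMPlus_ prefix and may have required a checked runtime build):

```shell
# Print the JIT's native code for any method named SetA to the console.
# DOTNET_TieredCompilation=0 skips the unoptimized tier-0 code so you
# see the optimized compilation immediately.
export DOTNET_JitDisasm="SetA"
export DOTNET_TieredCompilation=0
dotnet run -c Release
```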
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Oh wow. I learned two new things from your post. Thanks! Will check that out.
|
|
|
|
|
Even if it did, I wouldn't assume that it always would and would do so on all systems.
I would code explicitly and not use behaviour that isn't part of the doco.
|
|
|
|
|
Well, I didn't ask you what you would do.
And this isn't bizdev
|
|
|
|
|
If you ask a question about some super-fine peephole optimization, an answer that says "trying to do anything like that is a waste of your time" is an appropriate answer.
Years ago, I could spend days timing and fine-tuning code, testing out various inline assembly variants. Gradually, I came to realize that the compiler would beat me almost every time. Instruction sequences that looked inefficient actually ran faster when I timed them.
Since those days, CPUs have gotten even bigger caches, more lookahead, hyperthreading and what have you, all confusing tight timing loops to the degree of making them useless. Writing (or generating) assembler code to shave off single instructions was meaningful in the days of true RISCs (including pre-1975 architectures, when all machines were RISCs...) running at 1 instruction/cycle with (almost) no exceptions. Today, we are in a different world.
I really should have spent the time to hand-code the example you bring up in assembler, with and without the repeated register load, and time the two for you. But I have a very strong gut feeling about what it would show - so strong that I won't spend the time doing it.
|
|
|
|
|
I guess I just don't see it as a waste of time to look at a tech that's new to me and check whether its code generation does what I expect in terms of performance.
To be fair, I also look at the native output of my C++ code. I'm glad I have - even, if not especially, the times when it ruined my day, like when I realized how craptastic the ESP32's floating point coprocessor was.
|
|
|
|
|
If you are working on a jitter for one specific CPU, or a gcc code generator for one specific CPU, and your task is to improve the code generation, then you would study common methods for code generation and peephole optimization.
If you are not developing or improving a code generator (whether gcc or a jitter), the only reason to study the internals of one specific one is curiosity. Not to modify your source code, not even with "minor adjustments".
It can be both educating and interesting to study what resides a couple of layers below the layer you are working on. But you should remember that it is a couple of layers down. You are not working at that layer, and should not try to interfere with it.
(I should mention that I grew up in an OSI protocol world. Not the one where all you know is that some people have something they call 'layers', but one where layers were separated by solid hulls, and service/protocol were as separated as oil and water. An application entity should never fiddle with TCP protocol elements or IP routing, shouldn't even know that they are there! 30+ years of OO programming, interface definitions, private and protected elements -- and still, developers have not learned to keep their fingers out of lower layers, neither in protocol stacks nor in general programming!)
|
|
|
|
|