|
That seems to be assuming more than I'm usually comfortable with when it comes to MS. I've worked at Microsoft, and with Microsoft code, enough to expect the unexpected deep in the bowels of their frameworks. You should have seen me wrestle with some of the less often used typelib generation functions in oleaut32.dll. I was working there at the time, and nobody could tell me what the heck they were doing.
If they made the JITter produce different code for debug builds than for release, it would be totally on brand for them, is what I'm saying, even if it's not intuitive. You can't put anything past these people. You really can't.
And I know those are just configuration names, but Debug generates debug symbols and such. Does the JITter, for example, do something different if a PDB is present? Or is there some other magic, signaled by the linker dropping some flag in the binary's metadata? Probably not. "Probably" is doing a lot of heavy lifting there.
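One thing I can at least inspect is the DebuggableAttribute the compiler stamps on the assembly per configuration, which is supposedly what the JITter honors (along with whether a debugger is attached), rather than the PDB itself. A quick sketch:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

public static class JitFlagCheck
{
    // Supposedly the JITter keys off the assembly-level DebuggableAttribute
    // that the compiler emits per configuration (and off whether a debugger
    // is attached) - not off whether a .pdb file sits next to the binary.
    public static bool OptimizerDisabled(Assembly asm) =>
        asm.GetCustomAttribute<DebuggableAttribute>()?.IsJITOptimizerDisabled ?? false;

    public static void Main()
    {
        var asm = Assembly.GetEntryAssembly() ?? typeof(JitFlagCheck).Assembly;
        Console.WriteLine($"IsJITOptimizerDisabled: {OptimizerDisabled(asm)}");
        Console.WriteLine($"Debugger attached: {Debugger.IsAttached}");
    }
}
```

Running that against a Debug and a Release build of the same project shows what each configuration is asking the JITter for.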
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
I put that code snippet of yours into a tiny test program. Here is the disassembly window in VS2022 for x86 code:
class c {
char current;
public void setA(char c) {
this.current = c;
00EF0DF0 push ebp
00EF0DF1 mov ebp,esp
00EF0DF3 mov word ptr [ecx+4],dx
if ((this.current >= 'A' && this.current <= 'Z') ||
(this.current >= 'a' && this.current <= 'z')) {
00EF0DF7 movzx eax,word ptr [ecx+4]
00EF0DFB cmp eax,41h
00EF0DFE jl Test.c.setA(Char)+015h (0EF0E05h)
00EF0E00 cmp eax,5Ah
00EF0E03 jle Test.c.setA(Char)+01Fh (0EF0E0Fh)
00EF0E05 cmp eax,61h
00EF0E08 jl Test.c.setA(Char)+02Ah (0EF0E1Ah)
00EF0E0A cmp eax,7Ah
00EF0E0D jg Test.c.setA(Char)+02Ah (0EF0E1Ah)
Console.WriteLine("Argument is an alphabetic character");
00EF0E0F mov ecx,dword ptr ds:[3BB24A0h]
00EF0E15 call System.Console.WriteLine(System.String) (65A637B8h)
00EF0E1A pop ebp
00EF0E1B ret
And for x64 code:
class c {
char current;
public void setA(char c) {
this.current = c;
00007FFE45A90EE0 sub rsp,28h
00007FFE45A90EE4 mov word ptr [rcx+8],dx
if ((this.current >= 'A' && this.current <= 'Z') ||
(this.current >= 'a' && this.current <= 'z')) {
00007FFE45A90EE8 movzx ecx,word ptr [rcx+8]
00007FFE45A90EEC cmp ecx,41h
00007FFE45A90EEF jl Test.c.setA(Char)+016h (07FFE45A90EF6h)
00007FFE45A90EF1 cmp ecx,5Ah
00007FFE45A90EF4 jle Test.c.setA(Char)+020h (07FFE45A90F00h)
00007FFE45A90EF6 cmp ecx,61h
00007FFE45A90EF9 jl Test.c.setA(Char)+032h (07FFE45A90F12h)
00007FFE45A90EFB cmp ecx,7Ah
00007FFE45A90EFE jg Test.c.setA(Char)+032h (07FFE45A90F12h)
Console.WriteLine("Argument is an alphabetic character");
00007FFE45A90F00 mov rcx,1FE90003938h
00007FFE45A90F0A mov rcx,qword ptr [rcx]
00007FFE45A90F0D call System.Console.WriteLine(System.String) (07FFEA3F80DB0h)
00007FFE45A90F12 nop
00007FFE45A90F13 add rsp,28h
00007FFE45A90F17 ret
On both architectures, the code is identical with default settings for the Debug and Release configurations. I'd be surprised if it weren't, and I'd be surprised if - as you seemed to fear - the base register were reloaded for each of the four tests.
I do not have any ARM Windows PC available (but I'd sure like to...), so I can't tell what code is generated there. I'd be similarly surprised if the ARM code loaded the same base address four times.
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Yeah, the output is what I was hoping for, and sort of expecting.
That answers one question, so thank you.
|
|
|
|
|
Could this be a possible workaround to avoid those extra JIT compiler arguments?
var cur = this.current;
if( cur >= 'A' && cur <= 'Z' || cur >= 'a' && cur <= 'z') {
}
|
|
|
|
|
Not in the instance I'm using it in, not without a rework. I'd have to change the structure of the code, which is made more complicated by the fact that it's a CodeDOM tree instead of real code.
Before I do that, I want to make sure I'm not (A) doing something for nothing, and more importantly (B) introducing clutter or extra overhead in an attempt to optimize.
I've included a chunk of the state machine runner code which should illustrate the issue, I hope.
int p;
int l;
int c;
ch = -1;
this.capture.Clear();
if ((this.current == -2)) {
this.Advance();
}
p = this.position;
l = this.line;
c = this.column;
if (((((this.current >= 9)
&& (this.current <= 10))
|| (this.current == 13))
|| (this.current == 32))) {
this.Advance();
goto q1;
}
if ((((((((((this.current >= 65)
&& (this.current <= 90))
|| (this.current == 95))
|| (this.current == 104))
|| ((this.current >= 106)
&& (this.current <= 107)))
|| (this.current == 109))
|| (this.current == 113))
|| (this.current == 120))
|| (this.current == 122))) {
this.Advance();
goto q2;
}
if ((this.current == 97)) {
this.Advance();
goto q3;
}
if ((this.current == 98)) {
this.Advance();
goto q22;
}
q1:
if (((((this.current >= 9)
&& (this.current <= 10))
|| (this.current == 13))
|| (this.current == 32))) {
this.Advance();
goto q1;
}
return FAMatch.Create(2, this.capture.ToString(), p, l, c);
q2:
if ((((((this.current >= 48)
&& (this.current <= 57))
|| ((this.current >= 65)
&& (this.current <= 90)))
|| (this.current == 95))
|| ((this.current >= 97)
&& (this.current <= 122)))) {
this.Advance();
goto q2;
}
return FAMatch.Create(0, this.capture.ToString(), p, l, c);
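For comparison, the rework I mentioned might look something like this applied to the q1 whitespace loop - a hand-written sketch, not CodeDOM output, and it assumes Advance() is changed to return the new current character (an API change to the generated code):

```csharp
using System;

public class Lexer
{
    readonly string input;
    int pos;
    int current;

    public Lexer(string s) { input = s; pos = -1; Advance(); }

    // Hypothetical variant of Advance() that returns the new character,
    // so the caller never has to re-read the field.
    int Advance() => current = (++pos < input.Length) ? input[pos] : -1;

    // The q1 loop with this.current hoisted into a local: the field is
    // read once on entry instead of up to four times per test.
    public int SkipWhitespace()
    {
        int cur = current;
        while (cur == 9 || cur == 10 || cur == 13 || cur == 32)
            cur = Advance();
        return cur;
    }

    public static void Main()
    {
        var lx = new Lexer("  \t\nabc");
        Console.WriteLine((char)lx.SkipWhitespace()); // prints "a"
    }
}
```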
|
|
|
|
|
Don't expect to see any optimizations in the MSIL code, even in the Release configuration. They are done by the JIT compiler, and may be more effective there, since the exact CPU type is known at runtime.
You may try to look at the optimized native assembly code, but that is a difficult task, since there is a huge distance from the source C# code and MSIL to the machine language instructions.
modified 21-Jan-24 3:51am.
|
|
|
|
|
I'm aware of that. I am generating MSIL instructions using Reflection Emit as part of my project.
The other part generates source code. I would like to ensure that this source code produces IL that will then be optimized appropriately by the JITter. If not, I will generate the source code differently; my interest is in the post-JITted code, not the IL.
modified 21-Jan-24 1:05am.
|
|
|
|
|
Quote: Running the code through a debugger and dropping to assembly. The only way I can do that reliably is with debug info, which may change how the JITter drops native instructions. I can't rely on it.
Probably, the answer is here: Do PDB Files Affect Performance?
Generally, the answer is: No. Debugging information is just an additional file, which helps the debugger match the native instructions to the source code. Of course, if implemented correctly. The article is written by John Robbins.
|
|
|
|
|
I think that's about unmanaged code, and not the JITter
|
|
|
|
|
Well, buzzwords like .NET, VB .NET, C#, JIT compiler, ILDASM are used in this article only by accident. You are right.
|
|
|
|
|
I am tired and I read the first bit of it. Sorry. It's 3am here and I shouldn't be awake.
|
|
|
|
|
Wouldn't the Roslyn compiler stuff be a good place to look? It's open source afaik.
|
|
|
|
|
Probably not, since at best it uses the Emit facilities and has nothing to do with the final JITter output.
|
|
|
|
|
The JIT (as well as the rest of the runtime) is also open source - there's an optimizer.cpp in that directory, which might be of interest.
Also in that directory is a file (viewing-jit-dumps.md) which talks about looking at disassembly, and also mentions a Visual Studio plugin, Disasmo, that simplifies the process.
[Edit]Another option - use Godbolt - it supports C#![/Edit]
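For the env-var route that viewing-jit-dumps.md describes: on .NET 7 and later, a stock release runtime can print JIT disassembly for matching methods directly. The method pattern below is hypothetical; substitute your own type and method names:

```shell
# Print JIT disassembly for matching methods (.NET 7+; older runtimes
# used the COMPlus_ prefix). "Type:Method" pattern is hypothetical.
export DOTNET_JitDisasm="Tokenizer:NextMatch"
# Disable tiered compilation to see the fully optimized code immediately,
# rather than the quick tier-0 version.
export DOTNET_TieredCompilation=0
dotnet run -c Release
```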
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Oh wow. I learned two new things from your post. Thanks! Will check that out.
|
|
|
|
|
Even if it did, I wouldn't assume that it always would, or that it would do so on all systems.
I would code explicitly and not rely on behaviour that isn't part of the doco.
|
|
|
|
|
Well, I didn't ask you what you would do.
And this isn't bizdev
|
|
|
|
|
If you ask a question about some super-fine peephole optimization, an answer that says "Trying to do anything like that is a waste of your time" is an appropriate answer.
Years ago, I could spend days timing and fine-tuning code, testing out various inline assembly variants. Gradually, I came to realize that the compiler would beat me almost every time. Instruction sequences that "looked" inefficient actually ran faster when I timed them.
Since those days, CPUs have gotten even bigger caches, more lookahead, hyperthreading and what have you, all confusing tight timing loops to the point of making them useless. Writing (or generating) assembler code to shave off single instructions was meaningful in the days of true RISCs (including pre-1975 architectures, when all machines were RISCs...) running at one instruction per cycle with (almost) no exceptions. Today, we are in a different world.
I really should have taken the time to hand-code the example you bring up in assembler, with and without the repeated register load, and time the two for you. But I have a very strong gut feeling about what it would show. I am so certain that I won't spend the time to do that for you.
|
|
|
|
|
I guess I just don't see looking at a new (to me) tech for code generation to see if it's doing what I expect in terms of performance as a waste of time.
To be fair, I also look at the native output of my C++ code. I'm glad I have - even, if not especially, the times when it ruined my day, like when I realized how craptastic the ESP32 floating point coprocessor was.
|
|
|
|
|
If you are working on a jitter for one specific CPU, or a gcc code generator for one specific CPU, and your task is to improve the code generation, then you would study common methods for code generation and peephole optimization.
If you are not developing or improving a code generator (whether gcc or a jitter), the only reason for studying the architecture of one specific one is curiosity. Not for modifying your source code, not even with "minor adjustments".
It can be both educating and interesting to study what resides a couple of layers below the layer you are working on. But you should remember that it is a couple of layers down. You are not working at that layer, and should not try to interfere with it.
(I should mention that I grew up in an OSI protocol world. Not the one where all you know is that some people have something they call 'layers', but one where layers were separated by solid hulls, and service/protocol were as separated as oil and water. An application entity should never fiddle with TCP protocol elements or IP routing, shouldn't even know that they are there! 30+ years of OO programming, interface definitions, private and protected elements -- and still, developers have not learned to keep their fingers out of lower layers, neither in protocol stacks nor in general programming!)
|
|
|
|
|
Why not download ILSpy[^] and nosey at the produced IL code? Just compile your application in release mode and take a look at the produced IL to see whether it's been optimised. I would hazard a guess that it probably doesn't optimise something like that, but I could be wrong!
|
|
|
|
|
Because I'm not interested in the IL code, but in the post-JITted native code.
|
|
|
|
|
Maybe you ought to be more interested in the IL code.
The optimizations that are really significant to your application are not concerned with register loads, but with techniques such as moving invariant code out of loops, doing arithmetic simplifications, etc. Peephole optimization (done at code generation) is a combination of "already done, always" and "no real effect on execution time".
I have had a few surprises with C# performance, but they were typically related to data structures, and they were discovered by timing at application level. To pick one example: I suspected that moving a variable out of a single-instance class, making it a static, would make address calculation simpler and faster, compared to addressing a variable within an object instance. I was seriously wrong; that slowed down the application significantly. I could have (maybe should have) dug into the binary code to see what made addressing a static location significantly slower, but as I knew the effect already, I didn't spend the time when I was working on that application.
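The kind of application-level timing that exposed it can be sketched like this (illustrative names only; a serious measurement would use BenchmarkDotNet rather than a raw Stopwatch):

```csharp
using System;
using System.Diagnostics;

class FieldTiming
{
    static int staticCounter;
    int instanceCounter;

    // Two hot loops differing only in whether they bump a static
    // or an instance field - the comparison described above.
    void BumpInstance(int n) { for (int i = 0; i < n; i++) instanceCounter++; }
    static void BumpStatic(int n) { for (int i = 0; i < n; i++) staticCounter++; }

    static void Main()
    {
        const int N = 100_000_000;
        var obj = new FieldTiming();

        var sw = Stopwatch.StartNew();
        obj.BumpInstance(N);
        Console.WriteLine($"instance field: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        BumpStatic(N);
        Console.WriteLine($"static field:   {sw.ElapsedMilliseconds} ms");
    }
}
```

The absolute numbers mean little; it is the relative difference between the two loops, measured on the target machine, that settles the question.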
|
|
|
|
|
I'm intimately familiar with the IL code already. I both generate code that then gets compiled to it, and I Reflection Emit it directly.
I get that you don't want me to be concerned about the things that I am concerned about. Get that I am anyway.
I already optimized at the application level.
I should add, I inlined one method and got a 20% performance increase. That's strictly JIT manipulation. You don't think it's worth it; my tests say otherwise.
And one more thing - not paying attention to this, along with some broken benchmarks (which shielded me from seeing the performance issues), led me into a huge mess.
Sure if you're writing an e-commerce site you don't have to be concerned with inner loop performance and "performance critical codepaths" because to the degree that you have them, they are measured in seconds to complete or longer.
Lexing, or regex searching, is not that. If you don't think manipulating the JITter is worth it, then why don't you ask Microsoft why they mark up their generated regex code with attributes specifically designed to manipulate the jitted code?
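For concreteness, that attribute-level JIT markup looks like this. MethodImplOptions.AggressiveInlining is the real mechanism; the helper itself is just an illustration, not code from my project:

```csharp
using System;
using System.Runtime.CompilerServices;

public static class CharClass
{
    // Ask the JITter to inline this hot helper at every call site.
    // The generated System.Text.RegularExpressions code carries markup
    // in the same spirit.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static bool IsAsciiLetter(int c) =>
        (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z');

    public static void Main()
    {
        Console.WriteLine(IsAsciiLetter('q')); // True
        Console.WriteLine(IsAsciiLetter('3')); // False
    }
}
```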
|
|
|
|
|
"I should add, I inlined one method and got a 20% performance increase."
You are not saying: 20% of what? The entire application, or that specific function call?
And: Inlining is not peephole optimization. The jitter didn't do that. The compiler generating the IL did.
Inlining isn't quite at the same level as changing the algorithm, but much closer to that than to register allocation. In another post, I mentioned my experience with trying to make a variable static, rather than local to the single instance. Inlining is more at that level.
I am saying that modifying your source code to affect peephole optimization is a waste of energy. Inlining is at a different level, and might be worth it, especially if the method is small and called only in a few places.
|
|
|
|
|