|
I just came to the same conclusion.
Image processing ...
|
|
|
|
|
I would expect the conclusions to be substantiated by some results and source code.
|
|
|
|
|
Where did you learn all that? Go ask your school for your money back!
|
|
|
|
|
My biggest issue with this topic is that it's purely subjective in nature. You cite no valid references nor valid examples of why. The case of type-safety is a non-issue. The primary area where type-safety has been cited as a performance impediment is through the use of arrays. Because arrays were implemented in a covariant (and therefore not statically type-safe) manner, an extra check is performed during insertion to ensure that the data you're inserting is valid based upon the storage type the array had when it was created.
Hyper-Threading and multi-core architectures don't even enter the picture of JIT compiler optimizations. There's no magical /parallel switch; programs need to be developed with multithreading in mind. This is due to the complex manner in which the data associated with the individual threads interconnects. If one thread reads a field while another is altering it, you can end up in a case where two threads are aware of different, potentially invalid, program states, leading to the end result of a multithreaded task being invalid. For example, if three threads simultaneously increment the same integer, and they each retrieve the same value, increment it, and store it, you end up with one increment's worth of result, since the value each received was the same. Locking and other synchronization is necessary to ensure that when one thread is about to change the value, the others can't read it until the change is done. This isn't an optimization a compiler can magically know. Most code isn't as simple as incrementing a value on three threads; it's much more complex, multiple data points are usually involved, and further, you can't assume to know the degree to which multithreading can occur, since future machines will likely contain dozens of cores (mine as it is has six, with Hyper-Threading on top of that).
You are right about one thing: there is a double compilation involved. The first pass compiles into intermediate code, a stack-based IL that's similar to assembly but different in that it's aware of complex type systems. The second compilation is aware of the actual system's architecture, and can use instruction sets that couldn't be assumed on the machine that originally compiled the code.
Here's a small article that covers the differences in speed of C++ vs. C#: http://www.grimes.demon.co.uk/dotnet/man_unman.htm
|
|
|
|
|
Point 1: "Hyper-Threading" is like having a 2-core CPU (although not quite as good). You need to multithread to take advantage of it. Multi-core CPUs take multithreading to the next level. Neither is an instruction set; the programs just need to be written to use multiple threads. You could do the same on a single-core CPU, it's just that the threads would share the processor's time and thus not really improve performance.
An instruction set extension is something like MMX, SSE, or 3DNow!. They enable intrinsic functions that speed up certain operations (packed integer math in the case of MMX, floating-point vector math in the case of SSE).
|
|
|
|
|
True, C++ emits barebones instructions to ensure platform compatibility, but seriously, C# is just another managed language, and comparing a managed language with a native one isn't a fair thing to do. C# emits abstract, platform-independent run-time byte-code called MSIL; with this, a C# program practically defines its procedures at the highest level possible, allowing the run-time to perform an operation defined by MSIL instructions however it wishes. Through dynamic updating, the .NET run-time may be installed or change its method of execution at run-time to fit the profile of the host hardware, for example using various SIMD instruction sets to decode media. Hyper-Threading is not exactly like having two cores: the Intel x86-64 architecture defines an entire core as having an execution core, various caches (translation-lookaside buffers, write buffers for code with different memory orderings, per-core L1 caches, per-package L2 caches), and many other facilities. Hyper-Threading duplicates only the architectural state (registers and the like) per physical core, so two simultaneous threads may share the same core while dispatching their operations onto the same leftover execution resources.
Native programs usually rely on the operating system's API for managing and dispatching threads, and the OS is where the worry about hardware threading stops. CreateThread, for example, passes through the kernel, and the kernel decides how and where a thread is created and executed; through this independent programming method, native programs on different systems with a consistent CPU architecture need not worry about specific CPU variants. It's even better for intermediate languages, because the run-time, unlike a native program, promises to behave the same way across different architectures, enabling optimizations such as those SIMD instructions. Also, you're using the term "intrinsic" incorrectly: when using SIMD (or similar) instructions from a syntax such as C, they are called "intrinsics" because they belong to the compiler and the CPU targets the compiler supports. The point here is that native programming allows the use of special compilers that expose intrinsics in the language syntax, thus allowing one to program with faster, ISA-specific native instructions; but this does not stop a run-time from attempting to speed up the same operations by means of CPU-dependent instructions.
|
|
|
|
|
C++ wins only in performance, but in other areas it kisses C#'s feet.
No Manual Garbage Collection
Huge .NET-Framework library
Autoboxing - every type can be treated as if it inherits from object
Supports constructor-chaining (one constructor can call another constructor from the same class)
When a virtual method is called in a constructor, the method in the most derived class is used
Static constructors (run before the first instance of the class is created)
Advanced runtime type information and reflection
Built-in support for threads
No need for header files and #includes
Structs and classes are actually different (structs are value types, have no default constructor, and in general cannot be derived from)
Supports properties
Readonly members are const, but can be changed in the constructor
Finally block for exceptions
Arrays are objects
Supports the base keyword for calling the overridden base class method
and lots more.
|
|
|
|
|
Xmen wrote: C++ wins only in performance but in other things it kisses the C# feet.
I agree
|
|
|
|
|
I like C# too, and the .NET Framework is great. Unfortunately it's just too darn slow, especially on Windows Mobile devices. I seem to spend all my time trying to optimize code, and never get close to the performance of C++.
A simple function to search a .csv file would take minutes to run in C#, but just milliseconds in C++. I eventually gave up on .NET CF and re-wrote my project using a combination of native eVC++ and Free Pascal for the GUI.
Some benchmarks on the internet seem to suggest that C++ is only 3 to 10 times faster than C#, but my tests show the speed difference to be much greater on mobile devices (at least 100x).
rjklindsay
|
|
|
|
|
I did a lot on mobile. Some comments:
- Debug builds run like a dog.
- Most of the code is not even implemented in C#; it just calls native code via P/Invoke, so you are running C++ anyway. This includes nearly all WinForms code.
- Do not use reflection. A lot of people try to use reflection and fancy things like that, but the Compact Framework doesn't do the same caching; the same goes for serialization of types.
If you avoid these things, I found performance was better than eVC, since it was quicker to optimize. Just KISS.
Ben
|
|
|
|
|
Nice article. What do you think of the D language, anyway? I have heard it compiles to native code, yet it has garbage collection, so you won't end up with memory leaks after hours of running your application.
|
|
|
|
|
If performance is important, a C/C++ application can be compiled twice -- the first time to generate profiling instrumentation, and the second time to optimize the code based on the profiling data gathered from running the application. The performance improvement can be quite dramatic, and it is something not available to C# programs.
A JIT compiler is restrained by time. It can only take the time to make a broad pass at optimization. Otherwise, it runs the risk of spending more time figuring out how to optimize the code than the time saved by the actual optimizations. A C/C++ compiler, on the other hand, can take however much time it needs to analyze the code and provide as much optimization as it can manage.
It is true that a JIT compiler can theoretically use processor instructions not available to a previously compiled application. However, this will only be of benefit if those special instructions are designed to speed up an application. There isn't much motivation for a chip designer to add new instructions for applications -- by the time legacy applications have been updated to take advantage of them, that chip will be several generations old. The primary advantage of new instructions is for the operating systems that run on them. Unlike most applications, an operating system typically runs a large number of concurrent tasks, so any new instructions to speed that up are a large advantage -- but not one that applications will benefit from.
|
|
|
|
|
A JIT compiler is restrained by time, but heavy optimizations are possible; however, complex heuristics are needed to decide when to spend time on them.
About new processor instructions: what you say is theoretical reasoning not backed by any facts, and indeed false (you're welcome to post specific evidence). Most new instructions are meant to be used by applications (the only OS-specific ones I can remember are sysenter/syscall for OS-level system calls, instead of software interrupts like "int 0x80" on Linux or "int 21h" on DOS). Most SIMD operations can't possibly be used in an OS (say, SIMD floating-point instructions), and performance-intensive applications like media manipulation (codecs/editors/...) or video games will use them quite soon. In the other cases, I'd say that you need to introduce them really early: introduce an op today to make programs faster tomorrow (that's the case with cmov, added in the P6/Pentium Pro; lots of binaries in the wild are still built to run on a 486/Pentium).
Profile-guided optimization is actually something most commonly used by JITs. It's true that the speedups can be dramatic, but with ahead-of-time compilation, profiling data from a different usage scenario can invalidate some of the results; this problem does not apply to a JIT, which profiles the actual run.
|
|
|
|
|
... C++/CLI
The best of both worlds... or close enough
I've been investigating this technology for quite some time now. I'm writing a game engine whose main purpose is to be easy to use and extend. C++/CLI not only provides me with the ability to use my assemblies from whatever language is out there for the .NET platform, but also to optimize the code for as much performance as possible.
Best,
Hernan
|
|
|
|
|
Both C++ and C# will run at the "SAME" speed in .NET, as both are converted into IL. Unless you use unsafe {} blocks, you will not find any significant differences with C++/CLI. If you really want performance gains, write your C++ algorithms in a classical C++ DLL and call them through P/Invoke, as illustrated in my Point 3.
http://www.3dbuzz.com/vbforum/showthread.php?t=135493
Alternatively, you can search for game development in C# vs. C++:
http://www.google.com/search?aq=f&hl=en&q=c%23+vs+c%2B%2B&btnG=Search
Thanks for making a lively discussion.
Mugunth
Thanks & Regards,
M.Mugunth Kumar
M +65 82448625
W http://mugunth.kumar.googlepages.com
B http://tech-mugunthkumar.blogspot.com (Technology Blog) *NEW
Nanyang Technological University,
Wee Kim Wee School of Communication and Information,
31 Nanyang Link, Singapore - 637718.
|
|
|
|
|
C++/CLI provides custom data marshalling, which may result in huge performance gains. That is not possible in C#.
Eduardo León
|
|
|
|
|
Computers now are faster. Who ever dreamed about a computer with more than 4 GB of memory? So use whatever language helps you finish your task.
|
|
|
|
|
True, they are faster, but not fast enough for EVERY task, so you still have to optimize, cache data, pre-calculate, etc... AND design your code well.
|
|
|
|
|
Well, C# will be better: it does not need a lot of thinking (the garbage collector does a lot of memory management for you), and it helps you avoid a lot of mistakes, so I vote for C#.
That said, C++ is also powerful.
|
|
|
|
|
I vote for knowledge: programming without knowledge is just like driving in the reverse lane on the highway... it can work, but...
|
|
|
|
|
yassir.2 wrote: who has ever dreamed about a computer with more 4Gb of memory
And yet, on a 4-processor machine, I spent several months making my research application work efficiently with only 6 GB of memory.
yassir.2 wrote: so use whatever language that would help u finishing ur task
I would make sure that the language is suitable for the application you are writing. .NET is certainly not suitable for the work I do, but it will be fine for 90% of business applications.
John
|
|
|
|
|
Java/C# are entering the HPC picture as well. But if you are tight on memory, stick with C++. Garbage collectors can be faster than manual memory management, but they use a lot more memory as any paper on the topic will tell you.
|
|
|
|
|
Hello,
Where should I start?
First, I like articles that deal with such complex topics, but I think the current version needs some improvements.
I don't think it's possible to find a general answer, because it depends on the point of view.
You write: The reason why C# compiled applications could be faster is that, during the second compilation, the compiler knows the actual run-time environment and processor type and could generate instructions that targets a specific processor.
It will also not be able to take advantages of the Core 2 duo or Core 2 Quad's "true multi-threaded" instruction set as the compiler generated native code does not even know about these instruction sets.
Never heard of it. The most common way to control or optimize threading is the Win32 API (e.g. controlling the affinity mask to specify the CPU). The API hides such details and generalizes the usage via the HAL. Then you already have a "true multithreaded" application. If your application is well designed, it doesn't matter which CPU environment is present.
You write: In the earlier days, not much changes were introduced to the instruction set with every processor release. The advancement in the processor was only in the speed and very few additional instruction sets with every release. Intel or AMD normally expects game developers to use these additional instruction sets. But with the advent of PIV and then on, with every release, PIV, PIV HT, Core, Core 2, Core 2 Quad, Extreme, and the latest Penryn, there are additional instruction sets that could be utilized if your application needs performance. There are C++ compilers that generate code that targets specific processors. But the disadvantage is the application has to be tagged as "This application's minimum system requirements are atleast a Core 2 Quad processor" which means a lot of customers will start to run away.
I must disagree. Well, special instruction sets like SSE exist, but there are several ways to use them dynamically.
You write: How many of us can manage memory efficiently in a C++ application that's so huge say a million lines of code? It's extremely difficult to "well-design" a C++ program especially when the program grows larger. The problem with "not-freeing" the memory at the right time is that the working set of the application increases which increases the number of "page faults". Everyone knows that page fault is one of the most time-consuming operation as it requires a hard disk access. One page fault and you are dead. Any optimization that you did spending your hours of time is wasted in this page fault because you did not "free" memory that you no longer needed.
I must also disagree: in C++ there are several ways to manage memory very effectively (more effectively than in C#).
Of course page faults are a critical situation, but a page fault is only raised if a requested memory page isn't present in RAM (i.e. it is stored in the swap file).
That's all. I hope this is constructive feedback.
Ciao...
|
|
|
|
|
Quick article, and the information about the two-stage compilation is good.
I am new to the C#/.NET world, but I think the second compilation, done when we run the application, will make application startup slower.
IMHO, we can use C++ smart pointers to manage memory without problems,
and we don't need to ship a big runtime with our application.
|
|
|
|
|
To improve application load time, installer programs can be set to "NGen" the code. NGen is a native code generator that compiles a C# application to a native executable; this step is usually done by the installer. Secondly, the .NET Framework is present on all machines after Windows XP SP2, which means you don't have to ship the framework separately for at least 90% of the machines.
Regards,
Mugunth
|
|
|
|
|