|
RedSonja wrote: I never use the optimiser in Visual Studio
I'm not quite sure how to read that…
Which optimiser? The compiler's?
And how do you mean you never use it in Visual Studio?
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Erm, I'm assuming you mean you don't use the C#/VB.NET compiler and JIT optimizations when you say "optimiser in Visual Studio". And, if that is really true, I think it's ludicrous!
Any optimizations you make purely in C#/VB.NET code are at a very high level. No matter how hard you try, you are incapable of making the kinds of optimizations the compiler can, because many IL constructs can only be emitted by the compiler itself (unless you write IL directly). The JIT makes even lower-level optimizations, at the machine-code level, tuned specifically for the operating environment (physical hardware, virtual resource allocations, etc.) in which the code is actually running (and those aspects may change from run to run). No matter how much you try to optimize at the source level, you cannot foresee each and every operating environment and account for every single optimization, nor are you capable of it.
Optimizing at a high level is only one level of optimization, and it will only take you so far. There are additional levels of optimization that SHOULD be done, but which a developer should not have to worry about (hence the reason we use high-level languages). I think it's a load of crap that you don't use the compiler optimizations, and I think it's humorous that you actually think you're doing a better job than the compiler and JIT by "saving every microsecond you can."
Give me a break. Use the compiler for what it is: a tool to reduce your workload and handle significant, lower-level optimizations for you.
|
|
|
|
|
I should have said: at the moment I use Visual Studio only for C++. I just got a new version (version 9 SP1). The version before that had some problems with the optimisers; I had memory leaks and strange crashes. Colleagues said to try disabling optimisation, and suddenly things got a lot better. Maybe the new version is better. I will give it a try when I get time, since you recommend it. Saving microseconds is a habit of mine, not a work ethic. Actually, some of my datasets are very large indeed, and shaving a few microseconds off every loop can make a measurable difference. The customers are really interested in seconds; I gather their survival depends on it. Soon we will get a more modern processor and everything will be faster.
I use other tools for tidying my code, like Together and DevPartner. All very, very useful. My code is safety critical and I have to prove I have no memory leaks, no untested code, no uncalled code, no uninitialised variables, etc, etc. We have long lists of guidelines, which we can enter in the tools to check our code for us. We don't get to choose our tools, they are sent from above.
Despite its quirks, Visual Studio is a fine thing. I came from Unix originally, and Visual Studio, along with Visual Basic for Applications, was what persuaded me I could work in Windows after all. I'm not sure I would want to go back to vi and Emacs now. I liked the total control, but Visual Studio takes a lot of work off my hands; the interface to external stuff just works, and I don't have to mess about with it.
VBA has a better debugger, though; the ability to move the pointer back and try again is brill.
***********************************************************
Some time later. I switched on Full Optimise for release mode and it seems to work OK. The older colleagues think it is risky; the younger ones don't see the problem, they optimise every time. Well, I have to run a test programme this week, so I shall leave the optimisation on and see how it goes.
Contrary to what my husband says, I do listen to advice!
------------------<;,><-------------------
modified on Wednesday, October 7, 2009 2:54 AM
|
|
|
|
|
Well, when it comes to C++, I can't say. Visual Studio has never had the greatest C++ support. This is where C# truly shines. Like VB, it has some stunning debugger capabilities, but that isn't the best part about it. Visual Studio and C# were designed to go together, and the static analysis of C# is truly amazing. You generally don't need additional tools to verify the vast bulk of your code for correctness (having ReSharper is a big bonus and offers some improvements, but Visual Studio itself covers most of it). C# also has a progressive compiler/optimizer which performs phenomenally well.
While you may be a C++ gal, I highly recommend looking into C#. You won't have a problem with the syntax, and the functional improvements it has over C++ will likely improve your productivity by another order of magnitude beyond what you have now (similar to the difference between vi/Emacs and Visual Studio). You still need to write optimal code to gain the best performance, but doing so isn't nearly as much of a chore as it is with C++.
|
|
|
|
|
RedSonja wrote: The version before that had some problems using the optimisers, I had memory leaks and strange crashes. Colleagues said try disabling optimisation, and suddenly things got a lot better.
Wow! I'm amazed by your reasoning. Not to mention the fact that you wrote a significant portion of code without ever trying to build your project and run it in release mode! And you're working on a safety-critical application?
|
|
|
|
|
Well, of course it was a lot more complicated than that... I do indeed test in release mode too. We all do, and on the target as well. Actually we spend a lot of time testing.
As it happens a lot of my code is inherited C code (and a few other languages); there are many things in it I don't like and can't change, and a lot of things I have improved over the years. It's not bad as safety critical stuff goes. I have attended hardware trials and I am still alive...
When I did my "Convert from Unix to Visual Studio" course some years ago it was received wisdom that one didn't use the optimisers. Things have changed since then. Me too, if you read the posts here. I am now a convert and am preaching to my colleagues.
------------------<;,><-------------------
|
|
|
|
|
... and, on a charitable day, the performance of the team and company I work for.
With regards to performance as it applies to software, this is best handled as part of everyone's continuing education in software development and not as part of any specific project. After all, do people really think to themselves 'well, I was going to go with the bubble sort but for performance reasons I'd better not'?!?
|
|
|
|
|
In my experience, most performance problems are a result of algorithm design. Sometimes the optimizer can save you, but most times it can't. If debug builds show poor performance, then it's time to start fixing the problem right then rather than waiting.
patbob
|
|
|
|
|
What about performance problems that are not algorithmic, but architectural? The use of chatty, envious interfaces over less chatty, more encapsulated interfaces. Going single-threaded because it is easier (algorithmically and conceptually) than using multiple threads (and thereby wasting the benefit of the multiple cores/processors that are most likely available). Implementing a system that requires all components to be deployed locally, rather than using a distributed service-oriented approach that could offer significantly greater scalability, composability, etc. etc.
Not all performance issues are purely algorithmic. Algorithms solve business problems, but they don't really solve technical, non-functional problems. Algorithms can indeed be written poorly (i.e. BubbleSort vs. QuickSort), but that won't matter much if you need to run that algorithm on dozens, hundreds, or thousands of independent sets of data simultaneously.
modified on Wednesday, October 7, 2009 12:38 AM
|
|
|
|
|
Point well made. That's what I had in mind, but it certainly isn't what I wrote.
FYI, there's another class of design-related performance bug I've had to find and fix -- blowing the CPU's data or instruction cache while processing in a loop. It is simply amazing how much performance you can lose doing either. Sometimes the optimizer will save you, but a lot of times it can't.
In my experience, optimizers are good at making good code run just a little faster or beat on the CPU just a little less. They rarely fix true performance problems.
patbob
|
|
|
|
|
Good point about the CPU cache... that's a level of optimization most programmers rarely think about. I guess it could be considered a drawback of high-level languages. In that case, proper algorithmic tuning is key, as a blown cache can really destroy algorithmic performance.
|
|
|
|
|
When moving large amounts of data, using the assembler instruction movnt (and its related instructions) can reduce the time taken by 50% or more (movnt writes data to memory without also writing it to the cache, whereas a standard mov write goes to memory AND the CPU cache).
|
|
|
|
|
"In which phase of your software project do you actually care about performance?"
Obviously you should "care about" performance all the way through from the very beginning, but that doesn't mean you actually do anything about it.
"When do you decide that the go-faster pedal needs to be applied?"
You can't hit the "go-faster pedal" until you have at least a working prototype to which the performance can be compared. And you can't truly judge performance until you're in the production environment.
<Anecdote>
On my last job I had a Windows Service that read data from a text file created by a third-party product and inserted it into an SQL Server database.
In production it could process one hundred rows per second, but in test it could only process ten rows per second -- and therefore couldn't keep up with the data. Should I have tried to improve the system so that it could keep up even though there was no problem in production? I don't think so.
</Anecdote>
|
|
|
|
|
I do care about performance early on in projects but it takes a back seat to implementing all the feature requirements. Thus it only gets worked on late in the effort.
|
|
|
|
|
|
There is a bit of a problem with the wording of the answers in that there is more than one “correct” answer (IMO) …
One should keep “performance” in mind when coming up with the initial design, but there is a problem in trying to “optimize” before the system is operational and a regular profile has been established.
Any “optimizations” attempted before that time will most likely drain resources and can prove ineffective or even detrimental until a realistic load pattern is established (after going live).
|
|
|
|
|
It very much depends on the project.
If there is a requirement for specific performance targets, I'll make it something I think about from the start. The whole design will include performance considerations right from the beginning.
On the other hand, if there aren't any specific performance requirements, I'll pay less attention to it until we have implemented some functionality.
It also depends on the level of risk of bad performance. Server-based apps have a higher risk of performance issues than basic client-side-only GUI apps. These factors will affect where I plan to start considering performance. (Server apps get consideration from the start of the requirements phase; simple client apps with nothing complex probably don't get any consideration at all unless they actually don't perform well.)
Simon
|
|
|
|
|
As stated in previous posts (by someone else), performance is a factor within the bounds of the implementation. It's not possible to consider performance fully at the design stage for several reasons, the biggest being that design is driven by the feature set of the application, which is usually set by the client's needs.
Everybody agrees that performance is about 'doing things the best possible way', and that usually involves terms like 'Speed', 'Ease of use' and 'Simplicity'. But things get a bit complicated when those same words come from the mouths of an 'End User', a 'Developer' and a 'Manager'.
Tip ... when discussing 'Performance', make sure you speak the 'same language' as the other guy(s)!
PS: After a few years of trying, I've come to the inevitable conclusion that performance is located at the very spot where the outer boundaries of the 'acceptable software' criteria of everyone involved meet ....
modified on Monday, October 5, 2009 8:23 AM
|
|
|
|
|
It can be done beforehand at large corporations that meet for months before the project is given to a room of 20 programmers. But in the 'real world', you are absolutely correct. I'm given a project from the start. I have to design it, code it, and distribute it in a specified time period. I don't have time to optimize anything until I start actual beta testing at a few of our locations.
|
|
|
|
|
Wonder what percentage of the 54% who say they optimize their code during design and implementation are actually enhancing the truth?
Judging by the number of projects that have serious issues afterwards, I would bet quite a few. It is like scratching your nose ... not something one admits to publicly.
You can always trust people to tell you what they think you want to hear...
|
|
|
|
|
Actually, we as an organisation care about performance in design, and not so much in implementation until the end of each sprint...
|
|
|
|
|
OK, which company is that, and where can I send my CV ....
|
|
|
|
|
You're taking too much of an all-or-none tack.
Doing both together, from the beginning is sensible -> a result of experience eventually gained.
That does not, however, preclude further optimizations anywhere along the path, including after what one could refer to as completion (from the feature point of view, at least).
Experience results in some optimizations "to begin with" - because you know where you're going (pretty much) and you've been there before (or some place that is much like it).
(too bad I don't optimize my posts).
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"It's a sad state of affairs, indeed, when you start reading my tag lines for some sort of enlightenment. Sadder still, if that's where you need to find it." - Balboos HaGadol
|
|
|
|
|
I am not advocating any approach, just observing that in reality a lot of projects are challenged and that doesn't seem to align with what people are saying they are doing. It is obviously the best theoretically to think about performance all the time but that is not real life. The better skilled the developer the more they can cheat and get away with it. By cheat I mean take shortcuts. Shortcuts are for those with battle scars and knowledge.
This post is not optimized either
|
|
|
|
|
I think the phrasing of the 1st choice explains its popularity:
<i>I care about performance during both the design and implementation phases</i>
It just says that you <i>care</i>, or are mindful of performance during the design phase.
It doesn't say anything about actually optimizing the code.
It's a pretty safe assumption to think that most developers at least care about the idea of performance during the design phase.
I suspect that the percentage would be much lower if it were phrased in terms of actually doing something about it (iteratively testing and optimizing the code during development).
Jordan
|
|
|
|
|