|
In my experience, most performance problems are a result of algorithm design. Sometimes the optimizer can save you, but most times it can't. If debug builds show poor performance, it's time to start fixing the problem right then rather than waiting.
patbob
|
|
|
|
|
What about performance problems that are not algorithmic, but architectural? The use of chatty, envious interfaces over less chatty, more encapsulated ones. Going single-threaded because it is easier (algorithmically and conceptually) than using multiple threads (and therefore wasting the benefit of the multiple cores/processors that are most likely available). Implementing a system that requires all components to be deployed locally, rather than using a distributed service-oriented approach that could offer significantly greater scalability, composability, etc.
Not all performance issues are purely algorithmic. Algorithms solve business problems, but they don't really solve technical, non-functional problems. Algorithms can indeed be written poorly (e.g. BubbleSort vs. QuickSort), but that won't matter much if you need to run that algorithm on dozens, hundreds, or thousands of independent sets of data simultaneously.
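To illustrate the point about many independent datasets: here is a minimal sketch (a hypothetical `sort_all` helper, not from any post in this thread) where the win comes from the architecture -- fanning the work out across cores -- rather than from the choice of per-dataset algorithm.

```cpp
#include <algorithm>
#include <future>
#include <vector>

// Sort many independent datasets concurrently. The per-dataset algorithm
// (std::sort here) matters less than whether the work is actually spread
// across the available cores instead of running single-threaded.
void sort_all(std::vector<std::vector<int>>& datasets) {
    std::vector<std::future<void>> tasks;
    for (auto& d : datasets)
        tasks.push_back(std::async(std::launch::async,
                                   [&d] { std::sort(d.begin(), d.end()); }));
    for (auto& t : tasks)
        t.get();  // wait for every dataset to finish
}
```

Whether `std::async` is the right dispatch mechanism (versus a thread pool) is itself an architectural decision; the sketch only shows that the parallel structure, not the sort routine, is what exploits multiple cores.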
modified on Wednesday, October 7, 2009 12:38 AM
|
|
|
|
|
Point well made. That's what I had in mind, but it certainly isn't what I wrote.
FYI, there's another class of design-related performance bug I've had to find and fix -- blowing the CPU's data or instruction cache while processing in a loop. It is simply amazing how much performance you can lose doing either. Sometimes the optimizer will save you, but a lot of times it can't.
In my experience, optimizers are good at making good code run just a little faster or beat on the CPU just a little less. They rarely fix true performance problems.
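The cache-blowing loop described above can be shown with a classic example (hypothetical helper functions, added here for illustration): two loops that compute the same matrix sum, one with unit-stride access that uses each fetched cache line fully, and one that strides across rows and can evict lines before they are reused.

```cpp
#include <cstddef>
#include <vector>

// Row-by-row traversal touches memory sequentially (unit stride),
// so every element of each fetched cache line gets used.
double sum_row_major(const std::vector<double>& m,
                     std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            s += m[r * cols + c];  // cache friendly
    return s;
}

// Column-by-column traversal strides by `cols` elements each step;
// for a wide matrix this can evict cache lines before reuse and
// "blow" the data cache, even though the arithmetic is identical.
double sum_col_major(const std::vector<double>& m,
                     std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            s += m[r * cols + c];  // cache hostile on wide matrices
    return s;
}
```

Both functions return the same result; only the memory access pattern differs, which is exactly why an optimizer usually cannot rescue this class of problem -- the loop order is a design decision.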
patbob
|
|
|
|
|
Good point about the CPU cache... that's a level of optimization that most programmers rarely think about. I guess it could be considered a drawback of high-level languages. In that case, proper algorithmic tuning is key, as a blown cache can really destroy algorithmic performance.
|
|
|
|
|
When moving large amounts of data, using the assembly instruction movnt (and its related instructions) can reduce time by 50% or more (movnt writes data to memory without writing it back to the cache, whereas the standard mov instructions write data to memory AND the CPU cache).
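In C++ the movnt family is reachable through the SSE2 intrinsics rather than inline assembly. Below is a minimal sketch (a hypothetical `stream_copy` helper; it assumes an x86 target with SSE2, a 16-byte-aligned destination, and a size that is a multiple of 16) showing a non-temporal copy via `_mm_stream_si128`, which compiles to MOVNTDQ.

```cpp
#include <emmintrin.h>  // SSE2: _mm_loadu_si128, _mm_stream_si128
#include <cstddef>
#include <cstring>

// Copy `bytes` bytes (multiple of 16; dst must be 16-byte aligned)
// using non-temporal stores, which write to memory without polluting
// the cache -- useful when the destination won't be read again soon.
void stream_copy(void* dst, const void* src, std::size_t bytes) {
    auto* d = static_cast<__m128i*>(dst);
    auto* s = static_cast<const __m128i*>(src);
    for (std::size_t i = 0; i < bytes / 16; ++i) {
        __m128i v = _mm_loadu_si128(s + i);
        _mm_stream_si128(d + i, v);  // MOVNTDQ: bypasses the cache
    }
    _mm_sfence();  // make the streamed stores globally visible
}
```

Note that non-temporal stores only pay off for buffers larger than the cache; for small copies the regular cached path is typically faster.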
|
|
|
|
|
"In which phase of your software project do you actually care about performance?"
Obviously you should "care about" performance all the way through from the very beginning, but that doesn't mean you actually do anything about it.
"When do you decide that the go-faster pedal needs to be applied?"
You can't hit the "go-faster pedal" until you have at least a working prototype to which the performance can be compared. And you can't truly judge performance until you're in the production environment.
<Anecdote>
On my last job I had a Windows Service that read data from a text file created by a third-party product and inserted it into a SQL Server database.
In production it could process one hundred rows per second, but in test it could only process ten rows per second -- and therefore couldn't keep up with the data. Should I have tried to improve the system so that it could keep up even though there was no problem in production? I don't think so.
</Anecdote>
|
|
|
|
|
I do care about performance early on in projects but it takes a back seat to implementing all the feature requirements. Thus it only gets worked on late in the effort.
|
|
|
|
|
|
There is a bit of a problem with the wording of the answers in that there is more than one “correct” answer (IMO) …
One should keep “performance” in mind when coming up with the initial design, but there is a problem in trying to “optimize” before the system is operational and a regular profile has been established.
Any “optimizations” attempted before that time will most likely drain resources and can prove ineffective or even detrimental until a realistic load pattern is established (after going live).
|
|
|
|
|
It very much depends on the project.
If there is a requirement for specific performance targets, I'll make it something I think about from the start. The whole design will include performance considerations right from the beginning.
On the other hand, if there aren't any specific performance requirements, I'll pay less attention to it until we have implemented some functionality.
It also depends on the level of risk of bad performance. Server-based apps have a higher risk of performance issues than basic client-side-only GUI apps. These factors will affect where I plan to start considering performance. (Server apps get consideration from the start of the requirements phase; simple client apps with nothing complex probably don't get any consideration at all unless they actually don't perform well.)
Simon
|
|
|
|
|
As stated in previous posts (by someone else), performance is a factor within the bounds of the implementation. It's not possible to consider performance at the design stage for more than a few reasons, the biggest one being the fact that design is driven by the feature set of the application, which is usually set by the client's needs.
Everybody agrees that performance is about 'doing things the best possible way', and that usually involves terms like 'Speed', 'Ease of use' and 'Simplicity'. But things get a bit complicated when those same words come from the mouth of an 'End User', a 'Developer', or a 'Manager'.
Tip... when discussing 'Performance', make sure you speak the 'same language' as the other guy(s)!
PS: After a few years of trying, I've come to the inevitable conclusion that performance is located at the very spot where the outer boundaries of everyone's 'acceptable software' criteria meet...
modified on Monday, October 5, 2009 8:23 AM
|
|
|
|
|
It can be done beforehand at large corporations that meet for months before the project is given to a room of 20 programmers. But in the 'real world', you are absolutely correct. I'm given a project from the start. I have to design it, code it, and distribute it in a specified time period. I don't have time to optimize anything until I start actually beta testing at a few of our locations.
|
|
|
|
|
Wonder what percentage of the 54% who say they optimize their code during design and implementation are actually enhancing the truth?
Judging by the number of projects that have serious issues afterwards, I would bet quite a few. It is like scratching your nose... not something you publicly admit to.
You can always trust people to tell you what they think you want to hear...
|
|
|
|
|
Actually, we as an organisation care about performance in design, and not so much in implementation until the end of each sprint...
|
|
|
|
|
OK, which company is that and where can I send my CV...
|
|
|
|
|
You're taking too much of an all-or-nothing tack.
Doing both together, from the beginning is sensible -> a result of experience eventually gained.
That does not, however, preclude further optimizations anywhere along the path, including after what one could refer to as completion (from the feature point of view, at least).
Experience results in some optimizations "to begin with" - because you know where you're going (pretty much) and you've been there before (or some place that is much like it).
(too bad I don't optimize my posts).
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"It's a sad state of affairs, indeed, when you start reading my tag lines for some sort of enlightenment. Sadder still, if that's where you need to find it." - Balboos HaGadol
|
|
|
|
|
I am not advocating any approach, just observing that in reality a lot of projects are challenged and that doesn't seem to align with what people are saying they are doing. It is obviously the best theoretically to think about performance all the time but that is not real life. The better skilled the developer the more they can cheat and get away with it. By cheat I mean take shortcuts. Shortcuts are for those with battle scars and knowledge.
This post is not optimized either
|
|
|
|
|
I think the phrasing of the 1st choice explains its popularity:
<i>I care about performance during both the design and implementation phases</i>
It just says that you <i>care</i>, or are mindful of performance during the design phase.
It doesn't say anything about actually optimizing the code.
It's a pretty safe assumption to think that most developers at least care about the idea of performance during the design phase.
I suspect that the percentage would be much lower if it were phrased in terms of actually doing something about it (iteratively testing and optimizing the code during development).
Jordan
|
|
|
|
|
Tom Lessing wrote: Wonder what percentage of the 54% who say they optimize their code during design and implementation are actually enhancing the truth?
Wonder what percentage of the 54% think the design phase is bouncing a few ideas off their coworkers while getting coffee before they sit down and type in a pile of code. In which case there is always someone who says something like, "no don't do that, use the STL as it's already optimized and tested", so there you go, optimization during the design phase.
|
|
|
|
|
Little, slow and steady optimization from the inception would save a lot of heartbreak later, besides ensuring minimal cost of development and avoiding unnecessary rework.
Vasudevan Deepak Kumar
Personal Homepage Tech Gossips
The woods are lovely, dark and deep,
But I have promises to keep,
And miles to go before I sleep,
And miles to go before I sleep!
|
|
|
|
|
..and the initial design (if you still take your time for such quaint oddities) has most influence on final performance - perceived and real.
Personally, I love the idea that Raymond spends his nights posting bad regexs to mailing lists under the pseudonym of Jane Smith. He'd be like a super hero, only more nerdy and less useful. [Trevel] | FoldWithUs! | sighist
|
|
|
|
|
Yes, I like to do it as I go along. The secret, really, is to test regularly on a slow machine. Keep one handy!
|
|
|
|
|
...but ignoring performance at the start and waiting till the end is hell itself.
Performance is not purely an implementation thing. A system must be architected wisely with performance in mind, from the start, to truly achieve good, scalable performance. I fully agree that implementation should start by focusing on making something work, and factor in performance as part of refactoring after function is achieved. However there are less granular levels of performance that must be addressed throughout the whole process. Should an application be multi-threaded? What kind of physical scalability is needed? What kind of throughput is required after its initial release? Subsequent releases? What kind of load growth is expected?
Those questions can't simply be met by after-the-fact implementation changes...they are fundamental architectural decisions that need to be addressed early, so that refactoring to meet them is possible...and easy.
|
|
|
|
|
When you design something, there are two ways of optimization: make it better (run faster or stay smaller) or make it best. The process of reimplementation (editing the existing implementation) may continue until there are exactly zero ways to make it better (you made it best). There is no more than best. If you want better, you should redesign.
The processes of "develop" and "optimize" are like two parallel lines which are connected at multiple places. For a best-optimized project you need to design it and redesign it until it fits in practice, then implement it, and if necessary redesign and implement it again (possibly several times). Then optimize the implementation and finally optimize globally (if required, redesign and reimplement it). After every implementation you need to test it; if something doesn't work, your implementation or design doesn't fit in practice. At least that works for me, but I'm not experienced enough with large projects to tell.
|
|
|
|
|
Continual redesign and reimplementation is a very wasteful tactic. It is not necessary to come up with the absolutely perfect design up front; I never claimed anything like that. However, it is important to spend the appropriate amount of time up front and factor in critical, non-functional success factors, such as performance and scalability, before you implement something that does NOT meet those requirements. Agile is not about no-design... it's about appropriate design, in the right amounts, at the right times, to maximize effectiveness and minimize waste. It doesn't have to be 100%... but 80% will do... you know how the rule goes.
I was surprised by how many people voted that they do not even think about performance until they are done designing AND implementing. I've been down that road many times, and it is nothing less than pure hell. Continual redesign of a poorly thought-out initial design is a BAD practice that will lead to tremendous waste.
|
|
|
|