|
Ok, which company is that and where can I send my CV ....
|
|
|
|
|
You're taking too much of an all-or-nothing tack.
Doing both together, from the beginning, is sensible; that's a result of experience eventually gained.
That does not, however, preclude further optimizations anywhere along the path, including after what one could refer to as completion (from the feature point of view, at least).
Experience results in some optimizations "to begin with" - because you know where you're going (pretty much) and you've been there before (or some place that is much like it).
(too bad I don't optimize my posts).
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"It's a sad state of affairs, indeed, when you start reading my tag lines for some sort of enlightenment. Sadder still, if that's where you need to find it." - Balboos HaGadol
|
|
|
|
|
I am not advocating any approach, just observing that in reality a lot of projects are challenged, and that doesn't seem to align with what people say they are doing. Theoretically it is best to think about performance all the time, but that is not real life. The better skilled the developer, the more they can cheat and get away with it. By cheat I mean take shortcuts, and shortcuts are for those with battle scars and knowledge.
This post is not optimized either
|
|
|
|
|
I think the phrasing of the 1st choice explains its popularity:
<i>I care about performance during both the design and implementation phases</i>
It just says that you <i>care</i>, or are mindful of performance during the design phase.
It doesn't say anything about actually optimizing the code.
It's a pretty safe assumption that most developers at least care about the idea of performance during the design phase.
I suspect that the percentage would be much lower if it were phrased in terms of actually doing something about it (iteratively testing and optimizing the code during development).
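For what "actually doing something about it" during development might look like, here is a minimal, hypothetical Python sketch of an iterative performance check that can run alongside ordinary tests; `handle_request` and the 0.05 s budget are invented for illustration, not taken from the poll or any real project.

```python
import time

def handle_request():
    # Stand-in for whatever code is currently under development.
    return sum(i * i for i in range(10_000))

BUDGET_SECONDS = 0.05  # assumed performance budget for this operation

start = time.perf_counter()
result = handle_request()
elapsed = time.perf_counter() - start

# Fails the build the moment someone makes this path too slow,
# which is what "iteratively testing and optimizing" amounts to.
assert elapsed < BUDGET_SECONDS, f"too slow: {elapsed:.4f}s"
print(f"ok: {elapsed:.4f}s within the {BUDGET_SECONDS}s budget")
```

Run as part of the normal test suite, a check like this turns "caring about performance" into something that actually gets exercised on every change.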
Jordan
|
|
|
|
|
Tom Lessing wrote: Wonder what percentage of the 54% who says they optimize their code during design and implementation is actually enhancing the truth?
Wonder what percentage of the 54% think the design phase is bouncing a few ideas off their coworkers while getting coffee before they sit down and type in a pile of code. In which case there is always someone who says something like, "no don't do that, use the STL as it's already optimized and tested", so there you go, optimization during the design phase.
|
|
|
|
|
Little, slow, and steady optimization from the inception would save a lot of heartbreak later, besides ensuring minimal development cost and avoiding unnecessary rework.
Vasudevan Deepak Kumar
Personal Homepage Tech Gossips
The woods are lovely, dark and deep,
But I have promises to keep,
And miles to go before I sleep,
And miles to go before I sleep!
|
|
|
|
|
...and the initial design (if you still take your time for such quaint oddities) has the most influence on final performance - perceived and real.
Personally, I love the idea that Raymond spends his nights posting bad regexs to mailing lists under the pseudonym of Jane Smith. He'd be like a super hero, only more nerdy and less useful. [Trevel] | FoldWithUs! | sighist
|
|
|
|
|
Yes, I like to do it as I go along. The secret, really, is to test regularly on a slow machine. Keep one handy!
|
|
|
|
|
...but ignoring performance at the start and waiting till the end is hell itself.
Performance is not purely an implementation thing. A system must be architected wisely with performance in mind, from the start, to truly achieve good, scalable performance. I fully agree that implementation should start by focusing on making something work, and factor in performance as part of refactoring after function is achieved. However there are less granular levels of performance that must be addressed throughout the whole process. Should an application be multi-threaded? What kind of physical scalability is needed? What kind of throughput is required after its initial release? Subsequent releases? What kind of load growth is expected?
Those questions can't simply be met by after-the-fact implementation changes...they are fundamental architectural decisions that need to be addressed early, so that refactoring to meet them is possible...and easy.
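As a concrete (and hypothetical) Python sketch of that kind of early decision: if "should this be multi-threaded?" is at least considered up front, the dispatch code can be written against an executor interface from day one, so going parallel later is a refactoring rather than a rewrite. `SerialExecutor`, `do_work`, and `process` are invented names for illustration.

```python
from concurrent.futures import Executor, Future, ThreadPoolExecutor

def do_work(x):
    # Placeholder task; imagine I/O-bound or CPU-heavy work here.
    return x * x

class SerialExecutor(Executor):
    """Runs tasks inline, but honors the Executor interface, so the
    calling code never changes when the project later goes parallel."""
    def submit(self, fn, *args, **kwargs):
        f = Future()
        try:
            f.set_result(fn(*args, **kwargs))
        except Exception as exc:
            f.set_exception(exc)
        return f

def process(items, executor):
    futures = [executor.submit(do_work, i) for i in items]
    return [f.result() for f in futures]

# Start simple, in one thread...
print(process(range(5), SerialExecutor()))
# ...and scale later without touching process() at all.
with ThreadPoolExecutor(max_workers=4) as pool:
    print(process(range(5), pool))
```

The architectural decision here is tiny (pass the executor in rather than hard-coding the call), but it is exactly the kind of choice that is cheap at design time and expensive to retrofit.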
|
|
|
|
|
When you design something, there are two kinds of optimization: making it better (faster or smaller), or making it the best it can be. Reimplementation (editing the existing implementation) can continue until there are exactly zero ways left to make it better - at that point you have made it best. There is no "better" beyond best; if you want better, you have to redesign.
"Develop" and "optimize" are like two parallel lines connected at multiple places. For a well-optimized project you design and redesign until the design fits in practice, then implement it, and if necessary redesign and implement it again (possibly several times). Then optimize the implementation, and finally optimize globally (redesigning and reimplementing if required). After every implementation you need to test; if something doesn't work, your implementation or design doesn't fit in practice. At least that works for me, but I'm not experienced enough with large projects to say whether it scales.
|
|
|
|
|
Continual redesign and reimplementation is a very wasteful tactic. It is not necessary to come up with the absolutely perfect design up front; I never claimed anything like that. However, it is important to spend an appropriate amount of time up front and factor in critical, non-functional success factors, such as performance and scalability, before you implement something that does NOT meet those requirements. Agile is not about no design...it's about appropriate design, in the right amounts, at the right times, to maximize effectiveness and minimize waste. It doesn't have to be 100%...but 80% will do...you know how the rule goes.
I was surprised by how many people voted that they do not even think about performance until they are done designing AND implementing. I've been down that road many times, and it is nothing less than pure hell. Continual redesign of a poorly thought-out initial design is a BAD practice that leads to tremendous waste.
|
|
|
|
|
One has to understand where this quote comes from.
Realistically, if it were meant the way it is often used now, we'd all have to use bubble sort until we could prove the sort is a bottleneck.
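To make the quip concrete, here is a minimal Python sketch (sizes and repeat counts are arbitrary): even without profiling anything first, replacing a bubble sort with the built-in `sorted()` is an obvious win, which is the kind of up-front choice the quote was never meant to forbid.

```python
import random
import timeit

def bubble_sort(items):
    """Classic O(n^2) bubble sort, kept here purely for comparison."""
    items = list(items)
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [random.randint(0, 10_000) for _ in range(2_000)]

slow = timeit.timeit(lambda: bubble_sort(data), number=3)
fast = timeit.timeit(lambda: sorted(data), number=3)
print(f"bubble_sort: {slow:.3f}s   sorted(): {fast:.3f}s")
```

No measurement is needed to justify that choice; "don't optimize prematurely" was about micro-tuning, not about picking sensible algorithms.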
Personally, I love the idea that Raymond spends his nights posting bad regexs to mailing lists under the pseudonym of Jane Smith. He'd be like a super hero, only more nerdy and less useful. [Trevel] | FoldWithUs! | sighist
|
|
|
|
|
Ha! Excellent point. People take things too literally...it's the spirit of the statement that really matters.
|
|
|
|
|
Usually, ignoring performance at the start and adding it later is the correct way to go, because at the start you don't even know where the bottlenecks are going to be.
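Finding those bottlenecks once something runs is straightforward; a minimal Python sketch using the standard `cProfile` and `pstats` modules, where `build_report` is an invented example of an accidental hotspot:

```python
import cProfile
import io
import pstats

def build_report(n):
    # Naive repeated string concatenation: a typical accidental hotspot
    # that you would only bother fixing after a profile flags it.
    out = ""
    for i in range(n):
        out += f"row {i}\n"
    return out

profiler = cProfile.Profile()
profiler.enable()
build_report(20_000)
profiler.disable()

# Print the five most expensive entries by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The profile names the guilty function directly, which is far more reliable than guessing up front where the time will go.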
|
|
|
|
|
That's entirely untrue. It is extremely rare to go into a project knowing absolutely nothing. There are always known factors up front...if there weren't, we wouldn't ever write anything...there wouldn't be a need. It does not take a lot of up-front knowledge to extrapolate expected performance requirements. If you don't make that minimal effort up front, again, you are headed for tremendous waste. Projects don't start in a void...they start in a pool of requirements.
|
|
|
|
|
There won't be any waste, because you just add the performance tweaks when it makes sense to do so. If you have a good class structure, they will fit in neatly.
Also, it's a bad idea to make guesses about performance, because you have no idea what the compiler and processor are doing. Measure everything.
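A minimal Python sketch of "measure everything", using the standard `timeit` module. The two string-building functions are invented examples; which one wins can genuinely vary by interpreter version (CPython special-cases `+=` on strings in many situations), which is exactly why measuring beats guessing.

```python
import timeit

def join_concat(n):
    # The textbook-recommended approach.
    return "".join(str(i) for i in range(n))

def plus_concat(n):
    # The approach the textbooks warn about.
    s = ""
    for i in range(n):
        s += str(i)
    return s

# Don't trust intuition about either one: time them.
t_join = timeit.timeit(lambda: join_concat(5_000), number=200)
t_plus = timeit.timeit(lambda: plus_concat(5_000), number=200)
print(f"join(): {t_join:.3f}s   +=: {t_plus:.3f}s")
```

No expected winner is printed above on purpose; the whole point is to run the measurement on your own interpreter and hardware.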
|
|
|
|
|
You are still missing the point. You are only thinking of the lowest level of performance...that provided by the code and compiler itself. There are other levels of performance that are determined less by the code written and more by the technologies used and how things are deployed.
And it absolutely IS possible to have some foresight, think ahead a little bit, and project performance requirements based on historical performance and growth data. That is what architects do every day: think about not only the present but also the future, and make EDUCATED proposals based on projected statistics. It's never just about the code and compiler.
Higher-level concerns must be taken into account, and in the big picture they are often more important than how optimized your algorithms are. Factor in the hardware, the wire, and the physical deployment scenarios, and you will find that fine-tuning your algorithmic performance has a significantly smaller impact on overall performance than it first appears to. No matter how much you perfect a piece of code, it will almost always be overshadowed by inter-process, inter-server, process-to-disk, and over-the-internet communication costs, unless you are lucky enough to have 100% pure algorithmic code that runs in a single process (I guess if you're a game developer, you're set).
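To put hedged, order-of-magnitude numbers behind that point: the figures below are the widely circulated "latency numbers every programmer should know" ballpark values, not measurements of any real system, and the 10-microsecond "algorithmic work" figure is an invented stand-in for a well-tuned in-process computation.

```python
# Ballpark, widely circulated order-of-magnitude latencies.
# Illustrative assumptions only -- not measurements of any real system.
LATENCY_NS = {
    "L1 cache reference": 1,
    "main memory reference": 100,
    "SSD random read": 100_000,
    "round trip within a datacenter": 500_000,
    "spinning-disk seek": 10_000_000,
    "packet round trip across the US": 70_000_000,
}

# Invented stand-in for a well-tuned piece of in-process algorithmic work.
ALGO_WORK_NS = 10_000

for name, ns in LATENCY_NS.items():
    ratio = ns / ALGO_WORK_NS
    print(f"{name:34s} {ns:>12,} ns  ({ratio:g}x the algorithmic work)")
```

Even with these rough numbers, a single cross-country round trip dwarfs the in-process work by several thousand times, which is why deployment topology usually matters more than micro-optimization.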
|
|
|
|
|
Ok, you've just described what a "Programmers' Paradise" looks like .... so I think it's time for you to return to "Coders' Hell"
|
|
|
|
|
I think a programmer makes his or her own paradise or hell. It doesn't take weeks or months of up-front planning to determine which critical, non-functional success factors a project needs to meet. Usually a couple of hours, a few days at most, will give you enough information to start out well. You won't start out perfect, but you'll be better off than having nothing at all. In the long run, that little bit of time up front will save you months, perhaps years, of wasted work. Aim for paradise; eventually management realizes the benefit of your ways, and they'll love you for it. Even if they don't...well, at least you're climbing out of hell instead of just sitting in it.
|
|
|
|
|