jschell wrote: Most applications do not need optimized instruction processing. And game software is a small part of the developer space.
...
Or embedded controllers. HOWEVER those are very small parts of overall development. And I can give you a specific example - how fast does the configuration setup for a game need to be? Do you want a developer spending two weeks to improve that by 10%?
You're absolutely right about that much... having been writing games since the age of 8 (well, OK, crazy simple ones back then, though they sure felt awesome at the time lol), I do tend to give those smaller parts of the overall developer space a far larger place in my heart than what is represented by the overall community in practice. I don't write games exclusively of course. I deal with all sorts of applications; but if I'm not working on a game, I'm usually wishing I was. So, honestly, I do have a specific bias, and I recognize that.
And you're right about much of the rest of what you wrote in your last post as well... I'm not going to quote every line, but the bottom line is that certain applications really don't need micro-optimization at all, and I wouldn't waste my time doing it. Games are one thing... where every frame matters, and (with the exception of online components) the execution is local and very much hardware dependent, so micro-optimization can make a difference... but then that's really part of the requirements, isn't it? As many FPS as possible on the lowest-end hardware possible. (Unless you work for Crytek, in which case, feel free to consider only the highest-end hardware - the users will catch up eventually. lol)
At any rate, I still maintain that any project whose implementation is conscious of performance will do better than one where performance is not considered at all. I'm not suggesting that an enterprise-class database system needs performance tuning of every line of code - after all, for a piece of code that is called even once every few seconds, does it really matter if it runs in 0.005 seconds or 0.05 seconds? An order of magnitude, yes, but not really a very high impact on the whole of the project... so we can agree there, I'm sure. But at least being conscious of the performance of code while it's being written will go a long way toward making sure an implementation isn't overly bloated.
I have a semi-regular project (occasional feature additions, etc.) through a developer who created a database system that is robust, but frustratingly overengineered. (Too many "what if's" and not enough "what's it really need to do's".) I had to build a new machine just to run their development platform. And that was a design issue, to be sure. So your arguments hold up well in my experience as far as database applications go. That said, I can see situations where poor implementations that work within the confines of a set of requirements could definitely wreak havoc on overall performance. Let's say a customer needs to manage a large tree of data (akin to a file system tree) and wants the GUI to show all the nodes. OK, no problem. Just load the data, parse it into a tree, etc. Easy enough. What if, however, the programmer chose to iterate through the tree with individual SELECT hits on the DB in order to keep the code small, and not deal with in-memory structures to sort the data into a tree? (Assuming appropriately sized data.) Hitting the DB over and over would wreck performance, to be sure. But then... I guess we're not dealing with someone very experienced in that situation - so I suppose what you've been contending holds true most of the time in practice.
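To make the contrast concrete, here's a rough sketch (with made-up row and node shapes, not from any real system) of loading the whole result set in one query and wiring the tree up in memory, rather than issuing one SELECT per node:

```java
import java.util.*;

// Sketch: build the whole tree in memory from one query's flat rows,
// instead of issuing one SELECT per node (the "N+1 queries" trap).
public class TreeBuilder {
    static class Node {
        final int id; final String name;
        final List<Node> children = new ArrayList<>();
        Node(int id, String name) { this.id = id; this.name = name; }
    }

    // ids: {id, parentId} pairs; parentId -1 marks the root.
    static Node build(int[][] ids, String[] names) {
        Map<Integer, Node> byId = new HashMap<>();
        for (int i = 0; i < ids.length; i++)
            byId.put(ids[i][0], new Node(ids[i][0], names[i]));
        Node root = null;
        for (int[] row : ids) {                  // second pass: link children to parents
            Node n = byId.get(row[0]);
            if (row[1] == -1) root = n;
            else byId.get(row[1]).children.add(n);
        }
        return root;
    }

    public static void main(String[] args) {
        int[][] rows = {{1, -1}, {2, 1}, {3, 1}, {4, 2}};
        String[] names = {"/", "usr", "etc", "local"};
        Node root = build(rows, names);
        System.out.println(root.name + " has " + root.children.size() + " children");
    }
}
```

Two passes over a flat result set, one DB round trip total - versus one round trip per node in the naive version.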
jschell wrote: Actually they often do not even know what a profiler is.
Huh? What's a profiler? ...OK, just kidding. Seriously though, it does surprise me to hear people being unfamiliar with common tools. It's like not having ever heard of source control, or not knowing the difference between an IDE and Notepad. Odd. But I guess nobody's born with the information...
jschell wrote: And at one time I had the entire compiler API memorized, understood how the entire API worked, understood a great deal about how the OS worked and even how the computer boot up worked (at the assembly instruction level.) But times change.
(sniff) Yes they do. I miss DOS. (okay, only a little. )
*** Listening to the song: "INT 21h, where did you go?" *** ...OK, not really.
Robb Ryniak wrote: where every frame matters, and (with the exception of online components) the
execution is local and very much hardware dependant, so micro-optimization can
make a difference
Some games, not all. As an example, card games don't require much of anything in the way of graphics optimization. And some don't even require optimization for strategy - solitaire, for example.
Robb Ryniak wrote: Let's say a customer has the need of managing a large tree of data (akin to
a file system tree) and wants the GUI to show all the nodes....
However, the way you phrased it means that it is either a requirements and/or a design problem. So one should look at the problem at that level before one implements anything.
jschell wrote: Some games, not all. As an example card games don't require much of anything in the way of graphics optimization. And some don't even require optimization for strategy - for example solitaire
Come on now, Solitaire's not really a game lol... seriously though, I meant "graphically rich games like first-person shooters such as the Doom franchise, Crysis, F.E.A.R., and RTS like AoE, etc."... you know, the kind of games *I* play. Anything else is "just an app" lolol. (How's that for tunnel vision?? lol) But your point stands - anyone remember those simple (but oddly fun) games that Microsoft released in the Win 3.x era? The "Entertainment Pack" with Klotski, that mouse/cheese game, etc.? Yeah, not a lot of optimization needed there either. At any rate, once we got talking on the same wavelength, I think we understand each other well enough.
The guidelines/best practices/patterns/rules are there to capture the knowledge of many coders.
If you try to refactor your code to adhere to some standard, you are digging into the whys and hows of others. I would claim that any coder can only get better from such an experience.
If you fancy performance, try running FxCop from Microsoft against
your C# code - FxCop has a set of rules that question different coding constructs. Example: do you initialize fields with default values, e.g.
private bool foo = false; // redundant - bool fields already default to false
CheckStyle is also worthwhile to incorporate into your daily coding... that is, if an old dog wants to learn some new tricks
Keld Ølykke wrote: If you try to refactor your code to adhere some standard, you are digging into
the whys and hows of others. I would claim that any coder can only get better
from such an experience.
...and that is exactly the answer I was looking for. That makes sense to me. Even if I wouldn't adopt such a policy for a project I was heading up, there is always value in understanding the practices of others, even if I don't agree with them. I have learned many times from other coders; even when I didn't agree with their approach, I could usually glean something useful from the experience. Thanks for the considered answer.
Keld Ølykke wrote: that is, if an old dog wants to learn some new tricks
Pfft... I'm not that old... (am I? lol) Seriously, if I wasn't interested in learning new stuff, I'd have left the field decades ago. Ours is an industry in constant flux. Good and bad, I guess.
Ahh, this is falling into the classic mistake of thinking that a code smell indicates there is an actual problem that needs solving. That isn't what a code smell is. A code smell refers to an indication that there could be a problem, not that there is one. When someone points out that "such and such is a code smell", it should be taken to mean: you really need to take a look at what you did here and make sure that you haven't created a problem through a lack of understanding of a particular pattern, or through dropping out of loops early, etc.
Unfortunately, code smells have become dogma - so you'll hear that singletons are the mutated spawn of Satan, or long methods are Beelzebub's nasal excretions. Yes, your singleton might be wrong because you don't fully understand the impact it has, or your long method might actually chain 30 or 40 if statements deep in there, but if that's the logic that you have to follow in there, it doesn't actually make it wrong.
At the end of the day, this just comes down to knowing which battles you should be fighting in your code, and having an understanding of the implications of your code.
One small side note - performance isn't the be all and end all. If your code quickly arrives at the wrong answer, your code is still wrong. It's just wrong faster.
Pete O'Hanlon wrote: Ahh, this is falling into the classic mistake that a code smell indicates that there is an actual problem that needs solving.
Yeah, I've been taking such "advice" with a grain of salt... it's just that stuff I've seen posted by many seems to indicate that certain practices are "absolute don'ts" rather than "do it carefully", which gets a nice fat eyebrow raise from me... frankly, all coding should be done carefully, with purpose and understanding. I've just been having a hard time understanding how it has become so prevalent for so many to be so very dogmatic about very specific approaches, when there are probably much more important concerns, which seem to have gone largely abandoned over the years in favor of "flavors du jour".
Pete O'Hanlon wrote: One small side note - performance isn't the be all and end all. If your code quickly arrives at the wrong answer, your code is still wrong. It's just wrong faster.
Absolutely. Couldn't agree more. Assuming we're already talking about correct code that works... accurately, reliably, etc.; and is well commented (so others can make heads or tails of it) ...then in that case, I always choose performance over personal convenience.
I guess I simply assumed that the reader would implicitly understand that accuracy was a given in my "most important lesson" - since code that's broken isn't really code at all - it's just... well... a mess. lol
"It's just wrong faster"... that's pretty funny (and well said)
Robb Ryniak wrote: I guess I simply assumed that the reader would implicitly understand that
accuracy was a given in my "most important lesson" - since code
that's broken isn't really code at all - it's just... well... a mess. lol
You've read the forum posts?
Pete O'Hanlon wrote: You've read the forum posts?
Yep... but I must have received my delivery of "mind bleach" - I can't seem to remember whether I got it or not.
(Funny signature btw, lol)
Hi,
I'm searching for some open source software that is well designed and built for flexibility.
I'd like to study the source to understand how a well-designed system is structured and developed.
I've read some books about design, but the examples are always small and self-contained. So I'd like to see how that works in a mid-to-large-sized project that has different modules interacting.
If any of you know of such software, could you please post its name or a link?
If it's written in C#/Java and well documented, that would be a plus.
Or if you have the name of a book on the subject, maybe one that covers a big software case study, that would be even better.
Hope my English is understandable.
Giuseppe Tollini wrote: mid-to-large-sized project that has different modules interacting.
I would suspect that is a myth.
Applications which are successful grow over time, and compromises are made based on the real problems of the moment. And although solutions work, they are not always implemented ideally. And because such problems were not known at the time of the initial design, the design might need to be worked around.
Something similar occurs with libraries. Libraries, however, might be less impacted by this if the functionality in the library is disparate enough.
Both are also impacted by the desires and/or experience of those that work on it over time.
Giuseppe Tollini wrote: I'm searching for some open source software that is well designed and is designed for flexibility.
I'd like to study the source to understand how a well designed system is structured and developed.
The only example that comes to mind is Linux.
Giuseppe Tollini wrote: I've read some books about design but the example are always small and self-contained. So I'd like to see how that work in a mid-big sized project that have different modules interacting.
It's always specific to the system; it's not like there's "one ideal solution".
We are designing some web services to communicate data updates between applications that we make. It's all about interoperability.
My lead dev is thinking about making a pair of web services that "talk" to each other. The app that originates a data change could access a web service of the consuming app that simply says "I have data for you." The consuming app then uses SOAP/web-service protocols to ask for the data that the provider has indicated was available in the preceding exchange.
Is this just a simplistic way of doing something that is already a standard?
What would we call that process?
What terms should he be researching to learn more about how this is done nowadays?
Thank you
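If it helps frame the question, here's a minimal sketch of the two-step exchange being described - all interface and method names are made up for illustration; a real system would sit behind SOAP/WSDL endpoints rather than plain Java interfaces:

```java
import java.util.*;

// Sketch of the "notify, then pull" exchange described above.
// Names are hypothetical, not from any actual standard or product.
interface Consumer {
    void dataAvailable(String datasetId);   // provider calls this: "I have data for you."
}
interface Provider {
    List<String> fetch(String datasetId);   // consumer calls back to pull the actual data
}

public class NotifyThenPull {
    // consumer-side handler: on "data available", pull and return the record count
    static int onDataAvailable(Provider provider, String datasetId) {
        return provider.fetch(datasetId).size();
    }

    public static void main(String[] args) {
        Map<String, List<String>> store = new HashMap<>();
        store.put("orders-2024", List.of("order-1", "order-2"));

        Provider provider = store::get;     // the originating app's data endpoint
        System.out.println("pulled " + onDataAvailable(provider, "orders-2024")
                           + " records");
    }
}
```

The key property is that the notification carries only an identifier, and the consumer pulls the payload when it's ready.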
I'm a fresher, but I think it is better to use messaging services like MSMQ or JMS, depending on the technology used.
Regards,
Raj Champaneriya
Bytescream wrote: It's all about interoperability.
And that is all about design and process control. Not technology.
Bytescream wrote: could access a web service of the consuming app that simply says "I have data
for you."
What happens if the other app isn't there - the one that is supposed to receive the "data available" message?
What happens if the other app receives the "data available" message but then goes down before it can get the data?
What happens if the other app receives the "data available" message but just ignores it - it never tries to get the data? (Yes, that is a bug, but it is still something that the originating app must consider.)
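One common way to cope with the first two failure modes (sketched here with hypothetical names) is for the originating app to keep each notification in a pending set until the consumer has actually fetched the data, and periodically re-send whatever is left unacknowledged:

```java
import java.util.*;

// Sketch: the originating app tracks unacknowledged notifications so it
// can re-send them if the consumer was down or crashed before fetching.
// All names here are made up for illustration.
public class ReliableNotifier {
    private final Set<String> pending = new LinkedHashSet<>();

    void notifyDataAvailable(String id) { pending.add(id); }    // remember until fetched
    void acknowledgeFetched(String id)  { pending.remove(id); } // consumer pulled the data
    List<String> toResend()             { return new ArrayList<>(pending); }

    public static void main(String[] args) {
        ReliableNotifier n = new ReliableNotifier();
        n.notifyDataAvailable("batch-1");
        n.notifyDataAvailable("batch-2");
        n.acknowledgeFetched("batch-1");    // consumer pulled batch-1
        System.out.println(n.toResend());   // batch-2 still needs re-notification
    }
}
```

This is essentially what message queues like MSMQ or JMS give you out of the box, which is why they keep coming up in this discussion.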
Hi,
I am looking to create a Perl web service that can be consumed from C# or VB.NET. The Perl code runs on a Unix server.
Can someone help with an example?
Thanks
Hari
ernestohari wrote: I am looking to create a Perl web service that can be consumed from C# or VB.NET. The Perl code runs on a Unix server.
Can someone help with an example?
Tutorial building webservice with Perl[^]. There are enough examples out there on consuming a web service using C#/VB.NET.
I am busy developing a timesheet capture service which has a hierarchical user structure[1], where e.g. the top user in a hierarchy represents a distinct organization or related group of users. This user can then create more users under themselves. This design has been laid down by my client. For authorization, I have interpreted this so that a user higher up in the user tree has access to all users below them in the tree, as well as entities owned by those lower users. This seems logical to me, as a user higher in the tree 'owns' users lower down. The client is not clear on whether this is the way to go, as in some situations it is ideal that users lower down have access to entities created by users above them in the tree. My client is unclear on what he wants here and expects some assistance in deriving a practical authorization scheme for this hierarchical user scheme.
Let me introduce the entity Customer to this scenario. Originally, a User owned Customers, so in my interpretation, a user higher up in the tree (more senior) had access to all customers owned by users lower down in the tree. This preserves the higher privilege of higher users, but prevents a more senior user from creating customers for more junior users to work on. Now I have to change this one-user-to-many-customers relationship to a many-users-to-many-customers one, complicating things somewhat.
I'm not asking for a solution here, but some input and maybe suggestions or warnings for proceeding to try and devise a working authorization scheme for this complex matrix of user trees crossed with customer trees.
[1] Many other entities are also hierarchical, but not yet relevant here.
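For what it's worth, the "higher in the tree owns lower" interpretation boils down to a simple ancestor check - class and field names here are hypothetical, not from the actual system:

```java
// Sketch of the "higher in the tree owns lower" rule from the post:
// a user may access another user iff they are an ancestor of that user
// (or the user themselves). Names are made up for illustration.
public class HierarchyAuth {
    static class User {
        final String name; final User parent;   // parent == null for the top user
        User(String name, User parent) { this.name = name; this.parent = parent; }
    }

    // true if 'actor' is 'target' itself or an ancestor of 'target'
    static boolean canAccess(User actor, User target) {
        for (User u = target; u != null; u = u.parent)
            if (u == actor) return true;
        return false;
    }

    public static void main(String[] args) {
        User org = new User("org-admin", null);
        User mgr = new User("manager", org);
        User clerk = new User("clerk", mgr);
        System.out.println(canAccess(org, clerk));  // true: org-admin is an ancestor
        System.out.println(canAccess(clerk, mgr));  // false: access doesn't flow upward
    }
}
```

The many-users-to-many-customers wrinkle would then layer a separate membership check on top of this, rather than replacing the tree walk.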
Brady Kelly wrote: For authorization, I have interpreted this so that a user higher up in the user tree has access to all users below them in the tree, as well as entities owned by those lower users. This seems logical to me, as a user higher in the tree 'owns' users lower down.
I don't find that logical. If I were a node in your tree, I'd only pay attention to my parent - the top node would never be relevant to my work. That's based on the idea of a "chain of command". If my boss's boss could manage me directly, my boss would be superfluous. Better to check with the client whether your interpretation is correct.
Brady Kelly wrote: My client is unclear on what he wants here and expects some assistance in deriving a practical authorization scheme for this hierarchical user scheme.
"Authorization"? You mean the top dog could log in using my credentials?? There's bears down that road; it messes up audits.
Brady Kelly wrote: This preserves the higher privilege of higher users, but prevents a more senior user creating customers for more junior users to work on.
Not per se; just make sure that the junior is "part of" the group working on the customer.
Thanks, I'm slowly discarding big parts of my original concept, and moving toward the tree just representing the org, and everyone in the org (in the tree) having access to everything else in the tree, but limited by user roles.
Eddy Vluggen wrote: "Authorization"? You mean the top-dog could login using my credentials?? There's bears down that road; it messes up audits.
Authorization doesn't involve logins, Authentication does.
Brady Kelly wrote: Authorization doesn't involve logins, Authentication does.
Yesyes, you're right. I mixed them up.
I am using MEF in my application. My application has 4 parts (class libraries) and a main module; the main module's responsibility is to interact with the above-said parts and delegate jobs to them.
I have coded all the interfaces and abstract base classes in a library called 'Common' and referenced it from all the other DLLs that have classes inheriting from those base classes. I moved all the base class and interface definitions to the 'Common' library so that I can add a reference to that DLL in my main program and have all the base classes and interfaces readily available.
Is this the right way to do it? Is there another option in MEF with which I can refer to the base classes in my main module without referencing a DLL?
E.g.: abstract class baseAbc and interface IAbc are defined in Common.dll. Is it possible to refer to baseAbc and IAbc in my main module without adding a reference to Common.dll?
Disclaimer; I don't use MEF.
John T.Emmatty wrote: Is this the right way to do it?
Sounds like "yes" to me; that way you can update either Common or the exe without touching the other.
John T.Emmatty wrote: What is the other option in MEF using which i can refer to the base classes in my main module without referring to a dll.
I don't know if the option exists, but it "would" generate classes, and it should be possible to reference/copy those into your own assembly. I expect this is not the recommended way.
What's against having a second assembly?
I'm currently working on a project on hand gesture recognition. Some color markers will be put on the fingers for easy tracking.
So, can anyone give some advice on how I should continue with the next part, which is to recognize the pre-processed output image, which now consists only of the colors from the color markers? How can I classify those patterns?
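One simple starting point (just a sketch - the marker colors below are made up) is nearest-centroid classification in RGB space: assign each marker pixel the label of the closest reference color. A real pipeline would probably work in a more illumination-stable space such as HSV.

```java
// Sketch: classify a marker pixel by finding the nearest reference
// color (centroid) in RGB space. The centroids here are hypothetical;
// in practice you'd measure them from calibration frames.
public class MarkerClassifier {
    static final String[] LABELS = {"red", "green", "blue"};
    static final int[][] CENTROIDS = {{220, 40, 40}, {40, 200, 60}, {50, 60, 210}};

    // returns the label whose centroid has the smallest squared distance to (r,g,b)
    static String classify(int r, int g, int b) {
        int best = 0;
        long bestDist = Long.MAX_VALUE;
        for (int i = 0; i < CENTROIDS.length; i++) {
            long dr = r - CENTROIDS[i][0], dg = g - CENTROIDS[i][1], db = b - CENTROIDS[i][2];
            long dist = dr * dr + dg * dg + db * db;
            if (dist < bestDist) { bestDist = dist; best = i; }
        }
        return LABELS[best];
    }

    public static void main(String[] args) {
        System.out.println(classify(200, 30, 50));  // a pixel near the red centroid
    }
}
```

From per-pixel labels you can then cluster the labeled pixels into marker blobs and track their positions frame to frame.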