|
etkins wrote: will lint catch that error
I doubt it. There's nothing to differentiate this case from a normal if statement followed by a scope block.
Software Zen: delete this;
|
It's OK to code like this.
It was supposed to get the various _Uptime instances in the "{ xxx }" block.
|
I've recently stumbled across a line of code that made me itch to post here. I reconsidered, however, and moved on. I didn't fix the line, since it did the right thing, just in a silly way. In retrospect, there was no good reason not to fix it, but then, and that's probably what I was thinking at the time, neither was there a reason to fix it.
Some time has passed, and somehow I was reminded of that line in a different context: code maintenance and code readability. I asked myself: why was it that I could immediately recognize the silliness of that line? Apparently I was able to understand it instantly! With the more elegant solution, I would have had to look at it for a fraction of a second to grasp its meaning. Not so the 'silly' code: it was so glaringly obvious that just skimming over it shouted its meaning into my face without any conscious effort on my part.
Here's the equivalent of that line (I don't recall what it was literally):
bool flag = (a > b) ? true : false;
The reason it's instantly obvious is that the resulting value is written out in plain text, rather than hidden in the somewhat obscure use of a condition as a value. Some might argue there is nothing obscure about assigning a condition, but in spite of having done exactly that hundreds of times, I still find myself hesitating for a fraction of a second every time I see something like it in another person's code.
Now I wonder: just how silly is code that is so obvious that you immediately know what it does? Should you really change that code to something more elegant? Should you even tell the programmer who did that to do it differently in the future?
If it works, that's good. If it's readable, that's even better. And if the programmer feels comfortable doing it that way, why urge him to do it another way? I feel it's a trade-off between improving style and maintainability, and in the end it's only the latter that counts.
|
If it's easy to read and obvious what it does, then I think it is personal style, not objectively silly. We write unnecessary characters in our code all the time to make it easier to read (e.g. why is that variable called flag and not f?), and that's what the coder has done here. The places where we put 'readability' (itself not an objective quality) over terseness are largely a matter of style.
|
No, Silly rocks[^]!
Simple is great, but simple is in the eye of the maintainer.
I simply consider your example line redundant, and might wonder why the long form was chosen.
---
As usual I'm tempted to make it even more explicit:
bool flag = ((a > b) ? true : false) ? true : false;
Now that's better, isn't it? It spells out the purpose right there: if a > b is true, set flag to true, but if a > b is false, set flag to false.
|
peterchen wrote: wonder why the long form was chosen
Many possible reasons. Most likely the line is rather old; parts of this codebase are from the 90s. In any case, pretty much all the previous programmers came from old-style C and weren't used to the type bool just yet. Since internal definitions for a replacement type BOOL tended to vary, the safe way of assigning a value to one was to explicitly assign either TRUE or FALSE, since directly assigning a condition and hoping it would result in the correct integer constant was risky:
typedef short BOOL;
#define FALSE -1 // 0xFFFF as a short in two's complement
#define TRUE 0   // note that TRUE == !FALSE, since !(-1) == 0

BOOL b = (x > y);           // risky: yields 0 or 1; 0 means TRUE here, and 1 matches neither constant
b = (x > y) ? TRUE : FALSE; // safe: explicitly picks the defined constant
(of course, usually the definition was FALSE = 0, exactly because it let you assign conditions directly. But if you weren't sure and looking up the definition would take longer than just typing out the explicit assignment of TRUE or FALSE, why bother?)
|
Unless you're explicitly checking for equality against TRUE or FALSE (which you shouldn't be), the value of a condition is guaranteed to be something that you can put into an if check or similar with the expected results. Even in C. That includes places where you want to use !, which is defined to check against 0 (source[^]).
And really, #define TRUE 0 is worthy of an entry in the Hall of Shame, pre-renaming, all of its own.
|
I hear you, and I agree. But, unfortunately, equating TRUE to 0 is not as uncommon as you might think. Just think of the exit code: a value of 0 is generally considered a success...
20 years ago, you simply couldn't rely on TRUE being 0 or 1 or -1; you had to either check the definition, or just write a safe test to ensure that your code would still work, no matter what the typedef said:
if (b == TRUE)
...
This would work even if the values of TRUE and FALSE were changed later. And that was a very real possibility in those times! I made it a habit to code like that, because I needed to! Needless to say, I was very relieved when the type bool was finally introduced!
|
The exit code from main is not a bool. NO_ERROR is 0, that's true, but that doesn't have any bearing on what TRUE should be.
Setting TRUE to 0, or FALSE to a non-zero value, is directly contradictory to the C standard on boolean types and results in absurdities like
if(TRUE){ ... }
... never running the content.
|
I am aware that main returns an int, but in most cases it is used as a bool (if at all). And that is my whole point. I've seen lots of programs that return various different return codes, but I've seen very few scripts calling C programs that actually check for any return value other than 0.
|
In that particular case, it is not a boolean... it is an error code.
Or, if you want, a boolean that says: Error!
So a value of true means there are errors, and a value of false means there are no errors.
There is no reason to say that 0 is TRUE. That's a bad practice, regardless of whether it works.
|
Stefan_Lang wrote: In any case pretty much all the previous programmers came from old style C and weren't used to the type bool just yet.
Yes, there was no dedicated data type, but the semantics were clearly defined: zero was false, everything else true. So any data type could serve as a boolean.
This was very common for C programmers:
char b = x > y;
Don't think the Earth was a disc before .NET!
But I've seen something like this from many native VB programmers:
bool b;
if (x > y)
b = true;
else
b = false;
|
I started programming in C around 1985, and C++ in 1986. And I've seen my share of #define TRUE 0 or equivalent. It may not have been the norm, and it clearly wasn't sensible when you think about it, but then not many people at university bothered to think about coding sense as long as it worked (and it does, as long as you don't try to directly assign conditions).
Anyway, this was just one possible reason I considered for the code example I gave in my original posting - it is not as such related to the topic, just what I consider a possible cause.
|
In the old days, I'd do this
#define FALSE (1 == 0)
#define TRUE (!FALSE)
or, conversely (though I settled on the former for some reason):
#define TRUE (1 == 1)
#define FALSE (!TRUE)
Comparing these to the result of a logical operation would always work, by definition, but, since I prefer the "short form", these were usually just assigned to variables instead of compared.
|
I've seen the following written by people who've been programming for over 25 years. Not everyone understands terse code. Even fewer understand what macros were originally made for.
bool flag;
if (a > b)
flag = true;
else
flag = false;
|
I don't know if this is C++ or C#, but my experience with beginner programmers is that the ? and : operators are the hardest ones to understand.
I am pretty sure that many people I have worked with will be sure that:
bool flag = (a > b);
will be true if a is greater than b.
With ? and :, they will wonder whether it really assigns true when a is greater than b, or exactly the opposite.
So I would say it is not only a matter of readability, but also a matter of how used people are to the construction itself. I personally never use the ? and : operators.
|
I agree.
Today I'm well trained to immediately understand the ternary operator's meaning. But many are not so comfortable with it, and it could confuse them even more.
To me, assigning the comparison is as easy as the ternary form, but I believe more people would have a hard time understanding the ternary form than the opposite.
If someone really wants it to be even clearer, he can write:
bool flag;
if (a > b)
flag = true;
else
flag = false;
It can't get more readable than that (ok, maybe it can, but I can't think of any right now).
To alcohol! The cause of, and solution to, all of life's problems - Homer Simpson
----
Our heads are round so our thoughts can change direction - Francis Picabia
|
You may get it wrong, and because of that you may need to 'correct' the code, or even ask the programmer to fix it. I would do so!
_____________________
I'm a nobody and nobody is perfect!
|
The code is unreadable - please use formatting, and actually write what the issue is.
|
Actually, it isn't me asking the question over there, so the code isn't mine. The first time I looked at that code, I just closed my eyes.
|
The cure for that is obvious: "get a new job"
Ideological Purity is no substitute for being able to stick your thumb down a pipe to stop the water
|
wouldyoulikefrieswiththat?
I wasn't, now I am, then I won't be anymore.
|
Just COALESCE it. That fixes it every time.
There are only 10 types of people in the world, those who understand binary and those who don't.
|