|
i would go with:
m_boolVar = intVar ? true : false;
|
|
|
|
|
That would be invalid in C# or Java. Even in C/C++ I would always query against a Boolean expression for conceptual clarity.
Kevin
|
|
|
|
|
Swinefeaster wrote: i would go with:
m_boolVar = intVar ? true : false;
That combines the worst of expression one with the worst of expression three.
|
|
|
|
|
I use the style of your second example though usually with brackets around the expression.
m_boolVar = (intVar != 0);
However, recently I've been moving towards dispensing with the brackets too, partly prompted by my refactoring tool.
Kevin
|
|
|
|
|
I prefer your coworker's style. In terms of efficiency I doubt there is any significant difference unless this is getting called repeatedly, and it is much, much easier to read and understand than m_boolVar = intVar != 0; as the latter requires you to think about precedence. Just because you understand it does not mean that the maintenance programmer will. But at the end of the day it's a style question, and nothing divides programmers more than that.
|
|
|
|
|
Andrew Torrance wrote: it is much, much easier to read and understand than m_boolVar = intVar != 0; as the latter requires you to think about precedence.
If you have to think about precedence, you need to work on your skill set. I guarantee that if you were a junior I was mentoring, you wouldn't have to think about it by the time I got through with you.
|
|
|
|
|
Now that I've gotten this off of my chest, I can admit to myself that it really was more of a style issue than a coding horror, as modern compilers would probably generate similar, if not the same, code for any of the alternatives. I probably should have posted it to Soap Box or Lounge, but it is what it is at this point.
Thanks for all of your inputs. Some people agreed, some did not, and yet others were bored by the whole thing (in which case why did they even bother to reply?).
Meanwhile, my style is best...
|
|
|
|
|
geoffs wrote: Now that I've gotten this off of my chest, I can admit to myself that it really was more of a style issue than a coding horror, as modern compilers would probably generate similar, if not the same, code for any of the alternatives. I probably should have posted it to Soap Box or Lounge, but it is what it is at this point.
Don't let the idiots beat on you. It's not style, it's a clear indicator of someone uncomfortable with the language. Perhaps this is a minor thing, but I despise working with people (a) who are uncomfortable working with their language, (b) and who don't work like heck to GET comfortable.
|
|
|
|
|
geoffs wrote: m_boolVar = !!intVar;
What is so perverse about that?
To those who understand, I extend my hand.
To the doubtful I demand: Take me as I am.
Not under your command, I know where I stand.
I won't change to fit your plan. Take me as I am.
|
|
|
|
|
Ah, a man after my own heart!
It is not perverse to me. In fact, I am quite at home with that syntax and like it. It is perverse because so many of my fellow programmers hate it and my use of it makes them froth at the mouth or sit there with a dumbfounded look because they don't understand what it is doing.
|
|
|
|
|
geoffs wrote: because they don't understand what it is doing
I hate people who use C to program in Java.
To those who understand, I extend my hand.
To the doubtful I demand: Take me as I am.
Not under your command, I know where I stand.
I won't change to fit your plan. Take me as I am.
|
|
|
|
|
leonej_dt wrote: geoffs wrote:
m_boolVar = !!intVar;
What is so perverse about that?
Dependence on zero values being interpreted as false and non-zero as true is a particular feature of C-style languages. It conflates boolean and integer types, which was OK in C (which has no native boolean type) but is highly inappropriate for any language that does support booleans.
|
|
|
|
|
cpkilekofp wrote: It conflates boolean and integer types, which (...) is highly inappropriate for any language that does support booleans.
What is a boolean when the computer understands data in 8-bit chunks?
|
|
|
|
|
leonej_dt wrote: What is a boolean when the computer understands data in 8-bit chunks?
I know this is marked as a joke....but what IS the joke??
|
|
|
|
|
cpkilekofp wrote: but what IS the joke??
What's the need for such an abstraction as a "boolean"?
|
|
|
|
|
leonej_dt wrote: What's the need for such an abstraction as a "boolean"?
I should think it's obvious. Booleans represent either true or false, yes or no. "Is the door open? yes." Using a boolean type, you're guaranteed that the value of an instance of that type will be one of two choices, true or false. Any other questions?
|
|
|
|
|
And what's the difference between an 8-bit boolean and an 8-bit char whose values are restricted to 0 and 1? I still don't get it.
|
|
|
|
|
leonej_dt wrote: And what's the difference between an 8-bit boolean and an 8-bit char whose values are restricted to 0 and 1? I still don't get it.
What enforces the restriction? Either the language provides facilities for doing it, or the programmer has to provide the facilities. If it's in the language, it can be taught in a standard way and understood in a standard way. If the programmer provides it, the programmer has to explain it, and only those with access to the programmer or his documentation will understand it. Why else develop languages in the first place? If everyone in the world had the ability to program effectively in machine language, we'd all be doing that.
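To make the point concrete, here's a minimal C99 sketch (my own illustration, not code from anywhere in this thread) of what the language-level restriction buys you:
#include <stdbool.h>
#include <stdio.h>
int main(void)
{
    char flagAsChar = 37;   /* nothing stops a char "flag" from drifting to 37 */
    bool flagAsBool = 37;   /* a C99 _Bool collapses any non-zero value to 1   */
    printf("%d %d\n", flagAsChar, (int)flagAsBool);   /* prints: 37 1 */
    return 0;
}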
|
|
|
|
|
When I program (I don't know about other people), my priorities are, in decreasing order:
1. the program has to be as fast as possible
2. the compiled executable has to be as small as possible
3. the source code has to be as small as possible
Any sufficiently good C programmer knows what the double negation does. But, if you can't quite get it, you can define a macro
#define IS_NOT_ZERO(a) !!(a)
And the double negation won't bug your mind everywhere in the code.
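For anyone who hasn't met the idiom, a minimal C sketch of what the double negation (and the macro) actually produces - the variable names here are just for illustration:
#include <stdio.h>
#define IS_NOT_ZERO(a) (!!(a))
int main(void)
{
    int intVar = 42;
    int normalized = !!intVar;             /* !42 == 0, then !0 == 1, so this is 1 */
    int viaMacro   = IS_NOT_ZERO(intVar);  /* same result, spelled out             */
    printf("%d %d %d\n", intVar, normalized, viaMacro);   /* prints: 42 1 1 */
    return 0;
}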
|
|
|
|
|
leonej_dt wrote: When I program (I don't know about other people), my priorities are, in decreasing order:
1. the program has to be as fast as possible
2. the compiled executable has to be as small as possible
3. the source code has to be as small as possible
Nice, as long as no one will ever have to change your code again. Naturally, you forgot the fourth and fifth rules, which distinguish the professional from the cowboy:
4. The code must be readable by someone with no more than six months of experience.
5. The code must be modifiable and extendable without a major rewrite.
You see, there's no guarantee that the next person to maintain your code is as sharp on all the nuances of code as you are; in fact, in any group setting, first cuts are often written by senior programmers, while modifications are often written by juniors. And, as it happens, your own powers of observation fall short: this is a C# example, not a C example.
I was an expert in C programming for the first half of my career, to the point that I had the names of the standard headers and most of their standard declarations memorized. I used this trick and many others unique to C-like languages. It is, however, a cowboy trick, and if I found it in a review of your C# code, you'd be fixing it the next day. Tricks like this were often necessary in C because C didn't support booleans, but it's unnecessary in C#.
leonej_dt wrote: Any sufficiently good C programmer knows what the double negation does. But, if you can't quite get it, you can define a macro
#define IS_NOT_ZERO(a) !!(a)
Again, this is a C# example, and C# does not have macro capabilities, any more than it has pointers.
Now, in some environments the skill level required is so high, and will remain so high, that you can write everything in obscure C with no indentation and the next programmer will be able to read it at a glance. Making your employer depend on this level of skill when they don't have to is a disservice to your employer.
|
|
|
|
|
cpkilekofp wrote: 4. The code must be readable by someone with no more than six months of experience.
My code might not always be readable by someone with little experience, but...
cpkilekofp wrote: 5. The code must be modifiable and extendable without a major rewrite.
... even when I'm doing low-level stuff, I take enough care to ensure the logic of the program can be understood. [Edit starts here] If comments are not enough, I document what I'm doing, both the mathematical proof of correctness and the actual algorithm, step by step. [Edit ends here]
cpkilekofp wrote: is a C# example, not a C example.
Excuse me, sir. You are wrong. The IS_NOT_ZERO macro is a C example, not a C# one. You can't apply the ! operator to an integer in C#. You can do that in C.
cpkilekofp wrote: It is, however, a cowboy trick, and if I found it in a review of your C# code, you'd be fixing it the next day.
The IS_NOT_ZERO macro was a joke. I was making fun of people who can't understand what !! does to an integer.
cpkilekofp wrote: that you can write everything in obscure C with no indentation and the next programmer will be able to read it at a glance.
I may be a cowboy coder, but I always indent my code. If my code is not clear, that might have been the result of
a) intentional obfuscation
b) not so obvious optimization
In either case, I document what I'm doing.
|
|
|
|
|
leonej_dt wrote: cpkilekofp wrote:
is a C# example, not a C example.
Excuse me, sir. You are wrong. The IS_NOT_ZERO macro is a C example, not a C# one. You can't apply the ! operator to an integer in C#. You can do that in C.
Excuse me, sir, but the original coding horror in this thread is a C# coding horror, not a C coding horror; thus my comments about using booleans where you MEAN it is either true or false are appropriate, while your comments that it doesn't make a difference apply to C, where booleans (at least for most of the language's lifespan) were not supported. Is that, finally, clear?
Additionally, I based my comments regarding your statements about coding standards on exactly what you wrote; getting upset because I responded to, again, exactly what you wrote rather than what you actually do would seem to me to be a problem with your description. Further, I wasn't referring to your code when I commented about "indentation" - how could I, when the only piece of code from you that I saw was a single line? However, I've had to read, debug, and modify more than one piece of obscure C that made it into production in the time before code reviews became commonplace.
In general, the comments you've made on this topic are in line with what I call "C bigotry", an attitude that naturally comes to most good C programmers (I was most definitely guilty of it when C was my primary development language). The elegance and concision of C creates in the C developer an automatic contempt for languages which lack those features, and for those who find it difficult to impossible to read C when used by one well versed in all its subtleties. Please note: back when I was using C regularly, we had macros almost identical to yours to convert integers to true/false when we were building files for processing by, say, COBOL mainframe programs that looked for 0 or 1 as false or true (about 17-18 years ago), so I don't see your macro as a joke; however, I do view your comment as an indicator that, perhaps, your experience in different arenas of development seems to not be as widespread as mine (this is NOT a comment on your skill, just on the limited number and type of programming environments to which you have been exposed - once you reach the point where you are working with junior programmers with much less skill than you have, some of these points will not only be clearer to you, they'll seem like "common sense").
|
|
|
|
|
cpkilekofp wrote: Excuse me, sir, but the original coding horror in this thread is a C# coding horror, not a C coding horror,
The original post was a C coding horror:
geoffs wrote: So, in reviewing a coworker's code I come across the following line:
m_boolVar = (intVar == 0 ? false : true) ;
Yes, parenthesization and spacing exactly as shown above. Were it my code, it would have been written as:
m_boolVar = intVar != 0; // (corrected from == 0 by GDavy -- I typed too fast!)
...or if I was feeling in a bit more perverse mood:
m_boolVar = !!intVar;
There were much bigger fish to fry in this code, but there are times when I just can't let things like this go by. These things are like misspelled words that shout out at me from among the surrounding text.
Although the first two lines of code could also be valid C#, the third one can't be valid C#.
cpkilekofp wrote: Additonally, I based my comments regarding your statements about coding standards on what exactly you wrote
My coding standards are the ones I wrote a couple of posts ago.
cpkilekofp wrote: In general, the comments you've made on this topic are in line with what I call "C bigotry", an attitude that naturally comes to most good C programmers (I was most definitely guilty of it when C was my primary development language).
I'm actually more guilty of C++ bigotry. I seldom use classes (90% of my classes are Class-Wizard-generated GUI classes), but I'm very fond of function pointers and templates.
cpkilekofp wrote: back when I was using C regularly, we had macros almost identical to yours to convert integers to true/false when we were building files for processing by, say, COBOL mainframe programs that looked for 0 or 1 as false or true (about 17-18 years ago)
Wow. I was 2 years old back then.
cpkilekofp wrote: perhaps, your experience in different arenas of development seems to not be as widespread as mine
Agreed. I'm just a college student.
|
|
|
|
|
LMAO...it had never occurred to me to compile "m_boolVar = !!intVar;", but you're right, C# rejected it because the bang operator is not valid for integers. So it is a C coding horror.
Your coding standards as originally stated,
1. the program has to be as fast as possible
2. the compiled executable has to be as small as possible
3. the source code has to be as small as possible
are, as I said earlier, quite valid and sensible in standalone programming, where no one will ever have to maintain your code...including you, ten years later. Unfortunately, even you won't necessarily remember what some code was supposed to do after that length of time, and the more "optimized" it is, the more difficult it is to read, to the point where depending on the size of the code block you are examining, it may take many hours to figure out what you meant back then.
I know you think this is an extreme example, but it is an example I pull from personal experience: code I wrote in my first programming job was code I was still maintaining as a part-time consultant two full-time jobs later, then again when my original boss founded an Internet start-up - and some of that highly efficient code I wrote in 1990 (necessary for multi-megabyte programs being loaded a chunk at a time as overlays in old MSDOS, well before OS/2 and Windows brought virtual memory to the PC-compatible world) was just plain obscure when I had to modify it to sit in a COM object in 2000 (if you think Y2K was a problem, consider how much trouble you can get into when code written with explicit assumptions of 16-bit integers gets moved to a 32-bit environment).
You may not realize it, but function pointers have been part of C since its inception - it was one of the most fascinating aspects of the language for many of us, and I used them extensively. In fact, this language feature allowed overlays to be built in a language other than assembly language for the first time: one (carefully!!) loaded the code from the binary executable, cast a function pointer to it, then executed it. Templates were described in the original version of the C++ Annotated Reference, but didn't appear in DOS/Windows compilers until, I think, 1992, but I remember writing some pretty frightening code to achieve the same effect (historical note for you: the first C++ compilers weren't compilers, they were preprocessors similar to the one that provides #include and #ifdef, and their output was C code).
In order to move from being a good programmer in C++ to a great developer in any environment, you'll need to develop appreciations for things that just seem like a waste of time right now. I still remember when my best friend, an electrical engineer who spent 80% of his time developing embedded software, was having trouble using his hardware stack for several things at once in a project he was working on in 1983; until I suggested it, it had never occurred to him that he could build a software stack and keep his hardware stack (the one the CPU used for things like storing return pointers from a function) strictly for its original purpose. Up to that point, he'd never learned to look at code as a series of logical abstractions, but afterward he never forgot to do so.
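(For the curious, a minimal C sketch of the idea, purely my own illustration: a software stack is just an array and an index kept in ordinary data memory, leaving the CPU's hardware stack free for return addresses.)
#include <stdio.h>
#define STACK_SIZE 16
static int softStack[STACK_SIZE];   /* lives in ordinary data memory              */
static int softTop = 0;             /* next free slot; the CPU stack is untouched  */
static int push(int value)
{
    if (softTop >= STACK_SIZE)
        return 0;                   /* stack full */
    softStack[softTop++] = value;
    return 1;
}
static int pop(int *value)
{
    if (softTop == 0)
        return 0;                   /* stack empty */
    *value = softStack[--softTop];
    return 1;
}
int main(void)
{
    int v;
    push(10);
    push(20);
    while (pop(&v))
        printf("%d\n", v);          /* prints 20, then 10 */
    return 0;
}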
I lost track of how many languages I'd used professionally by 1997, but it was at least ten (including script languages like DOS Batch, Rexx, Awk, and Korn Shell - yes, an amazing number of production programs for business purposes have been written in, and are still running on, AWK), with another four or five that I'd used in graduate school projects, including LISP and Prolog, and at least four assembly languages, x86 being the only one I've ever used professionally. At that point, one begins to look at programming languages in a highly abstract sense, and one notes which languages should be restricted to use by the brightest intellects and which are safe to use in environments where one can only afford to use dim bulbs (this is the biggest reason that COBOL, the second oldest commonly used language, is still the most prolific computer language used in business - and beware of having a manager who never programmed in anything BUT COBOL).
Thus, as one small example, booleans: when you need to depend on coders who don't and never will understand the hardware substrate on which their computer sits, you need abstractions they can understand, and many coders will never understand the difference between a byte used as boolean and a bite in the Rse.
Regards
|
|
|
|
|
I should add, for fairness, that many of those "dim bulbs" I referred to are, in fact, sophisticated experts in their own fields...but they are not computer scientists. FORTRAN, the first high-level programming language, was created so that scientists could write complex calculations without having to learn anything about the inner workings of the computer they were using for the calculation. COBOL was created with a similar intent for business and administrative purposes. C was created so that a portable language existed for computer experts that allowed one to program directly on the computer without having to use assembly language. In fact, some of the dialects of C I used in my early career contained primitives for directly addressing CPU registers, and most of them contained facilities for allowing in-line assembly language blocks; I may be wrong, but I doubt that you've had a use for either of these facilities in your academic career to date.
|
|
|
|
|