As someone suggested, it is a religious war. But maybe here is a path to peace.
Consider this scenario:
You load a date value from your DB and want to display an image depending on the month. So you use something like image[month], with month derived from your date value. Alas, you get an error on image[12], and you realise that despite voting for zero-based arrays on CodeProject, and implementing them in your code, your evolutionary past has led you into an off-by-one error. Your computer counts the months from 0 to 11, but you count from 1 to 12. And no matter how hard you try, after decades of programming, you cannot avoid making these mistakes.
So what do you do? One of two things:
1) Waste image[0] and use image[1] through image[12]
2) Modify your reference in code to read image[month - 1]
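A minimal C sketch of the two options (the Image type and the month value are placeholders, not from the original post):
typedef struct { const char *path; } Image;   /* placeholder type, for illustration only */

int main(void) {
    int month = 12;              /* 1..12, as it comes from the date value */

    /* Option 1: waste element 0 and index naturally. */
    Image a[13] = { 0 };         /* a[1]..a[12] used, a[0] sacrificed */
    Image x = a[month];          /* no arithmetic, no off-by-one */

    /* Option 2: keep the array tight and subtract at every access. */
    Image b[12] = { 0 };
    Image y = b[month - 1];      /* the "-1" must never be forgotten */

    (void)x; (void)y;            /* silence unused-variable warnings */
    return 0;
}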
Are 90% of the voters telling me they prefer not to waste memory (possibly as little as one byte), and would rather carry the additional subtraction in their code?
I prefer to sacrifice element 0 for clarity in my code, not to mention saving the extra subtraction operation.
Having said that, as pointed out by many others, many representations require us to also be able to represent absence of something, such as a primary color in a color combination. We simply cannot do without a zero index there.
So, my conclusion: let's put our religious preferences aside and, in each case, do what is practical and comes most naturally.
|
That seems to me like a troubling coding practice.
Consider that if your array is an array of extremely large structures, say 500 megabytes each, then it can hardly be said that you are losing only one byte: that entire space will be allocated for one such structure, complete with all of its fields set to their defaults. Yes, I voted zero, and I am not in the habit of wasting 500 megs, or any other sizable chunk of memory, just for the sake of my code being a little more "human-readable", whatever that's supposed to mean these days. To that end I cringe at the thought of wasting even just one byte, but maybe that's because I had an inspirational C teacher who wanted me to know whether ++i or i++ was faster by a matter of picoseconds with gcc optimizations turned off, or how to beat quicksort when given enough details about the distribution and type of the data to be sorted (perhaps a non-comparison sort? linear time, anyone?). On the matter of forsaking the entire 0th element of an array for no reason other than "taste", I politely refuse. So fire me.
-Gatsby
|
Jay Gatsby wrote: Consider that if your array is an array of extremely large structures, say 500 megabytes each, then it can hardly be said that you are losing only one byte (that entire space will be allocated for one such structure, complete with all of its fields set to their defaults).
You would be crazy to have an array of large structures. Use an array of pointers to the structures instead; that is much more efficient when it comes to sorting and the like. It is then also not very expensive (4 bytes on a 32-bit machine) to skip element zero.
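A minimal sketch of that arrangement, reusing the twelve-month example from above (the Huge type is a small stand-in; imagine the 500-megabyte structure from the post here):
#include <stdlib.h>

typedef struct { char payload[1024]; } Huge;   /* stand-in for the huge record */

int main(void) {
    Huge *byMonth[13] = { 0 };            /* 13 pointers: ~52 bytes on a 32-bit machine */
    for (int m = 1; m <= 12; m++)         /* slot 0 stays NULL; months index naturally */
        byMonth[m] = calloc(1, sizeof(Huge));
    /* ... sort or shuffle the cheap pointers instead of the big records ... */
    for (int m = 1; m <= 12; m++)
        free(byMonth[m]);
    return 0;
}
Only the unused slot 0 pointer is wasted, not a whole record.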
You may be right
I may be crazy
-- Billy Joel --
Within you lies the power for good - Use it!
|
That may be so in some cases, but my example was theoretical to begin with. To the point, I can turn around and say, "yeah, but suppose you're programming in some god-forsaken language that completely hides pointers from you and uses references rather unintelligently." The example I gave was to illustrate the fact that memory is going to waste, period. Whether you're wasting a 4-byte pointer or a 500-megabyte data cell, it's still a waste, and my point still stands: that's just silly.
Suppose you have an algorithm that uses 4-byte pointers to 500-megabyte data cells and requires 125,000,000 copies of this list to be made in order to do some computation. One 4-byte pointer, times the 125,000,000 times it's wasted, is 500 megabytes of wasted memory in total. Hah, you see what I did there? I can contrive extreme examples all day, but the point is still the same: a waste is still a waste, regardless of size.
-Gatsby
|
"To that end I cringe at the thought of wasting even just one byte..."
So you would rather waste the space taken by the assembly code for the subtraction, plus a tiny bit of time?
"Learn from the mistakes of others. You can't live long enough to make them all yourself."-Unknown
|
I would rather be as efficient as possible, period, regardless of what that actually means in any given context, which is probably why I write my for loops as for(int i=0; i!=count; ++i) in the same code file where I use the 0th element of the array and index from zero in a mere 10-integer array. The only place where there is even room for debate on this issue is time vs. space trade-offs, at which point I'm open to arguments. As it stands, however, the original suggestion here was simply to ditch an entire array element for no reason other than "well, humans count from 1". There is no reason to write bad code, ever, even if you know the compiler optimizations will fix it, even if you know it's not time- or space-critical in the slightest, or any other reason to be lax about efficiency. I don't know how else to illustrate it.
-Gatsby
|
You don't need to waste anything.
If we're choosing the array bounds we can choose 1 and still let the compiler assign the first element at offset zero.
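One hedged C sketch of that idea (the AT1 macro and the month names are invented for illustration): the constant subtraction is folded away at compile time, so the 1-based view costs neither a wasted element nor a runtime operation.
#include <stdio.h>

#define AT1(a, i) ((a)[(i) - 1])   /* 1-based accessor; the "-1" folds at compile time */

int main(void) {
    const char *name[12] = {
        "Jan", "Feb", "Mar", "Apr", "May", "Jun",
        "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
    };
    printf("%s\n", AT1(name, 12));   /* prints "Dec"; no thirteenth slot needed */
    return 0;
}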
An RGB color isn't an array (to my way of thinking); it is three values.
We would gain nothing in clarity by using 1-256 instead of 0-255, so I'd leave that alone.
Same with IP addresses, you just leave it alone because you gain nothing by fooling with it.
(doesn't really even mean anything as a number)
And I'll shut up now
|
When you start counting something, do you start at zero?
Sure, zero makes sense at the basic level and made sense in assembler and when you deal with pointers and stacks.
But these days programmers deal with objects far more than they deal with bits; the representation of data has been transformed into something all humans deal with every day of their lives: objects. So it makes sense that things such as this are also humanized, and that we start to count at 1, just as humans do.
For humans, zero is the absence of something. One means that there is at least one element.
Jacques Bourgeois
|
Humans start counting at one, but start measuring at zero. So if you say, "On the third day he will come", what you are saying is that it will be two more days before he comes. This leads to a lot of confusion in interpreting ancient texts, because in ancient times they counted (where today = day 1, etc.), but in modern times we measure (today -> third day = 2 days).
|
Hans Dietrich wrote: Humans start counting at one, but start measuring at zero.
Heh. I really appreciate you bringing to light that distinction.
Hans Dietrich wrote: This leads to a lot of confusion in interpreting ancient texts
Interesting! I didn't know that!
Marc
|
Marc Clifton wrote: Interesting! I didn't know that!
Yes, there's an obvious religious text I could quote that illustrates exactly this problem.
|
Hans Dietrich wrote: Yes, there's an obvious religious text I could quote, that illustrates exactly this problem
Genesis?
Marc
|
Heh. I'll just say it has something to do with something happening on the third day.
|
The third day is only 2 days away from the first day. Today is 0 days away from today.
The concepts surrounding the difference between absolutes and relatives are vaguely understood and even harder to describe, but not in mathematics.
Let's not forget that numbers are expressions. For the most part they are logical expressions, but 0 and 1 are also states: False and True, non-existence and existence.
|
How many elephants are in the room?
|
No no no, it has to be one - in the corner...
|
Well, that may be true for you, but in the embedded world you usually don't use objects; sometimes you are even forbidden to use them.
In other languages, such as Ada, you start counting wherever you want; you just need to tell the compiler what your limits are. Sometimes you don't even count with numbers.
Maybe it is just me, but I think this question does not make sense and just serves to start another language war, with its text calling C ancient history when it is still widely used.
|
Agreed. Starting from 0 could be a mistake, or a convenience forced by system restrictions in the early days of computing technology.
TOMZ_KV
|
What do you do when you have to set the number of loop iterations? Usually we write something like variableName = variableName - 1, so we perform one extra operation when we start counting at 0.
I can use counting from 0 or from 1, but I think that starting at 1 is closer to how people think and decreases implementation bugs.
|
No - most loops run from start to end, so
int count = GetNumberOfElements();
for (int i = 0; i < count; i++) {} --> 0-based is OK
If you have to loop backwards:
int i = GetNumberOfElements();
while (i--) {} --> 0-based is OK, too
|
OK, you're right, but some languages don't have that built in....
OK Klaus, we can reduce the loop bookkeeping, but we still have to remember that some languages start indexing at 0 and others at 1. And we can write code like yours, but if you want to generate a grid for a customer where each row must be numbered, you must write row 1, row 2, not row 0, row 1. So you must read the cell from the array and ADD 1 (rowNumber + 1). If we could write in a language where every array index started at 1, it could reduce the number of bugs and maybe the number of operations.
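A small C sketch of keeping that +1 at the presentation edge only (the grid data here is invented for illustration):
#include <stdio.h>

int main(void) {
    const char *cell[3] = { "alpha", "beta", "gamma" };   /* made-up grid rows */
    for (int i = 0; i < 3; i++)
        printf("row %d: %s\n", i + 1, cell[i]);   /* 0-based storage, 1-based display */
    return 0;
}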
|
Yes, but try it in reverse:
for (unsigned i = 0; i < N; ++i)
    a[i] = ...;
becomes
for (unsigned i = N - 1; i != 0u - 1; --i)
    a[i] = ...;
abusing the fact that (unsigned)-1 == UINT_MAX,
or
for (unsigned i = N; i > 0; --i)
    a[i - 1] = ...;
or
for (unsigned i = 0, j = N - 1; i < N; ++i, --j)
    a[j] = ...;
asymmetrical, or using count and index as separate concepts.
But if everything starts at 1...
for (unsigned i = 1; i <= N; ++i) ...;
for (unsigned i = N; i > 0; --i) ...;
perfectly symmetrical.
2 bugs found.
> recompile ...
65534 bugs found.
|
Maybe that is true of humans. Homo programmicus is a separate species.
Nick Polyak
|
I agree. If we were "resetting" the programming industry, we should start from 1. But most of us have just gotten used to zero-based indexes, so it seems odd now to do anything different.
Kevin