Jay Gatsby wrote: Consider that if your array is an array of extremely large structures, say of 500 megabytes each, then it can hardly be said that you are losing only one byte (that entire space will be allocated for one such structure, complete with all of its fields set to their defaults).
You would be crazy to have an array of large structures. Use an array of pointers to the structures instead; it's much more efficient when it comes to sorting and operations like that. It also makes skipping element zero inexpensive (4 bytes on a 32-bit machine).
You may be right
I may be crazy
-- Billy Joel --
Within you lies the power for good - Use it!
---
That may be so in some cases, but consider that my example is a theoretical one to begin with. To the point, I can turn around and say "yeah, but suppose you're programming in some god-forsaken language that completely hides pointers from you and uses references rather unintelligently." The example I gave was to illustrate the fact that memory is going to waste, period. Whether you're wasting a 4-byte pointer or a 500-megabyte data cell, it's still a waste, and my point still stands: that's just silly.
Suppose you have an algorithm which uses 4-byte pointers to 500-megabyte data cells, and which requires 125,000,000 copies of this list in order to do some computation. One 4-byte pointer times the 125,000,000 times it's wasted is 500 megabytes of wasted memory in total; hah, you see what I did there? I can contrive extreme examples all day, but the point is still the same: a waste is still a waste, regardless of size.
-Gatsby
---
"To that end I cringe at the thought of wasting even just one byte..."
So you would rather waste the space taken by the assembly code for subtraction and a tiny bit of time?
"Learn from the mistakes of others. You can't live long enough to make them all yourself."-Unknown
---
I would rather be as efficient as possible, period, regardless of what that actually means in any context. That is probably why I write my for loops as for (int i = 0; i != count; ++i) in the same code file where I use the 0th element of the array and index from zero in a mere 10-element integer array. The only place where there is even room for debate on this issue is time-versus-space trade-offs, at which point I'm open to arguments. As it stands, however, the original suggestion here was simply to ditch an entire array element for no reason other than "well, humans count from 1." There is no reason to write bad code, ever: not if you know the compiler optimizations will fix it, not if you know it's not time- or space-critical in the slightest, and not for any other excuse to be lax about efficiency. I don't know how else to illustrate it.
-Gatsby
---
You don't need to waste anything.
If we're choosing the array bounds we can choose 1 and still let the compiler assign the first element at offset zero.
An RGB color isn't an array (to my way of thinking); it is three values.
We would gain nothing in clarity by using 1-256 instead of 0-255, so I'd leave that alone.
Same with IP addresses: you just leave them alone because you gain nothing by fooling with them (an IP address doesn't really even mean anything as a number).
And I'll shut up now
---
When you start counting something, do you start at zero?
Sure, zero makes sense at the basic level and made sense in assembler and when you deal with pointers and stacks.
But these days programmers deal with objects far more than they deal with bits. The representation of data has been transformed into something all humans deal with every day of their lives: objects. So it makes sense that things such as this are also humanized, and that we should start to count at 1, just as humans do.
For humans, zero is the absence of something. One means that there is at least one element.
Jacques Bourgeois
---
Humans start counting at one, but start measuring at zero. So if you say, "On the third day he will come", what you are saying is that it will be two more days before he comes. This leads to a lot of confusion in interpreting ancient texts, because in ancient times they counted (where today = day 1, etc.), but in modern times we measure (today -> third day = 2 days).
---
Hans Dietrich wrote: Humans start counting at one, but start measuring at zero.
Heh. I really appreciate you bringing to light that distinction.
Hans Dietrich wrote: This leads to a lot of confusion in interpreting ancient texts
Interesting! I didn't know that!
Marc
---
Marc Clifton wrote: Interesting! I didn't know that! Yes, there's an obvious religious text I could quote, that illustrates exactly this problem.
---
Hans Dietrich wrote: Yes, there's an obvious religious text I could quote, that illustrates exactly this problem
Genesis?
Marc
---
Heh. I'll just say it has something to do with something happening on the third day.
---
The third day is only 2 days away from the first day. Today is 0 days away from today.
The concepts surrounding the difference between absolutes and relatives are vaguely understood and even harder to describe, but not in mathematics.
Let's not forget that numbers are expressions. For the most part they are logical expressions, but 0 and 1 are also states: false and true, nonexistence and existence.
---
How many elephants are in the room?
---
No, no, no; there has to be one, in the corner...
---
Well, it may be true for you, but in the embedded world you usually don't use objects; sometimes you are even forbidden to use them.
In other languages, such as Ada, you start counting wherever you want; you just need to tell the compiler what your limits are. Sometimes you don't even count with numbers.
Maybe it is just me, but I think this question does not make sense and serves just to start another language war, with its text calling C ancient history when it is still widely used.
---
Agreed. Starting from 0 could have been a mistake, or a convenience dictated by system restrictions in the early development of computing technology.
TOMZ_KV
---
What do you do when you have to set the number of loop iterations? Usually we write something like variableName = variableName - 1, so we perform one extra operation when we start counting at 0.
I can use counting from 0 or from 1, but I think that starting counting at 1 is closer to how people think and decreases implementation bugs.
---
No - most loops run from start to end, so:
int count = GetNumberOfElements();
for (int i = 0; i < count; i++) {} --> 0-based is OK
If you have to loop backwards:
int i = GetNumberOfElements();
while (i--) {} --> 0-based is OK, too
---
OK, you're right, but some languages don't have that built in...
OK Klaus, we can decrease the loop count, but we still have to remember that some languages start indexing at 0 and others at 1. And we can write code like yours, but if you want to generate a grid for a customer where each row must be numbered, you must write row 1, row 2, not row 0, row 1, etc. So we must read a cell from the array and add 1 (rowNumber + 1). If we could write in a language where every array started at index 1, it could decrease the number of bugs and maybe the number of operations.
---
Yes, but try to do it in reverse:
for (unsigned i = 0; i < N; ++i)
    a[i] = ...;
becomes
for (unsigned i = N - 1; i != 0u - 1; --i)
    a[i] = ...;
abusing the fact that (unsigned)-1 == UINT_MAX, or
for (unsigned i = N; i > 0; --i)
    a[i - 1] = ...;
or
for (unsigned i = 0, j = N - 1; i < N; ++i, --j)
    a[j] = ...;
all asymmetrical, or using counts and indexes as separate concepts.
But if everything starts at 1...
for (unsigned i = 1; i <= N; ++i) ...;
for (unsigned i = N; i > 0; --i) ...;
perfectly symmetrical.
2 bugs found.
> recompile ...
65534 bugs found.
---
Maybe it is true about the humans. Homo programmicus is a separate species.
Nick Polyak
---
I agree. If we were "resetting" the programming industry, we should start from 1. But most of us have just gotten used to zero-based indexes, so it seems odd now to do anything different.
Kevin
---
Jacques Bourgeois wrote: For humans, zero is the absence of something. One means that there is at least one element.
100% agree. Zero-based arrays and such need to go. Years ago, a lot of the old BASIC languages gave you a choice; something on the order of Option1 or Option0, if I remember correctly.
Everybody SHUT UP until I finish my coffee...
---
Not so "old BASIC". In Visual Basic classic, that is up to VB6, you could specify Option Base 1 at the top of a file for arrays to start at an index of 1.
Even better (or worse, depending on your position), you could define arrays any way you wanted:
Dim x(3 to 10).
Dim y(-20 to 20), that one being very useful to plot coordinates on a graph.
Although they were useful in some applications, those things tended to confuse the uninitiated, and forced the initiated to check the declaration often in order to use an array properly.
So I am quite glad that in VB.NET they fixed the base index of an array.
But since most collections start at 1, why did they not do the same for arrays?
Jacques Bourgeois