
Nine Reasons Not To Use Int

9 Feb 2004
A Parody

Nine Reasons Not To Use "int": a parody of "Nine reasons not to use serialization".
Suggested by Dirk Vandenheuvel.

For my first article, I thought we'd have a little fun. I hope nobody minds, and I hope people take it in good humor! And I really hope that it gets listed on the Product Showcase page!

Introduction

If you want to know how to get your application to save information in the form of numbers, then a quick skim through MSDN Magazine or a quick search on the newsgroups will give you the answer: int.

Just declare your variables as "int" and there you go. It's a simple matter of typing three little letters: i-n-t, and a couple of seconds later it's done. Alternatively, you could use float, double, real, byte, char, or short.

All very simple, but unfortunately all very wrong. There are a number of reasons why you should not opt for the simple approach. Here are nine important ones.

1. It forces you to design your classes a certain way

"int" only works with integers. This means that your class needs to manage only integers, not real things like "float" or "imaginary" or polar coordinates. You can not have numbers that have digits past the decimal point. And it forces restrictions on how you perform math--dividing one "int" by another "int" may not result in the correct value!

2. It is not future-proof for small changes

If you use "int", then all the stuff after the decimal point gets dumped. You have no control over this. If you change the name of the variable to something other than "int", then your code will break. You can get around this by implementing the IInt interface. This gives you much better control of how data is stored in and retrieved from an "int". Unfortunately...
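
A minimal C# sketch of the dumping, for the record (names are mine):

    using System;

    class TruncationDemo
    {
        static void Main()
        {
            double price = 3.99;

            // Casting to int dumps everything after the decimal point.
            Console.WriteLine((int)price);          // prints 3

            // Math.Round at least lets you choose your fate.
            Console.WriteLine(Math.Round(price));   // prints 4
        }
    }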

3. It is not future-proof for large changes

An "int" is a type. If you change your "int" variable names or strong-name your assemblies, you're going to hit all sorts of problems. Even if you manage to code the necessary contortions to get round this, you're going to find that ...

4. It is not future-proof for massive changes

"int" isn't going to be around in five years or so. By then, we'll all be coding in real numbers (puns intended). If you start implementing the IInt interface in your code now, then its tendrils are going to be everywhere in five years' time. Your code is going to be full of little hacks to cope with version changes, class re-naming, refactoring, etc. Some time in the future, .NET will be superseded by something even more wonderful. Nobody knows what this something wonderful will be, but you can bet that writing code-read data serialized by version 1.1 of the .NET's "int" type is going to be a pig. I wrote some VB6 code 5 years ago and used "int" on a 16 bit processor, when "int" meant 16 bits. A neat, easy way of storing information to disk, I thought. And it was, until .NET came along and then I was stuck, because you see, on my new 32 bit processor, "int" now means 32 bits!

5. It is not secure

Using "int" is inherently insecure. It's bit format is widely known. In addition, "int" works by creating "bits", either a 1 or a 0. Disk files on disk containing 1's and 0's will pose a potential security risk. If, instead, you implement the IInt interface, then, even if you're not exposing bits through your classes, anyone can see your bits anyway, since "int" is a public type.

6. It is inefficient

"int" is verbose. It often has many more bits than you actually need. And, if you are using the IInt interface, more bits gets stored along with data. This makes "int" very expensive in terms of disk space.

7. It is a black box

The odds are you don't really know how "int" works. I certainly don't. Is it big-endian or little-endian? Is the MSB the first bit or the last bit? What does MSB mean, anyway? All this means that there are going to be all sorts of quirks and gotchas that you can't even conceive of when you start using "int". Did you know that "int" actually uses all 32 bits? When you think you're just creating a bunch of "int"s, the hardware is actually doing something. What are the implications of that? The only thing I know is that I will not know about them until it's too late.
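
(MSB, for the record, stands for "most significant bit".) The endianness question, at least, .NET will answer for you--a minimal C# sketch:

    using System;

    class EndiannessDemo
    {
        static void Main()
        {
            // True on x86/x64: the least significant byte is stored first.
            Console.WriteLine(BitConverter.IsLittleEndian);

            // The int 1, byte by byte: the low byte (01) leads on little-endian.
            Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(1)));
            // prints "01-00-00-00" on a little-endian machine
        }
    }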

8. It is slow

When I did some research for a previous article (http://www.programmersAgainstInt.com), I noticed a few interesting things. I wrote a class that contained two "int" values. I created 100,000 instances of this class, stored them to disk, and then read them back again. I did this two ways. First of all, I did it the "proper" way, by implementing two variables of type "int". Secondly, I did it the "dirty" way, by streaming out and back in 100,000 pairs of "int". Which way was faster? Perhaps not surprisingly, the dirty way. Lots faster. Surprised? I wasn't.
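
In the spirit of reproducibility, here is a minimal C# sketch of the "dirty" way (the class name and temp-file handling are mine; timings will, of course, vary):

    using System;
    using System.Diagnostics;
    using System.IO;

    class DirtyBenchmark
    {
        const int Count = 100000;

        static void Main()
        {
            string path = Path.GetTempFileName();
            Stopwatch watch = Stopwatch.StartNew();

            // Stream out 100,000 pairs of "int"...
            using (BinaryWriter writer = new BinaryWriter(File.Create(path)))
            {
                for (int i = 0; i < Count; i++)
                {
                    writer.Write(i);       // first "int" of the pair
                    writer.Write(i * 2);   // second "int" of the pair
                }
            }

            // ...and stream them straight back in.
            using (BinaryReader reader = new BinaryReader(File.OpenRead(path)))
            {
                for (int i = 0; i < Count; i++)
                {
                    int first = reader.ReadInt32();
                    int second = reader.ReadInt32();
                }
            }

            watch.Stop();
            Console.WriteLine("Dirty way: " + watch.ElapsedMilliseconds + " ms");
            File.Delete(path);
        }
    }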

9. It is weird

"int" does a lot of cunning work. This means that it doesn't necessarily behave the way you might expect. When you divide by 0, for example, exceptions get thrown.

Have no regrets

Although .NET provides a number of quick and easy ways to use "int", do not use them. A week, a month, a year, or five years down the line you will regret it.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
