In this post, I would like to talk about the floating-point model of the IEEE 754 standard. Before we start, I would like to state that I’m neither a mathematician nor an expert on this topic. I’m just interested in it, and I learn best by explaining things to others. So feel free to leave comments about misunderstandings or other problems you see in the explanations of this post. As references, I’m using the IEEE 754 standard and the book “Numerische Mathematik”, which is, in my opinion, one of the best books about numerical computation, but unfortunately only available in German.
The IEEE 754 standard defines the binary representation of floating-point numbers, which is what we focus on now, and their operations. A floating-point number $x$ is defined as:

$$x = (-1)^s \cdot m \cdot b^e$$
Here $s$ is the sign of the number, represented by one bit (1 for negative, 0 for positive numbers); $m$ is the mantissa (also called the precision, because its size defines the precision of the floating-point model), represented by $p$ bits; $b$ is the base of the numeral system (which is 2 for the binary numeral system of today’s computers); and $e$ is the exponent, represented by $r$ bits.
The mantissa $m$ is defined by the precision $p$ and the digits $m_i$ (which in a binary system are each 0 or 1) as:

$$m = \sum_{i=1}^{p} m_i \cdot b^{1-i} = m_1.m_2 m_3 \ldots m_p$$
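To make this concrete, here is a small worked example of my own (not from the standard): take $s = 0$, $b = 2$, $e = 2$, and the mantissa bits $1101$, so $p = 4$:

$$
\begin{aligned}
m &= 1 \cdot 2^{0} + 1 \cdot 2^{-1} + 0 \cdot 2^{-2} + 1 \cdot 2^{-3} = 1.625\\
x &= (-1)^{0} \cdot 1.625 \cdot 2^{2} = 6.5
\end{aligned}
$$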
To be able to store negative exponents $e$ as a positive number as well, we need to add a bias, called $B$. Therefore, our exponent $e$ can be stored as the positive number $E = e + B$, and $B$ can be calculated by $B = 2^{r-1} - 1$. Typical values for single and double precision are:
| Type | Size | Mantissa | Exponent | Value of Exponent | Bias |
|--------|--------|--------|--------|-----------------|------|
| Single | 32 bit | 23 bit | 8 bit  | −126 … +127     | 127  |
| Double | 64 bit | 52 bit | 11 bit | −1022 … +1023   | 1023 |
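As a quick sanity check of the bias formula, here is a small Python sketch of mine:

```python
# Bias B = 2**(r - 1) - 1, computed from the exponent width r.
for name, r in (("single", 8), ("double", 11)):
    print(f"{name}: r = {r:2} bits -> B = {2 ** (r - 1) - 1}")
# single: r =  8 bits -> B = 127
# double: r = 11 bits -> B = 1023
```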
Let’s try an example now. What would the representation of $3.14$ look like?
As a result, $3.14$’s binary representation is $11.00100011110101110001\ldots_2$, and after normalization with $b = 2$ and the implicit bit $m_1 = 1$ we get the mantissa $1.100100011110101110001\ldots_2$ and an exponent $e = 1$, stored with bias as $E = 1 + 127 = 128 = 10000000_2$. Now $3.14$ looks like the following as represented in the single-precision floating-point model:

| s | E        | m (rounded to 23 bits)  |
|---|----------|-------------------------|
| 0 | 10000000 | 10010001111010111000011 |
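We can verify this bit pattern with Python’s struct module; a small sketch of mine, not part of the standard:

```python
import struct

# Reinterpret the four bytes of 3.14 as a single-precision float
# ('>f') as an unsigned 32-bit integer ('>I') to get at the raw bits.
bits = struct.unpack(">I", struct.pack(">f", 3.14))[0]

sign     = bits >> 31           # 1 sign bit
exponent = (bits >> 23) & 0xFF  # 8 bits of biased exponent E
mantissa = bits & 0x7FFFFF      # 23 explicit mantissa bits

print(f"s = {sign}")                       # s = 0
print(f"E = {exponent} = {exponent:08b}")  # E = 128 = 10000000
print(f"m = {mantissa:023b}")              # m = 10010001111010111000011
print(f"e = E - B = {exponent - 127}")     # e = 1
```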
But wait, I remember that values like 0.3 are a problem in binary representation. Is that true? Let’s try.
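A small Python experiment of mine shows the effect (printing more digits than the default reveals the value that is actually stored):

```python
from decimal import Decimal

# 0.3 cannot be stored exactly; print more digits than the default
# repr to reveal the nearest double that is stored instead.
print(f"{0.3:.20f}")      # 0.29999999999999998890
print(Decimal(0.3))       # the exact value of the stored double
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
```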
OK, that really does seem to be a problem: fractions like 0.3 can’t be represented exactly in a binary system, because we would need an infinite number of bits, and therefore rounding has to be applied when converting the decimal value into its binary representation. The machine epsilon gives the upper bound of the relative rounding error and can be calculated by:

$$\varepsilon = \frac{1}{2} \cdot b^{1-p}$$

For single precision ($p = 24$ significant bits, including the implicit one) this gives $2^{-24} \approx 5.96 \cdot 10^{-8}$, and for double precision ($p = 53$) it gives $2^{-53} \approx 1.11 \cdot 10^{-16}$.
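We can observe the double-precision value on a real machine with a small Python loop. Note that Python’s sys.float_info.epsilon follows the other common convention, the spacing between 1.0 and the next representable float, which is twice the rounding bound above:

```python
import sys

# Find the gap between 1.0 and the next representable double by
# halving a candidate until adding it to 1.0 changes nothing anymore.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0

print(eps)                     # 2.220446049250313e-16 == 2**-52
print(sys.float_info.epsilon)  # the same value, as reported by Python
print(eps / 2.0)               # 1.1102230246251565e-16 == 2**-53, the bound
```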
What we have talked about until now is valid for normalized numbers, but what exactly are normalized numbers? And are there denormalized numbers? In short: yes, there are. But let’s start with normalized numbers. Normalized numbers are numbers whose magnitude is at least $b^{e_{\min}}$, the smallest normal value of the system, and which therefore have an implicit leading bit $m_1 = 1$. You remember the $1.1001\ldots$? Because every normalized number has this implicit bit, we can save one bit of storage. Denormalized numbers, on the other hand, are smaller than that and therefore can’t be represented with $m_1 = 1$, but with $m_1 = 0$ and the fixed exponent $e_{\min}$, the smallest possible normal exponent. So let’s have a look at how all this looks on a number line and think about what it tells us.
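In Python, the transition from normalized to denormalized doubles can be seen directly (again a sketch of mine):

```python
import sys

# Smallest positive *normalized* double: 2**-1022.
print(sys.float_info.min)      # 2.2250738585072014e-308

# Below that, numbers become denormalized: the implicit leading bit
# is 0 and the exponent stays fixed at the minimum, so precision shrinks.
print(sys.float_info.min / 2)  # 1.1125369292536007e-308 (denormalized)
print(2.0 ** -1074)            # 5e-324, the smallest positive denormal
print(2.0 ** -1074 / 2)        # 0.0 -- finally underflows to zero
```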
The number line shows all normalized numbers in red and all denormalized numbers in blue. As we can see, the first red line represents the smallest possible positive normalized value. For single precision, this smallest absolute value is $2^{-126} \approx 1.18 \cdot 10^{-38}$. For double precision, it is $2^{-1022} \approx 2.23 \cdot 10^{-308}$. Also very important to point out is the fact that the numbers are not equidistant in a binary system; the distance between neighboring numbers doubles with each power of two. This is one reason why comparing floating-point numbers can be a hard job.
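The growing spacing is easy to observe with math.ulp (Python 3.9+), and it motivates comparing floats with a tolerance instead of exact equality:

```python
import math

# The gap (ULP, "unit in the last place") between adjacent doubles
# doubles with each power of two.
for x in (1.0, 2.0, 4.0, 8.0):
    print(x, math.ulp(x))
# 1.0 2.220446049250313e-16
# 2.0 4.440892098500626e-16
# 4.0 8.881784197001252e-16
# 8.0 1.7763568394002505e-15

# One consequence: exact equality checks on computed floats are fragile.
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True -- compare with a tolerance
```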
With that, I would like to close this post. We’ve had an exciting little journey into the number representation of binary systems and experienced its benefits, but also its drawbacks. It’s certainly not an easy topic, and there is a lot more that could be said about it.
Did you like the post?
What are your thoughts?
Feel free to comment and share this post.