Introduction
This article is a short warning for anyone doing high-precision floating-point math: sometimes you don't get the value you expect.
Background
After years of using double values in calculations, I am occasionally reminded that floating-point math is inherently inexact. Recently, I was pulling values for a date range from a Firebird database and averaging them. I needed the first value greater than or equal to the average, so I loaded a list of doubles, ran one LINQ query to compute the average and another to get the first value greater than or equal to it. Simple, right? It was, until all of my numbers were identical, high-precision values.
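To make the failure mode concrete, here is a minimal sketch of that "average, then find the first value at or above it" pattern, written in Python (whose float is the same IEEE 754 double as C#'s double). The function name first_at_or_above is my own invention for illustration, not from the original code.

```python
def first_at_or_above(values):
    """Return the first element >= the computed average, or None.

    sum(values) / len(values) mirrors what LINQ's Average() does for
    doubles. If rounding error nudges the computed average a hair above
    every element -- which can happen even when all elements are
    identical -- this returns None, which is the surprise described above.
    """
    avg = sum(values) / len(values)
    return next((v for v in values if v >= avg), None)

print(first_at_or_above([1.0, 2.0, 3.0]))  # 2.0 (average is exactly 2.0)
```

With "nice" values the pattern behaves as expected; the trouble only shows up when the computed average is not exactly representable.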
Using the code
Look at the code below. Would you expect the average of nine copies of 1.79999995231628 to be exactly 1.79999995231628?
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        var values = new List<double>();
        values.Add(1.79999995231628);
        values.Add(1.79999995231628);
        values.Add(1.79999995231628);
        values.Add(1.79999995231628);
        values.Add(1.79999995231628);
        values.Add(1.79999995231628);
        values.Add(1.79999995231628);
        values.Add(1.79999995231628);
        values.Add(1.79999995231628);

        // The average of nine identical doubles is not guaranteed to be
        // bit-for-bit equal to the original value: each intermediate
        // addition, and the final division, can round.
        var avg = values.Average();
        Console.WriteLine(avg == values[0]); // may print False
    }
}
Just keep in mind that floating-point math does not always behave as expected. If you use ToString() to display the result, it will display the expected value (1.79999995231628), but the underlying value is still not bit-for-bit equal to it. There are lots of great articles explaining why this happens. Here's one for starters:
http://stackoverflow.com/questions/2100490/floating-point-inaccuracy-examples
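The classic demonstration from that discussion is easy to reproduce. This Python snippet (again, Python's float is the same IEEE 754 double) shows that the stored value and the displayed value can disagree:

```python
# None of 0.1, 0.2, or 0.3 is exactly representable in binary,
# so the sum picks up a tiny representation error.
x = 0.1 + 0.2

print(x == 0.3)              # False
print(abs(x - 0.3) < 1e-15)  # True: the error is tiny, but it is there

# Printing with extra digits exposes what the default display hides.
print(f"{x:.20f}")
```

The short default display rounds the noise away, which is exactly why the problem is easy to miss until an equality or >= comparison fails.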
Try using decimal in place of double above; it will give you the value you expect. If you need exact decimal results, decimal is the way to go, although it tends to be slower because it is a base-10 type implemented in software rather than in hardware.
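The same idea can be sketched with Python's decimal module, an analogous arbitrary-precision base-10 type (not C#'s System.Decimal, but it illustrates the same behavior): because 1.79999995231628 is exactly representable in base 10, the average round-trips exactly.

```python
from decimal import Decimal

# Construct from a string so the decimal value is exact from the start.
values = [Decimal("1.79999995231628")] * 9

# Base-10 arithmetic: the sum and the division are both exact here,
# so the average equals the original value.
avg = sum(values) / len(values)
print(avg == values[0])  # True
```

Note that decimal types are not magic: a value like 1/3 still rounds. They simply guarantee that decimal literals, and arithmetic on them, behave the way base-10 intuition says they should.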