Thanks Guy!
1.
I have made some further tests. I want to use float.Epsilon to check whether the results are the same -- if the difference between two float numbers is smaller than float.Epsilon, I will treat them as the same. But my code is not working. Any ideas?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace TestFloat
{
    class Program
    {
        static void Main(string[] args)
        {
            float TotalBonus = 199.321F;
            float Worker1 = 100F;
            float Worker2 = 300F;
            float Result1 = TotalBonus * Worker1 / (Worker1 + Worker2);
            float Result2 = TotalBonus * Worker2 / (Worker1 + Worker2);
            float ResultTotalSent = Result1 + Result2;
            string Result1String = Result1.ToString();
            string Result2String = Result2.ToString();
            string ReceivedString1 = Result1String;
            string ReceivedString2 = Result2String;
            float Received1 = float.Parse(ReceivedString1);
            float Received2 = float.Parse(ReceivedString2);
            float ResultTotalReceived = Received1 + Received2;
            if ((ResultTotalReceived - ResultTotalSent) < float.Epsilon)
            {
                Console.WriteLine("Treated the same. ");
            }
            return;
        }
    }
}
2.
The link you referred to is great! But it is about float calculation. My question is about Decimal. Any comments?
regards,
George
Hi,
All I can say is that when I googled C# decimal and float, I came across an article saying to always use decimal for financial calculations, due to the very issue you are running up against.
Sorry I can't be of more help - busy at the moment...
Regards
Guy
Continuous effort - not strength or intelligence - is the key to unlocking our potential.(Winston Churchill)
Thanks all the same, Guy!
regards,
George
MSDN[^] briefly explains how decimal numbers are used in one paragraph. Comparing the data there to some of the data in the document Guy referenced pretty much explains it for me.
I don't have a mathematics degree and I don't claim to understand it all, but it seems to make sense.
Bottom line - if accuracy is critical - use decimal!
Dave
Thanks Dave,
My question is: I think using Decimal is a great idea to reduce rounding errors, but it cannot completely eliminate them compared with float/double. For example, no matter how large a data type you use, you still cannot represent 3 divided by 10 precisely. Any comments?
regards,
George
I think you meant 10 divided by 3?
If so - try this:
float a = 10f;
float b = 3f;
float c = a / b;
decimal d = 10m;
decimal e = 3m;
decimal f = d / e;
Console.WriteLine("Float Result: " + c);
Console.WriteLine("DecimalResult: " + f);
There is a finite limit to the precision a computer can provide, due to the way the data is stored and manipulated. After all, we want to deal with large numbers, with large numbers of decimal places, that we think about in base 10. A processor can in reality only deal with two digits - 0 and 1. Putting many of these bits together gives it the apparent capability to do so much, but it's basically still just crunching 0s and 1s. That means that until we stop having fractions, start thinking in binary (base 2), octal, hex and so on, and have unlimited memory and processor address buses, we can never get total precision.
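To see the base-2 limitation directly, here is a small sketch (the "G17" format string is the standard round-trip format for double; the class and variable names are mine):

```csharp
using System;
using System.Globalization;

class BinaryFractionDemo
{
    static void Main()
    {
        // 0.1 has no finite binary representation, so the stored
        // double is the nearest representable value, not 0.1 itself.
        double tenth = 0.1;
        Console.WriteLine(tenth.ToString("G17", CultureInfo.InvariantCulture));
        // prints 0.10000000000000001

        // decimal stores base-10 digits, so 0.1 survives exactly.
        decimal tenthDec = 0.1m;
        Console.WriteLine(tenthDec.ToString(CultureInfo.InvariantCulture));
        // prints 0.1
    }
}
```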
Dave
Thanks Dave,
I have tried it; the result is:
Float Result: 3.333333
DecimalResult: 3.3333333333333333333333333333
So, the conclusion is that Decimal can reduce, but cannot completely solve, the rounding issues we found with the float data type, right?
regards,
George
If accurate calculation is required as it is with most payment/bonus/currency systems then always use decimal.
Dave
This is actually the subject of a book or two... and you haven't yet noticed that when using binary floating-point (as in float or double), numbers that look fairly simple (like 0.1) using decimal notation look more like PI using binary notation (0.1 converted to binary has an infinite number of binary decimals and thus cannot be represented exactly using a binary floating-point type).
In any case - here are some rules of thumb (coming from years of building business software doing monetary calculations using binary floating-point types):
- After each step in a calculation, round the result to the desired number of decimals (this depends on the currency - different currencies have different numbers of decimals - and in some cases, such as invoice totals, you may have no decimals at all, depending on local traditions).
- When doing comparisons, never compare for equality (0.1 stored as a double will NOT equal 0.1 stored as a float, for instance). Instead, make sure the difference is within an acceptable range.
Examples:
double total = 0;
foreach (var row in rows)
{
    row.Amount = Math.Round(row.Quantity * row.UnitPrice, 2);
    total = Math.Round(total + row.Amount, 2);
}

if (Math.Round(Math.Abs(a - b), 2) < 0.001)
{
}

if (Math.Round(b - a, 2) > 0.001)
{
}
And so on... Note that my using 0.001 is an assumption that any round-off errors (after rounding to two decimals, which is the norm for Swedish currency) will be less than that. Using double (and not working on extremely large amounts), this is normally true.
To a "computer math expert" (I used to be one a long time ago) these rules are somewhat naïve and over-simplified, but they do work well for simple financial calculation (such as totaling an invoice and so on). The goal in such circumstances isn't to be as mathematically accurate as possible but to get the same result a person using a financial calculator would (you don't want customers calling in telling you that you can't add properly).
The examples all fulfill that requirement.
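Applying the second rule to the float.Epsilon snippet from earlier in the thread: float.Epsilon is the smallest positive float (about 1.4E-45), which is far too small to absorb rounding error, and the difference must be wrapped in Math.Abs because it can be negative. A minimal sketch, where the helper name and the 0.0001F tolerance are my own choices (pick a tolerance appropriate for your magnitudes):

```csharp
using System;

class ToleranceDemo
{
    // Hypothetical helper: treat two floats as equal when their
    // absolute difference is within an explicit tolerance.
    static bool NearlyEqual(float a, float b, float tolerance)
    {
        return Math.Abs(a - b) <= tolerance;
    }

    static void Main()
    {
        float totalBonus = 199.321F;
        float worker1 = 100F, worker2 = 300F;
        float result1 = totalBonus * worker1 / (worker1 + worker2);
        float result2 = totalBonus * worker2 / (worker1 + worker2);

        // float.Epsilon (~1.4E-45) is NOT a usable tolerance; use a
        // value based on the expected size of the rounding error.
        Console.WriteLine(NearlyEqual(result1 + result2, totalBonus, 0.0001F));
    }
}
```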
If you're interested in more about how binary floating-point works, try looking for a good book on it - it's not a small subject (actually, it's a career for some). I take it you're a CS student? If so, ask your teacher - he or she should be able to point you in the right direction (I no longer know what's current). It's a very interesting field - at least I thought so when I studied (and worked as an assistant teacher in) numerical analysis almost 30 years ago...
Peter the small turnip
(1) It Has To Work. --RFC 1925[^]
Thanks Peter,
Your reply is really great! I have posted my code again below. My confusion is: why does the ToString method do the rounding? I thought it should be the float calculation itself that does the rounding.
In my sample, you can see in the float calculation,
float Result1 = TotalBonus * Worker1 / (Worker1 + Worker2);
float Result2 = TotalBonus * Worker2 / (Worker1 + Worker2);
there is no rounding in Result1 and Result2.
But in the ToString method,
string Result1String = Result1.ToString();
string Result2String = Result2.ToString();
There is rounding for Result2String. Any ideas?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace TestFloat
{
    class Program
    {
        static void Main(string[] args)
        {
            float TotalBonus = 199.321F;
            float Worker1 = 100F;
            float Worker2 = 300F;
            float Result1 = TotalBonus * Worker1 / (Worker1 + Worker2);
            float Result2 = TotalBonus * Worker2 / (Worker1 + Worker2);
            float ResultTotalSent = Result1 + Result2;
            string Result1String = Result1.ToString();
            string Result2String = Result2.ToString();
            string ReceivedString1 = Result1String;
            string ReceivedString2 = Result2String;
            float Received1 = float.Parse(ReceivedString1);
            float Received2 = float.Parse(ReceivedString2);
            float ResultTotalReceived = Received1 + Received2;
            if ((ResultTotalReceived - ResultTotalSent) < float.Epsilon)
            {
                Console.WriteLine("Treated the same. ");
            }
        }
    }
}
regards,
George
Hi again,
I didn't test your code (no time), but I suspect that you see a different number of decimals in the results? If so, count the digits, not the decimals. The type float only gives you 7 digits of precision, so the ToString() stops producing decimals once you have that many significant digits (as the rest would just be zeros). If you want a different layout, use a format string with ToString().
Side note: You should use double in almost all cases, as the precision of float is usually not enough for any serious monetary calculations.
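To illustrate the format-string suggestion, a sketch using the thread's own Result2 value (note that newer .NET runtimes default to a shortest round-trip string, so the default output depends on the runtime version; the variable names are mine):

```csharp
using System;
using System.Globalization;

class FloatToStringDemo
{
    static void Main()
    {
        float result2 = 199.321F * 300F / 400F;

        // Default formatting targets float's ~7 significant decimal
        // digits (on older runtimes; newer ones print the shortest
        // string that round-trips).
        Console.WriteLine(result2.ToString(CultureInfo.InvariantCulture));

        // "G9" asks for up to 9 significant digits -- the most a float
        // can meaningfully carry, and roughly what the debugger shows.
        Console.WriteLine(result2.ToString("G9", CultureInfo.InvariantCulture));

        // "R" (round-trip) guarantees Parse gives back the same bits.
        float roundTripped = float.Parse(
            result2.ToString("R", CultureInfo.InvariantCulture),
            CultureInfo.InvariantCulture);
        Console.WriteLine(roundTripped == result2); // True
    }
}
```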
Peter the small turnip
(1) It Has To Work. --RFC 1925[^]
Thanks Peter,
PeterTheSwede wrote: The type float only gives you 7 digits of precision
But Result2, as a float, can show 9 digits in the debugger. Why does it lose precision when converted to a string?
regards,
George
The reason for the 7-digit precision in ToString() is that the ULP for float is 2^-23 (approximately 10^-7). Showing more digits is generally misleading. However, the internal precision can sometimes be larger, up to a maximum of approximately nine digits, which is probably (wild guess) the reason the debugger chooses to show that many.
This is only briefly explained in the help for System.Single, but for a reasonably good definition of ULP and floating-point arithmetic, look here[^].
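The size of a float ULP near 1.0 (2^-23, about 1.2E-7) can be demonstrated without any bit tricks. A sketch (the two increments are chosen to sit clearly on either side of half a ULP):

```csharp
using System;

class UlpDemo
{
    static void Main()
    {
        // Near 1.0f a float ULP is 2^-23 ~= 1.19E-7, so adding anything
        // well below half a ULP rounds back to 1.0f exactly.
        Console.WriteLine(1.0F + 0.00000001F == 1.0F); // True: 1E-8 is below half a ULP
        Console.WriteLine(1.0F + 0.000001F == 1.0F);   // False: 1E-6 is several ULPs

        // The ULP value itself:
        Console.WriteLine(Math.Pow(2, -23));           // ~1.19E-7
    }
}
```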
Then again: Never use float. Use double. Except possibly at gunpoint. Consider float to be obsolete. And remember that in SQL Server, float is double and real is single (go figure).
Peter the small turnip
(1) It Has To Work. --RFC 1925[^]
Thanks Peter,
What is ULP short for?
regards,
George
Again, that is here[^]
(and in case your browser doesn't allow you to search on the page, it's "Unit in the Last Place", which is defined as "the arithmetical difference between two consecutive representable floating-point numbers which have the same exponent")
Peter the small turnip
(1) It Has To Work. --RFC 1925[^]
Hi Peter,
1.
Great!
I did some further thinking and proofing yesterday. I think there are values that can be represented precisely in base 10, like 0.1, but at the same time cannot be represented in base 2.
But there are no values for which the reverse is true, i.e. values that can be represented in base 2 but cannot be represented in base 10. Any comments? Agree or not?
2.
So, I think only two kinds of numbers cannot be represented by Decimal -- numbers with infinite decimal expansions (like 1/3) and values too large or too small, out of range.
regards,
George
Hi!
Now you're starting to get really interesting...
1. I'm not competent enough with maths these days (I used to be an assistant teacher in numerical analysis, but that was almost 30 years ago) to either agree or disagree. I suspect you may be right, though, and I suspect it may be related to the fact that 10 is a multiple of 2. That is, binary 0.1 is representable as decimal 0.5, from which would follow that all values that can be represented with a finite number of binary digits should be representable with a finite number of decimal digits as well. But again, this is a guess, not a fact...
2. Again, I suspect you are right. Then again - there are an infinite number of values with an infinite number of decimals (or that require a larger number of digits to be exactly represented than any of the precisions we use - whether double , float or decimal ) regardless of base, and there are many calculations that can cause such values to appear. So: Rounding problems will occur in any case, whether we use binary or decimal.
However, when dealing with monetary calculations decimal has an advantage in that it behaves the way a financial calculator does - so it causes fewer surprises for accountants... Then again, you can accomplish the same by carefully using rounding with, say, double .
Peter the small turnip
(1) It Has To Work. --RFC 1925[^]
Thanks Peter,
"binary 0.1 is representable as decimal 0.5" -- you get it from 0 + 1 * 2 ^ (-1) = 0.5?
regards,
George
George_George wrote: "binary 0.1 is representable as decimal 0.5" -- you get it from 0 + 1 * 2 ^ (-1) = 0.5?
Precisely so! Binary fractions are negative powers of two, as in:
binary 0.1 = 2 ^ -1 = decimal 0.5
binary 0.01 = 2 ^ -2 = decimal 0.25
... and so on
It becomes even clearer when you consider the following decimal fractions:
decimal 0.1 = 10 ^ -1
decimal 0.01 = 10 ^ -2
... and so on
Same pattern, different base...
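The pattern can also be checked mechanically. A small sketch, where the helper name and class are mine:

```csharp
using System;

class BinaryFractions
{
    // Hypothetical helper: sum negative powers of two for each '1'
    // after the binary point, e.g. "101" -> 2^-1 + 2^-3 = 0.625.
    static double BinaryFractionToDecimal(string bitsAfterPoint)
    {
        double value = 0;
        for (int i = 0; i < bitsAfterPoint.Length; i++)
        {
            if (bitsAfterPoint[i] == '1')
                value += Math.Pow(2, -(i + 1));
        }
        return value;
    }

    static void Main()
    {
        Console.WriteLine(BinaryFractionToDecimal("1"));   // 0.5
        Console.WriteLine(BinaryFractionToDecimal("01"));  // 0.25
        Console.WriteLine(BinaryFractionToDecimal("101")); // 0.625
    }
}
```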
Peter the small turnip
(1) It Has To Work. --RFC 1925[^]
Thanks Peter!
Wonderful "long" help.
regards,
George
Finally a thought on using decimal (as others have suggested):
It doesn't solve the problem, it only moves it around a bit...
With decimal data types (such as decimal in C#) you still have the problem that for example 1/3 doesn't have an exact representation. It is always rounded to whatever precision you have (0.33333, 0.33333333333 or whatever - no matter how many decimals you add, it's never exactly 1/3).
What you do get rid of is the problem with values like 0.1 that have an exact decimal representation but don't have an exact binary representation. Or, to put it bluntly: using decimal, you no longer confuse people who don't understand how binary floating-point works.
So... if you need to cater for values such as 1/3 (and you always do - for example if you need to amortize a loan over 3 years you would divide by 3, and then you have to make sure you round it correctly), you pretty much have to do what I wrote in my previous answer even when using decimal data types (if you want to make sure you get results that people can understand and verify).
The exception is if you have a language like Ada or T-SQL (Microsoft's SQL dialect) where you can define decimal types with exactly the number of decimals you need. In that case, the rounding I do in my examples is done automatically, so you don't have to think about it in your code.
The general rule is: know what you do. It never fails, whether using binary or decimal representations...
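The 1/3 point can be checked in a couple of lines (a sketch; the names are mine):

```csharp
using System;

class DecimalThirdDemo
{
    static void Main()
    {
        // 1/3 has no finite representation in base 10 either, so
        // decimal rounds it to its available significant digits...
        decimal oneThird = 1m / 3m;
        Console.WriteLine(oneThird); // 0.3333333333333333333333333333

        // ...and multiplying back by 3 exposes the rounding:
        Console.WriteLine(oneThird * 3m);       // 0.9999999999999999999999999999
        Console.WriteLine(oneThird * 3m == 1m); // False
    }
}
```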
Peter the small turnip
(1) It Has To Work. --RFC 1925[^]
modified on Thursday, May 29, 2008 7:09 AM
Hi Peter,
Your reply is so great! I am wondering:
1. Why can the Decimal type represent 0.1 precisely? Is it because Decimal uses base 10 rather than base 2 to represent a number?
2. For numbers that can be precisely represented in base 2, is it possible that they cannot be represented precisely in base 10? If yes, I think it is not correct to say Decimal has fewer issues than the float type?
3. Do both float and double use base 2 as their internal representation?
regards,
George
George_George wrote: 1. Why using Decimal type, we can represent 0.1 precisely? Because Decimal type will use base 10 other than base 2 to represent a number?
Exactly. Internally, I believe the C# decimal is represented as a scaled binary integer, but it behaves as if it were a "true" decimal type. And as the names imply, "decimal" = base 10, "binary" = base 2.
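The "scaled integer" layout can be inspected with decimal.GetBits: the first three elements hold a 96-bit integer and the fourth holds the flags, with the power-of-ten scale in bits 16-23. A sketch (the class and variable names are mine):

```csharp
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        // decimal stores an integer plus a power-of-ten scale:
        // 0.1m is the integer 1 with scale 1, i.e. 1 * 10^-1.
        int[] bits = decimal.GetBits(0.1m);
        int low = bits[0];                  // low 32 bits of the integer
        int scale = (bits[3] >> 16) & 0xFF; // digits after the decimal point
        Console.WriteLine($"{low} * 10^-{scale}"); // 1 * 10^-1

        bits = decimal.GetBits(199.321m);   // stored as 199321 * 10^-3
        Console.WriteLine($"{bits[0]} * 10^-{(bits[3] >> 16) & 0xFF}");
    }
}
```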
George_George wrote: 2. For the numbers (which can be precisely represented by base 2), are there any possibilities which they can not be represented by base 10 precisely? If yes, I think it is not correct to say Decimal has less issues compared with float type?
Regardless of base (10, 2 or whatever), some real numbers won't have a finite representation ("binal" is the correct word for the base-2 counterpart in Swedish; I don't know the English word). In base 2 I gave 1/10 (0.1) as an example; in base 10 I gave 1/3 as one. But there are others - infinitely many, in fact. This means that decimal floating-point types don't have fewer issues than binary floating-point types. They just have other issues.
One exception: Binary floating point types can confuse people who don't understand a) that they are binary or b) what that means. Decimal types don't have that issue. In all other respects (precision limits, error accumulation and whatever) they are identical.
Like I mentioned previously, though, there are also decimal fixed-point types (such as those you create using decimal(13,4) in SQL). These are even easier to understand for people working with economics, as they round everything to the specified number of decimals all the time. They're usually quite useless for advanced calculations, though.
George_George wrote: 3. Both float and double will use base 2 as internal representation?
Yes, they differ only in precision, range and size - see System.Single (=C# float) and System.Double (=C# double) in the documentation for details. A look at System.Decimal (=C# decimal) could be informative, too...
Peter the small turnip
(1) It Has To Work. --RFC 1925[^]
Thanks Peter,
1.
What does "a scaled binary integer" mean?
2.
0.1 is a great example, which can be represented in base 10 but not in base 2. Do you have a converse example: one that can be represented in base 2 but cannot be represented in base 10?
3.
PeterTheSwede wrote: One exception: Binary floating point types can confuse people who don't understand a) that they are binary or b) what that means. Decimal types don't have that issue. In all other respects (precision limits, error accumulation and whatever) they are identical.
I think item a means that Float and Double both use base 2 as their internal number representation, correct? And what does your point b mean?
regards,
George