Introduction
This small struct, Fraction, fills a gaping hole in the .NET 1.1 Framework: the lack of precise rational numbers. A rational number is the quotient of two integers; unlike a limited-precision data type such as decimal or double, it is not rounded. Consider these two test cases:
Assert.AreEqual(new Fraction(1,3), new Fraction(2,3) / 2);
Assert.AreEqual(1m/3m, (2m/3m) / 2m);
The first test, using our Fraction struct, succeeds; the second fails because of the limited precision of Decimal.
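For readers who want to experiment with this contrast outside .NET, the same pair of assertions can be reproduced with Python's standard library (illustration only; the article's struct is C#):

```python
from fractions import Fraction
from decimal import Decimal

# Exact rational arithmetic: (2/3) / 2 is exactly 1/3.
assert Fraction(2, 3) / 2 == Fraction(1, 3)

# Fixed-precision decimal arithmetic: the division rounds,
# so the identity no longer holds exactly.
assert (Decimal(2) / Decimal(3)) / Decimal(2) != Decimal(1) / Decimal(3)
```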
Fraction is fully integrated with the .NET numeric types and even supports the (limited-precision) conversion to and from Decimal. Supported operations are comparison and the five common operators +, -, *, / and %. CLS-compliant alternative methods are provided for languages other than C#.
Using the struct
You create fractions either through the constructor or through one of the supported conversions. Conversion from an integral value results in a fraction with denominator one, and conversion from Decimal results in an approximated fraction with potential rounding errors. See the Implementation details section for more information on this conversion. For now, you only need to know that there is no loss of precision when the decimal number has 18 or fewer significant digits.
new Fraction(3, 4) == 3/4
new Fraction(-3, 4) == -3/4
new Fraction(3, -4) == -3/4
new Fraction(0, 42) == 0/1
new Fraction(42, 0) => Division by zero
(Fraction)(3m/4m) == 3/4
(Fraction)3 == 3/1
(Fraction)0 == 0/1
Fraction delivers unlimited-precision results as long as numerator and denominator stay within the bounds of Int64.MaxValue. This means the largest positive and negative numbers are Int64.MaxValue/1 and -Int64.MaxValue/1, respectively; the smallest positive and negative numbers are 1/Int64.MaxValue and -1/Int64.MaxValue, respectively. Operations that exceed this range throw a System.OverflowException. Note that an overflow exception may be thrown even when the reduced fraction itself is within the legal range. This is due to the implementation of the operators, which do not attempt to reduce their arguments, so an intermediate value may overflow even though the final fraction is reducible (by the GCD).
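The effect can be sketched in Python (illustration only; `checked` and `add_like_fraction` are hypothetical names, not the struct's actual C# operators): adding 1/2^40 to itself reduces to 1/2^39, which is easily representable, yet the cross-multiplied intermediate denominator 2^80 overflows a 64-bit integer.

```python
from math import gcd

INT64_MAX = 2**63 - 1

def checked(x):
    # Simulate .NET checked Int64 arithmetic: raise on overflow.
    if not -2**63 <= x <= INT64_MAX:
        raise OverflowError(x)
    return x

def add_like_fraction(n1, d1, n2, d2):
    # Cross-multiply first, reduce only the final result --
    # the same order of operations the article describes.
    num = checked(checked(n1 * d2) + checked(n2 * d1))
    den = checked(d1 * d2)
    g = gcd(num, den)
    return num // g, den // g

# The reduced result 1/2**39 would fit into Int64 ...
# ... but the intermediate denominator 2**40 * 2**40 = 2**80 overflows.
try:
    add_like_fraction(1, 2**40, 1, 2**40)
except OverflowError:
    print("overflow on an intermediate, even though 1/2**39 is representable")
```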
Examples of the use of Fraction can also be found in the accompanying NUnit tests.
Implementation details
A Fraction is represented as the reduced (normalized) quotient of two Int64 numbers stored as numerator and denominator. Normalization divides both numbers by their GCD (see Fraction.Gcd) and is always performed. The denominator is always kept positive, and the numerator is adjusted accordingly.
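The normalization rule can be sketched in a few lines of Python (used here purely to illustrate the rule; the struct itself is C#, and `normalize` is a hypothetical name):

```python
from math import gcd

def normalize(num, den):
    # Divide by the GCD and keep the denominator positive,
    # as described above.
    if den == 0:
        raise ZeroDivisionError("denominator must be non-zero")
    if den < 0:
        # Move the sign to the numerator.
        num, den = -num, -den
    g = gcd(num, den)  # gcd(0, d) == d, so 0/42 normalizes to 0/1
    return num // g, den // g

print(normalize(3, -4))   # (-3, 4)
print(normalize(0, 42))   # (0, 1)
```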
Arithmetic overflow checks are delegated to the underlying Int64 operations, so don't be surprised when the exception details are not related to the fraction but talk about integer arithmetic overflows.
The conversion from Decimal to Fraction deserves some detailed discussion (and offers some room for improvement). A decimal is represented in binary by an unsigned integer part (96 bits), a sign, and a scale in the range 0-28: sign * integer / 10^scale. Naively, this is exactly how to convert a decimal to a fraction. However, both the resulting numerator and denominator are way too large to be stored in an Int64.
Given the 96-bit unsigned integer and the scale as a power of 10, we reduce the precision of the decimal by iteratively dividing both the integer and the scale by two until the integer has only 63 significant bits (one bit is reserved for the sign, thank you!). Hence the integer part may lose as much as 33 bits of precision. The scale, in turn, may be divided by up to 2^33, which can exceed the precision of the scale number itself (we use a decimal for maximum precision).
We are not finished yet, since the scale might be too large to fit into an Int64 as well. We check whether the scale is larger than 10^18 (it can be as large as 10^28) and, if so, divide both the integer part and the scale by the difference between the actual and the maximum allowed scale (computed in powers of 10).
As a result, we obtain a signed 64-bit approximation of the integer part and a scale that is limited to 1-10^18. From these, we create the fraction and reduce it. The consequences of the algorithm are:
- The highest possible precision of a decimal conversion is 18 digits.
- The precision gets worse as the decimal gets larger.
- Perfect precision is only achieved for decimals with 18 or fewer significant digits.
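The pipeline above can be modelled in Python (hypothetical code written from the description, not the article's C# source; Fraction is used internally only to keep the repeatedly halved scale exact, and the final rounding of a fractional scale is an assumption):

```python
from fractions import Fraction
from math import gcd

def decimal_to_fraction_parts(integer, scale):
    """Model of the conversion: value = integer / 10**scale,
    with integer < 2**96 and scale in 0..28."""
    num = integer
    den = Fraction(10) ** scale    # exact, like the article's decimal-typed scale
    # Halve both until the integer part fits into 63 bits
    # (the 64th bit is reserved for the sign).
    while num.bit_length() > 63:
        num //= 2
        den /= 2
    if den < 1:
        # The value itself exceeds the representable Int64 range.
        raise OverflowError("value outside the representable range")
    # Cap the denominator at 10**18 so it fits into an Int64.
    while den > 10 ** 18:
        num //= 10
        den /= 10
    den = round(den)               # the halved scale may have become fractional
    g = gcd(num, den) or 1
    return num // g, den // g
```

With 18 or fewer significant digits neither loop runs, so the conversion is lossless, matching the claim above; for example, `decimal_to_fraction_parts(75, 2)` (i.e. 0.75) yields the reduced pair (3, 4).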
History
This is version 1.1.
- updated the documentation to give the right lower limit.
- corrected CompareTo for equal fractions.
- added a 128-bit comparison to avoid OverflowException in, hopefully, all cases.
- added a few properties for the limit fraction values.