First of all, I really, really appreciate your fight for using the "special values" −Inf, +Inf and NaN. I just voted 5 for this fundamental and practically very important question.
There are two possible behaviors: throwing exceptions (hardware exceptions raised by the CPU's FPU) or not throwing them; in the second case, those special values are returned. The x86, x86-64 and IA-64 architectures can switch these modes of operation on and off, but the question is: how to do it in a standard way?
I found it!
This include file
<cfenv>
or "fenv.h", provides a standard way to control floating-point exception behavior:
http://www.cplusplus.com/reference/cfenv[
^].
For Microsoft, there is the C++ compiler switch:
http://msdn.microsoft.com/en-us/library/e7s85ffb.aspx[
^].
Now, from your observations, if they are correct, I can't avoid one conclusion: Windows sucks. Linux is also not good enough, because its default is to throw the exceptions. But what to do? As a workaround, the floating-point exception mode should be set explicitly.
Now, some background:
As I said, the x86, x86-64 and IA-64 architectures can switch these modes of operation on and off. This can be done from C++ using inline assembly, but that would kill the portability of the code. Moreover, at the level of CPU instructions the option is set for the current thread, so different threads can use different options.
If exceptions are not thrown, some arithmetic operations return the "special values" −Inf, +Inf or NaN. They are defined in the IEEE 754 standard, along with their semantics:
http://en.wikipedia.org/wiki/IEEE_floating_point[
^].
This is a very important feature of the standard for floating-point numbers, operations and devices implementing floating-point semantics. Without them, such objects would not form a closed algebra. With those "special values", the set of floating-point numbers does form such an algebra, which is different from the algebra of "real numbers" as it is understood in mathematics. The root of this "special" computer algebra is simple: it's impossible to represent the continuum with any finite-state machine, such as a computer:
http://en.wikipedia.org/wiki/Continuum_%28set_theory%29[
^],
http://en.wikipedia.org/wiki/Real_numbers[
^],
http://en.wikipedia.org/wiki/Finite-state_machine[
^].
Now, the situation you described looks way too ugly. I never expected that, but a simple check of the default behavior is very frustrating. I clearly remember: quite a while ago, with the Microsoft OS and C++ compiler, floating-point division by zero always threw an exception by default. Presently, the default is to return NaN (which is rendered as a string in the ugliest way). But how can we trust the compiler then? In .NET, it was always consistent: there is only one option, to return NaN with no exception. That is much better, because those who want an exception can always do the check themselves (at some cost, unfortunately) and throw their own exceptions. Still, this is a viable solution. I did not test Mono though, and now I'm afraid of testing it :-). This is all way too frustrating.
[EDIT]
I forgot to reference one useful article:
http://www.johndcook.com/IEEE_exceptions_in_cpp.html[
^].
At first, I did not reference it, because the author, while providing good recommendations on using the floating-point "Infs" and "NaNs", managed to say not a word on the topic we are discussing. But his advice on testing values is useful.
—SA