How deterministic is floating point inaccuracy?
From what I understand you're only guaranteed identical results provided that you're dealing with the same instruction set and compiler, and that any processors you run on adhere strictly to the relevant standards (i.e. IEEE 754). That said, unless you're dealing with a particularly chaotic system, any drift in calculation between runs isn't likely to result in buggy behavior.
Specific gotchas that I'm aware of:
- some operating systems allow you to set the mode of the floating point processor in ways that break compatibility.
- floating point intermediate results often use 80 bit precision in register, but only 64 bit in memory. If a program is recompiled in a way that changes register spilling within a function, it may return different results compared to other versions. Most platforms will give you a way to force all results to be truncated to the in-memory precision (see the C# sketch after this list).
- standard library functions may change between versions. I gather that there are some not uncommonly encountered examples of this in gcc 3 vs 4.
- the IEEE standard itself allows some binary representations to differ... specifically NaN values, but I can't recall the details.
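To illustrate the truncation point in C# terms (my own sketch, not from the answer above): the C# specification permits floating point operations to be performed at higher than their declared precision, and states that an explicit cast forces the value back to its declared precision.

```csharp
using System;

class ForceTruncation
{
    static void Main()
    {
        double a = 1.0 / 3.0;
        double stored = a * 3.0;   // stored at 64-bit precision in memory

        // On older 32-bit JITs the raw expression a * 3.0 may be held in
        // an 80-bit x87 register, so comparing it against the stored value
        // can behave differently between builds. The explicit (double) cast
        // is guaranteed by the C# spec to round the result to its declared
        // precision, discarding any extra register bits.
        bool consistent = (double)(a * 3.0) == stored;

        Console.WriteLine(consistent);
    }
}
```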
The short answer is that FP calculations are entirely deterministic, as per the IEEE Floating Point Standard, but that doesn't mean they're entirely reproducible across machines, compilers, OS's, etc.
The long answer to these questions and more can be found in what is probably the best reference on floating point, David Goldberg's What Every Computer Scientist Should Know About Floating Point Arithmetic. Skip to the section on the IEEE standard for the key details.
To answer your bullet points briefly:
- Time between calculations and state of the CPU have little to do with this.
- Hardware can affect things (e.g. some GPUs are not IEEE floating point compliant).
- Language, platform, and OS can also affect things. For a better description of this than I can offer, see Jason Watkins's answer. If you are using Java, take a look at Kahan's rant on Java's floating point inadequacies.
- Solar flares might matter, hopefully infrequently. I wouldn't worry too much, because if they do matter, then everything else is screwed up too. I would put this in the same category as worrying about EMP.
Finally, if you are doing the same sequence of floating point calculations on the same initial inputs, then things should be replayable exactly just fine. The exact sequence can change depending on your compiler/OS/standard library, so you might get some small errors this way.
Where you usually run into problems in floating point is if you have a numerically unstable method and you start with FP inputs that are approximately the same but not quite. If your method's stable, you should be able to guarantee reproducibility within some tolerance. If you want more detail than this, then take a look at Goldberg's FP article linked above or pick up an intro text on numerical analysis.
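To make the stable-versus-unstable distinction concrete, here is a small C# sketch of my own (the classic quadratic-formula example, not from the answer): the textbook formula cancels catastrophically when b² is much larger than 4ac, while an algebraically equivalent rearrangement stays stable.

```csharp
using System;

class Stability
{
    static void Main()
    {
        // Solve x^2 - 1e8*x + 1 = 0; the small root is ~1e-8.
        double a = 1.0, b = -1e8, c = 1.0;
        double sqrtDisc = Math.Sqrt(b * b - 4.0 * a * c);

        // Naive formula: subtracting two nearly equal numbers
        // (catastrophic cancellation) destroys most significant digits.
        double naive = (-b - sqrtDisc) / (2.0 * a);

        // Stable rearrangement: compute the large root first, then use
        // the product of the roots (c/a) to recover the small one.
        double largeRoot = (-b + sqrtDisc) / (2.0 * a);
        double stable = c / (a * largeRoot);

        Console.WriteLine($"naive:  {naive}");   // noticeably off from 1e-8
        Console.WriteLine($"stable: {stable}");  // ~1e-8
    }
}
```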
I think your confusion lies in the type of inaccuracy around floating point. Most languages implement the IEEE floating point standard. This standard lays out how individual bits within a float/double are used to produce a number. Typically a float consists of four bytes, and a double eight bytes.
A mathematical operation between two floating point numbers will have the same value every single time (as specified within the standard).
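To make the bit-layout point concrete, here is a sketch of my own (assuming a runtime that provides BitConverter.SingleToInt32Bits, i.e. .NET Core 2.0 or later) that decomposes a float into the fields the standard defines:

```csharp
using System;

class FloatBits
{
    static void Main()
    {
        float value = -6.25f;
        int bits = BitConverter.SingleToInt32Bits(value);

        // IEEE 754 single precision: 1 sign bit, 8 exponent bits,
        // 23 mantissa (fraction) bits, packed into 4 bytes.
        int sign     = (bits >> 31) & 0x1;
        int exponent = (bits >> 23) & 0xFF;
        int mantissa = bits & 0x7FFFFF;

        Console.WriteLine($"sign={sign} exponent={exponent} mantissa=0x{mantissa:X6}");
    }
}
```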
The inaccuracy comes in the precision. Consider an int vs a float. Both typically take up the same number of bytes (4). Yet the maximum value each number can store is wildly different.
- int: roughly 2 billion
- float: 3.40282347E38 (quite a bit larger)
The difference is in the middle. int can represent every number between 0 and roughly 2 billion. float, however, cannot. It can represent 2 billion values between 0 and 3.40282347E38. But that leaves a whole range of values that cannot be represented. If a math equation hits one of these values it will have to be rounded out to a representable value and is hence considered "inaccurate". Your definition of inaccurate may vary :).
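A quick C# check of that trade-off (my own illustration): above 2^24, float can no longer represent every integer, so adding 1 can be rounded away entirely.

```csharp
using System;

class FloatGaps
{
    static void Main()
    {
        int i = 16_777_217;        // 2^24 + 1, no problem for an int
        float f = 16_777_216f;     // 2^24, the last point where float can
                                   // still represent every integer exactly

        // 16_777_217 needs 25 significant bits, but float only has 24,
        // so the nearest representable value is 16_777_216 again.
        Console.WriteLine(f + 1f == f);             // True: the +1 is rounded away
        Console.WriteLine((float)i == 16_777_216f); // True: i is unrepresentable
    }
}
```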
Also, while Goldberg is a great reference, the original text is also wrong: IEEE 754 is not guaranteed to be portable. I can't emphasize this enough given how often this statement is made based on skimming the text. Later versions of the document include a section that discusses this specifically:
> Many programmers may not realize that even a program that uses only the numeric formats and operations prescribed by the IEEE standard can compute different results on different systems. In fact, the authors of the standard intended to allow different implementations to obtain different results.
This answer in the C++ FAQ probably describes it the best:
http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.18
It is not only that different architectures or compilers might give you trouble; floating point numbers already behave in weird ways within the same program. As the FAQ points out, if `y == x` is true, that can still mean that `cos(y) == cos(x)` will be false. This is because the x86 CPU calculates the value with 80 bits, while the value is stored as 64 bits in memory, so you end up comparing a truncated 64-bit value with a full 80-bit value.
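C# on modern 64-bit JITs uses SSE rather than x87 registers, so that exact cos example is hard to reproduce there; the following sketch of my own shows a same-flavour in-program surprise that comes purely from rounding, not from extended registers:

```csharp
using System;

class WithinProgram
{
    static void Main()
    {
        double x = 0.1 + 0.2;

        // Both sides are "0.3" mathematically, but 0.1 and 0.2 are not
        // exactly representable in binary, and their rounded sum lands
        // on a different double than the rounding of 0.3 itself.
        Console.WriteLine(x == 0.3);  // False
        Console.WriteLine(x - 0.3);   // ~5.55e-17
    }
}
```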
The calculations are still deterministic, in the sense that running the same compiled binary will give you the same result each time, but the moment you adjust the source a bit, change the optimization flags, or compile it with a different compiler, all bets are off and anything can happen.
Practically speaking, it is not quite that bad. I could reproduce simple floating point math with different versions of GCC on 32-bit Linux bit for bit, but the moment I switched to 64-bit Linux the results were no longer the same. Demo recordings created on 32-bit wouldn't work on 64-bit and vice versa, but would work fine when run on the same arch.
Since your question is tagged C#, it's worth emphasising the issues faced on .NET:
- Floating point maths is not associative; that is, `(a + b) + c` is not guaranteed to equal `a + (b + c)` (see the sketch after this list);
- Different compilers will optimize your code in different ways, and that may involve re-ordering arithmetic operations;
- In .NET the CLR's JIT compiler will compile your code on the fly, so compilation is dependent upon the version of .NET on the machine at runtime.
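A minimal C# demonstration of the non-associativity point (values are my own choice):

```csharp
using System;

class Associativity
{
    static void Main()
    {
        double a = 0.1, b = 0.2, c = 0.3;

        // Each addition rounds to the nearest double, so the order of
        // the rounding steps changes the final bit pattern.
        double left  = (a + b) + c;   // 0.6000000000000001
        double right = a + (b + c);   // 0.6

        Console.WriteLine(left == right);  // False
    }
}
```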
This means that you shouldn't rely upon your .NET application producing the same floating point calculation results when run on different versions of the .NET CLR.
For example, in your case, if you record the initial state and inputs to your simulation, then install a service pack that updates the CLR, your simulation may not replay identically the next time you run it.
See Shawn Hargreaves's blog post Is floating point math deterministic? for further discussion relevant to .NET.
Sorry, but I can't help thinking that everybody is missing the point.
If the inaccuracy is significant to what you are doing then you should look for a different algorithm.
You say that if the calculations are not accurate, errors at the start may have huge implications by the end of the simulation.
That, my friend, is not a simulation. If you are getting hugely different results due to tiny differences in rounding and precision, then the chances are that none of the results has any validity. Just because you can repeat the result does not make it any more valid.
On any non-trivial real world problem that includes measurements or non-integer calculation, it is always a good idea to introduce minor errors to test how stable your algorithm is.
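A sketch of how one might do that in C# (my own; `Simulate` is a hypothetical stand-in for your actual algorithm):

```csharp
using System;

class StabilityProbe
{
    // Hypothetical stand-in for the real simulation step.
    static double Simulate(double input) => Math.Sqrt(input) * 3.0 - input;

    static void Main()
    {
        double input = 42.0;
        double baseline = Simulate(input);
        var rng = new Random(12345); // fixed seed for repeatability

        // Perturb the input by a tiny amount of relative noise and watch
        // how far the output wanders; a stable algorithm should drift
        // roughly in proportion to the perturbation, not explode.
        for (int i = 0; i < 5; i++)
        {
            double noise = (rng.NextDouble() - 0.5) * 1e-12;
            double perturbed = Simulate(input * (1.0 + noise));
            Console.WriteLine($"relative drift: {Math.Abs(perturbed - baseline) / Math.Abs(baseline):E2}");
        }
    }
}
```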
Hm. Since the OP asked about C#:
Is the C# bytecode JIT deterministic, or does it generate different code between different runs? I don't know, but I wouldn't trust the JIT.
I could think of scenarios where the JIT has some quality of service features and decides to spend less time on optimization because the CPU is doing heavy number crunching somewhere else (think background DVD encoding). This could lead to subtle differences that may result in huge differences later on.
Also, if the JIT itself gets improved (maybe as part of a service pack) the generated code will change for sure. The 80 bit internal precision issue has already been mentioned.