# Why do I see a double variable initialized to some value like 21.4 as 21.399999618530273?

These accuracy problems are due to the internal representation of floating-point numbers, and there's not much you can do to avoid them.

By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue.

I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:

> See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
>
> So you can imagine how, in decimal, if you tried to do 3\*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.

If you have a value like:

```
double theta = 21.4;
```

And you want to do:

```
if (theta == 21.4)
{
}
```

You have to be a bit clever: you will need to check whether the value of theta is **really** close to 21.4, rather than exactly equal to it.

```
if (fabs(theta - 21.4) <= 1e-6)
{
}
```

This is partly platform-specific - and we don't know what platform you're using.

It's also partly a case of knowing what you actually *want* to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely *exact* number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.

Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.

Use the fixed-point `decimal` type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point, you will reintroduce the instabilities that seem to bother you.

Alternatively, you can get over it and learn to work *with* the limited precision of floating point arithmetic. For example, you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.

It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.

For mantissa-exponent representations, EPSILON must be computed to remain within the representable precision. For a number N, Epsilon = N / 10E+14.

`System.Double.Epsilon` is the smallest representable positive value for the `Double` type. It is *too* small for our purpose. Read Microsoft's advice on equality testing.

I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.

By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π - pi) can't be accurately represented at all.

Most people are familiar with 1/3 not working in decimal: 0.3333333333333...

The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:

1.1 is 11 x 10^-1

When actually they're in base-2:

1.1 is 154811237190861 x 2^-47

You can't avoid it, you just have to get used to the fact that some floats are 'irrational', in the same way that 1/3 is.

One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as BCD.

If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.

Seems to me that 21.399999618530273 is the *single precision* (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.

You can't avoid this, as you're using floating point numbers with a fixed quantity of bytes. There's simply no isomorphism possible between real numbers and such a limited notation.

But most of the time you can simply ignore it. 21.4 == 21.4 would still be true, because it is still the same number with the same error. But 21.4f == 21.4 may not be true, because the errors for float and double are different.

If you need fixed precision, perhaps you should try fixed point numbers. Or even integers. I, for example, often use int(1000*x) for passing to a debug pager.

If it bothers you, you can customize the way some values are displayed during debug. Use it with care :-)

Refer to General Decimal Arithmetic

Also take note when comparing floats; see this answer for more information.

According to the javadoc:

> "If at least one of the operands to a numerical operator is of type double, then the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double. If the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6)."
