Real vs. Floating Point vs. Money types
Have a look at What Every Computer Scientist Should Know About Floating Point Arithmetic.
Floating point numbers in computers don't represent decimal fractions exactly. Instead, they represent binary fractions. Most fractional numbers don't have an exact representation as a binary fraction, so there is some rounding going on. When such a rounded binary fraction is translated back to a decimal fraction, you get the effect you describe.
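The effect described above is easy to reproduce; here is a minimal sketch in Python (the same behavior appears in any language using IEEE 754 binary floats):

```python
# 0.1 and 0.2 have no exact binary representation, so each is stored
# as a nearby binary fraction; the rounding errors become visible
# when the result is converted back to decimal for printing.
a = 0.1 + 0.2
print(a)          # 0.30000000000000004
print(a == 0.3)   # False

# Printing extra digits reveals the binary approximation actually stored.
print(f"{0.1:.20f}")  # 0.10000000000000000555
```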
For storing money values, SQL databases normally provide a DECIMAL type that stores exact decimal digits. This format is slightly less efficient for computers to deal with, but it is quite useful when you want to avoid decimal rounding errors.
Floating point numbers use binary fractions, and they don't correspond exactly to decimal fractions.
For money, it's better to either store the number of cents as an integer, or use a decimal number type. For example, DECIMAL(8,2) stores 8 digits, 2 of which are decimals (xxxxxx.xx), i.e. to cent precision.
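Both approaches can be sketched in Python; the prices used here are hypothetical example values, and `Decimal` plays the role of the SQL DECIMAL type:

```python
from decimal import Decimal

# 1) Integer cents: all arithmetic stays in exact integers.
price_cents = 1999              # $19.99
total_cents = price_cents * 3   # three items: 5997 cents exactly
print(total_cents)              # 5997

# 2) A decimal number type, analogous to DECIMAL(8,2) in SQL.
price = Decimal("19.99")
total = price * 3
print(total)                    # 59.97, exact decimal digits
```

Note that constructing `Decimal` from a string (not a float) is what preserves exactness; `Decimal(19.99)` would inherit the float's binary rounding error.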
In a nutshell, it's for pretty much the same reason that one-third cannot be exactly expressed in decimal. Have a look at David Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for details.
To add a clarification: a floating point number stored in a computer behaves as described in the other posts here because it is stored in binary format. This means that unless its value can be written exactly as a binary fraction (a mantissa that is a sum of powers of two, scaled by a power-of-two exponent), it cannot be represented exactly.
Some systems, on the other hand, store fractional numbers in decimal (SQL Server's DECIMAL and NUMERIC data types, and Oracle's NUMBER data type, for example), and their internal representation is therefore exact for any number with a finite decimal expansion. But numbers that do not terminate in decimal, such as one-third, still cannot be represented exactly.
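The trade-off can be illustrated with Python's `decimal` module, which stores base-10 digits like the SQL types mentioned above (the precision setting here is an arbitrary choice for the example):

```python
from decimal import Decimal, getcontext

# In base 10, 0.1 is exact, so the classic float surprise disappears.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# But 1/3 has no finite decimal expansion, so it is rounded too.
getcontext().prec = 10
third = Decimal(1) / Decimal(3)
print(third)                    # 0.3333333333 (rounded to 10 digits)
print(third * 3 == Decimal(1))  # False: the rounding error remains
```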