The floating point type double cannot store values like 0.1 precisely. That is absolutely true. And calculations in floating point arithmetic cannot yield a precise result when the inputs are imprecise. I would be the first to say: replace floating point arithmetic with fixed point.

But the difference between the value stored in the double variable and the decimal number rounded to 2 decimal digits is **in this case** so small that it works correctly for reasonably small amounts of money. "Reasonably small" means lower than 90071992547409.93 in this case.

A double has 52+1 bits of significand. Roughly 7 or 8 bits are needed for the fractional part to distinguish correctly between values like 0.01 and 0.02. That still leaves more bits of the significand for the integral part than a 32-bit integer has.

To find the first incorrect value I wrote a program in C. It prints decimal numbers using both approaches. Just compare the strings: the first line that differs shows the lowest incorrectly displayed number among the tested values. (Note that the program samples only the first hundred values above each power of two, so an exhaustive scan might find a smaller failing amount.)

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        for (int i = 0; i < 62; ++i) {
            long long int num1 = 1LL << i;          /* scan near each power of two */
            for (int j = 0; j < 100; ++j) {
                long long int num2 = num1 + j;
                /* exact fixed-point rendering vs. double rendering */
                printf("%lli.%02lli %.2f\n",
                       num2 / 100LL, num2 % 100LL, (double)num2 / 100.0);
            }
        }
    }