Dart rounding errors

So doing a print(0.3 - 0.2); will print 0.09999999999999998.
I know binary floating-point arithmetic can't represent every decimal value exactly, but I was hoping for something built into Dart that would at least try to kill off the rounding errors.
Getting the above to show 0.1 takes some conversions back and forth, which I'd rather not do:
print(num.parse((0.3 - 0.2).toStringAsPrecision(8)));
-> 0.1
What are my options for not going crazy working with decimals? Is there anything built into Dart to help with this? There seems to be only one library that does the above: https://pub.dev/packages/decimal?

You can round a value to the nearest multiple of a tenth (or of any other fraction) by:
double roundTo(double value, num precision) => (value * precision).round() / precision;
You can then do that to either the initial values or the final result:
int precision = 10;
final result = roundTo(0.3 - 0.2, precision);
// or inlined:
final result = ((0.3 - 0.2) * precision).round() / precision;
This ensures that the computation is done on the original values, and you only round the final result.
If you know that your input values all have the same scale, you can do as #brian-gorman suggests and scale the values first, and then round and down-scale the result at the end. For that use, I would recommend rounding early, on the incoming values, so that the computation will not accumulate imprecision.
(That doesn't matter for a single subtraction, but for a more complicated computation, it might).
final difference = (0.3 * precision).round() - (0.2 * precision).round();
final result = difference / precision;
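The same round-to-a-multiple trick works in any language with IEEE-754 doubles. Here is a sketch of the idea in Python (the helper name `round_to` is just an illustration, not part of any library):

```python
def round_to(value, factor):
    """Round value to the nearest multiple of 1/factor."""
    return round(value * factor) / factor

# The raw subtraction carries the usual double imprecision...
print(0.3 - 0.2)                # 0.09999999999999998
# ...but rounding only the final result recovers the intended value.
print(round_to(0.3 - 0.2, 10))  # 0.1
```

As in the Dart version, the computation runs on the original values and only the final result is rounded.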
For the record: The result you are seeing is not a rounding error, it is the correct result—for 64-bit floating point numbers as defined by IEEE-754.
You will see the same result in any other language using normal doubles, including JavaScript and Java. The result of 0.3 - 0.2 is not the double represented as 0.1. It is a different number, so it must have a different toString representation.
None of 0.1, 0.2, or 0.3 can be represented exactly as doubles. The actual values are:
0.1 : 0.1000000000000000055511151231257827021181583404541015625
0.2 : 0.200000000000000011102230246251565404236316680908203125
0.3 : 0.299999999999999988897769753748434595763683319091796875
0.3 - 0.2 : 0.09999999999999997779553950749686919152736663818359375
So when you write the syntax 0.3 and 0.2, you are really specifying the precise double values above.
The result becomes what it is because the precise mathematical calculation of 0.3 - 0.2 is
0.299999999999999988897769753748434595763683319091796875
- 0.200000000000000011102230246251565404236316680908203125
= 0.099999999999999977795539507496869191527366638183593750
and that result is itself an exact double value. So the result 0.09999... is precisely the difference between "0.3" and "0.2", and is therefore the correct result of the subtraction. (The mistake lies in assuming that you ever had 0.3 and 0.2 as values; you never did.) It is also not the same as the number represented by 0.1.
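The exact decimal expansions quoted above can be inspected directly. Python (which uses the same 64-bit doubles as Dart) exposes them via `decimal.Decimal`, which, when constructed from a float, shows the precise binary value the literal actually denotes. A sketch, with the context precision raised so the subtraction stays exact:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # enough digits to keep the subtraction exact

# Constructing a Decimal from a float exposes the exact stored double.
print(Decimal(0.1))                # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.3) - Decimal(0.2)) # the exact difference of the two stored doubles
print(Decimal(0.3 - 0.2))          # the same value: the difference is itself an exact double
```

The last two lines printing the same value is the whole point: the subtraction introduces no error of its own; the inputs were never 0.3 and 0.2 to begin with.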

What most people do is multiply, do the math, round, and then divide by the desired precision:
final precision = 10;
final difference = (0.3 * precision) - (0.2 * precision);
final result = difference.round() / precision; // will give you 0.1
The .round() ensures that any trailing decimals left over after the math get rounded off before you scale back down to the desired precision.

Related

Rails / Ruby how to always show decimal precision

I am currently converting a Python statistics library to Ruby, and it needs to produce numbers with high decimal precision. For example, I did this:
i = 1
n = 151
sum = (i - 3/8) / (n + 1/4)
It results in 0.
My question is how to always show decimal precision automatically when I do this kind of computation?
My desired output is:
0.004132231404958678
In Ruby, all arithmetic operations result in a value of the same type as the operands (the one having better precision.)
That said, 3/8 and 1/4 are integer divisions, each resulting in 0.
To make your example working, you are to ensure you are not losing precision anywhere:
i = 1.0
n = 151.0
sum = (i - 3.0/8) / (n + 1/4.0)
Please note that, as in most (if not all) languages, Float arithmetic is inexact:
0.1 + 0.2 #⇒ 0.30000000000000004
If you need an exact value, you might use BigDecimal or Rational.
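Ruby's Rational stores an exact numerator and denominator; Python's `fractions.Fraction` is the direct analogue, so the same computation can be sketched as:

```python
from fractions import Fraction

i = Fraction(1)
n = Fraction(151)
# Exact arithmetic throughout: no integer truncation, no binary rounding.
s = (i - Fraction(3, 8)) / (n + Fraction(1, 4))
print(s)          # 1/242
print(float(s))   # 0.004132231404958678
```

Converting to a float only at the very end gives the desired 0.004132231404958678 with no intermediate loss.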

Why float value is rounded in playground but not in project in Swift?

I'm using a Float value in my project. When I access it in the Xcode project, it expands out to billionths-place decimals, but in a playground it works perfectly.
In xcodeproj:
let sampleFloat: Float = 0.025
print(sampleFloat) // It prints 0.0250000004
In Playground:
let sampleFloat: Float = 0.025
print(sampleFloat) // It prints 0.025
Any clue what's happening here? How can I avoid the expansion in the Xcode project?
Lots of comments, but nobody's posted all the info as an answer yet.
The answer is that internally, floating-point numbers are represented in binary, using powers of 2.
In base 10, the tenths digit represents how many 1/10ths are in the value. The hundredths digit represents how many 1/100ths are in the value, the thousandths digit represents how many 1/1000ths are in the value, and so on. In base 10, you can't represent 1/3 exactly. That is 0.33333333333333333...
In binary floating point, the first fractional binary digit represents how many 1/2s are in the value. The second digit represents how many 1/4ths are in the value, the next digit represents how many 1/8ths are in the value, and so on. There are some (lots of) decimal values that can't be represented exactly in binary floating point. The value 0.1 (1/10) is one such value. It is approximated by something like 1/16 + 1/32 + 1/256 + 1/512 + 1/4096 + 1/8192.
The value 0.025 is another value that can't be represented exactly in binary floating point.
There is an alternate number format, NSDecimalNumber (Decimal in Swift 3) that uses decimal digits to represent numbers, so it CAN express any decimal value exactly. (Note that it still can't express a fraction like 1/3 exactly.)
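The 32-bit behaviour is easy to reproduce outside Swift. Python's `struct` module can round-trip a value through single precision, showing the drift that `Float` picks up, while `decimal.Decimal('0.025')` (the analogue of `NSDecimalNumber`/`Decimal`) stores the decimal value exactly. A sketch:

```python
import struct
from decimal import Decimal

# Force 0.025 through 32-bit precision, as Swift's Float stores it.
as_float32 = struct.unpack('f', struct.pack('f', 0.025))[0]
print(as_float32)        # slightly off from 0.025, like Swift's 0.0250000004
# A decimal type stores 0.025 exactly.
print(Decimal('0.025'))  # 0.025
```

The same value prints as 0.025 in a playground only because the playground formats with fewer digits; the stored bits are identical in both places.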

Lua decimal precision loss

Can someone explain why, in Lua, running:
return 256.65 * 1000000 + .000000005 - 256 * 1000000 gives 649999.99999997
whereas
return 255.65 * 1000000 + .000000005 - 255 * 1000000 and
return 268.65 * 1000000 + .000000005 - 268 * 1000000 give 650000.0 ?
From what I can see, it seems to be an issue strictly for the decimal part .65 (and apparently also .15) and for whole numbers in the range 256 to 267. I know this is related to doing these calculations with floating point, but I'm still curious as to what is special about these values in particular.
What is special about these values is that 0.65 is not a binary fraction (even though it is a decimal fraction), and so cannot be represented exactly in floating point.
For the record, this is not specific to Lua. The same thing will happen in C.
For the same reason that 10/3 is a repeating fraction in base 10. In base 3, dividing by 3 would give a terminating representation. In base 2 -- which is used to represent numbers in a computer -- the numbers you're producing similarly turn into fractions that cannot be exactly represented.
Further reading.
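Since Lua's numbers are the same IEEE-754 doubles used by C and Python, the effect is reproducible anywhere. A Python sketch, using `decimal.Decimal` to expose the values actually stored behind the literals:

```python
from decimal import Decimal

# Neither literal is stored exactly; what differs is the rounding residue.
print(Decimal(256.65))  # the exact double behind the literal 256.65
print(Decimal(255.65))  # a different residue for 255.65

print(256.65 * 1000000 + .000000005 - 256 * 1000000)  # not exactly 650000.0
print(255.65 * 1000000 + .000000005 - 255 * 1000000)  # within rounding of 650000.0
```

Whether the final result lands on exactly 650000.0 depends on how the individual rounding errors happen to cancel, which is why nearby whole numbers behave differently.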

Math.random on non whole numbers

How can I generate numbers that are less than 1?
For example, I would like to generate numbers from 0.1 to 0.9.
what I've tried:
math.random(0.1,0.9)
Lua's math.random() with two arguments returns an integer within the specified range.
When called with no arguments, it returns a pseudo-random real number between 0.0 and 1.0.
To get real numbers in a specified range, you need to do your own scaling; for example:
math.random() * 0.8 + 0.1
will give you a random real number between 0.1 and 0.9. More generally:
math.random() * (hi - lo) + lo
which you can wrap in your own function if you like.
But I'll note that that's a fairly peculiar range. If you really want a random number selected from 0.1, 0.2, 0.3, 0.4, ..., 0.9, then you should generate an integer in the range 1 to 9 and then divide it by 10.0:
math.random(1, 9) / 10.0
Keep in mind that most real numbers cannot be represented exactly in floating-point.
You can use math.random() (no args) to generate a number between 0 and 1, and use that to blend between your two target numbers.
-- generates a random real number between a (inclusive) and b (exclusive)
function rand_real(a, b)
  return a + (b - a) * math.random()
end
(math.random(10,90)) / 100
This generates a number from 10 to 90 and the division gives you a number from 0.1 to 0.9.
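The same scaling idea carries over directly to Python, with `random.random()` in place of Lua's zero-argument `math.random()` (the helper names below are illustrative, not from any library):

```python
import random

def rand_real(lo, hi):
    """Uniform real number in the range [lo, hi)."""
    return lo + (hi - lo) * random.random()

def rand_tenth():
    """One of 0.1, 0.2, ..., 0.9, chosen uniformly."""
    return random.randint(1, 9) / 10

print(rand_real(0.1, 0.9))  # e.g. 0.5377...
print(rand_tenth())         # e.g. 0.3
```

The integer-then-divide version is the right choice when you want exactly the nine values 0.1 through 0.9 rather than any real number in between.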

Strange for loop problem

I'm not sure if this is a bug or not, so I thought that maybe you folks might want to take a look.
The problem lies with this code:
for i=0,1,.05 do
  print(i)
end
The output should be:
0
.05
.1
--snip--
.95
1
Instead, the output is:
0
.05
.1
--snip--
.95
This same problem happened with a while loop:
w = 0
while w <= 1 do
  print(w)
  w = w + .05
end
--output:
0
.05
.1
--snip--
.95
The value of w is 1, which can be verified by a print statement after the loop.
I have verified as much as possible that any step less than or equal to .05 will produce this error; any step above .05 seems to be fine. I verified that a step of 1/19 (0.052631579) does print a 1. (Obviously, a decimal denominator like 19.9 or 10.5 will not produce output covering [0,1] inclusive.) Is there a possibility that this is not an error in the language? Both the interpreter and a regular Lua file produce this error.
This is a rounding problem. The issue is that 0.05 is represented as a binary floating-point number, and it does not have an exact representation in base 2: it is a repeating fraction, like 1/3 in base 10. When added repeatedly, the rounding error accumulates into a number which is very slightly more than 1. It is only very, very slightly more than 1, so printing it shows 1 as the output, but it is not exactly 1.
> x=0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05
> print(x)
1
> print(1==x)
false
> print(x-1)
2.2204460492503e-16
So, as you can see, although really close to 1, it is actually slightly more.
A similar situation can come up in decimal when we have repeating fractions. If we were to add together 1/3 + 1/3 + 1/3, but we had to round to six digits to work with, we would add 0.333333 + 0.333333 + 0.333333 and get 0.999999 which is not actually 1. This is an analogous case for binary math. 1/20 cannot be precisely represented in binary.
Note that the rounding is slightly different for multiplication so
> print(0.05*20-1)
0
> print(0.05*20==1)
true
As a result, you could rewrite your code to say
for i=0,20,1 do
print(i*0.05)
end
And it would work correctly. In general, it's advisable not to use floating point numbers (that is, numbers with decimal points) for controlling loops when it can be avoided.
This is a result of floating-point inaccuracy. A binary64 floating-point number cannot store 0.05 exactly, so it is rounded to a value very slightly more than 0.05. That rounding error persists through the repeated sum, so the final value is slightly more than 1.0, the loop condition fails, and 1 is never displayed.
This is a floating point thing. Computers don't represent floating point numbers exactly. Tiny rounding errors make it so that 20 additions of +0.05 does not result in precisely 1.0.
Check out this article: "What every programmer should know about floating-point arithmetic."
To get your desired behavior, you could loop i over 0..20 and set f = i*0.05.
This is not a bug in Lua. The same thing happens in the C program below. Like others have explained, it's due to floating-point inaccuracy, more precisely, to the fact that 0.05 is not a binary fraction (that is, does not have a finite binary representation).
#include <stdio.h>

int main(void)
{
    double i;
    for (i = 0; i <= 1; i += 0.05) printf("%g\n", i);
    return 0;
}
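The integer-counter fix carries over verbatim to other languages. A Python sketch demonstrating both the failure of repeated addition and the working rewrite:

```python
# Repeated addition of 0.05 accumulates rounding error...
total = 0.0
for _ in range(20):
    total += 0.05
print(total == 1.0)       # False -- total is slightly more than 1.0

# ...while scaling an integer counter hits the endpoint exactly.
values = [i * 0.05 for i in range(21)]
print(values[-1] == 1.0)  # True -- 20 * 0.05 rounds to exactly 1.0
```

Multiplication rounds each product independently, so 20 * 0.05 lands on 1.0, whereas twenty chained additions compound their errors.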
