Elixir floating point division accuracy [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 months ago.
I'm observing some unexpected floating point behaviour in Elixir.
4.7 / 0.1 = 47.0 (good!)
4.8 / 0.1 = 47.99999999999999 (bad!)
4.9 / 0.1 = 49.0 (good!)
While I understand the limitations of fp accuracy, in this case, the answer just looks wrong.
Curiously, I tried this in Python as well and got the same result, which is even more mysterious. When I changed the expression to 4.8 * (1 / 0.1), I got the right answer (48.0).
What is going on here?

This is actually normal behavior for IEEE 754 floating point numbers, and it is unrelated to Erlang/Elixir. Python or Node.js would return the same thing.
For more information, you can read this detailed explanation.
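A quick way to see what is happening (a Python sketch here, since the same result shows up in Python; Elixir floats are the same IEEE 754 binary64 values) is to print the exact numbers the literals actually store:

from decimal import Decimal

# Decimal(float) reveals the exact binary64 value behind each literal.
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(4.8))   # 4.79999999999999982236431605997495353221893310546875
print(4.8 / 0.1)      # 47.99999999999999
                      # (the quotient of the two exact values above, rounded to
                      #  the nearest double, lands just below 48)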

While I understand the limitations of fp accuracy
Clearly not :-)
As always, if you need decimal precision, use Decimal:
iex(2)> Decimal.div(Decimal.new("4.8"), Decimal.new("0.1"))
#Decimal<48>
iex(3)> Decimal.mult(Decimal.new("4.8"), Decimal.new("10"))
#Decimal<48.0>

Why can't Dart calculate simple double value? [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 1 year ago.
I found that this very simple calculation gives an incorrect result: 0.19999999999999998.
(This happens on DartPad too.)
void main() {
  print(-0.1 + 0.3);
}
Why does this happen, and how can I avoid it?
That's because you are losing precision when using the type double.
To avoid that you have a few alternatives:
Use a decimal library like https://pub.dev/packages/decimal
Scale the values by a power of ten and do the math using integers (see the sketch below); there is more information in a related SO question: Dart double division precision
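As a minimal sketch of that scaled-integer idea (written in Python purely for illustration; the same approach works in Dart with int):

# Work in tenths so every value is an exact integer.
a = -1              # -0.1 expressed in tenths
b = 3               #  0.3 expressed in tenths
result = a + b      # 2 tenths, computed exactly
print(result / 10)  # 0.2, converting back to a float only for display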
This is due to the internal representation of floating point numbers that crops up in almost all programming languages; cf. this tutorial on floating point representation.
What's going on under the hood is that the string "-0.1" is converted into a string of bits that corresponds to a particular number approximately equal to -0.1; the same goes for 0.3. When you add them, these approximation errors line up in such a way as to produce a result that differs slightly from the one you would get by doing the math symbolically.
If you really need "exact" math on numbers like this, you can look into packages that provide either a Decimal or a Rational type.
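As a sketch of the Rational route (Python's fractions module here, just to illustrate; a Dart package as linked above plays the same role), exact rational arithmetic gives the symbolic answer:

from fractions import Fraction

# Construct the values from strings so they are exact rationals, not doubles.
result = Fraction("-0.1") + Fraction("0.3")
print(result)         # 1/5
print(float(result))  # 0.2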

why does decrementing by 0.1 within for loop produce this undesired result in lua [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 2 years ago.
j = 1
for i = 1, 10 do
  j = j - 0.1
  print(j)
end
the output is:
0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
1.3877787807814e-16
where the last entry should be 0.
Note: 0.1 - 0.1 returns 0.0, but repeatedly doing j = j - 0.1 starting from j >= 0.3 produces this result.
Numbers in Lua are doubles, i.e. the double-precision floating-point format, and floating-point numbers carry precision error. In fact, 0.1 is not exactly 0.1; it is stored as the closest representable double to 0.1.
A possible fix would be to use integers, or an integer with a separate decimal part (see the sketch below).
Related question with answer: C++ How to avoid floating-point arithmetic error
Round-off error!
0.1 cannot be represented exactly in IEEE 754 64-bit floating point:
1/10 has a recurring representation in binary.
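A minimal sketch of the integer-counter fix (Python here just for illustration; the same pattern works in Lua by keeping the counter an integer):

# Count in integer tenths; the loop arithmetic is exact, and the division
# back to a float happens only once per printed value.
j_tenths = 10
for i in range(10):
    j_tenths = j_tenths - 1
    print(j_tenths / 10)   # 0.9, 0.8, ..., 0.1, 0.0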

Understanding the modulo operation for negative integers [duplicate]

This question already has answers here:
C and Python - different behaviour of the modulo (%) operation [duplicate]
(6 answers)
How does the modulo (%) operator work on negative numbers in Python?
(12 answers)
Closed 3 years ago.
This answer explains the modulo operation for positive numbers perfectly, but how does the modulo operation work for negative integers? For example, -1 % 60 == 59 in Python, but -1 % 60 == -1 in C.
The C answer makes sense to me in that the logic is the same as for a modulo operation involving positive integers. But what's the skinny with the Python answer?
What is the general principle behind the modulo operation for negative integers?
This answer doesn't explain the general principle.
TIA.
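A short way to see the difference (a Python sketch, with the C behaviour reconstructed by hand for illustration): Python floors the quotient, so the remainder takes the sign of the divisor; C truncates the quotient toward zero, so the remainder takes the sign of the dividend. Both satisfy a == q*b + r.

import math

a, b = -1, 60

# Python: floored division, remainder has the sign of the divisor.
print(a // b, a % b)     # -1 59   because -1 == (-1)*60 + 59

# C-style: quotient truncated toward zero, remainder has the sign of the dividend.
q = math.trunc(a / b)    # 0
r = a - q * b            # -1, matching C's -1 % 60 == -1
print(q, r)              # 0 -1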

Floating point literals

I've two questions regarding writing IEEE floating-point constants, as accepted by Z3's FPA logics:
First, in this question, Christoph used the example:
((_ asFloat 11 53) roundTowardZero 0.5 0))
I'm wondering what the final 0 signifies? I've tried:
((_ asFloat 11 53) roundTowardZero 0.5))
And that seems to work as well. Rümmer's paper doesn't seem to require the final 0 either, so I'm curious what role it plays.
Second, when I get a model from Z3, it prints floating-point literals like so:
(as +1.0000000000000002220446049250313080847263336181640625p1 (_ FP 11 53))
How do I interpret the p1 suffix? What other suffixes are possible?
Thanks.
Thanks for pointing these issues out. Both of them arise because there is no agreed-upon standard for floating-point literals in the input or output yet.
The final 0 in the example represents the (binary) exponent, i.e., (... 0.5 1) == 1.0. We added this simply because numbers would sometimes require a lot of space if the exponent could not be specified separately; this way, we can often specify them quite succinctly.
The p1 suffix in the output represents the binary exponent: just as e8 means 10^8, the suffix p8 means 2^8. Z3 currently uses only binary exponents, so there will always be a p-suffix here, but this may change in the future. The rest of the number is given enough decimal digits to represent a precise result.
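To make the decoding concrete, here is a small sketch (Python, purely for illustration) that reconstructs the value of the literal above as significand times 2^exponent:

from decimal import Decimal, getcontext

getcontext().prec = 60

# Z3's "<significand>p<exponent>" output: a decimal significand scaled by a power of two.
significand = Decimal("1.0000000000000002220446049250313080847263336181640625")
exponent = 1
print(significand * 2 ** exponent)   # the exact value, 2 + 2**-51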
Note that the output format is not agreed upon yet by the SMT community. This may change in the future. For instance, there are discussions about whether this should be done in IEEE bit-vector format or an intermediate format that lies somewhere between reals and non-IEEE bit-vectors.

Precision in Erlang

The following code gives me 5.999999999999998 as the result, but the right answer is 6.
Alpha = math:acos((4*4 + 5*5 - 3*3) / (2*4*5)).
Area = 1/2 * 4 * 5 * math:sin(Alpha).
Is it possible to get 6?
You have run into a problem so common that it has its own web site, What Every Programmer Should Know About Floating-Point Arithmetic. The problem is due to the way floating-point arithmetic works in pretty much every CPU on the market that supports FP arithmetic; it is not specific to Erlang.
If regular floating point arithmetic does not give you the precision or accuracy you need, you can use an arbitrary precision arithmetic library instead of the built-in arithmetic. Perhaps the most well-known such library is GMP, but you'd have to wrap it in NIFs to use it from Erlang.
There is at least one pure-Erlang alternative, but I have no experience with it, so I cannot personally endorse it.
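In this particular case the exact answer is 6 because the triangle has sides 3, 4 and 5, and sin(acos(x)) = sqrt(1 - x^2); here 1 - (4/5)^2 = 9/25, whose square root is exactly 3/5. A small sketch of carrying that out with exact rationals (Python's fractions module, shown only to illustrate what such a library buys you):

from fractions import Fraction
from math import isqrt

# cos(Alpha) = (4^2 + 5^2 - 3^2) / (2*4*5) = 4/5, an exact rational.
cos_alpha = Fraction(4*4 + 5*5 - 3*3, 2*4*5)

# sin(Alpha) = sqrt(1 - cos^2) = sqrt(9/25) = 3/5, again exact because 9/25 is a perfect square.
sin_sq = 1 - cos_alpha**2
sin_alpha = Fraction(isqrt(sin_sq.numerator), isqrt(sin_sq.denominator))

area = Fraction(1, 2) * 4 * 5 * sin_alpha
print(area)   # 6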
The calculation is done using standard floating point arithmetic on your hardware. Sometimes rounding errors show up.
Do you really need 15 digits of precision?
To get a more "exact" value there are multiple options:
> round(Area). % Will round to integer
6
or you could round to some precision
round(Area * 10000000) / 10000000.
6.0
If the purpose is to print the value, then the default output for floats with ~f (six decimal places) gives you less precision.
io:format("~f~n", [Area]).
6.000000
ok
or with a specific precision
io:format("~.14f~n", [Area]).
6.00000000000000
ok
HTH
