Erlang float_to_binary truncates decimals strangely

The Erlang float_to_binary function truncates decimals strangely. For instance, I would expect it to convert 0.45 with no decimal places to "0". Instead we get (example in Elixir):
iex> :erlang.float_to_binary(0.45, [decimals: 0])
"1"
iex> :erlang.float_to_binary(0.445, [decimals: 0])
"1"
iex> :erlang.float_to_binary(0.444, [decimals: 0])
"0"
Thus, it seems like rounding is being applied iteratively from right to left until the desired number of decimals is reached.
Is this expected behavior? Why doesn't it either round correctly or just truncate? Both of those options seem much more predictable to me.

This was a bug in Erlang. It was fixed on Jan 15, 2018, and the fix was first included in Erlang/OTP 20.3. If you upgrade to Erlang/OTP 20.3 or later, you should get "0" for 0.445.
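For comparison, this is what single-step rounding produces. The snippet below is a small Python illustration of the expected behaviour (C-style %f formatting rounds once, directly to the requested precision), not the Erlang implementation itself:
# Rounding happens once, to the requested number of decimal places,
# not digit by digit from the right:
print("%.0f" % 0.45)    # 0  (0.45 is below 0.5, so it rounds down)
print("%.0f" % 0.445)   # 0
print("%.0f" % 0.444)   # 0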

Related

ZKP, Gnark: Does AssertIsLessOrEqual work with negative numbers?

Does gnark's (zero-knowledge proof framework) AssertIsLessOrEqual work with negative numbers and the ecc.BN254 curve?
https://pkg.go.dev/github.com/consensys/gnark#v0.7.0/frontend
It seems most computations, including multiplication, work with negative numbers, but AssertIsLessOrEqual does not work as expected when given negative parameters.
gnark/bn254 works with unsigned numbers (field elements). When you pass -3, it is reduced modulo the field order and becomes 21888242871839275222246405745257275088548364400416034343698204186575808495614.
What may mislead you is that frontend.API.Println will print 21888242871839275222246405745257275088548364400416034343698204186575808495614 as -3.
AssertIsLessOrEqual, however, will treat -3 as 21888242871839275222246405745257275088548364400416034343698204186575808495614.
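To make the mapping concrete, here is a small Python sketch (not gnark code); the modulus below is the BN254 scalar field order that gnark's bn254 backend uses:
# BN254 (alt_bn128) scalar field order
r = 21888242871839275222246405745257275088548364400416034343698204186575808495617
print(-3 % r)       # 21888242871839275222246405745257275088548364400416034343698204186575808495614
print(-3 % r <= 5)  # False: to a comparison, "-3" is a huge unsigned field element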

Large lua numbers are being printed incorrectly

I have the following test case:
Lua 5.3.2 Copyright (C) 1994-2015 Lua.org, PUC-Rio
> foo = 1000000000000000000
> bar = foo + 1
> bar
1000000000000000001
> string.format("%.0f", foo)
1000000000000000000
> string.format("%.0f", bar)
1000000000000000000
That last line should be 1000000000000000001, since that's the value of bar, but for some reason it isn't. This doesn't only apply to 1000000000000000000; I've yet to find a number above it that formats correctly. Can anyone explain why this happens?
You're formatting the number as floating-point, not integer. That's what %.0f is doing. At some point, floats lose precision. double, for example, will lose precision after about 16 decimal digits.
If you want to format an integer as an integer, then you need to format it as an integer, using standard printf rules:
string.format("%i", bar)
log2(1000000000000000000) is between 59 and 60, which means that the binary representation of that number needs 60 bits. Double-precision floating point numbers have only 53 bits of precision, plus a power-of-two exponent with 11 bits of range. So to store that large a number as floating point (which is what you requested with the %f format specifier), seven bits of precision are chopped off the end of the number, and the whole thing is multiplied by a power of two to get it back in range (2^59 in this case, I think). Chopping off those final bits removes the precision that allows 1000000000000000000 and 1000000000000000001 to be distinct from each other.
(This is not a particularly precise description of floating point, apologies if my numbers or descriptions are not exact.)
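To see the same collapse outside Lua, here is a quick Python illustration (Python uses the same IEEE 754 doubles that %f converts to):
foo = 10**18
bar = foo + 1
print(bar)                        # 1000000000000000001 (exact integer arithmetic)
print(float(foo) == float(bar))   # True: both round to the same 53-bit double
print("%.0f" % bar)               # 1000000000000000000 (the %f path goes through a float)
print(2**53)                      # 9007199254740992: above this, not every integer is representable as a double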

Unexpected result subtracting decimals in ruby [duplicate]

Can somebody explain why multiplying by 100 here gives a less accurate result but multiplying by 10 twice gives a more accurate result?
± % sc
Loading development environment (Rails 3.0.1)
>> 129.95 * 100
12994.999999999998
>> 129.95*10
1299.5
>> 129.95*10*10
12995.0
If you do the calculations by hand in double-precision binary, which is limited to 53 significant bits, you'll see what's going on:
129.95 = 1.0000001111100110011001100110011001100110011001100110 x 2^7
129.95*100 = 1.1001011000010111111111111111111111111111111111111111011 x 2^13
This is 56 significant bits long, so rounded to 53 bits it's
1.1001011000010111111111111111111111111111111111111111 x 2^13, which equals
12994.999999999998181010596454143524169921875
Now 129.95*10 = 1.01000100110111111111111111111111111111111111111111111 x 2^10
This is 54 significant bits long, so rounded to 53 bits it's 1.01000100111 x 2^10 = 1299.5
Now 1299.5 * 10 = 1.1001011000011 x 2^13 = 12995.
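If you would rather not do the binary arithmetic by hand, the exact stored values can be inspected directly; the snippet below is a Python illustration (Ruby's Floats are the same IEEE 754 doubles):
from decimal import Decimal
print(Decimal(129.95) < Decimal("129.95"))   # True: the stored double is slightly below 129.95
print(Decimal(129.95 * 100))                 # 12994.999999999998181010596454143524169921875
print(129.95 * 10 == 1299.5)                 # True: the intermediate result rounds exactly to 1299.5
print(129.95 * 10 * 10)                      # 12995.0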
First off: you are looking at the string representation of the result, not the actual result itself. If you really want to compare the two results, format both of them explicitly using String#%, and format them the same way.
Secondly, that's just how binary floating point numbers work. They are inexact, they are finite and they are binary. All three mean that you get rounding errors, which generally look totally random, unless you happen to have memorized the entirety of IEEE754 and can recite it backwards in your sleep.
There is no floating point number exactly equal to 129.95. So your language uses a value which is close to it instead. When that value is multiplied by 100, the result is close to 12995, but it just so happens to not equal 12995. (It is also not exactly equal to 100 times the original value it used in place of 129.95.) So your interpreter prints a decimal number which is close to (but not equal to) the value of 129.95 * 100 and which shows you that it is not exactly 12995. It also just so happens that the result 129.95 * 10 is exactly equal to 1299.5. This is mostly luck.
Bottom line is, never expect equality out of any floating point arithmetic, only "closeness".
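One way to act on that advice, sketched here in Python rather than Ruby, is to compare with a tolerance instead of ==:
import math
a = 129.95 * 100
b = 12995.0
print(a == b)                            # False: exact equality fails
print(math.isclose(a, b, rel_tol=1e-9))  # True: equal within a relative tolerance
print(abs(a - b) < 1e-9)                 # True: equal within an absolute tolerance
(The Ruby equivalent of the last line is (a - b).abs < 1e-9.)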

Why delphi's RoundTo method behaves differently? [duplicate]

I expected that the result would be 87.29. I also tried SimpleRoundTo, but it produces the same result.
In the help there is also a "strange" example:
ms-help://embarcadero.rs2010/vcl/Math.RoundTo.html
RoundTo(1.235, -2) => 1.24
RoundTo(1.245, -2) => 1.24 //???
Does anybody know which function I need to get the result of 87.29? I mean: if the last digit is >= 5 round up, if < 5 round down. As taught in school :)
I use Delphi2010, and SetRoundMode(rmNearest). I also tried with rmTruncate.
The value 87.285 is stored in a double variable.
Also strange:
SimpleRoundTo(87.285, -2) => 87.29
but
x := 87.285; //double
SimpleRoundTo(x, -2) => 87.28
The exact value 87.285 is not representable as a floating-point value in Delphi. A page on my Web site shows what that value really is, as Extended, Double, and Single:
Extended: 87.285 = + 87.28500 00000 00000 00333 06690 73875 46962 12708 95004 27246 09375
Double:   87.285 = + 87.28499 99999 99996 58939 48683 51519 10781 86035 15625
Single:   87.285 = + 87.28500 36621 09375
By default, floating-point literals in Delphi have type Extended, and as you can see, the Extended version of your number is slightly higher than 87.285, so it is correct that rounding to nearest would round up. But as a Double, the real number is slightly lower. That's why you get the number you expected if you explicitly store the number in a Double variable before calling RoundTo. There are overloads of that function for each of Delphi's floating-point types.
87.285 is not exactly representable and the nearest double is slightly smaller.
The classic reference on floating point is What Every Computer Scientist Should Know About Floating-Point Arithmetic.
For currency-based calculations, if that is indeed what this is, you should use a base-10 number type rather than base-2 floating point. In Delphi that means Currency.
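The same values can be inspected outside Delphi. The sketch below is in Python, which only has the Double format, so it reproduces the middle line of the three representations above:
from decimal import Decimal, ROUND_HALF_UP
print(Decimal(87.285))   # 87.284999999999996589394868351519107818603515625 (the Double value quoted above)
print(round(87.285, 2))  # 87.28: the stored double is just below 87.285, so round-to-nearest goes down
# A base-10 type avoids the problem entirely (analogous to using Currency in Delphi):
print(Decimal("87.285").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 87.29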

Round date to first year of the decade in Ruby

In Rails, I have a date saved in an instance variable. I need to grab the beginning of the decade it falls in. If @date.year is 1968, then I need to return 1960. How would I do that?
You can do this several ways. As suggested, you can always use integer division, which divides the number and truncates the remainder. So 1968/10 returns 196, and multiplying that by 10 gives you 1960. Or simply:
@date.year / 10 * 10
# => 1960
I prefer the method of using modular arithmetic. @date.year % 10 returns the remainder of dividing by 10, which you can then subtract from the year like so:
@date.year - (@date.year % 10)
# => 1960
The reason I prefer the latter is that integer division truncating the remainder may not be something that is obvious to everyone looking at your code. However, modular arithmetic works generally the same way in all programming languages.
Keep in mind that if you're trying to change the date, you need to use the appropriate method:
@date.change(:year => 1960)
Just divide and multiply integers: try @date.year / 10 * 10
Do an integer division by 10, and then multiply by 10.
1.9.3-p286 :001 > 1855/10
=> 185
1.9.3-p286 :002 > 185 * 10
=> 1850
The reason this works (in Ruby, C/C++, Python's integer division, and possibly many other languages) is that integer division always discards the remainder. This will not be the case if you are dividing by a floating-point number, however.
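The same point, sketched in Python purely for illustration (// is integer division, / is float division):
year = 1855
print(year // 10 * 10)   # 1850: integer division drops the remainder before multiplying back
print(year / 10 * 10)    # 1855.0: float division keeps the remainder, so nothing is rounded off
print(year - year % 10)  # 1850: the modular-arithmetic version from the answer above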
