I am currently converting a Python statistics library to Ruby, and it needs to produce numbers with high decimal precision. For example, I did this:
i = 1
n = 151
sum = (i - 3/8) / (n + 1/4)
This results in 0.
My question is: how can I get full decimal precision automatically when I do this kind of computation?
My desired output is:
0.004132231404958678
In Ruby, an arithmetic operation returns a value of the same type as its operands (when the types are mixed, the operand type with greater precision wins.)
That said, 3/8 and 1/4 are integer divisions, both resulting in 0.
To make your example work, ensure you are not losing precision anywhere:
i = 1.0
n = 151.0
sum = (i - 3.0/8) / (n + 1/4.0)
Please note that, as in most (if not all) languages, Float arithmetic is inexact:
0.1 + 0.2 #⇒ 0.30000000000000004
If you need an exact value, you might use BigDecimal or Rational.
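For example, a minimal sketch of the original computation using Rational, which keeps the result exact until you explicitly convert to Float:

```ruby
# Rational keeps every intermediate value exact; nothing is lost to
# binary rounding until the final to_f conversion.
i = 1
n = 151
sum = (i - Rational(3, 8)) / (n + Rational(1, 4))
puts sum       # => 1/242 (exact)
puts sum.to_f  # => 0.004132231404958678
```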
Related
So doing a print(0.3 - 0.2); will print 0.09999999999999998.
I know binary floating-point arithmetic cannot be exact for every decimal value, but I was hoping for something built into Dart that would at least try to suppress the rounding errors.
Getting the above to show 0.1 takes some conversions back and forth, which I'd rather not do:
print(num.parse((0.3 - 0.2).toStringAsPrecision(8)));
-> 0.1
What are my options for not going crazy working with decimals? Is there anything built into Dart to help with this? There seems to be only one library that does the above: https://pub.dev/packages/decimal?
You can round a value to the nearest multiple of 1/precision (for example, to the nearest tenth) by:
double roundTo(double value, double precision) => (value * precision).round() / precision;
You can then do that to either the initial values or the final result:
const precision = 10.0;
final result = roundTo(0.3 - 0.2, precision);
// or inlined:
final result = ((0.3 - 0.2) * precision).round() / precision;
This ensures that the computation is done on the original values, and you only round the final result.
If you know that your input values all have the same scale, you can do as @brian-gorman suggests and scale the values first, then round and down-scale the result at the end. For that use, I would recommend rounding early, on the incoming values, so that the computation will not accumulate imprecision.
(That doesn't matter for a single subtraction, but for a more complicated computation, it might).
final difference = (0.3 * precision).round() - (0.2 * precision).round();
final result = difference / precision;
For the record: The result you are seeing is not a rounding error, it is the correct result—for 64-bit floating point numbers as defined by IEEE-754.
You will see the same result in any other language using normal doubles, including JavaScript and Java. The result of 0.3 - 0.2 is not the double represented as 0.1. It is a different number, so it must have a different toString representation.
None of 0.1, 0.2, or 0.3 can be represented exactly as a double. The actual values are:
0.1 : 0.1000000000000000055511151231257827021181583404541015625
0.2 : 0.200000000000000011102230246251565404236316680908203125
0.3 : 0.299999999999999988897769753748434595763683319091796875
0.3 - 0.2 : 0.09999999999999997779553950749686919152736663818359375
So when you write the syntax 0.3 and 0.2, you are really specifying the precise double values above.
The result becomes what it is because the precise mathematical calculation of 0.3 - 0.2 is
0.299999999999999988897769753748434595763683319091796875
- 0.200000000000000011102230246251565404236316680908203125
= 0.099999999999999977795539507496869191527366638183593750
and the result is an exact double value. So the result 0.09999... is precisely the difference between "0.3" and "0.2" and therefore the correct result for the subtraction. (The mistake is assuming that you actually have 0.3 and 0.2 as values, you never did). It is also not the same as the number represented by 0.1.
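These exact values are language-independent; for instance, they can be inspected from Ruby, whose Float is the same IEEE-754 double, via Float#to_r:

```ruby
# Float#to_r returns the mathematically exact value stored in the double.
p 0.1.to_r                      # => (3602879701896397/36028797018963968)
p(0.1.to_r == Rational(1, 10))  # => false: the double is not exactly 1/10
p((0.3 - 0.2) == 0.1)           # => false: two different doubles
```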
What most people do is multiply, do the math, round, and then divide by the desired precision:
final precision = 10;
final difference = (0.3 * precision) - (0.2 * precision);
final result = difference.round() / precision; // will give you 0.1
The .round() ensures that any trailing decimals left over after the math get rounded off before you return to the desired precision.
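For comparison, the same multiply-round-divide pattern in Ruby (whose Float is also an IEEE-754 double) gives the expected 0.1:

```ruby
precision = 10
# 0.3 * 10 and 0.2 * 10 each round to exactly 3.0 and 2.0 in doubles
difference = (0.3 * precision) - (0.2 * precision)
result = difference.round / precision.to_f
p result  # => 0.1
```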
Can someone explain why in lua running:
return 256.65 * 1000000 + .000000005 - 256 * 1000000 gives 649999.99999997
whereas
return 255.65 * 1000000 + .000000005 - 255 * 1000000 and
return 268.65 * 1000000 + .000000005 - 268 * 1000000 give 650000.0 ?
From what I can see, it seems to be an issue strictly for the decimal part .65 (and apparently also .15), and for whole numbers within the range 256 to 267. I know this is related to doing these calculations with floating point, but I'm still curious as to what is special about these values in particular.
What is special about these values is that 0.65 is not a binary fraction (even though it is a decimal fraction), and so cannot be represented exactly in floating point.
For the record, this is not specific to Lua. The same thing will happen in C.
For the same reason that 10/3 is a repeating fraction in base 10. In base 3, dividing by 3 would give a terminating result. In base 2 (which is used to represent numbers in a computer) the numbers you're producing similarly result in fractions that cannot be represented exactly.
Further reading.
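Indeed, the effect reproduces in any language with IEEE-754 doubles. As a sketch in Ruby, Float#to_r exposes the exact stored value, confirming that 0.65 is not exactly 13/20:

```ruby
# The literal 0.65 denotes the nearest double, not the exact fraction 13/20.
p(0.65.to_r == Rational(13, 20))  # => false
# Same arithmetic as the Lua snippet above, same doubles, same result:
p((256.65 * 1000000 + 0.000000005 - 256 * 1000000) == 650000.0)  # => false
```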
My rails app is not doing math correctly. I think this has something to do with the variable types (int vs float) but not sure what's wrong.
The root problem is this method in my Stat model:
def lean_mass
  self.weight * 0.01 * (100 - self.body_fat)
end
Where
Stat.weight = 140
Stat.body_fat = 15
it returns 119.00000000000001 instead of 119.
However, where
Stat.weight = 210
Stat.body_fat = 15
it returns 178.5, the correct number.
Anyone know why it's throwing in that small decimal?
The datatype for weight is integer and body_fat is decimal if that helps.
Floating-point numbers cannot precisely represent all real numbers, and floating-point operations cannot compute every arithmetic result exactly. This leads to many surprising situations.
A simple example that shows this behavior:
0.1 + 0.2
#=> 0.30000000000000004
I advise to read: https://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
You can avoid most of these problems by using BigDecimal instead of Float:
require 'bigdecimal'
(BigDecimal('0.01') * 140 * (100 - 15)).to_f
#=> 119.0
Take a look at Ruby's BigDecimal:
http://www.ruby-doc.org/stdlib-2.1.1/libdoc/bigdecimal/rdoc/BigDecimal.html
For example, try:
sum = 0
10_000.times do
  sum = sum + 0.0001
end
print sum #=> 0.9999999999999062
and contrast with the output from:
require 'bigdecimal'
sum = BigDecimal("0")
10_000.times do
  sum = sum + BigDecimal("0.0001")
end
print sum #=> 0.1e1
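Applied to the original problem, a minimal sketch of lean_mass rewritten with BigDecimal (written as a standalone function with hypothetical weight/body_fat arguments, so it runs outside Rails):

```ruby
require 'bigdecimal'
require 'bigdecimal/util' # adds Integer#to_d and String#to_d

# Standalone stand-in for the model method: exact decimal arithmetic,
# converted back to Float only at the end.
def lean_mass(weight, body_fat)
  (weight.to_d * '0.01'.to_d * (100 - body_fat)).to_f
end

p lean_mass(140, 15)  # => 119.0
p lean_mass(210, 15)  # => 178.5
```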
How can I generate numbers that are less than 1?
For example, I would like to generate numbers from 0.1 to 0.9.
What I've tried:
math.random(0.1, 0.9)
Lua's math.random() with two arguments returns an integer within the specified range.
When called with no arguments, it returns a pseudo-random real number in the range [0, 1).
To get real numbers in a specified range, you need to do your own scaling; for example:
math.random() * 0.8 + 0.1
will give you a random real number between 0.1 and 0.9. More generally:
math.random() * (hi - lo) + lo
which you can wrap in your own function if you like.
But I'll note that that's a fairly peculiar range. If you really want a random number selected from 0.1, 0.2, 0.3, 0.4, ..., 0.9, then you should generate an integer in the range 1 to 9 and then divide it by 10.0:
math.random(1, 9) / 10.0
Keep in mind that most real numbers cannot be represented exactly in floating-point.
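For comparison, the same two approaches in Ruby, whose Kernel#rand mirrors math.random (no argument gives a float in [0, 1); a range gives an integer):

```ruby
lo, hi = 0.1, 0.9
continuous = rand * (hi - lo) + lo  # real number in [0.1, 0.9)
tenth = rand(1..9) / 10.0           # one of 0.1, 0.2, ..., 0.9

puts continuous
puts tenth
```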
You can use math.random() (no args) to generate a number between 0 and 1, and use that to blend between your two target numbers.
-- generates a random real number between a (inclusive) and b (exclusive)
function rand_real(a, b)
  return a + (b - a) * math.random()
end
(math.random(10,90)) / 100
This generates an integer from 10 to 90, and the division gives you a number from 0.10 to 0.90 (in steps of 0.01, not just tenths).
I'm not sure if this is a bug or not, so I thought that maybe you folks might want to take a look.
The problem lies with this code:
for i=0,1,.05 do
  print(i)
end
The output should be:
0
.05
.1
--snip--
.95
1
Instead, the output is:
0
.05
.1
--snip--
.95
This same problem happened with a while loop:
w = 0
while w <= 1 do
  print(w)
  w = w + .05
end
--output:
0
.05
.1
--snip--
.95
The value of w is 1, which can be verified by a print statement after the loop.
I have verified as much as possible that any step that is less than or equal .05 will produce this error. Any step above .05 should be fine. I verified that 1/19 (0.052631579) does print a 1. (Obviously, a decimal denominator like 19.9 or 10.5 will not produce output from [0,1] inclusive.) Is there a possibility that this is not an error of the language? Both the interpreter and a regular Lua file produce this error.
This is a rounding problem. The issue is that 0.05 is represented as a binary floating-point number, and it does not have an exact representation in binary. In base 2 (binary), it is a repeating fraction, like 1/3 is in base 10. When added repeatedly, the rounding errors accumulate into a number which is slightly more than 1. It is only very, very slightly more than 1, so if you print it out, it shows 1 as the output, but it is not exactly 1.
> x=0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05+0.05
> print(x)
1
> print(1==x)
false
> print(x-1)
2.2204460492503e-16
So, as you can see, although really close to 1, it is actually slightly more.
A similar situation can come up in decimal when we have repeating fractions. If we were to add together 1/3 + 1/3 + 1/3, but we had to round to six digits to work with, we would add 0.333333 + 0.333333 + 0.333333 and get 0.999999 which is not actually 1. This is an analogous case for binary math. 1/20 cannot be precisely represented in binary.
Note that the rounding is slightly different for multiplication so
> print(0.05*20-1)
0
> print(0.05*20==1)
true
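The same contrast between one multiplication and twenty additions can be checked in Ruby, which uses the same IEEE-754 doubles:

```ruby
p(0.05 * 20 == 1)   # => true: a single multiplication rounds to exactly 1.0
sum = 0.0
20.times { sum += 0.05 }
p(sum == 1)         # => false: repeated addition accumulates rounding error
```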
As a result, you could rewrite your code to say
for i=0,20,1 do
  print(i*0.05)
end
And it would work correctly. In general, it's advisable not to use floating point numbers (that is, numbers with decimal points) for controlling loops when it can be avoided.
This is a result of floating-point inaccuracy. A binary64 floating point number is unable to store 0.05 and so the result will be rounded to a number which is very slightly more than 0.05. This rounding error remains in the repeated sum, and eventually the final value will be slightly more than 1.0, and so will not be displayed.
This is a floating point thing. Computers don't represent floating point numbers exactly. Tiny rounding errors make it so that 20 additions of +0.05 does not result in precisely 1.0.
Check out this article: "What every programmer should know about floating-point arithmetic."
To get your desired behavior, you could loop i over 0..20, and set f = i*0.05
This is not a bug in Lua. The same thing happens in the C program below. Like others have explained, it's due to floating-point inaccuracy, more precisely, to the fact that 0.05 is not a binary fraction (that is, does not have a finite binary representation).
#include <stdio.h>

int main(void)
{
    double i;
    for (i = 0; i <= 1; i += 0.05)
        printf("%g\n", i);
    return 0;
}