How can I generate numbers that are less than 1?
For example, I would like to generate numbers from 0.1 to 0.9.
What I've tried:
math.random(0.1, 0.9)
Lua's math.random(), when called with two arguments, returns an integer within the specified range.
When called with no arguments, it returns a pseudo-random real number in the range [0, 1).
To get real numbers in a specified range, you need to do your own scaling; for example:
math.random() * 0.8 + 0.1
will give you a random real number in [0.1, 0.9). More generally:
math.random() * (hi - lo) + lo
which you can wrap in your own function if you like.
But I'll note that that's a fairly peculiar range. If you really want a random number selected from 0.1, 0.2, 0.3, 0.4, ..., 0.9, then you should generate an integer in the range 1 to 9 and then divide it by 10.0:
math.random(1, 9) / 10.0
Keep in mind that most real numbers cannot be represented exactly in floating-point.
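For illustration, here are the same two approaches sketched in Python, whose random.random(), like Lua's zero-argument math.random(), returns a value in [0, 1) (the variable names are mine):

```python
import random

# Continuous: scale [0, 1) into [0.1, 0.9)
x = random.random() * 0.8 + 0.1

# Discrete: one of 0.1, 0.2, ..., 0.9
y = random.randint(1, 9) / 10
```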
You can use math.random() (no args) to generate a number between 0 and 1, and use that to blend between your two target numbers.
-- generates a random real number between a (inclusive) and b (exclusive)
function rand_real(a, b)
return a + (b - a) * math.random()
end
(math.random(10,90)) / 100
This generates an integer from 10 to 90, and the division gives you a multiple of 0.01 between 0.1 and 0.9 (not a continuous value).
Related
I tried generating a random decimal number (let's say between 0.3 and 0.8) in love2d using the following code:
x = math.random(0.3, 0.8)
print(x)
but what happens is that it generates 0.3 every single time I run the program, and the 0 in 0.4 kind of flickers (in the sense that it changes to 1).
If it helps, here's a screen record of what happens https://vimeo.com/632949687
Your problem is underspecified. Here are two simple solutions; they're not equivalent.
This generates random numbers in the set {0.3,0.4,0.5,0.6,0.7,0.8}:
math.random(3,8)/10
This generates random numbers in the interval [0.3,0.8):
0.3+(0.8-0.3)*math.random()
LÖVE provides a platform-independent version of random():
https://love2d.org/wiki/love.math.random
There is no need to call math.randomseed() or love.math.setRandomSeed().
For float numbers between 0 and 1, simply use:
love.math.random()
'but what happens is it generates 0.3 every single time'
Same here, so the simplest way seems to be #lhf's example.
Check this function (note that math.pow was removed in Lua 5.4, so it uses the ^ operator instead):
function random(min, max, precision)
  precision = precision or 0
  local num = math.random()
  local range = math.abs(max - min)
  local offset = range * num
  local randomnum = min + offset
  local scale = 10 ^ precision
  return math.floor(randomnum * scale + 0.5) / scale
end
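A minimal Python rendering of the same helper, for illustration; the round-half-up behavior mirrors the Lua code above, and the function name is mine:

```python
import math
import random

def random_real(lo, hi, precision=0):
    # Scale a [0, 1) random value into [lo, hi), then round
    # half-up to `precision` decimal places, as the Lua code does.
    value = lo + (hi - lo) * random.random()
    scale = 10 ** precision
    return math.floor(value * scale + 0.5) / scale
```

For example, random_real(0.3, 0.8, 1) yields one of 0.3, 0.4, ..., 0.8.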
So doing a print(0.3 - 0.2); will print 0.09999999999999998.
I know a binary processor cannot get floating-point arithmetic exactly right, but I was hoping for something built into Dart that would at least try to kill off the rounding errors.
Getting the above to show 0.1 takes some conversions back and forth, which I'd rather not do:
print(num.parse((0.3 - 0.2).toStringAsPrecision(8)));
-> 0.1
What are my options for not going crazy working with decimals? Is there anything built into Dart to help with this? There seems to be only one library that does the above: https://pub.dev/packages/decimal?
You can round a value to a multiple of 1/precision (for example, to the nearest 0.1 when the precision is 10):
double roundTo(double value, num precision) => (value * precision).round() / precision;
You can then apply that to either the initial values or the final result:
const precision = 10;
final result = roundTo(0.3 - 0.2, precision);
// or inlined:
final result = ((0.3 - 0.2) * precision).round() / precision;
This ensures that the computation is done on the original values, and you only round the final result.
If you know that your input values all have the same scale, you can do as #brian-gorman suggests and scale the values first, and then round and down-scale the result at the end. For that use, I would recommend rounding early, on the incoming values, so that the computation will not accumulate imprecision.
(That doesn't matter for a single subtraction, but for a more complicated computation, it might).
final difference = (0.3 * precision).round() - (0.2 * precision).round();
final result = difference / precision;
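The same round-at-the-end pattern, sketched in Python for illustration (Python's round() uses banker's rounding for exact ties, which doesn't matter for this value):

```python
def round_to(value, precision):
    # precision = 10 rounds to the nearest multiple of 0.1
    return round(value * precision) / precision

result = round_to(0.3 - 0.2, 10)
print(result)  # 0.1
```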
For the record: The result you are seeing is not a rounding error, it is the correct result—for 64-bit floating point numbers as defined by IEEE-754.
You will see the same result in any other language using normal doubles, including JavaScript and Java. The result of 0.3 - 0.2 is not the double represented as 0.1. It is a different number, so it must have a different toString representation.
Neither of 0.1, 0.2 or 0.3 can be represented exactly as doubles. The actual values are:
0.1 : 0.1000000000000000055511151231257827021181583404541015625
0.2 : 0.200000000000000011102230246251565404236316680908203125
0.3 : 0.299999999999999988897769753748434595763683319091796875
0.3 - 0.2 : 0.09999999999999997779553950749686919152736663818359375
So when you write the syntax 0.3 and 0.2, you are really specifying the precise double values above.
The result becomes what it is because the precise mathematical calculation of 0.3 - 0.2 is
0.299999999999999988897769753748434595763683319091796875
- 0.200000000000000011102230246251565404236316680908203125
= 0.099999999999999977795539507496869191527366638183593750
and the result is an exact double value. So the result 0.09999... is precisely the difference between "0.3" and "0.2" and therefore the correct result for the subtraction. (The mistake is assuming that you actually have 0.3 and 0.2 as values, you never did). It is also not the same as the number represented by 0.1.
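You can inspect these exact values yourself; in Python, for illustration, Decimal(float) converts the stored double with no rounding:

```python
from decimal import Decimal

# Each conversion shows the precise double behind the literal
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.3))  # 0.299999999999999988897769753748434595763683319091796875
```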
What most people do is multiply, do math, round, then divide by the desired precision
final precision = 10;
final difference = (0.3 * precision) - (0.2 * precision);
final result = difference.round() / precision; // will give you 0.1
The .round() ensures that any trailing decimals after math gets rounded off before you return to the desired precision.
I am currently converting a Python statistics library that needs to produce numbers with high decimal precision. For example, I did this:
i = 1
n = 151
sum = (i - 3/8) / (n + 1/4)
and it results in 0.
My question is how to always show decimal precision automatically when I do this kind of computation?
My desired output is:
0.004132231404958678
In Ruby, all arithmetic operations return a value of the same type as the operands (the one having better precision).
That said, 3/8 and 1/4 are integer divisions, both resulting in 0.
To make your example work, you must ensure you are not losing precision anywhere:
i = 1.0
n = 151.0
sum = (i - 3.0/8) / (n + 1/4.0)
Please note that, as in most (if not all) languages, Float arithmetic is imprecise:
0.1 + 0.2 #⇒ 0.30000000000000004
If you need an exact value, you might use BigDecimal or Rational.
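The same pitfall can be illustrated in Python, where // is integer division and fractions.Fraction plays the role of Ruby's Rational (the variable names are mine):

```python
from fractions import Fraction

i, n = 1, 151

# Integer division truncates: 3 // 8 == 0 and 1 // 4 == 0
truncated = (i - 3 // 8) / (n + 1 // 4)

# Float division keeps the fractional parts
approx = (i - 3 / 8) / (n + 1 / 4)

# Rational arithmetic gives the exact answer
exact = (Fraction(i) - Fraction(3, 8)) / (Fraction(n) + Fraction(1, 4))
print(approx)  # close to 0.004132231404958678
```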
Can someone explain why, in Lua, running:
return 256.65 * 1000000 + .000000005 - 256 * 1000000 gives 649999.99999997
whereas
return 255.65 * 1000000 + .000000005 - 255 * 1000000 and
return 268.65 * 1000000 + .000000005 - 268 * 1000000 give 650000.0 ?
From what I can see, it seems to be an issue strictly for the decimal part .65 (and, it seems, .15), and for whole numbers within the range 256 - 267. I know this is related to doing these calculations with floating point, but I'm still curious as to what is special about these values in particular.
What is special about these values is that 0.65 is not a binary fraction (even though it is a decimal fraction), and so cannot be represented exactly in floating point.
For the record, this is not specific to Lua. The same thing will happen in C.
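You can confirm that 0.65 is not exactly representable; in Python, for illustration, Decimal(float) exposes the precise stored value:

```python
from decimal import Decimal

# Neither value is a binary fraction, so neither is stored exactly;
# the rounding direction can differ from value to value, which is why
# 256.65 and 255.65 behave differently in the expressions above.
print(Decimal(0.65))
print(Decimal(255.65))
print(Decimal(256.65))
```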
For the same reason that 10/3 is a repeating fraction in base 10. In base 3, dividing by 3 would give a terminating result. In base 2, which is used to represent numbers in a computer, the numbers you're producing similarly result in fractions that cannot be represented exactly.
Further reading.
How does % work with negative numbers in VF?
MOD(10,-3) = -2
MOD(-10,3) = 2
MOD(-10,-3) = -1
Why?
It is a regular modulo:
The mod function is defined as the amount by which a number exceeds
the largest integer multiple of the divisor that is not greater than
that number.
You can think of it like this, using MOD(x, y) = x - y * FLOOR(x / y):
10 % -3:
FLOOR(10 / -3) = FLOOR(-3.33...) = -4, so the multiple used is -3 * -4 = 12, and 10 - 12 = -2.
So 10 % -3 is -2.
-10 % 3:
Now, why is -10 % 3 equal to 2?
The easiest way to think about it is to add a multiple of 3 to the negative number so that it becomes positive:
-10 + (4 * 3) = 2, so -10 % 3 = (-10 + 12) % 3 = 2 % 3 = 2.
Here's what we said about this in The Hacker's Guide to Visual FoxPro:
MOD() and % are pretty straightforward when dealing with positive numbers, but they get interesting when one or both of the numbers is negative. The key to understanding the results is the following equation:
MOD(x,y) = x - (y * FLOOR(x/y))
Since the mathematical modulo operation isn't defined for negative numbers, it's a pleasure to see that the FoxPro definitions are mathematically consistent. However, they may be different from what you'd initially expect, so you may want to check for negative divisors or dividends.
A little testing (and the manuals) tells us that a positive divisor gives a positive result while a negative divisor gives a negative result.
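The FLOOR-based formula can be sketched in Python, whose built-in % happens to follow the same floored-division convention (the helper name fox_mod is mine):

```python
import math

def fox_mod(x, y):
    # MOD(x, y) = x - (y * FLOOR(x / y)), per the equation above
    return x - y * math.floor(x / y)

print(fox_mod(10, -3))   # -2
print(fox_mod(-10, 3))   # 2
print(fox_mod(-10, -3))  # -1
```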