Generate a random float between 0 and 1 - ios

I'm trying to generate a random number that's between 0 and 1. I keep reading about arc4random(), but there isn't any information about getting a float from it. How do I do this?

Random value in [0, 1[ (including 0, excluding 1):
double val = ((double)arc4random() / 0x100000000); // divide by 2^32, not UINT32_MAX, so 1.0 is excluded
In more detail: the actual range is [0, 0.999999999767169356], since the upper bound is (double)0xFFFFFFFF / 0x100000000.
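For reference, a minimal Swift sketch of the same idea (dividing by 2^32 rather than by UInt32.max keeps 1.0 excluded):
let val = Double(arc4random()) / Double(UInt64(1) << 32) // in [0, 1)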

// Seed (only once)
srand48(time(0));
double x = drand48();
// Swift version
// Seed (only once)
srand48(Int(Date().timeIntervalSince1970))
let x = drand48()
The drand48() and erand48() functions return non-negative, double-precision, floating-point values, uniformly distributed over the interval [0.0, 1.0).

For Swift 4.2+ see: https://stackoverflow.com/a/50733095/1033581
Below are recommendations for correct uniformity and optimal precision for ObjC and Swift 4.1.
32 bits precision (Optimal for Float)
Uniform random value in [0, 1] (including 0.0 and 1.0), up to 32 bits precision:
Obj-C:
float val = (float)arc4random() / UINT32_MAX;
Swift:
let val = Float(arc4random()) / Float(UInt32.max)
It's optimal for:
a Float (or Float32), which has a significand precision of 24 bits
48 bits precision (discouraged)
It's easy to achieve 48 bits of precision with drand48. But note that drand48 has flaws: it requires seeding, it is a deterministic 48-bit linear congruential generator rather than a cryptographically seeded source like arc4random, and its 48 bits cannot randomize all 52 bits of a Double's mantissa.
Uniform random value in [0, 1], 48 bits precision:
Swift:
// seed (only needed once)
srand48(Int(Date.timeIntervalSinceReferenceDate))
// random Double value
let val = drand48()
64 bits precision (Optimal for Double and Float80)
Uniform random value in [0, 1] (including 0.0 and 1.0), up to 64 bits precision:
Swift, using two calls to arc4random:
let arc4random64 = UInt64(arc4random()) << 32 &+ UInt64(arc4random())
let val = Float80(arc4random64) / Float80(UInt64.max)
Swift, using one call to arc4random_buf:
var arc4random64: UInt64 = 0
arc4random_buf(&arc4random64, MemoryLayout.size(ofValue: arc4random64))
let val = Float80(arc4random64) / Float80(UInt64.max)
It's optimal for:
a Double (or Float64), which has a significand precision of 52 bits
a Float80, which has a significand precision of 64 bits
Notes
Comparisons with other methods
Answers where the range excludes one of the bounds (0 or 1) likely suffer from a uniformity bias and should be avoided.
using arc4random(), best precision is 1 / 0xFFFFFFFF (UINT32_MAX)
using arc4random_uniform(), best precision is 1 / 0xFFFFFFFE (UINT32_MAX-1)
using rand() (secretly using arc4random), best precision is 1 / 0x7FFFFFFF (RAND_MAX)
using random() (secretly using arc4random), best precision is 1 / 0x7FFFFFFF (RAND_MAX)
It's mathematically impossible to achieve better than 32 bits of precision with a single call to arc4random, arc4random_uniform, rand or random, so the 32-bit and 64-bit solutions above should be the best we can achieve.
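To apply the same 64-bit treatment to a plain Double, here's a sketch along the lines of the arc4random_buf variant above (the function name is illustrative):
func randomUnitDouble() -> Double {
    var bits: UInt64 = 0
    arc4random_buf(&bits, MemoryLayout<UInt64>.size) // fill all 64 bits
    return Double(bits) / Double(UInt64.max) // uniform in [0, 1]
}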

This function works for negative float ranges as well:
float randomFloat(float min, float max) {
    return ((float)arc4random() / UINT32_MAX) * (max - min) + min;
}
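A Swift sketch of the same mapping (parameter names are mine; it likewise handles negative bounds):
func randomFloat(min: Float, max: Float) -> Float {
    return Float(arc4random()) / Float(UInt32.max) * (max - min) + min
}
let r = randomFloat(min: -2.5, max: 1.5)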

Swift 4.2+
Swift 4.2 adds native support for a random value in a Range:
let x = Float.random(in: 0.0...1.0)
let y = Double.random(in: 0.0...1.0)
let z = Float80.random(in: 0.0...1.0)
Doc:
random(in range: ClosedRange<Float>)
random(in range: Range<Float>)
random(in range: ClosedRange<Double>)
random(in range: Range<Double>)
random(in range: ClosedRange<Float80>)
random(in range: Range<Float80>)

(float)rand() / RAND_MAX
The previous post stating "rand()" alone was incorrect.
This is the correct way to use rand().
This creates a number between 0 and 1.
BSD docs:
The rand() function computes a sequence of pseudo-random integers in the
range of 0 to RAND_MAX (as defined by the header file "stdlib.h").

This is an extension for Float random numbers, Swift 3.1:
// MARK: Float Extension
public extension Float {
    /// Returns a random floating point number between 0.0 and 1.0, inclusive.
    static var random: Float {
        return Float(arc4random()) / Float(UInt32.max)
    }

    /// Returns a random floating point number in the interval [min, max].
    ///
    /// - Parameters:
    ///   - min: Interval lower bound
    ///   - max: Interval upper bound
    /// - Returns: A random float between min and max
    static func random(min: Float, max: Float) -> Float {
        return Float.random * (max - min) + min
    }
}
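Usage, for illustration:
let unit = Float.random // in [0.0, 1.0]
let speed = Float.random(min: -1.0, max: 1.0) // in [-1.0, 1.0]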

Swift 4.2
Swift 4.2 has included a native and fairly full-featured random number API in the standard library. (Swift Evolution proposal SE-0202)
let intBetween0to9 = Int.random(in: 0...9)
let doubleBetween0to1 = Double.random(in: 0...1)
All numeric types have the static random(in:) function, which takes a range and returns a random number within it.
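For example (a quick sketch; half-open ranges work the same way):
let f = Float.random(in: 0..<1) // upper bound excluded
let roll = Int.random(in: 1...6)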

Use this to avoid problems with the upper bound of arc4random():
u_int32_t upper_bound = 1000000;
float r = arc4random_uniform(upper_bound)*1.0/upper_bound;
Note that arc4random_uniform() requires macOS 10.7 / iOS 4.3 or later.
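A Swift rendering of the same approach (granularity is 1/upper_bound, here 10^-6):
let upperBound: UInt32 = 1_000_000
let r = Float(arc4random_uniform(upperBound)) / Float(upperBound) // in [0, 1)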

arc4random() has a range of 0x100000000 (4294967296) possible values.
This is another good option for generating random numbers between 0 and 1:
srand48(time(0)); // pseudo-random number initializer.
double r = drand48();

float x = arc4random() % 11 * 0.1;
That produces a random float between 0 and 1, but only in steps of 0.1 (11 discrete values) and with a slight modulo bias.

rand()
by itself produces a random integer between 0 and RAND_MAX, not a float between 0 and 1; divide by RAND_MAX, as shown above, to get the latter.

Related

iOS UInt64 number to float error

UInt64 intValue = 999999900;
float tt = intValue;
NSLog(#"float tt = %f", tt);
The output is "float tt = 999999872". As you can see, the UInt64-to-float conversion loses something. The maximum float is far bigger than 999999900, so I thought 999999900 could be cast to float exactly. My question is: why are 28 lost on iOS?
float has a limited amount of precision. It's not the size of the number that matters, it's the number of significant digits (9 in this case, while float carries only about 7).
Use double instead of float to get more precision.
UInt64 intValue = 999999900;
double tt = intValue;
NSLog(#"double tt = %f", tt);
Why are you using float and not double? Has nobody told you that float has very limited precision (around 7 digits), while double has about 15?
As a rule, you should never use float instead of double unless you can give a reasonable explanation why float would be more suitable.
So your question is really: why do I lose precision when I intentionally throw away 8 digits of precision, and what can I do about it? The answer is very simple: you lost precision because you threw it away yourself. Use double instead of float.
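A quick Swift sketch of the same effect:
let intValue: UInt64 = 999_999_900
let f = Float(intValue)  // stored as 999999872: Float keeps only about 7 significant digits
let d = Double(intValue) // stored exactly: Double keeps about 15-16 significant digits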

Random number between two decimals in Swift

I'd like to get a random number between two small decimal numbers.
Between maybe 0.8 and 1.3.
var duration = CGFloat(arc4random() % 0.8) / 1.3
or
var duration = CGFloat(arc4random() % 0.5) + 0.8
Thanks!
Here's a generic function I just wrote up quickly to get a random number within a range.
func randomBetween(_ firstNum: CGFloat, _ secondNum: CGFloat) -> CGFloat{
return CGFloat(arc4random()) / CGFloat(UInt32.max) * abs(firstNum - secondNum) + min(firstNum, secondNum)
}
It takes a random value in [0, 1], scales it by the difference between the two parameters, then adds the smaller number. This guarantees the result lands between the two numbers.
Disclaimer: I have not tested this out yet.
EDIT: Now this function does what you want.
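Usage with the asker's bounds, for illustration:
let duration = randomBetween(0.8, 1.3) // CGFloat in [0.8, 1.3]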
Swift 5:
Using random(in:) which returns a random value within the specified range:
var duration = CGFloat.random(in: 0.8 ... 1.3)
Per Apple:
The random() static method chooses a random value from a continuous
uniform distribution in range, and then converts that value to the
nearest representable value in this type.
See random(in:using:) to specify a random generator other than the default.

Rounding when [int] = [float] + [int] (Obj-C, iOS?)

In this case:
float a = 0.99999f;
int b = 1000;
int c = a + b;
The result is c = 1001. I discovered that this happens because b is converted to float (is this specific to iOS?); a + b then doesn't have enough precision for 1000.99999 and (why?) is rounded to the higher value. If a is 0.999f we get c = 1000, the theoretically correct behavior.
So my question is: why is the float rounded to the higher value? Where is this behavior (or convention) described?
I tested this on iPhone Simulator, Apple LLVM 4.2 compiler.
In int c = a + b, the integer b is converted to a float first, then the two floating-point
numbers are added, and the result is truncated to an integer.
The default floating point rounding mode is FE_TONEAREST, which means that the result
of the addition
0.99999f + 1000.0f
is the nearest number that can be represented as a float, and that is exactly 1001.0f. This float is then truncated to the integer c = 1001.
If you change the rounding mode
#include <fenv.h>
fesetround(FE_DOWNWARD);
then the result of the addition is rounded downward (approximately 1000.99993f) and you would get c = 1000.
The reason is that adding 1000 calls for 8 total decimal digits of precision, but an IEEE float only supports about 7.
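The same effect can be reproduced in Swift, for anyone who wants to try it outside Obj-C:
let a: Float = 0.99999
let b = 1000
let c = Int(a + Float(b)) // 1001: the nearest representable Float to 1000.99999 is exactly 1001.0
print(c)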

Truncating Actionscript Number

I have a number in actionscript, arrived at via some arbitrary math:
var value:Number = 45 * (1 - (1 /3));
trace(value);//30.000000000000004
Now, I would like to take the ceiling of this number, except in cases where the amount it is greater than the next lower integer is smaller than some epsilon. In the above example, I really want to round to 30, but only in the case where I know I'm getting a rounding error:
Math.ceil(value); //I want 30, but get 31
Math.ceil(30.1); //In this case, it's reasonable to get 31
Is there an elegant way to truncate a Number in actionscript? Or easily discard any part of the number that is less than some epsilon?
Is this method of any help to you?
var precision:int = 4;
var isActualCeilingValRequred:Boolean;
var thresholdValForCeiling:int = 100;
private function getCeilingValue(num:Number):Number
{
    var tempNum:Number = num * Math.pow(10, precision);
    var decimalVal:Number = tempNum % Math.pow(10, precision);
    if (decimalVal < thresholdValForCeiling) {
        return Math.floor(num);
    } else {
        return Math.ceil(num);
    }
}
var value:Number = 45 * (1 - (1 /3));
trace(value);//30.000000000000004
// Play with arbitraryPrecision until you are satisfied with
// the accuracy of your results
var arbitraryPrecision:int = 3;
var fixed:Number = Number(value.toFixed(arbitraryPrecision));
trace(Math.ceil(fixed));
The basic way to round a number to a specified number of fractional digits is to multiply it by 10^DIGITS to shift the decimal point DIGITS places to the right, perform the rounding, then divide by the same 10^DIGITS to shift the decimal point back.
var value:Number = 45 * (1 - (1 / 3));
trace(value); // 30.000000000000004
trace(Math.ceil(value)); // 31
// Round the number to 13 decimal digits.
const POWER:Number = 1e13;
value = Math.round(value * POWER) / POWER;
trace(value); // 30
// Compute number's ceiling.
value = Math.ceil(value);
trace(value); // 30
It works for your example, but there's a big gotcha. If you change your value to 450 * (1 - (1 / 3)), the original problem reappears, and to get rid of it you would have to round to 12 decimal digits instead. The significand of a double-precision format (Number) holds about 15 significant digits, so as the value grows by a factor of ten, the decimal point moves left and that last "4" digit you want to get rid of creeps ever closer to the decimal point. The code therefore becomes more complicated.
var value:Number = 450 * (1 - (1 / 3));
trace(value); // 30.000000000000004
trace(Math.ceil(value)); // 31
var exp:Number = Math.floor(Math.log(Math.abs(value)) * Math.LOG10E);
trace('exp=' + exp); // exp=2
const POWER:Number = Math.pow(10, 14 - exp);
value *= POWER;
trace(value); // 300000000000000.06
value = Math.round(value);
trace(value); // 300000000000000
value /= POWER;
trace(value); // 300
As you can see, it now works regardless of the value's magnitude.
First, I find the number's exponent by taking the base-10 logarithm of its absolute value and rounding down. (If you instead calculated a = value / Math.pow(10, exp), then value could be represented as a * 10^exp with 1 ≤ |a| < 10, known as normalized scientific notation; but that's not what we're doing here.) Now that we know how many digits are on the left of the decimal point, we shift the decimal point right, but not too far: we keep one 0 plus the error digit we want to get rid of on the right side of the decimal point. So: multiply by 10^(14 - exp), round, then divide by the same power.
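For comparison, here is a sketch of the same magnitude-aware rounding in Swift (the function name is mine):
import Foundation

func roundAwayFloatError(_ value: Double) -> Double {
    let exp = floor(log10(abs(value))) // decimal exponent of the value
    let power = pow(10.0, 14 - exp)    // keep ~15 significant digits
    return (value * power).rounded() / power
}
print(roundAwayFloatError(450 * (1 - 1.0 / 3))) // 300.0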

Arc4random float?

Doing this:
float x = arc4random() % 100;
returns a decent result: a number between 0 and 99.
But doing this:
float x = (arc4random() % 100)/100;
Returns 0. How can I get it to return a float value?
Simply put, you are doing integer division instead of floating-point division, so you are just getting a truncated result (.123 is truncated to 0, for example). Try
float x = (arc4random() % 100)/100.0f;
You are dividing an int by an int, which gives an int. You need to cast either to a float:
float x = (arc4random() % 100)/(float)100;
Also see my comment about the modulo operator.
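A Swift illustration of the same int-versus-float division distinction:
let truncated = 99 / 100  // 0: integer division discards the fraction
let fraction = 99 / 100.0 // 0.99: a floating-point operand forces float division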
To get a float division instead of an integer division:
float x = arc4random() % 100 / 100.f;
But be careful, using % 100 will only give you a value between 0 and 99, so dividing it by 100.f will only produce a random value between 0.00f and 0.99f.
Better, to get a random float between 0 and 1:
float x = arc4random() % 101 / 100.f;
Even better, to avoid modulus bias:
float x = arc4random_uniform(101) / 100.f;
Alternatively, to avoid a two digits precision bias:
float x = (float)arc4random() / UINT32_MAX;
And in Swift 4.2+, you get built-in support for ranges:
let x = Float.random(in: 0.0...1.0)
In Swift:
Float(arc4random() % 100) / 100 // values from 0.00 to 0.99
