Get floating/decimal portion of a float - erlang

I would like:
unknown_function(123.456) -> 456
unknown_function(1234.56) -> 56
Or
unknown_function(123.456) -> "456"
Is there a builtin for this? The builtin trunc/1 does the opposite:
2> trunc(123.456).
123
There is this answer for C: Extract decimal part from a floating point number in C and this for Java: How to get the decimal part of a float?

No, there is no BIF for this, but you can do this:
decimal_point(X, DecimalDigits) when X < 0 ->
    decimal_point(-X, DecimalDigits);
decimal_point(X, DecimalDigits) ->
    (X - trunc(X)) * math:pow(10, DecimalDigits).
> decimal_point(2.33, 2).
33
> decimal_point(-2.33, 2).
33

This is inspired by @Dogbert's comment.
The algorithm doesn't work using native floats, due to floating point representation limits and rounding errors.
However, using https://github.com/tim/erlang-decimal:
frac_to_denom_int(Num, Denom, Precision) ->
    Formatted = decimal:format(decimal:divide(Num, Denom, [{precision, Precision}])),
    {X, _} = string:to_integer(lists:nth(2, string:tokens(Formatted, "."))),
    X.
E.g.,
frac_to_denom_int("1.0", "3.0", 1000).
> 3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
If you don't have a fraction to divide, but already have a decimal tuple:
d_to_denom_int(D_Tup) ->
    {X, _} = string:to_integer(lists:nth(2, string:tokens(decimal:format(D_Tup), "."))),
    X.
d_to_denom_int({0, 123456, -3}).
> 456

Based on @Dogbert's comment, passing one more flag, compact, to the float_to_list/2 call will help:
lists:nth(2, string:tokens(float_to_list(123.456, [{decimals, 10}, compact]), ".")).
% "456"
If you go over 14 decimals, you'll start to see those rounding errors.

Related

Creating Bitmask for keyboard modifiers + ASCII Code

I would like to encode three keyboard modifiers (CTRL, ALT, SHIFT) + the ASCII code of the pressed key into a single value. This falls naturally into the category of bitmasks.
One way I could do this is to have the sender encode each key as follows:
CTRL: 1000
ALT: 10000
SHIFT: 100000
KeyCode: 1-255
For example, if I were to press all modifiers + the last key in the ASCII table, I would get:
100000 + 10000 + 1000 + 255 = 111255. On the receiver side it would then be possible to subtract and check whether the result goes below 0, as such:
has_shift = X - 100000 >= 0
if has_shift
    X -= 100000
has_alt = X - 10000 >= 0
if has_alt
    X -= 10000
has_ctrl = X - 1000 >= 0
if has_ctrl
    X -= 1000
keyCode = X (the remainder)
Sure enough, I find this horrible and would assume that it could be done far better using bit shifts or something in that ballpark. How could this be done better?
Instead, add 256, 512, and 1024 respectively for Ctrl, Alt, and Shift. Then use the bitwise AND operator in whatever language you're using (missing from the question tags) to extract the modifiers and the code. In C and many other languages, that operator is &. So X & 1024 is non-zero if Shift was pressed, and X & 255 is the character code.
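Since the question doesn't name a language, here is a minimal sketch of that encoding in Erlang, where the bitwise AND and OR operators are spelled band and bor; the module, function and variable names are just illustrative:
-module(keymask).
-export([encode/4, decode/1]).

%% Pack the three modifier flags and the key code into one integer,
%% using bit 8 for Ctrl, bit 9 for Alt and bit 10 for Shift.
encode(Ctrl, Alt, Shift, KeyCode) when KeyCode >= 0, KeyCode =< 255 ->
    KeyCode bor flag(Ctrl, 256) bor flag(Alt, 512) bor flag(Shift, 1024).

flag(true, Bit) -> Bit;
flag(false, _)  -> 0.

%% Unpack by masking each bit; X band 255 recovers the key code.
decode(X) ->
    #{ctrl  => X band 256  =/= 0,
      alt   => X band 512  =/= 0,
      shift => X band 1024 =/= 0,
      key   => X band 255}.
For example, keymask:decode(keymask:encode(true, false, true, 65)) should report ctrl and shift as true, alt as false, and 65 as the key code.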

How can I split a float (1,2) into 1 and 2 integers?

I'm learning F# and have an assignment where I have to treat a float as a coordinate. For example, the float 2.3 would be treated as a coordinate (2, 3), where x is 2 and y is 3.
How can I split the float to calculate with it?
I am trying to make a function to calculate the length of a vector:
let lenOfVec (1.2, 2.3), using Pythagoras' theorem to get the length of the hypotenuse.
But I am already stuck at splitting up the float.
Hope some can help!
Having at your disposal libraries as rich as those F#/.NET offers, the task of splitting a float in two can be done with one short line of code:
let splitFloat n = n.ToString().Split('.') |> Array.map float
the library function ToString() converts the argument n (presumably a float) to a string
the library function Split('.') applied to this string converts it into an array of two strings: the digits before the decimal dot and the digits after it
finally, this array of two strings is converted into the sought array of two floats by applying the library function float to each element with the help of another library function, Array.map
Applied to an arbitrary float number, the outlined chain of conversions looks like
123.456 --> "123.456" --> [|"123"; "456"|] --> [|123.0; 456.0|]
Stealing from a few other answers on here, something like this seems to work for a few examples:
open System
/// Takes in a float and returns a tuple of the two parts.
let split (n: float) =
    let x = Math.Truncate(n)
    let bits = Decimal.GetBits(decimal n)
    let count = BitConverter.GetBytes(bits.[3]).[2]
    let dec = n - x
    let y = dec * Math.Pow(10., float count)
    x, y
Examples:
2.3 -> (2.0, 3.0)
200.123 -> (200.0, 123.0)
5.23 -> (5.0, 23.0)
Getting the X is easy, as you can just truncate the decimal part.
Getting the Y took input from this answer and this one.

Format string to number with minimum length in lua

For example, I need a number with a minimum of 3 digits:
"512" --> 512
"24" --> 24.0
"5" --> 5.00
One option is to write a small function. Using answers from here, for my case it would be something like this:
function f(value, w)
    local p = math.ceil(math.log10(value))
    local prec = value <= 1 and w - 1 or p > w and 0 or w - p
    return string.format('%.' .. prec .. 'f', value)
end
print(f(12, 3))
But maybe it is possible using just string.format() or some other simple way?
OK, it seems this case is beyond string.format's power. Thanks to @Schollii, this is my current variant:
function f(value, w)
    local p = math.ceil(math.log10(value))
    local prec = value <= 1 and w - 1 or p > w and 0 or w - p
    return string.format('%.' .. prec .. 'f', value)
end
print(f(12, 3))
There is no format code specifically for this, since string.format uses printf minus a few codes (like *, which would have simplified the solution I give below). So you have to implement it yourself, for example:
function f(num, w)
    -- get number of digits before decimal
    local intWidth = math.ceil(math.log10(num))
    -- if intWidth > w then ... end -- may need this
    local fmt = '%' .. w .. '.' .. (w - intWidth) .. 'f'
    return string.format(fmt, num)
end
print(f(12, 4))
print(f(12, 3))
print(f(12, 2))
print(f(512, 3))
print(f(24, 3))
print(f(5, 3))
You should probably handle the case where the integer part doesn't fit in the given field width (return ceil or floor?).
You can't. The most you can do is specify the floating point precision or the number of digits, but you can't force the output to be like your example. Lua uses a C-like printf with a few limitations. Look at the full list of format specifiers, and remember which ones are unsupported.
Writing a function would be the best and only solution, especially as your task looks unusual in that it doesn't count the decimal dot.

system hangs when factorizing a float instead of an integer

I am struggling to understand the cause of this issue. To the point:
1) Passing an integer ( 10 ) to the following factorization function works immediately:
test() ->
    X = 10,
    F = factorize(X).

factorize(0) -> 1;
factorize(N) -> N * factorize(N-1).
2) Passing a float (10.0) will cause the beam process to hang, taking high CPU and never terminating. Notice this is a small value. I can factorize a large integer and get an almost immediate response, but the small float 10.0 will cause it to hang.
test() ->
    X = 10.0,    %% <-- NOTICE THE DOT ZERO 10.0
    F = factorize(X).

factorize(0) -> 1;
factorize(N) -> N * factorize(N-1).
Question: why on Erl Earth would this hang occur with a mere recursive multiplication of floats?
As the documentation says, there are two operators for comparing equality of terms in Erlang, and they differ only in how they handle integers and floats:
=:= - exactly equal - counts numbers as equal only if their types are the same and their values are the same too: false = (0.0 =:= 0)
== - equal - counts numbers as equal if their values are the same, even if their types differ: true = (0.0 == 0)
Pattern matching uses the first one - the exactly-equal comparison - which is why your function hangs in the second clause: the float argument counts down through 1.0, 0.0, -1.0, ... and never matches the integer 0 in the first clause, so the recursion never stops.
Another problem with floats is their approximate value. You can never be sure you have an exact value, especially after arithmetic operations. It is common practice to use a small epsilon value in float equality tests:
is_zero(F) -> (F < 1.0e-10) andalso (F > -1.0e-10).
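If you want the factorial function from the question to terminate for float input, one possible approach (my own sketch, not part of the answer above) is to normalise the argument with a guard before recursing, for example by truncating it to an integer:
factorize(N) when is_float(N) ->
    %% Floats never match the integer 0 clause, so convert first.
    factorize(trunc(N));
factorize(0) ->
    1;
factorize(N) when is_integer(N), N > 0 ->
    N * factorize(N - 1).
With this version factorize(10.0) terminates and returns the same result as factorize(10), while a negative argument fails with a function_clause error instead of looping forever.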

F# Floating point ranges are experimental and may be deprecated

I'm trying to make a little function to interpolate between two values with a given increment.
[ 1.0 .. 0.5 .. 20.0 ]
The compiler tells me that this is deprecated, and suggests using ints then casting to float. But this seems a bit long-winded if I have a fractional increment - do I have to divide my start and end values by my increment, then multiply again afterwards? (yeuch!)
I saw something somewhere once about using sequence comprehensions to do this, but I can't remember how.
Help, please.
TL;DR: F# PowerPack's BigRational type is the way to go.
What's Wrong with Floating-point Loops
As many have pointed out, float values are not suitable for looping:
They do have Round-off Error: just as with 1/3 in decimal, we inevitably lose all digits starting at a certain exponent;
They do experience Catastrophic Cancellation (when subtracting two almost equal numbers, the result is rounded to zero);
They always have a non-zero Machine Epsilon, so the error increases with every math operation (unless we are adding different numbers many times so that the errors mutually cancel out -- but this is not the case for loops);
They do have different accuracy across the range: the number of unique values in the range [0.0000001 .. 0.0000002] is equivalent to the number of unique values in [1000000 .. 2000000];
Solution
What can instantly solve the above problems is switching back to integer logic.
With F# PowerPack, you may use BigRational type:
open Microsoft.FSharp.Math
// [1 .. 1/3 .. 20]
[1N .. 1N/3N .. 20N]
|> List.map float
|> List.iter (printf "%f; ")
Note that I took the liberty of setting the step to 1/3, because 0.5 from your question actually has an exact binary representation, 0.1b, and is stored as +1.00000000000000000000000 * 2^-1; hence it does not produce any cumulative summation error.
Outputs:
1.000000; 1.333333; 1.666667; 2.000000; 2.333333; 2.666667; 3.000000; (skipped) 18.000000; 18.333333; 18.666667; 19.000000; 19.333333; 19.666667; 20.000000;
// [0.2 .. 0.1 .. 3]
[1N/5N .. 1N/10N .. 3N]
|> List.map float
|> List.iter (printf "%f; ")
Outputs:
0.200000; 0.300000; 0.400000; 0.500000; (skipped) 2.800000; 2.900000; 3.000000;
Conclusion
BigRational uses integer computations, which are no slower than floating-point ones;
The round-off occurs only once for each value (upon conversion to a float, but not within the loop);
BigRational acts as if the machine epsilon were zero;
There is an obvious limitation: you can't use irrational numbers like pi or sqrt(2) as they have no exact representation as a fraction. It does not seem to be a very big problem because usually, we are not looping over both rational and irrational numbers, e.g. [1 .. pi/2 .. 42]. If we do (like for geometry computations), there's usually a way to reduce the irrational part, e.g. switching from radians to degrees.
Further reading:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Numeric types in PowerPack
Interestingly, float ranges don't appear to be deprecated anymore. And I remember seeing a question recently (sorry, couldn't track it down) talking about the inherent issues which manifest with float ranges, e.g.
> let xl = [0.2 .. 0.1 .. 3.0];;
val xl : float list =
[0.2; 0.3; 0.4; 0.5; 0.6; 0.7; 0.8; 0.9; 1.0; 1.1; 1.2; 1.3; 1.4; 1.5; 1.6;
1.7; 1.8; 1.9; 2.0; 2.1; 2.2; 2.3; 2.4; 2.5; 2.6; 2.7; 2.8; 2.9]
I just wanted to point out that you can use ranges on decimal types with a lot less of these kind of rounding issues, e.g.
> [0.2m .. 0.1m .. 3.0m];;
val it : decimal list =
[0.2M; 0.3M; 0.4M; 0.5M; 0.6M; 0.7M; 0.8M; 0.9M; 1.0M; 1.1M; 1.2M; 1.3M;
1.4M; 1.5M; 1.6M; 1.7M; 1.8M; 1.9M; 2.0M; 2.1M; 2.2M; 2.3M; 2.4M; 2.5M;
2.6M; 2.7M; 2.8M; 2.9M; 3.0M]
And if you really do need floats in the end, then you can do something like
> {0.2m .. 0.1m .. 3.0m} |> Seq.map float |> Seq.toList;;
val it : float list =
[0.2; 0.3; 0.4; 0.5; 0.6; 0.7; 0.8; 0.9; 1.0; 1.1; 1.2; 1.3; 1.4; 1.5; 1.6;
1.7; 1.8; 1.9; 2.0; 2.1; 2.2; 2.3; 2.4; 2.5; 2.6; 2.7; 2.8; 2.9; 3.0]
As Jon and others pointed out, floating point range expressions are not numerically robust. For example [0.0 .. 0.1 .. 0.3] equals [0.0 .. 0.1 .. 0.2]. Using Decimal or Int Types in the range expression is probably better.
For floats I use this function. It first widens the total range by three of the smallest float steps; I am not sure how robust this algorithm is, but it is good enough for me to ensure that the stop value is included in the Seq:
open System

let floatrange start step stop =
    if step = 0.0 then failwith "stepsize cannot be zero"
    let range =
        stop - start
        |> BitConverter.DoubleToInt64Bits
        |> (+) 3L
        |> BitConverter.Int64BitsToDouble
    let steps = range / step
    if steps < 0.0 then failwith "stop value cannot be reached"
    let rec frange (start, i, steps) =
        seq { if i <= steps then
                yield start + i * step
                yield! frange (start, (i + 1.0), steps) }
    frange (start, 0.0, steps)
Try the following sequence expression
seq { 2 .. 40 } |> Seq.map (fun x -> (float x) / 2.0)
You can also write a relatively simple function to generate the range:
let rec frange (from:float, by:float, tof:float) =
    seq { if from < tof then
            yield from
            yield! frange(from + by, by, tof) }
Using this you can just write:
frange(1.0, 0.5, 20.0)
Updated version of Tomas Petricek's answer, which compiles, and works for decreasing ranges (and works with units of measure):
(but it doesn't look as pretty)
let rec frange(from:float<'a>, by:float<'a>, tof:float<'a>) =
    seq {
        yield from
        if (float by > 0.) then
            if (from + by <= tof) then yield! frange(from + by, by, tof)
        else
            if (from + by >= tof) then yield! frange(from + by, by, tof)
    }
#r "FSharp.Powerpack"
open Math.SI
frange(1.0<m>, -0.5<m>, -2.1<m>)
UPDATE: I don't know if this is new, or if it was always possible, but I just discovered (here) that this simpler syntax is also possible:
let dl = 9.5 / 11.
let min = 21.5 + dl
let max = 40.5 - dl
let a = [ for z in min .. dl .. max -> z ]
let b = a.Length
(Watch out, there's a gotcha in this particular example :)
