I am struggling to understand the cause of this issue. To the point:
1) Passing an integer ( 10 ) to the following factorial function works immediately:
test() ->
    X = 10,
    F = factorize(X).

factorize(0) -> 1;
factorize(N) -> N * factorize(N-1).
2) Passing a float ( 10.0 ) causes the BEAM process to hang, consuming high CPU and never terminating. Notice this is a small value: I can compute the factorial of a large integer and get an almost immediate response, but the small float 10.0 makes it hang.
test() ->
    X = 10.0, % <-- NOTICE THE DOT ZERO 10.0
    F = factorize(X).

factorize(0) -> 1;
factorize(N) -> N * factorize(N-1).
Question: why on Erl Earth does this hang occur with a mere recursive multiplication of floats?
As the documentation says, there are two operators for comparing the equality of terms in Erlang, and they differ only in their handling of integers and floats:
=:= - exactly equal - counts numbers as equal only if both their types and their values are the same: false = (0.0 =:= 0)
== - equal - counts numbers as equal if their values are the same even when their types differ: true = (0.0 == 0)
Pattern matching uses the first one - the exactly-equal comparison - and that's why your function hangs: starting from 10.0, the argument steps through 9.0, 8.0, ..., 1.0, 0.0, but the float 0.0 never matches the integer pattern 0 in the first clause, so the second clause keeps recursing into ever more negative floats.
Another problem with floats is their approximate value. You can never be sure you have an exact value, especially after an arithmetic operation. A common practice is to use a small epsilon value in float equality tests.
is_zero(F) -> (F < 1.0e-10) andalso (F > -1.0e-10).
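Putting the two together, a minimal sketch of a fix for the original function: replace the exact-match base case with a guard, so that both 10 and 10.0 reach it (an is_integer/1 guard or an explicit trunc/1 conversion would also work):

factorize(N) when N < 1 -> 1;
factorize(N) -> N * factorize(N - 1).

Now factorize(10) returns 3628800 and factorize(10.0) returns 3628800.0 instead of looping forever.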
I have a finite set of pairs of type (int a, int b). The exact values of the pairs are explicitly present in the knowledge base. For example, it could be represented by a function (int a, int b) -> (bool exists) which is fully defined on a finite domain.
I would like to write a function f with signature (int b) -> (int count), giving the number of pairs that contain the specified b value as their second member. I would like to do this in z3 python, though it would also be useful to know how to do this in the z3 language.
For example, my pairs could be:
(0, 0)
(0, 1)
(1, 1)
(1, 2)
(2, 1)
then f(0) = 1, f(1) = 3, f(2) = 1
This is a bit of an odd thing to do in z3: If the exact values of the pairs are in your knowledge base, then why do you need an SMT solver? You can just search and count using your regular programming techniques, whichever language you are in.
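For illustration, here is that search-and-count approach in plain Python (a minimal sketch; no solver involved):

pairs = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 1)]

def f(b):
    # count the pairs whose second member equals b
    return sum(1 for (_, snd) in pairs if snd == b)

print(f(0), f(1), f(2))  # prints: 1 3 1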
But perhaps you have some other constraints that come into play, and want a generic answer. Here's how one would code this problem in z3py:
from z3 import *

pairs = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 1)]

def count(snd):
    # number of pairs whose second member equals snd, as a z3 expression
    return sum([If(snd == p[1], 1, 0) for p in pairs])

s = Solver()
searchFor = Int('searchFor')
result = Int('result')

# searchFor ranges over the b values that occur as a second member
s.add(Or(*[searchFor == d[1] for d in pairs]))
s.add(result == count(searchFor))

while s.check() == sat:
    m = s.model()
    print("f(" + str(m[searchFor]) + ") = " + str(m[result]))
    s.add(searchFor != m[searchFor])
When run, this prints:
f(0) = 1
f(1) = 3
f(2) = 1
as you predicted.
Again: if your pairs are exactly known (i.e., they are concrete numbers), don't use z3 for this problem; simply write a program to count as needed. If the database values, however, are not necessarily concrete but are subject to other constraints, then the above would be the way to go.
To find out how this is coded in SMTLib (the native language z3 speaks), you can insert print(s.sexpr()) in the program before the while loop starts. That's one way. Of course, if you were writing this by hand, you might want to code it differently in SMTLib; but I'd strongly recommend sticking to higher-level languages instead of SMTLib as it tends to be hard to read/write for anyone except machines.
I'm in the process of creating a cryptography package for Dart (https://pub.dev/packages/steel_crypt). Right now, most of what I've done is either exposed from PointyCastle or simple-ish algorithms where bitwise rotations are unnecessary or replaceable by >> and <<.
However, as I move toward complicated cryptographic solutions, which I can handle mathematically, I'm unsure how to implement bitwise rotation in Dart with maximum efficiency. Because of the nature of cryptography, speed is emphasized and uncompromising: I need the absolute fastest implementation.
I've ported a method of bitwise rotation from Java. I'm pretty sure it is correct, but I am unsure of its efficiency and readability. My tested implementation is below:
const int INT_BITS = 64; // Dart ints are 64-bit

static int leftRotate(int n, int d) {
  // In n << d, the last d bits are 0.
  // To move the first d bits of n to the end,
  // bitwise-or n << d with n >> (INT_BITS - d).
  return (n << d) | (n >> (INT_BITS - d));
}

static int rightRotate(int n, int d) {
  // In n >> d, the first d bits are 0.
  // To move the last d bits of n to the front,
  // bitwise-or n >> d with n << (INT_BITS - d).
  return (n >> d) | (n << (INT_BITS - d));
}
EDIT (for clarity): Dart has no unsigned right shift, meaning that >> is an arithmetic (sign-extending) shift. This bears more significance than I might have thought, and poses a challenge that other languages don't in devising an answer. The accepted answer below explains this and also shows the correct method of bitwise rotation.
As pointed out, Dart has no >>> (unsigned right shift) operator, so you have to rely on the signed shift operator.
In that case,
int rotateLeft(int n, int count) {
  const bitCount = 64; // make it 32 for JavaScript compilation.
  assert(count >= 0 && count < bitCount);
  if (count == 0) return n;
  return (n << count) |
      ((n >= 0) ? n >> (bitCount - count) : ~(~n >> (bitCount - count)));
}
should work.
This code only works for the native VM. When compiling to JavaScript, numbers are doubles, and bitwise operations are only done on 32-bit numbers.
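If you need identical behavior on the native VM and under JavaScript compilation, one workaround (a sketch, assuming a 32-bit rotation width is acceptable; not a drop-in replacement for the 64-bit version above) is to rotate inside an explicit 32-bit lane by masking:

int rotateLeft32(int n, int count) {
  const int mask = 0xFFFFFFFF; // keep all intermediate values in a 32-bit lane
  count &= 31;                 // rotating by 32 is a no-op
  n &= mask;
  if (count == 0) return n;
  // n is non-negative after masking, so >> behaves like a logical shift here.
  return ((n << count) & mask) | (n >> (32 - count));
}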
I'm learning F# and have an assignment where I have to treat a float as a coordinate. For example float 2.3 would be treated as a coordinate (2.3) where x is 2 and y is 3.
How can I split the float to calculate with it?
I am trying to make a function to calculate the length of a vector:
let lenOfVec (1.2, 2.3), using Pythagoras' method to get the length of the hypotenuse.
But I am already stuck at splitting up the float.
Hope someone can help!
With libraries as rich as those F#/.NET puts at your disposal, the task of splitting a float in two can be done with one short line of code:
let splitFloat (n: float) = n.ToString().Split('.') |> Array.map float
the library function ToString() converts the argument n (annotated as float) to a string
the library function Split('.') applied to this string converts it into an array of two strings: the digits before the decimal dot and the digits after it
finally, this array of 2 strings is converted by applying the library function float to each array element, with the help of another library function, Array.map, producing the sought array of two floats
Applied to a random float number, the outlined chain of conversions looks like:
123.456 --> "123.456" --> [|"123"; "456"|] --> [|123.0; 456.0|]
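With splitFloat in hand, the vector-length function from the question can be sketched as follows (this assumes one reading of the assignment: each float encodes a point, and lenOfVec returns the distance between the two points via Pythagoras):

let lenOfVec (a: float, b: float) =
    match splitFloat a, splitFloat b with
    | [| ax; ay |], [| bx; by |] -> sqrt ((bx - ax) ** 2.0 + (by - ay) ** 2.0)
    | _ -> failwith "expected values like 1.2 and 2.3"

lenOfVec (1.2, 2.3)  // distance between points (1,2) and (2,3): sqrt 2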
Stealing from a few other answers on here, something like this seems to work for a few examples:
open System
/// Takes in a float and returns a tuple of the two parts.
let split (n: float) =
    let x = Math.Truncate(n)
    let bits = Decimal.GetBits(decimal n)
    let count = BitConverter.GetBytes(bits.[3]).[2]
    let dec = n - x
    let y = dec * Math.Pow(10., float count)
    x, y
Examples:
2.3 -> (2.0, 3.0)
200.123 -> (200.0, 123.0)
5.23 -> (5.0, 23.0)
Getting the X is easy, as you can just truncate the decimal part.
Getting the Y took input from this answer and this one.
I would like:
unknown_function(123.456) -> 456
unknown_function(1234.56) -> 56
Or
unknown_function(123.456) -> "456"
Is there a builtin for this? The builtin trunc/1 does the opposite:
2> trunc(123.456).
123
There is this answer for C: Extract decimal part from a floating point number in C and this for Java: How to get the decimal part of a float?
No, there is no BIF for this, but you can do it like this (the round/1 at the end turns the scaled float back into an integer):

decimal_point(X, DecimalDigits) when X < 0 ->
    decimal_point(-X, DecimalDigits);
decimal_point(X, DecimalDigits) ->
    round((X - trunc(X)) * math:pow(10, DecimalDigits)).
> decimal_point(2.33, 2).
33
> decimal_point(-2.33, 2).
33
This is inspired by #Dogbert's comment
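Applied to the number from the question (the round/1 call absorbs the small float error at this precision):

> decimal_point(123.456, 3).
456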
The algorithm doesn't work using native floats, due to floating-point representation limits and rounding errors.
However, using https://github.com/tim/erlang-decimal:
frac_to_denom_int(Num, Denom, Precision) ->
    Quot = decimal:divide(Num, Denom, [{precision, Precision}]),
    [_, Frac] = string:tokens(decimal:format(Quot), "."),
    {X, _} = string:to_integer(Frac),
    X.
E.g.,
frac_to_denom_int("1.0", "3.0", 1000).
> 3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
If you don't start from a fraction,
d_to_denom_int(D_Tup) ->
    {X, _} = string:to_integer(lists:nth(2, string:tokens(decimal:format(D_Tup), "."))),
    X.
d_to_denom_int({0, 123456, -3}).
> 456
Based on #dogbert's comment, passing one more flag, compact, to the float_to_list/2 call will help:
lists:nth(2, string:tokens(float_to_list(123.456, [{decimals, 10}, compact]), ".")).
% "456"
If you go above 14 decimals, you'll start to see those rounding errors.
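For example, with 15 decimals the underlying double representation of 123.456 leaks through (output assumes standard IEEE doubles):

> lists:nth(2, string:tokens(float_to_list(123.456, [{decimals, 15}, compact]), ".")).
"456000000000003"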
I'm trying to make a little function to interpolate between two values with a given increment.
[ 1.0 .. 0.5 .. 20.0 ]
The compiler tells me that this is deprecated, and suggests using ints then casting to float. But this seems a bit long-winded if I have a fractional increment - do I have to divide my start and end values by my increment, then multiply again afterwards? (yeuch!)
I saw something somewhere once about using sequence comprehensions to do this, but I can't remember how.
Help, please.
TL;DR: F# PowerPack's BigRational type is the way to go.
What's Wrong with Floating-point Loops
As many have pointed out, float values are not suitable for looping:
They do have round-off error: just as with 1/3 in decimal, we inevitably lose all digits starting at a certain exponent;
They do experience catastrophic cancellation: when subtracting two almost equal numbers, the leading digits cancel and mostly rounding noise remains;
They always have a non-zero machine epsilon, so the error grows with every math operation (unless we are adding many different numbers so that the errors mutually cancel out -- but this is not the case for loops);
They do have different absolute accuracy across the range: the number of unique values in the range [0.0000001 .. 0.0000002] is about the same as the number of unique values in [1000000 .. 2000000];
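The round-off point is easy to see directly in fsi (a quick illustration; the exact error term printed assumes standard IEEE doubles):

> 0.1 + 0.2 = 0.3;;
val it : bool = false

> (0.1 + 0.2) - 0.3;;
val it : float = 5.551115123125783e-17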
Solution
What can instantly solve the above problems, is switching back to integer logic.
With F# PowerPack, you may use BigRational type:
open Microsoft.FSharp.Math
// [1 .. 1/3 .. 20]
[1N .. 1N/3N .. 20N]
|> List.map float
|> List.iter (printf "%f; ")
Note, I took the liberty of setting the step to 1/3, because 0.5 from your question actually has an exact binary representation 0.1b and is represented as +1.00000000000000000000000 * 2^-1; hence it does not produce any cumulative summation error.
Outputs:
1.000000; 1.333333; 1.666667; 2.000000; 2.333333; 2.666667; 3.000000; (skipped) 18.000000; 18.333333; 18.666667; 19.000000; 19.333333; 19.666667; 20.000000;
// [0.2 .. 0.1 .. 3]
[1N/5N .. 1N/10N .. 3N]
|> List.map float
|> List.iter (printf "%f; ")
Outputs:
0.200000; 0.300000; 0.400000; 0.500000; (skipped) 2.800000; 2.900000; 3.000000;
Conclusion
BigRational uses integer computations, which are no slower than floating-point ones;
The round-off occurs only once for each value (upon conversion to a float, but not within the loop);
BigRational acts as if the machine epsilon were zero;
There is an obvious limitation: you can't use irrational numbers like pi or sqrt(2), as they have no exact representation as a fraction. This does not seem to be a very big problem, because usually we are not looping over both rational and irrational numbers, e.g. [1 .. pi/2 .. 42]. If we do (as in geometry computations), there is usually a way to factor out the irrational part, e.g. switching from radians to degrees.
Further reading:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Numeric types in PowerPack
Interestingly, float ranges don't appear to be deprecated anymore. And I remember seeing a question recently (sorry, couldn't track it down) talking about the inherent issues which manifest with float ranges, e.g.
> let xl = [0.2 .. 0.1 .. 3.0];;
val xl : float list =
[0.2; 0.3; 0.4; 0.5; 0.6; 0.7; 0.8; 0.9; 1.0; 1.1; 1.2; 1.3; 1.4; 1.5; 1.6;
1.7; 1.8; 1.9; 2.0; 2.1; 2.2; 2.3; 2.4; 2.5; 2.6; 2.7; 2.8; 2.9]
I just wanted to point out that you can use ranges on decimal types with far fewer of these kinds of rounding issues, e.g.
> [0.2m .. 0.1m .. 3.0m];;
val it : decimal list =
[0.2M; 0.3M; 0.4M; 0.5M; 0.6M; 0.7M; 0.8M; 0.9M; 1.0M; 1.1M; 1.2M; 1.3M;
1.4M; 1.5M; 1.6M; 1.7M; 1.8M; 1.9M; 2.0M; 2.1M; 2.2M; 2.3M; 2.4M; 2.5M;
2.6M; 2.7M; 2.8M; 2.9M; 3.0M]
And if you really do need floats in the end, then you can do something like
> {0.2m .. 0.1m .. 3.0m} |> Seq.map float |> Seq.toList;;
val it : float list =
[0.2; 0.3; 0.4; 0.5; 0.6; 0.7; 0.8; 0.9; 1.0; 1.1; 1.2; 1.3; 1.4; 1.5; 1.6;
1.7; 1.8; 1.9; 2.0; 2.1; 2.2; 2.3; 2.4; 2.5; 2.6; 2.7; 2.8; 2.9; 3.0]
As Jon and others pointed out, floating-point range expressions are not numerically robust. For example, [0.0 .. 0.1 .. 0.3] equals [0.0 .. 0.1 .. 0.2]. Using decimal or int types in the range expression is probably better.
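A quick check of that claim in fsi (both ranges stop at 0.2, because 3 * 0.1 rounds to 0.30000000000000004, which overshoots the upper bound):

> [0.0 .. 0.1 .. 0.3] = [0.0 .. 0.1 .. 0.2];;
val it : bool = true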
For floats I use this function. It first widens the total range by three of the smallest representable float steps; I am not sure how robust this algorithm is, but it is good enough for me to ensure that the stop value is included in the Seq:
open System

let floatrange start step stop =
    if step = 0.0 then failwith "stepsize cannot be zero"
    let range =
        stop - start
        |> BitConverter.DoubleToInt64Bits
        |> (+) 3L
        |> BitConverter.Int64BitsToDouble
    let steps = range / step
    if steps < 0.0 then failwith "stop value cannot be reached"
    let rec frange (start, i, steps) =
        seq { if i <= steps then
                  yield start + i * step
                  yield! frange (start, (i + 1.0), steps) }
    frange (start, 0.0, steps)
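For instance, with the range from the original question (0.5 is exactly representable in binary, so the values come out clean and the stop value is included):

> floatrange 1.0 0.5 20.0 |> Seq.toList;;
val it : float list = [1.0; 1.5; 2.0; (skipped) 19.5; 20.0]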
Try the following sequence expression
seq { 2 .. 40 } |> Seq.map (fun x -> (float x) / 2.0)
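Materialized, this yields the same values as the original range expression; and since every element is a half-integer, each one is exactly representable, so there is no rounding at all:

> seq { 2 .. 40 } |> Seq.map (fun x -> (float x) / 2.0) |> Seq.toList;;
val it : float list = [1.0; 1.5; 2.0; (skipped) 19.5; 20.0]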
You can also write a relatively simple function to generate the range:
let rec frange(from:float, by:float, tof:float) =
    seq { if (from <= tof) then
              yield from
              yield! frange(from + by, by, tof) }
Using this you can just write:
frange(1.0, 0.5, 20.0)
Updated version of Tomas Petricek's answer, which also handles decreasing ranges (and works with units of measure):
(but it doesn't look as pretty)
let rec frange(from:float<'a>, by:float<'a>, tof:float<'a>) =
    seq {
        yield from
        if (float by > 0.) then
            if (from + by <= tof) then yield! frange(from + by, by, tof)
        else
            if (from + by >= tof) then yield! frange(from + by, by, tof)
    }
#r "FSharp.Powerpack"
open Math.SI
frange(1.0<m>, -0.5<m>, -2.1<m>)
UPDATE I don't know if this is new, or if it was always possible, but I just discovered (here) that this simpler syntax is also possible:
let dl = 9.5 / 11.
let min = 21.5 + dl
let max = 40.5 - dl
let a = [ for z in min .. dl .. max -> z ]
let b = a.Length
(Watch out, there's a gotcha in this particular example :)