Converting Int to Float without Rounding in F#

I have code like this in F#:
let fun a b =
    let mutable x1 = float 0
    <some stuff>
    x1r <- -a/b

let main argv =
    let a = Int32.Parse(Console.ReadLine())
    let b = Int32.Parse(Console.ReadLine())
    fun a b
And I have a problem converting -a/b to float. I tried the old-school *1.0f trick and a few others, but none of them worked. Is there any way to convert it like that, or do I have to write it another way?

Your question is not very clear, but it looks to me like you are dividing two ints and expecting that converting the result to a float will keep the fractional part. When you divide two ints, you get back an int, and converting that result to a float cannot bring back the fractional part.
You should either convert both a and b to float before performing the division, (float -a) / (float b), or read float values from the console in the first place with Console.ReadLine() |> float.
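For example, a minimal sketch of the second option (divide and the surrounding program are placeholders rather than your original code):
open System

// Read the inputs as floats up front so the division itself is floating-point.
let divide (a: float) (b: float) = -a / b

[<EntryPoint>]
let main argv =
    let a = Console.ReadLine() |> float
    let b = Console.ReadLine() |> float
    printfn "%f" (divide a b)
    0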

Related

Is integer division operator necessary in Dart?

Is the integer division operator any different than casting the result of normal division?
Consider the following Dart code:
void main() {
  int a = 10;
  int b = 3;
  int result = a ~/ b;
  int result2 = (a / b).toInt();
}
Do both of those expressions compile to the same machine code? I'm assuming the answer would be either:
A) Yes, but only because the compiler is smart enough to re-write the second expression.
B) Yes, because there is no actual difference and the first expression is shorthand for the second.
or C) No, because there is a difference in the way the calculation is performed at the CPU level and the compiler doesn't optimize this for you.
x ~/ y is not equivalent to (x / y).toInt(). Integer arithmetic (including division) is significantly faster than floating-point arithmetic.
If x ~/ y is faster, then couldn't the compiler automatically make that transformation? Perhaps, but such a transformation wouldn't always generate the same results, so it wouldn't necessarily be appropriate. Integer division typically is used when the operands are integers. (That is, when you start in the integer domain and want to stay in the integer domain.) If the operands are integers, then the two operations can generate different results due to the difference in precision between int (64-bit signed integers for the Dart VM) and double (an IEEE-754 double-precision floating-point number, which has 53 bits of precision for its significand):
void main() {
  var x = 9223372036854775801; // Close to (but not exactly) 2^63 - 1
  print(x);                // Prints: 9223372036854775801
  print(x ~/ 1);           // Prints: 9223372036854775801
  print((x / 1).toInt());  // Prints: 9223372036854775807
}
Finally, (x / y).toInt() is simply more cumbersome to write and to read than x ~/ y, especially if it's part of a more complex arithmetic expression.

F# - fsc.exe hangs up on huge file

I run some organic chemistry models. A model is described by a generated ModelData.fs file, e.g.: https://github.com/kkkmail/ClmFSharp/blob/master/Clm/Model/ModelData.fs . The file has a very simple structure and using a generated model file is the only way that it can possibly work.
The referenced file is just for tests, but the real models are huge and may approach 60 - 70 MB / 1.5M LOC. When I try to compile such files, the F# compiler, fsc.exe, just hangs and never comes back. It "eats" about 1.5 GB of memory and then churns forever at near 100% CPU. It can clearly handle smaller models of about 10 MB, which compile in under a minute or so. So somewhere between 10 MB and 70 MB something breaks down badly in fsc.
I wonder if there are some parameter tweaks I could make to the way fsc compiles the project in order to make it capable of handling such huge models.
The huge models that I am referring to have one parameter set as follows: let numberOfSubstances = 65643. This results in various generated arrays of that size. I wonder if this could be the source of the problem.
Thanks a lot!
I don't think you need to autogenerate all of that.
From your comments, I understand that the functions d0, d1, ... are generated from a big sparse matrix in a way that sums up all of the input array x (with coefficients), but crucially skips summing up zero coefficients, which gives you a great performance gain, because the matrix is huge. Would that be a correct assessment?
If so, I still don't think you need to generate code to do that.
Let's take a look. I will assume that your giant sparse matrix has an interface for obtaining cell values, and it looks something like this:
let getMatrixCell (i: int) (j: int) : double
let maxI: int
let maxJ: int
Then your autogeneration code might look something like this:
let generateDFunction (i: int) =
printfn "let d%d (x: double[]) =" i
printfn " [|"
for j in 0..maxJ do
let cell = getMatrixCell i j
if cell <> 0 then
printfn " %f * x.[%d]" cell j
printfn " |]"
printfn " |> Array.sum"
Which would result in something like this:
let d25 (x : array<double>) =
    [|
        -1.0 * x.[25]
        1.0 * x.[3]
    |]
    |> Array.sum
Note that I am simplifying here: in your example file, it looks like the functions also multiply negative coefficients by x.[i]. But maybe I'm also overcomplicating, because it looks like all the coefficients are always either 1 or -1. But that is all nonessential to my point.
Now, in the comments, it has been proposed that you don't generate functions d0, d1, ... but instead work directly with the matrix. For example, this would be a naive implementation of such a suggestion:
let calculateDFunction (i: int) (x: double[]) =
    [| for j in 0..maxJ -> (getMatrixCell i j) * x.[j] |] |> Array.sum
You then argued that this solution would be prohibitively slow, because it always iterates over the whole array x, which is huge, but most of the coefficients are zero, so it doesn't have to.
And then your way of solving this issue was to use an intermediate step of generated code: you generate functions that only touch the non-zero indices, and then you compile and use those functions.
But here's the point: yes, you do need an intermediate step that throws away the zero entries, but it doesn't have to be generated-and-compiled code!
Instead, you can prepare lists/arrays of non-zero indices ahead of time:
let indices =
    [| for i in 0..maxI ->
        [ for j in 0..maxJ do
            let cell = getMatrixCell i j
            if cell <> 0.0 then yield (j, cell)
        ]
    |]
This will yield an array indices : array<(int * double) list>, where each index k corresponds to your autogenerated function dk, and it contains a list of the non-zero matrix indices together with their values in the matrix. For example, the function d25 I gave above would be represented by the 25th element of indices:
indices.[25] = [ (25, -1.0); (3, 1.0) ]
Based on this intermediate structure, you can then calculate any function dk:
let calculateDFunction (k: int) (x: double[]) =
    [| for (j, coeff) in indices.[k] -> coeff * x.[j] |] |> Array.sum
In fact, if performance is crucial to you (as it seems to be from the comments), you probably should do away with all those intermediate arrays: hundreds or thousands of heap allocations on each iteration are definitely not helping. You can sum with a mutable variable instead:
let calculateDFunction (k: int) (x: double[]) =
    let mutable sum = 0.0
    for (j, coeff) in indices.[k] do
        sum <- sum + coeff * x.[j]
    sum
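As a rough usage sketch (evaluateAll is a hypothetical helper, not something from your model file): computing every dk for one state vector then becomes a plain loop over the precomputed index lists, touching only the non-zero entries:
// Hypothetical driver: evaluate every dk for one state vector x,
// assuming indices, maxI and calculateDFunction as defined above.
let evaluateAll (x: double[]) : double[] =
    Array.init (maxI + 1) (fun k -> calculateDFunction k x)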

Does using currying result in lower performance in F#?

When writing a function that supports currying, you can write it as a single-argument function that returns a function. For example:
let add x =
    let inner y = x + y
    inner
So you can either do:
add 3 4
or:
let add3 = add 3
add3 4
My question is: because you return a function, you are conceptually calling a function twice (the outer function and the inner function). Is this slower than:
let add x y = x + y
or does the compiler optimise invocations of add 3 4 in the curried definition?
let f x = fun y -> x + y
let g x y = x + y
Looking at these function definitions in dnSpy for an optimized build reveals them to be:
public static int f(int x, int y)
{
    return x + y;
}
public static int g(int x, int y)
{
    return x + y;
}
This is not that strange, because g is actually a short-hand definition for f, which is the general case. In F#-like languages a function conceptually always takes a single value and returns a single value; values might themselves be functions. This is easier to see if one parenthesizes the function signatures for f and g:
val f: int -> int -> int
// Actually is:
// val f: int -> (int -> int)
// i.e. f is a function that takes a single int and returns a function
// that takes a single int and returns an int.
In order to make F# execute faster on .NET, the physical representation of f in an assembly is:
public static int f(int x, int y)
while this:
public static Func<int, int> f(int x)
would be a more natural representation of the F# function, but it would perform poorly.
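To make the cost concrete (a small sketch; add3 is just an illustrative name): a full application can call the compiled two-argument method directly, whereas a partial application has to materialise the intermediate int -> int function as a real closure object that is invoked later:
let f x = fun y -> x + y

// Full application: compiles to a direct call to the two-argument method above.
let a = f 3 4

// Partial application: the intermediate int -> int function exists as a
// heap-allocated closure (an FSharpFunc<int, int>) that is invoked afterwards.
let add3 = f 3
let b = add3 4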
Usually F# is clever enough to avoid the overhead of the abstraction through optimizations like the ones above, both at the definition and at the invocation site. However, there are situations where F# can't optimize for you.
Imagine that you are implementing fold
let rec fold f s vs =
    match vs with
    | v::vs -> fold f (f s v) vs
    | []    -> s
Here F# can't fully optimize f s v. The reason is that f might have a more complex implementation than the one above, one that returns a different function depending on s.
If you look in dnSpy you will note that F# invokes the function using InvokeFast, but this does an internal test to see whether the function can be invoked fast. In fold we then repeat this test for each value, even though it is the same function every time.
This is the reason one might sometimes see fold written like this:
let fold f s vs =
    let f = OptimizedClosures.FSharpFunc<_, _, _>.Adapt f
    let rec loop s vs =
        match vs with
        | v::vs -> loop (f.Invoke (s, v)) vs
        | []    -> s
    loop s vs
Adapt tests once, before the loop, whether f can indeed be optimized, and then returns an efficient adapter. In the general case it might still be a bit slower, but then that is what the caller intended.
Note: this potential performance degradation doesn't happen for simple function values like 'T -> 'U; those can always be invoked efficiently.
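For instance (a sketch with illustrative names, not taken from any compiled output): a callback of type 'T -> 'U is a single Invoke per call, whereas a two-argument callback in general needs either two chained Invoke calls or the InvokeFast/Adapt check shown above:
// One FSharpFunc<int, int>.Invoke per call; nothing to adapt.
let applyOne (f: int -> int) x = f x

// In general either f.Invoke(x).Invoke(y) or the InvokeFast/Adapt fast path.
let applyTwo (f: int -> int -> int) x y = f x y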
Hope this helps.
I tested this in LINQPad 5.
When compiler optimizations are turned off, the F# compiler will produce different IL for each snippet. In other words, if there are any optimizations going on, it's left up to the JITter, and it may very well be slower to call the first form.
However, when compiler optimizations are turned on, both forms produce identical IL outputs in every scenario I could think of to test it. In fact, with both forms, calling:
add 3 4
yields the IL equivalent of a hard-coded 7, with the entire function call optimized away:
ldc.i4.7
In other words, the F# compiler is pretty thorough when it comes to optimizing logically identical code blocks.
This is not an exhaustive answer, of course, and there could be some case where they are actually treated differently by the compiler.

F# function parameter datatype

This is the first line of F# that I've tried writing, so apologies because I probably just don't know the right Google keywords to search.
I tried defining a function like so:
let sigmoid x deriv = if deriv then x * (1 - x) else 1 / (1 + System.Math.Exp(-x))
This gives me an error on the System.Math.Exp(-x):
The type 'float' does not match the type 'int'
I guess I was expecting the compiler to do type inference on this function and define x as a float. What am I missing here?
Here is the whole thing that I'm trying to plug in:
let sigmoid x deriv = if deriv then x * (1 - x) else 1 / (1 + System.Math.Exp(-x))
[<EntryPoint>]
let main argv =
    sigmoid 1.0 false |> printfn "%A"
    0
The compiler infers x as int, because you used it in things like 1 - x. A simple 1 will always be an integer, and you can only use it in arithmetic expressions together with other integers. Your code compiles if you change all your usages of 1 to 1.0, which will make it a float and cause x to be inferred as a float as well.
This is different from C#, for example, which will coerce the types if necessary and thus allow for mixing integers and floating point numbers in the same expressions. That can lead to an accidental loss of precision under certain circumstances, though, while F# always forces you to state any necessary conversions explicitly.
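For instance, the same program compiles once every literal is written as a float (a minimal sketch of the fix described above):
// All numeric literals are floats, so x is inferred as float as well.
let sigmoid x deriv =
    if deriv then x * (1.0 - x)
    else 1.0 / (1.0 + System.Math.Exp(-x))

[<EntryPoint>]
let main argv =
    sigmoid 1.0 false |> printfn "%A"
    0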

f# idiomatic type resolution solution for resolving to the wrong type

I'm just getting started with F# but I have some code that is analogous to the following:
let square x = x*x
let result = square 5.1
let result' = square 12
Unfortunately, this results in the following error: This expression was expected to have type float but here has type int
Is there an idiomatic F# solution to this problem, or is my thinking being tainted by my C# experience?
Just write it like this:
let inline square x = x * x
Otherwise, after the first time you use that square function, its type is inferred to be float -> float. Hence, given that F# does not do automatic conversion from int to float, you receive an error.
So, if you don't want to use inline, the simplest solution is to write
let result' = square (float 12)
It is simple and yet readable.
For more advanced solutions please take a look at this: Does F# have generic arithmetic support?
But those solutions are (imho) incomprehensible.
let inline square x = x * x
let result = square 5.1
let result' = square 12
printfn "%f" result
printfn "%d" result'
There's a whole article by Tomas Petricek on this subject:
http://tomasp.net/blog/fsharp-generic-numeric.aspx/
