Is the integer division operator any different from casting the result of normal division?
Consider the following Dart code:
void main() {
  int a = 10;
  int b = 3;
  int result = a ~/ b;
  int result2 = (a / b).toInt();
}
Do both of those expressions compile to the same machine code? I'm assuming the answer would be either:
A) Yes, but only because the compiler is smart enough to re-write the second expression.
B) Yes, because there is no actual difference and the first expression is shorthand for the second.
or C) No, because there is a difference in the way the calculation is performed at the CPU level and the compiler doesn't optimize this for you.
x ~/ y is not equivalent to (x / y).toInt(). Integer arithmetic (including division) is significantly faster than floating-point arithmetic.
If x ~/ y is faster, then couldn't the compiler automatically make that transformation? Perhaps, but such a transformation wouldn't always generate the same results, so it wouldn't necessarily be appropriate. Integer division typically is used when the operands are integers. (That is, when you start in the integer domain and want to stay in the integer domain.) If the operands are integers, then the two operations can generate different results due to the difference in precision between int (64-bit signed integers for the Dart VM) and double (an IEEE-754 double-precision floating-point number, which has 53 bits of precision for its significand):
void main() {
  var x = 9223372036854775801; // Close to (but not exactly) 2^63 - 1
  print(x); // Prints: 9223372036854775801
  print(x ~/ 1); // Prints: 9223372036854775801
  print((x / 1).toInt()); // Prints: 9223372036854775807
}
Finally, (x / y).toInt() is simply more cumbersome to write and to read than x ~/ y, especially if it's part of a more complex arithmetic expression.
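For what it's worth, the same trap is easy to reproduce outside Dart. A minimal sketch in OCaml (assuming a 64-bit platform, where the native int has 63 bits but a double's significand has only 53):

```ocaml
(* Integer division leaves x untouched; a round trip through float
   rounds it to the nearest representable double first. *)
let x = max_int - 512                            (* 2^62 - 513 *)
let exact = x / 1
let roundtrip = int_of_float (float_of_int x /. 1.0)

let () =
  assert (exact = x);        (* stays exact in the integer domain *)
  assert (roundtrip <> x)    (* 2^62 - 513 is not representable as a double *)
```

At this magnitude consecutive doubles are 512 apart, so the conversion snaps x to the nearest multiple of 512.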
When writing a function that supports currying, you can write it as a single-argument function that returns a function. For example:
let add x =
    let inner y = x + y
    inner
So you can either do:
add 3 4
or:
let add3 = add 3
add3 4
My question is: because you return a function, you are conceptually calling two functions (the outer one and the inner one). Is this slower than:
let add x y = x + y
or does the compiler optimise invocations of add 3 4 in the curried definition?
let f x = fun y -> x + y
let g x y = x + y
Looking at these function definitions in dnSpy for an optimized build reveals them to be:
public static int f(int x, int y)
{
    return x + y;
}

public static int g(int x, int y)
{
    return x + y;
}
This is not that strange, because g is actually a short-hand definition for f, which is the general case. In F#-like languages, functions conceptually always take a single value and return a single value. Values might themselves be functions. This is easier to see if one parenthesizes the function signatures for f and g:
val f: int -> int -> int
// Actually is
// val f: int -> (int -> int)
// ie f is a function that takes a single int and returns a function that takes a single int and returns an int.
In order to make F# execute faster on .NET, the physical representation of f in an assembly is:
public static int f(int x, int y)
A more natural representation of the F# function would be:
public static Func<int, int> f(int x)
That representation would perform poorly, though.
Usually F# is clever enough to avoid the overhead of the abstraction through optimizations like the one above, both at definition and at invocation sites. However, there are situations where F# can't optimize for you.
Imagine that you are implementing fold
let rec fold f s vs =
    match vs with
    | v :: vs -> fold f (f s v) vs
    | [] -> s
Here F# can't fully optimize the call f s v. The reason is that f might have a more complex implementation than the ones above, one that returns a different function depending on s.
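To make that concrete, here is one hypothetical shape such an f could take (sketched in OCaml rather than F#; the function is made up for illustration). Because the returned closure is chosen at runtime based on s, the call f s v cannot be flattened into a single two-argument invocation:

```ocaml
(* Which closure you get back depends on the first argument,
   so a compiler cannot assume one flat two-argument function. *)
let f s =
  if s >= 0 then (fun v -> s + v)   (* one closure for non-negative s *)
  else (fun v -> s - v)             (* a different closure otherwise *)

let sum = List.fold_left f 0 [1; 2; 3]   (* 6 *)
```

Each step of the fold must first evaluate f s to discover which closure to apply, which is exactly the per-call test described below.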
If you look in dnSpy, you'll note that F# invokes the function using InvokeFast, but this performs an internal test to see whether it can be invoked fast. In fold we then repeat that test for each value, even though f is the same function throughout.
This is the reason one might sometimes see fold written like this:
let fold f s vs =
    let f = OptimizedClosures.FSharpFunc<_, _, _>.Adapt f
    let rec loop s vs =
        match vs with
        | v :: vs -> loop (f.Invoke (s, v)) vs
        | [] -> s
    loop s vs
Adapt tests, once before the loop, whether f can indeed be optimized, and then returns an efficient adapter. In the general case it might still be a bit slower, but then that is what the caller intended.
Note: this potential performance degradation doesn't happen for simple function values like 'T -> 'U. Those can always be invoked efficiently.
Hope this helps.
I tested this in LINQPad 5.
When compiler optimizations are turned off, the F# compiler will produce different IL for each snippet. In other words, if there are any optimizations going on, it's left up to the JITter, and it may very well be slower to call the first form.
However, when compiler optimizations are turned on, both forms produce identical IL outputs in every scenario I could think of to test it. In fact, with both forms, calling:
add 3 4
yields the IL equivalent of a hard-coded 7, with the entire function call optimized away:
ldc.i4.7
In other words, the F# compiler is pretty thorough when it comes to optimizing logically identical code blocks.
This is not an exhaustive answer, of course, and there could be some case where they are actually treated differently by the compiler.
I have code like this in F#
let fun a b =
    let mutable x1 = float 0
    <some stuff>
    x1 <- -a/b

let main argv =
    let a = Int32.Parse(Console.ReadLine())
    let b = Int32.Parse(Console.ReadLine())
    fun a b
And I have a problem with converting -a/b to float. I tried the old-school *1.0f trick and a few others, but none of them worked. Is there any way to convert it like that, or do I have to write it another way?
Your question is not very clear, but it looks to me like you are dividing two ints and expecting that converting the result to a float would keep the fractional part. When you divide two ints, you get back an int, and converting it to a float cannot bring back the fractional part.
You should either convert both a and b to float before performing the division (float -a) / (float b) or, to begin with, read float values from the console with Console.ReadLine() |> float.
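The same two fixes can be sketched in OCaml, where the int/float split is even more visible because the two divisions are different operators (/ versus /.):

```ocaml
let a, b = 7, 2

(* Wrong: integer division truncates first; converting afterwards
   cannot bring the fractional part back. *)
let lost = float_of_int (-a / b)                   (* -3. *)

(* Right: convert both operands, then divide as floats. *)
let kept = float_of_int (-a) /. float_of_int b     (* -3.5 *)
```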
This is the first line of F# that I've tried writing, so apologies because I probably just don't know the right Google keywords to search.
I tried defining a function like so:
let sigmoid x deriv = if deriv then x * (1 - x) else 1 / (1 + System.Math.Exp(-x))
This gives me an error on the System.Math.Exp(-x):
The type 'float' does not match the type 'int'
I guess I was expecting the compiler to do type inference on this function and define x as a float. What am I missing here?
Here is the whole thing that I'm trying to plug in:
let sigmoid x deriv = if deriv then x * (1 - x) else 1 / (1 + System.Math.Exp(-x))
[<EntryPoint>]
let main argv =
    sigmoid 1.0 false |> printfn "%A"
    0
The compiler infers x as int, because you used it in things like 1 - x. A simple 1 will always be an integer, and you can only use it in arithmetic expressions together with other integers. Your code compiles if you change all your usages of 1 to 1.0, which will make it a float and cause x to be inferred as a float as well.
This is different from C#, for example, which will coerce the types if necessary and thus allow for mixing integers and floating point numbers in the same expressions. That can lead to an accidental loss of precision under certain circumstances, though, while F# always forces you to state any necessary conversions explicitly.
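OCaml pushes the same idea even further than F#: integer and floating-point arithmetic use entirely separate operators, so there is nothing for inference to get wrong once a single float literal or operator appears. A rough OCaml rendering of the fixed sigmoid:

```ocaml
(* The operators *. -. /. +. and the literal 1.0 pin every operand to float. *)
let sigmoid x deriv =
  if deriv then x *. (1.0 -. x)
  else 1.0 /. (1.0 +. exp (-. x))
```

Here sigmoid 1.0 false evaluates to roughly 0.7311, matching what the question intended.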
The function I have:
let increment n = n+1;;
My call to the function:
let x = increment -5;;
The error I get:
let x = increment -5;;
^^^^^^^^^
Error: This expression has type int -> int
but an expression was expected of type int
Why doesn't x = -4 after the function call?
I tried finding the answer myself and I ended up here:
http://caml.inria.fr/pub/docs/manual-ocaml-4.00/lex.html#prefix-symbol
The term "longest match rule" is used in the section Ambiguities, I assume that a clarification of that term would solve my issue?
The problem is simple: - is treated here as the binary minus operator, so OCaml reads the expression as increment minus 5 (with increment and 5 as the two operands), when what you want is to apply the function increment to the number -5.
Just write increment (-5) and the job's done.
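Spelled out, the failing call and the working call side by side:

```ocaml
let increment n = n + 1

(* let x = increment -5 *)   (* parsed as increment minus 5: a type error *)
let x = increment (-5)       (* parentheses make -5 the argument *)
```

With the parentheses, x is -4 as expected.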
An alternative to using parentheses, as suggested in the other answer, is to use the @@ application operator, e.g.:
let increment n = n + 1
let x = increment @@ -5
Briefly, f @@ x is the same as f x, but the @@ operator has lower precedence than arithmetic operations, while function application has higher precedence. Thus, using the application operator allows you to omit the parentheses.
Sometimes, the reverse application operator |> can be more idiomatic; it is typically used to "pipe" an expression through one or more function applications.
Example:
let increment n = n + 1
let x = -5 |> increment
let clamp x low high = x |> min high |> max low
What is the minimum value d such that...
f - d != f
f + d != f
...for any CGFloat f (except infinity)?
For example, it's not 1.
CGFloat f = CGFLOAT_MAX;
CGFloat d = 1;
NSLog(@"%d", f - d != f); // Prints 0
If “any CGFloat f” truly meant any value that a CGFloat can take, then the answer would be infinity: f could itself be infinity (a legal value), and f - d would equal f for any finite value of d.
If f is limited to finite values, then d is 2^970 if CGFLOAT_IS_DOUBLE is set and 2^103 otherwise. These are the smallest values required to cause a change when d is added to the largest finite value a CGFloat can have.
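The double case is easy to check. A sketch in OCaml, whose float is a 64-bit IEEE double, with max_float standing in for the largest finite CGFloat:

```ocaml
let f = max_float
let () =
  assert (f -. 1.0 = f);             (* 1.0 is far below f's ulp of 2^971 *)
  assert (f -. 2. ** 969. = f);      (* under half an ulp still rounds back to f *)
  assert (f -. 2. ** 970. <> f)      (* half an ulp is the first d that sticks *)
```

The half-ulp case is an exact tie, and round-half-to-even resolves it away from f because f's significand is odd.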
These are undoubtedly not the values you are looking for. As Stephen Canon notes, what are you really trying to do?
The value such that
f - d != f
f + d != f
is not fixed for all possible floating-point numbers. You've probably come to this conclusion on your own, because you can clearly see that for d=1 and f=5.0, f-d=4.0... but this doesn't work for CGFLOAT_MAX.
This is because of how floating-point numbers are stored in memory: they are stored as a significand and an exponent, so not all digits are represented.
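A quick sketch of that observation in OCaml (whose float is an IEEE double):

```ocaml
let () =
  assert (5.0 -. 1.0 <> 5.0);              (* d = 1 is visible against a small f *)
  assert (max_float -. 1.0 = max_float)    (* the same d vanishes at the top *)
```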
That would depend on the definition of CGFloat on your platform.
Current platforms use either float (typically when compiling for 32-bit platforms) or double (typically when compiling for 64-bit platforms).
32-bit floats use 23 bits for the fraction, which means that d would probably be around CGFLOAT_MAX/2^23.
64-bit doubles use 52 bits for the fraction, which means that d would probably be around CGFLOAT_MAX/2^52.
The minimum value for d depends on the value of f.
The problem is that CGFloat only supports a certain number of significant digits and not all floating point number are represented exactly.
The following is not exact results but meant to illustrate the problem.
If f is 1.0 then d might need to be something like 0.0000001. But if f is 1000000.0 (one million) then d would need to be 0.1.
Basically the value of d must be within the significant digits of f for the result to be noticeable.
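That scaling is easy to see in OCaml (a 64-bit IEEE double, so the exact thresholds differ from the illustrative figures above):

```ocaml
let () =
  assert (1.0 +. 1e-8 <> 1.0);     (* against a small f, a tiny d registers *)
  assert (1e16 +. 1e-8 = 1e16);    (* the same d disappears against a large f *)
  assert (1e16 +. 2.0 <> 1e16)     (* d has to grow along with f *)
```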
Consider using NSDecimalNumber to be able to fully represent large numbers eliminating this problem.