Difference between higher-order and curried functions - F#

I'm reading a book, Functional Programming Using F#, which says (page 33), in the section Declaration of higher-order functions
We have seen higher-order built-in functions like (+) and (<<)
and at the end of the section
Higher-order functions may alternatively be defined by supplying the arguments as follows in the let-declaration:
let weight ro s = ro * s ** 3.0;;
However, there were some helpful comments at the bottom of a question that I asked earlier today (which was originally titled "When should I write my functions as higher order functions?") that seem to throw some doubt on whether these examples are in fact higher-order functions.
The wikipedia definition of higher-order function is:
a higher-order function (also functional form, functional or functor) is a function that does at least one of the following: (i) take one or more functions as an input; (ii) output a function.
On the one hand, I can see that functions like (+), and weight might be regarded as higher order functions because given a single argument they return a function. On the other hand I can see that they are properly regarded as curried functions. I'm learning F# as a self-study project and would like to get the concepts clear, so the answers and discussion on this site are particularly helpful.
My question is, what is the right term for these functions, and, perhaps more importantly, how do people normally use the terms "higher order functions" and "curried functions"?

I think you could say that a curried function is a higher-order function that returns a function as its result.
A curried function is a function with a type that looks like a -> b -> c - and if you add parentheses (which does not change the type) a -> (b -> c), you can see that this is also higher-order.
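To make both readings concrete, here is a minimal F# sketch (add and add5 are illustrative names, not from the book):

let add x y = x + y    // curried type: int -> int -> int, i.e. int -> (int -> int)
let add5 = add 5       // applying one argument returns a function: int -> int
printfn "%d" (add5 37) // 42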
However, you can write functions that are higher-order but not curried. For example, the following simple function takes some function f and calls it twice:
let runTwice f = f(); f()
This function has a type (unit -> unit) -> unit, so it is not curried (it just takes some input and returns unit value), but it is higher-order because the argument is a function.
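For example, a throwaway usage:

runTwice (fun () -> printfn "hello")   // prints "hello" twice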
Although functions like (+) are technically higher-order (the type is int -> (int -> int)), I do not think that they are good examples of higher-order, because you do not usually use them in the higher-order way (but it is occasionally useful). More typical examples of higher-order functions are functions like List.map that take functions as arguments.
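For instance, a small sketch showing both styles - List.map as the typical higher-order function, and (+) used in the higher-order way via partial application:

let doubled = List.map (fun x -> x * 2) [1; 2; 3]   // [2; 4; 6]
let bumped = List.map ((+) 1) [1; 2; 3]             // [2; 3; 4]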

Roughly speaking, the curried functions are a subset of the higher-order functions. Higher-order functions accept functions as arguments or return functions in their results. Curried functions are multivariate functions written in curried form as a function accepting the first argument and returning a function that accepts the second argument and so forth.
That is what Tomas says above. However, I think there is a subtlety here: I don't think all functions that return a function are curried, and I think that Tomas's statement "if you add parentheses (which does not change the type)" is inaccurate in F#.
Specifically, consider a function that takes an argument, has a side effect, and then returns another function that takes another argument and returns a result:
let f x =
    printfn "%d" x
    fun y -> x + y
F# infers the type:
val f : int -> (int -> int)
Note that it has put seemingly-redundant parentheses in there, I believe, precisely because there is a subtle difference between those types.
Furthermore, although this function returns a function as its result I do not think it qualifies as a curried function because of that side effect. This is not a multivariate function rewritten in curried form...
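To see the difference the side effect makes, compare f above with a plain curried g (g is my own illustrative addition):

let g x y = x + y        // genuinely curried: int -> int -> int

let f10 = f 10           // prints "10" immediately, then returns a function
let g10 = g 10           // returns a function; nothing observable happens yet

printfn "%d" (f10 5)     // 15
printfn "%d" (g10 5)     // 15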

Related

When are F# function calls evaluated; lazily or immediately?

Curried functions in F#. I get the bit where passing in a subset of parameters yields a function with presets. I just wondered if passing all of the parameters is any different. For example:
let addTwo x y = x + y
let incr = addTwo 1
let added = addTwo 2 2
incr is a function taking one argument.
Is added an int or a function? I can imagine an implementation where "added" is evaluated lazily only on use (like Schroedinger's Cat on opening the box). Is there any guarantee of when the addition is performed?
added is not a function; it is just a value that is calculated and bound to the name on the spot. A function always needs at least one parameter; if there is nothing useful to pass, that would be the unit value ():
let added () = addTwo 2 2
F# is an eagerly evaluated language, so an expression like addTwo 2 2 will immediately be evaluated to a value of the int type.
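If you do want deferred evaluation in F#, you have to ask for it explicitly with lazy; a brief sketch:

let added = addTwo 2 2              // int: the addition happens right here
let addedLazy = lazy (addTwo 2 2)   // Lazy<int>: not evaluated yet
let value = addedLazy.Force()       // evaluated here, on the first Force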
Haskell, by contrast, is lazily evaluated. An expression like addTwo 2 2 will not be evaluated until the value is needed. The type of the expression would still be a single integer, though. Even so, such an expression is, despite its laziness, not regarded as a function; in Haskell, such an unevaluated expression is called a thunk. That basically just means 'an arbitrarily complex expression that's not yet evaluated'.
incr is a function taking one argument. Is added an int or a function?
added, in this case, is a named binding that evaluates to an int. It is not a function.
I can imagine an implementation where "added" is evaluated lazily only on use (like Schroedinger's Cat on opening the box). Is there any guarantee of when the addition is performed?
The addition will be performed immediately when the binding is generated. There is no laziness involved.
As explained by TeaDrivenDev, you can change added to be a bound function instead of a bound value by adding a parameter, which can be unit:
let added () = addTwo 2 2
In this case, it will be a function, so the addition wouldn't happen until you call it:
let result = added () // Call the function, bind output to result
No. But kind of yes. But really, no.
You can construct a pure functional language that only has functions and nothing else. Lambda calculus is a complete algebra, so the theory is there. In this model, added can be considered a parameter-less function (in contrast to e.g. random(), where there's one parameter of type unit).
But F# is different. Since it's a rather pragmatic mix of imperative and functional programming, the result is not a function[1]. Instead, it's a value, just like a local in C#. This is no implementation detail - it's actually part of the F# specification. This does have disadvantages - it means it's possible to have an ambiguous definition, where a definition could be either a value or a function definition (14.6.1).
[1] - Though in a pure functional program, you can't tell the difference - it's the same as just doing a substitution of the function with a cached value, which is perfectly legal.

Why are there two kinds of functions in Elixir?

I'm learning Elixir and wonder why it has two types of function definitions:
functions defined in a module with def, called using myfunction(param1, param2)
anonymous functions defined with fn, called using myfn.(param1, param2)
Only the second kind of function seems to be a first-class object and can be passed as a parameter to other functions. A function defined in a module needs to be wrapped in a fn. There's some syntactic sugar which looks like otherfunction(&myfunction(&1, &2)) in order to make that easy, but why is it necessary in the first place? Why can't we just do otherfunction(myfunction)? Is it only to allow calling module functions without parentheses like in Ruby? It seems to have inherited this characteristic from Erlang, which also has module functions and funs, so does it actually come from how the Erlang VM works internally?
Is there any benefit to having two types of functions and converting from one type to another in order to pass them to other functions? Is there a benefit to having two different notations for calling functions?
Just to clarify the naming, they are both functions. One is a named function and the other is an anonymous one. But you are right, they work somewhat differently and I am going to illustrate why they work like that.
Let's start with the second, fn. fn is a closure, similar to a lambda in Ruby. We can create it as follows:
x = 1
fun = fn y -> x + y end
fun.(2) #=> 3
A function can have multiple clauses too:
x = 1
fun = fn
  y when y < 0 -> x - y
  y -> x + y
end
fun.(2) #=> 3
fun.(-2) #=> 3
Now, let's try something different. Let's try to define different clauses expecting a different number of arguments:
fn
  x, y -> x + y
  x -> x
end
** (SyntaxError) cannot mix clauses with different arities in function definition
Oh no! We get an error! We cannot mix clauses that expect a different number of arguments. A function always has a fixed arity.
Now, let's talk about the named functions:
def hello(x, y) do
  x + y
end
As expected, they have a name and they can also receive some arguments. However, they are not closures:
x = 1
def hello(y) do
  x + y
end
This code will fail to compile because every time you see a def, you get an empty variable scope. That is an important difference between them. I particularly like the fact that each named function starts with a clean slate and you don't get the variables of different scopes all mixed up together. You have a clear boundary.
We could retrieve the named hello function above as an anonymous function. You mentioned it yourself:
other_function(&hello(&1))
And then you asked: why can't I simply pass it as hello, as in other languages? That's because functions in Elixir are identified by name and arity. So a function that expects two arguments is a different function from one that expects three, even if they have the same name. So if we simply passed hello, we would have no idea which hello you actually meant: the one with two, three, or four arguments? This is exactly the same reason why we can't create an anonymous function with clauses of different arities.
Since Elixir v0.10.1, we have a syntax to capture named functions:
&hello/1
That will capture the local named function hello with arity 1. Throughout the language and its documentation, it is very common to identify functions in this hello/1 syntax.
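For example, a hypothetical Greeter module showing that name plus arity identifies a function, and how capture resolves the ambiguity:

defmodule Greeter do
  def hello(name), do: "Hello, #{name}!"
  def hello(greeting, name), do: "#{greeting}, #{name}!"
end

# hello/1 and hello/2 are distinct functions; a bare `hello` would be ambiguous
Enum.map(["Ana", "Bo"], &Greeter.hello/1)
#=> ["Hello, Ana!", "Hello, Bo!"]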
This is also why Elixir uses a dot for calling anonymous functions. Since you can't simply pass hello around as a function, instead you need to explicitly capture it, there is a natural distinction between named and anonymous functions and a distinct syntax for calling each makes everything a bit more explicit (Lispers would be familiar with this due to the Lisp 1 vs. Lisp 2 discussion).
Overall, those are the reasons why we have two functions and why they behave differently.
I don't know how useful this will be to anyone else, but the way I finally wrapped my head around the concept was to realize that Elixir functions aren't Functions.
Everything in Elixir is an expression. So
MyModule.my_function(foo)
is not a function but the expression returned by executing the code in my_function. There is actually only one way to get a "Function" that you can pass around as an argument and that is to use the anonymous function notation.
It is tempting to refer to the fn or & notation as a function pointer, but it is actually much more. It's a closure of the surrounding environment.
If you ask yourself:
Do I need an execution environment or a data value in this spot?
And if you need execution, use fn, then most of the difficulties become much clearer.
I may be wrong since nobody mentioned it, but I was also under the impression that the reason for this is also the Ruby heritage of being able to call functions without brackets.
Arity is obviously involved, but let's put it aside for a while and use functions without arguments. In a language like JavaScript where brackets are mandatory, it is easy to tell the difference between passing a function as an argument and calling the function. You call it only when you use the brackets.
my_function // argument
(function() {}) // argument
my_function() // function is called
(function() {})() // function is called
As you can see, naming it or not does not make a big difference. But Elixir and Ruby allow you to call functions without the brackets. This is a design choice which I personally like, but it has the side effect that you cannot use just the name without the brackets, because it could mean you want to call the function. This is what the & is for. If you leave arity apart for a second, prepending your function name with & means that you explicitly want to use this function as an argument, not what this function returns.
Now the anonymous function is a bit different in that it is mainly used as an argument. Again, this is a design choice, but the rationale behind it is that it is mainly used by iterator-like functions which take functions as arguments. So obviously you don't need to use & because they are already considered arguments by default. It is their purpose.
Now the last problem is that sometimes you have to call them in your code, because they are not always used with an iterator kind of function, or you might be coding an iterator yourself. As a bit of history: since Ruby is object-oriented, the main way to do it was to use the call method on the object. That way, you could keep the non-mandatory brackets behaviour consistent.
my_lambda.call
my_lambda.call()
my_lambda_with_arguments.call :h2g2, 42
my_lambda_with_arguments.call(:h2g2, 42)
Now somebody came up with a shortcut which basically looks like a method with no name.
my_lambda.()
my_lambda_with_arguments.(:h2g2, 42)
Again, this is a design choice. Now Elixir is not object-oriented and therefore certainly cannot use the first form. I can't speak for José, but it looks like the second form was used in Elixir because it still looks like a function call with an extra character. It's close enough to a function call.
I did not think about all the pros and cons, but it looks like in both languages you could get away with just the brackets as long as you make brackets mandatory for anonymous functions. It seems like it is:
Mandatory brackets VS Slightly different notation
In both cases you make an exception because you make both behave differently. Since there is a difference, you might as well make it obvious and go for the different notation. The mandatory brackets would look natural in most cases but very confusing when things don't go as planned.
Here you go. Now this might not be the best explanation in the world because I simplified most of the details. Also, most of these are design choices, and I tried to give a reason for them without judging them. I love Elixir, I love Ruby, I like function calls without brackets, but like you, I find the consequences quite misleading once in a while.
And in Elixir, it is just this extra dot, whereas in Ruby you have blocks on top of this. Blocks are amazing and I am surprised how much you can do with just blocks, but they only work when you need just one anonymous function, which is the last argument. Then, since you should be able to deal with other scenarios, here comes the whole method/lambda/proc/block confusion.
Anyway... this is out of scope.
I've never understood why explanations of this are so complicated.
It's really just an exceptionally small distinction combined with the realities of Ruby-style "function execution without parens".
Compare:
def fun1(x, y) do
  x + y
end
To:
fun2 = fn
  x, y -> x + y
end
While both of these are just identifiers...
fun1 is an identifier that describes a named function defined with def.
fun2 is an identifier that describes a variable (that happens to contain a reference to a function).
Consider what that means when you see fun1 or fun2 in some other expression? When evaluating that expression, do you call the referenced function or do you just reference a value out of memory?
There's no good way to know at compile time. Ruby has the luxury of introspecting the variable namespace to find out if a variable binding has shadowed a function at some point in time. Elixir, being compiled, can't really do this. That's what the dot notation does: it tells Elixir that the variable should contain a function reference and that it should be called.
And this is really hard. Imagine that there wasn't a dot notation. Consider this code:
val = 5
if :rand.uniform < 0.5 do
  val = fn -> 5 end
end
IO.puts val    # Does this work?
IO.puts val.() # Or maybe this?
Given the above code, I think it's pretty clear why you have to give Elixir the hint. Imagine if every variable dereference had to check whether the variable held a function? Alternatively, imagine what heroics would be necessary to always infer that a variable dereference was using a function?
There's an excellent blog post about this behavior: link
Two types of functions
If a module contains this:
fac(0) -> 1;
fac(N) when N > 0 -> N * fac(N-1).
You can’t just cut and paste this into the shell and get the same result.
It’s because there is a bug in Erlang. Modules in Erlang are sequences of FORMS. The Erlang shell evaluates a sequence of EXPRESSIONS. In Erlang, FORMS are not EXPRESSIONS.
double(X) -> 2*X. in an Erlang module is a FORM
Double = fun(X) -> 2*X end. in the shell is an EXPRESSION
The two are not the same. This bit of silliness has been in Erlang forever, but we didn’t notice it and we learned to live with it.
Dot in calling fn
iex> f = fn(x) -> 2 * x end
#Function<erl_eval.6.17052888>
iex> f.(10)
20
In school I learned to call functions by writing f(10), not f.(10) - this is “really” a function with a name like Shell.f(10) (it’s a function defined in the shell). The shell part is implicit, so it should just be called f(10).
If you leave it like this, expect to spend the next twenty years of your life explaining why.
Elixir has optional brackets for function calls, including calls of functions with 0 arity. Let's see an example of why this makes a separate calling syntax important:
defmodule Insanity do
  def dive(), do: fn() -> 1 end
end
Insanity.dive
# #Function<0.16121902/0 in Insanity.dive/0>
Insanity.dive()
# #Function<0.16121902/0 in Insanity.dive/0>
Insanity.dive.()
# 1
Insanity.dive().()
# 1
Without making a difference between 2 types of functions, we can't say what Insanity.dive means: getting a function itself, calling it, or also calling the resulting anonymous function.
fn -> syntax is for using anonymous functions. Doing var.() is just telling Elixir to take that var with a function in it and run it, instead of referring to the var as something that merely holds that function.
Elixir has this common pattern where, instead of having logic inside a function to decide how it should execute, we pattern match different functions based on what kind of input we have. I assume this is why we refer to things by arity in the function_name/1 sense.
It's kind of weird to get used to the shorthand function definitions (&func(&1), etc.), but handy when you're trying to pipe or keep your code concise.
In Elixir we use def to define a named function, just as we do in other languages.
fn creates an anonymous function; refer to this for more clarification.
Only the second kind of function seems to be a first-class object and can be passed as a parameter to other functions. A function defined in a module needs to be wrapped in a fn. There's some syntactic sugar which looks like otherfunction(&myfunction(&1, &2)) in order to make that easy, but why is it necessary in the first place? Why can't we just do otherfunction(myfunction)?
You can do otherfunction(&myfunction/2)
Since Elixir can call functions without the brackets (like myfunction), with otherfunction(myfunction) it will try to execute myfunction/0.
So, you need to use the capture operator and specify the function, including arity, since you can have different functions with the same name. Thus, &myfunction/2.
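A short, made-up example of passing a named function by capture:

defmodule MyMath do
  def add(x, y), do: x + y
end

Enum.reduce([1, 2, 3], 0, &MyMath.add/2)   #=> 6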

What are the benefits of type inference?

I've started to learn F#, and I noticed that one of the major syntactic differences from C# is that type inference is used much more heavily. This is usually presented as one of the benefits of F#. Why is type inference presented as a benefit?
Imagine you have a class hierarchy and code that uses different classes from it. Strong typing allows you to quickly detect which classes are used in any method.
With type inference it will not be so obvious, and you have to rely on hints to understand which class is used. Are there any techniques that exist to make F# code more readable with type inference?
This question assumes that you are using object-oriented programming (e.g. complex class hierarchies) in F#. While you can certainly do that, using OO concepts is mainly useful for interoperability or for wrapping some F# functionality in a .NET library.
Understanding code. Type inference becomes much more useful when you write code in a functional style. It makes code shorter, but also helps you understand what is going on. For example, if you write a map function over a list (the Select method in LINQ):
let map f list =
    seq { for el in list -> f el }
The type inference tells you that the function type is:
val map : f:('a -> 'b) -> list:seq<'a> -> seq<'b>
This matches our expectations about what we wanted to write - the argument f is a function turning values of type 'a into values of type 'b and the map function takes a list of 'a values and produces a list of 'b values. So you can use the type inference to easily check that your code does what you would expect.
Generalization. Automatic generalization (mentioned in the comments) means that the above code is automatically as reusable as possible. In C#, you might write:
IEnumerable<int> Select(IEnumerable<int> list, Func<int, int> f) {
    foreach (int el in list)
        yield return f(el);
}
This method is not generic - it is a Select that works only on collections of int values. But there is no reason why it should be restricted to int - the same code would work for any types. The type inference mechanism helps you discover such generalizations.
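To see the generalization at work, the inferred map from above can be used at different element types without changing the code (a quick sketch):

let lengths = map String.length ["a"; "bb"; "ccc"]   // seq<int>
let halved = map (fun x -> x / 2.0) [1.0; 3.0]       // seq<float>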
More checking. Finally, thanks to the inference, the F# language can more easily check more things than you could if you had to write all types explicitly. This applies to many aspects of the language, but it is best demonstrated using units of measure:
[<Measure>] type meter
[<Measure>] type second

let l = 1000.0<meter>
let s = 60.0<second>
let speed = l / s
The F# compiler infers that speed has a type float<meter/second> - it understands how units of measure work and infers the type including unit information. This feature is really useful, but it would be hard to use if you had to write all units by hand (because the types get long). In general, you can use more precise types, because you do not have to (always) type them.

When should we use FSharpFunc.Adapt?

Looking at the source in FSharp.Core and PowerPack, I see that a lot of higher-order functions that accept a function with two or more parameters use FSharpFunc.Adapt. For example:
let mapi f (arr: ResizeArray<_>) =
    let f = FSharpFunc<_,_,_>.Adapt(f)
    let len = length arr
    let res = new ResizeArray<_>(len)
    for i = 0 to len - 1 do
        res.Add(f.Invoke(i, arr.[i]))
    res
The documentation on FSharpFunc.Adapt is fairly thin. Is this a general best practice that we should be using any time we have a higher-order function with a similar signature? Only if the passed-in function is called multiple times? How much of an optimization is it? Should we be using Adapt everywhere we can, or only rarely?
Thanks for your time.
That's quite interesting! I don't have any official information (and I didn't see this documented anywhere), but here are some thoughts on how the Adapt function might work.
Functions like mapi take the curried form of a function, which means that the type of the argument is compiled to something like FSharpFunc<int, FSharpFunc<T, R>>. However, many functions are actually compiled directly as functions of two arguments, so the actual value would typically be FSharpFunc<int, T, R>, which inherits from FSharpFunc<int, FSharpFunc<T, R>>.
If you call this function (e.g. f 1 "a") the F# compiler generates something like this:
FSharpFunc<int, string>.InvokeFast<a>(f, 1, "a");
If you look at the InvokeFast function using Reflector, you'll see that it tests if the function is compiled as the optimized version (f :? FSharpFunc<int, T, R>). If yes, then it directly calls Invoke(1, "a") and if not then it needs to make two calls Invoke(1).Invoke("a").
This check is done each time you call a function passed as an argument (it is probably faster to do the check and then use the optimized call, because that's more common).
What the Adapt function does is that it converts any function to FSharpFunc<T1, T2, R> (if the function is not optimized, it creates a wrapper for it, but that's not the case most of the time). The calls to the adapted function will be faster, because they don't need to do the dynamic check every time (the check is done only once inside Adapt).
So, the summary is that Adapt could improve the performance if you're calling a function passed as an argument that takes more than 1 argument a large number of times. As with any optimizations, I wouldn't use this blindly, but it is an interesting thing to be aware of when tuning the performance!
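As a hedged sketch of the same pattern in user code (iterPairs is an invented example, written against the OptimizedClosures form of the API):

let iterPairs (f : int -> int -> unit) (pairs : (int * int) list) =
    // adapt once, so the "optimized closure?" check is not repeated on every call
    let f = OptimizedClosures.FSharpFunc<_,_,_>.Adapt(f)
    for (a, b) in pairs do
        f.Invoke(a, b)

iterPairs (fun a b -> printfn "%d" (a + b)) [(1, 2); (3, 4)]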
(BTW: Thanks for a very interesting question, I didn't know the compiler does this :-))

Why are functions in OCaml/F# not recursive by default?

Why is it that functions in F# and OCaml (and possibly other languages) are not by default recursive?
In other words, why did the language designers decide it was a good idea to explicitly make you type rec in a declaration like:
let rec foo ... = ...
and not give the function recursive capability by default? Why the need for an explicit rec construct?
The French and British descendants of the original ML made different choices and their choices have been inherited through the decades to the modern variants. So this is just legacy but it does affect idioms in these languages.
Functions are not recursive by default in the French CAML family of languages (including OCaml). This choice makes it easy to supersede function (and variable) definitions using let in those languages, because you can refer to the previous definition inside the body of a new definition. F# inherited this syntax from OCaml.
For example, superseding the function p when computing the Shannon entropy of a sequence in OCaml:
let shannon fold p =
  let p x = p x *. log(p x) /. log 2.0 in
  let p t x = t +. p x in
  -. fold p 0.0
Note how the argument p to the higher-order shannon function is superseded by another p in the first line of the body, and then by another p in the second line of the body.
Conversely, the British SML branch of the ML family of languages took the other choice, and SML's fun-bound functions are recursive by default. When most function definitions do not need access to previous bindings of their function name, this results in simpler code. However, superseded functions are made to use different names (f1, f2, etc.), which pollutes the scope and makes it possible to accidentally invoke the wrong "version" of a function. And there is now a discrepancy between implicitly-recursive fun-bound functions and non-recursive val-bound functions.
Haskell makes it possible to infer the dependencies between definitions by restricting them to be pure. This makes toy samples look simpler but comes at a grave cost elsewhere.
Note that the answers given by Ganesh and Eddie are red herrings. They explained why groups of functions cannot be placed inside a giant let rec ... and ... because it affects when type variables get generalized. This has nothing to do with rec being default in SML but not OCaml.
One crucial reason for the explicit use of rec is to do with Hindley-Milner type inference, which underlies all statically typed functional programming languages (albeit changed and extended in various ways).
If you have a definition let f x = x, you'd expect it to have type 'a -> 'a and to be applicable on different 'a types at different points. But equally, if you write let g x = (x + 1) + ..., you'd expect x to be treated as an int in the rest of the body of g.
The way that Hindley-Milner inference deals with this distinction is through an explicit generalisation step. At certain points when processing your program, the type system stops and says "ok, the types of these definitions will be generalised at this point, so that when someone uses them, any free type variables in their type will be freshly instantiated, and thus won't interfere with any other uses of this definition."
It turns out that the sensible place to do this generalisation is after checking a mutually recursive set of functions. Any earlier, and you'll generalise too much, leading to situations where types could actually collide. Any later, and you'll generalise too little, making definitions that can't be used with multiple type instantiations.
So, given that the type checker needs to know about which sets of definitions are mutually recursive, what can it do? One possibility is to simply do a dependency analysis on all the definitions in a scope, and reorder them into the smallest possible groups. Haskell actually does this, but in languages like F# (and OCaml and SML) which have unrestricted side-effects, this is a bad idea because it might reorder the side-effects too. So instead it asks the user to explicitly mark which definitions are mutually recursive, and thus by extension where generalisation should occur.
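In F#, that explicit marking is the rec ... and ... form; the classic even/odd pair, used here purely as an illustration:

let rec isEven n = if n = 0 then true else isOdd (n - 1)
and isOdd n = if n = 0 then false else isEven (n - 1)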
There are two key reasons this is a good idea:
First, if you enable recursive definitions then you can't refer to a previous binding of a value of the same name. This is often a useful idiom when you are doing something like extending an existing module.
Second, recursive values, and especially sets of mutually recursive values, are much harder to reason about than definitions that proceed in order, each new definition building on top of what has already been defined. It is nice when reading such code to have the guarantee that, except for definitions explicitly marked as recursive, new definitions can only refer to previous definitions.
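A tiny F# sketch of that ordering guarantee and of the shadowing idiom it permits (illustrative names; local scope, where shadowing is allowed):

let demo () =
    let print x = printfn "%d" x
    let print x = print (x * 2)   // non-recursive let: calls the previous print
    print 21                      // prints 42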
Some guesses:
let is not only used to bind functions, but also other regular values. Most forms of values are not allowed to be recursive. Certain forms of recursive values are allowed (e.g. functions, lazy expressions, etc.), so it needs an explicit syntax to indicate this.
It might be easier to optimize non-recursive functions
The closure created when you create a recursive function needs to include an entry that points to the function itself (so the function can recursively call itself), which makes recursive closures more complicated than non-recursive closures. So it might be nice to be able to create simpler non-recursive closures when you don't need recursion
It allows you to define a function in terms of a previously-defined function or value of the same name; although I think this is bad practice
Extra safety? Makes sure that you are doing what you intended. e.g. If you don't intend it to be recursive but you accidentally used a name inside the function with the same name as the function itself, it will most likely complain (unless the name has been defined before)
The let construct is similar to the let construct in Lisp and Scheme, which are non-recursive. There is a separate letrec construct in Scheme for recursive lets.
Given this:
let f x = ... and g y = ...;;
Compare:
let f a = f (g a)
With this:
let rec f a = f (g a)
The former redefines f to apply the previously defined f to the result of applying g to a. The latter redefines f to loop forever applying g to a, which is usually not what you want in ML variants.
That said, it's a language designer style thing. Just go with it.
A big part of it is that it gives the programmer more control over the complexity of their local scopes. The spectrum of let, let* and let rec offer an increasing level of both power and cost. let* and let rec are in essence nested versions of the simple let, so using either one is more expensive. This grading allows you to micromanage the optimization of your program as you can choose which level of let you need for the task at hand. If you don't need recursion or the ability to refer to previous bindings, then you can fall back on a simple let to save a bit of performance.
It's similar to the graded equality predicates in Scheme. (i.e. eq?, eqv? and equal?)
