Why is `functionArgs` implemented twice? (i.e., as a primop and in `lib`) - nix

Trying to understand callPackage, I looked up its implementation (source), which uses lib.functionArgs, even though there is already a builtins.functionArgs primop, an alias of __functionArgs (implemented in C).
lib.functionArgs is defined as

/* Extract the expected function arguments from a function.
   This works both with nix-native { a, b ? foo, ... }: style
   functions and functions with args set with 'setFunctionArgs'. It
   has the same return type and semantics as builtins.functionArgs.

   functionArgs : (a → b) → Map String Bool.
*/
functionArgs = f: f.__functionArgs or (builtins.functionArgs f);
and the __functionArgs attribute above comes from setFunctionArgs (source):
/* Add metadata about expected function arguments to a function.
   The metadata should match the format given by
   builtins.functionArgs, i.e. a set from expected argument to a bool
   representing whether that argument has a default or not.

   setFunctionArgs : (a → b) → Map String Bool → (a → b)

   This function is necessary because you can't dynamically create a
   function of the { a, b ? foo, ... }: format, but some facilities
   like callPackage expect to be able to query expected arguments.
*/
setFunctionArgs = f: args:
  {
    __functor = self: f;
    __functionArgs = args;
  };
I understand what setFunctionArgs does, and the comment above its declaration says why it is necessary, but I can't follow the explanation: both clauses of that sentence are clear on their own, yet I don't see how the first fact (you can't dynamically create a function of the { a, b ? foo, ... }: format) prevents the second (querying expected arguments) from being achievable without setFunctionArgs.
danbst also tried to elucidate this further:
lib.nix adds __functionArgs attr to mimic __functionArgs builtin. It
used to "pass" actual __functionArgs result down to consumers, because
builtin __functionArgs only works on top-most function args
but I'm not sure what the "consumers" are, and I couldn't unpack the last clause (i.e., "builtin __functionArgs only works on top-most function args"). Is this a reference to the fact that Nix functions are curried, so that
nix-repl> g = a: { b, c }: "lofa"
nix-repl> builtins.functionArgs g
{ }
?
lib.functionArgs doesn't solve this problem either, but I'm probably off track at this point.
Notes to self
__functor is documented in the Nix manual under Sets.
$ nix repl '<nixpkgs>'
Welcome to Nix version 2.3.6. Type :? for help.
Loading '<nixpkgs>'...
Added 11530 variables.
nix-repl> f = { a ? 7, b }: a + b
nix-repl> set_f = lib.setFunctionArgs f { b = 9; }
nix-repl> set_f
{ __functionArgs = { ... }; __functor = «lambda # /nix/store/16blhmppp9k6apz41gjlgr0arp88awyb-nixos-20.03.3258.86fa45b0ff1/nixos/lib/trivial.nix:318:19»; }
nix-repl> set_f.__functionArgs
{ b = 9; }
nix-repl> set_f set_f.__functionArgs
16
nix-repl> set_f { a = 27; b = 9; }
36

lib.functionArgs wraps builtins.functionArgs in order to provide reflective access to generic functions.
This supports reflection with builtins.functionArgs:
f = { a, b, c }: #...
Now consider the eta abstraction of the same function:
f' = attrs: f attrs
This does not support reflection with builtins.functionArgs. With setFunctionArgs, you can restore that information, as long as you also use lib.functionArgs.
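A quick repl sketch of this (assuming nix repl '<nixpkgs>' was started, so lib is in scope):

nix-repl> f = { a, b ? 2 }: a + b
nix-repl> builtins.functionArgs f
{ a = false; b = true; }
nix-repl> f' = attrs: f attrs   # eta abstraction: just a plain lambda
nix-repl> builtins.functionArgs f'
{ }
nix-repl> f'' = lib.setFunctionArgs f' (builtins.functionArgs f)
nix-repl> lib.functionArgs f''
{ a = false; b = true; }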
I recommend avoiding reflection, because everything I've seen implemented with it can be implemented without it. It expands the definition of a function to include what should normally be considered implementation details.
Anyway, the primary motivation seems to be callPackage, which can be implemented with normal attrset operations if you change all packages to add ... as in { lib, stdenv, ... }:. I do have a morbid interest in this misfeature that is function reflection, so if anyone finds another use case, please comment.
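For illustration, a minimal sketch of that reflection-free alternative (the name callPackageNoReflection is made up, and this is not how nixpkgs defines callPackage); it only works if every package function tolerates extra attributes via ...:

callPackageNoReflection = pkgs: fn: overrides:
  (if builtins.isFunction fn then fn else import fn) (pkgs // overrides);

Since the function accepts unknown arguments, the whole package set merged with the overrides can be passed directly, and no argument introspection is needed.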

Related

How does Nix's "callPackage" call functions defined without an ellipsis?

To call a Nix function that uses set destructuring, you need to pass it a set with exactly the keys it requires, no more and no less:
nix-repl> ({ a }: a) { a = 4; b = 5; }
error: anonymous function at (string):1:2 called with unexpected argument ‘b’, at (string):1:1
The exception to this is if the function's argument list contains an ellipsis at the end:
nix-repl> ({ a, ... }: a) { a = 4; b = 5; }
4
However, most of the packages in nixpkgs consist of a default.nix file containing a function which is not defined with this ellipsis. Yet, somehow when you use callPackage, it manages to call these functions and pass them only the arguments that they need. How is this implemented?
There is a reflection primop that can deconstruct a function's arguments:
nix-repl> __functionArgs ( { x ? 1, y }: x )
{ x = true; y = false; }
callPackage then iterates over those attribute names, fetches the required packages, and constructs the attrset of packages that is later fed to the called function.
Here's a simple example:
nix-repl> callWithExtraArgs = f: args:
            let
              args' = __intersectAttrs (__functionArgs f) args;
            in
              f args'
nix-repl> callWithExtraArgs ({ x }: x + 1) { x = 4; y = 7; }
5
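Building on the same idea, a simplified sketch of what callPackage itself does (illustrative only; the real nixpkgs implementation also handles overriding, error reporting, etc.):

nix-repl> callPackage = pkgs: f: overrides:
            f (__intersectAttrs (__functionArgs f) pkgs // overrides)
nix-repl> callPackage { x = 4; y = 7; } ({ x }: x + 1) { }
5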
To browse Nix primops, go to 15.5. Built-in Functions in the Nix manual (or see the docs of the unstable branch).

Idiomatic way to declare static and instance member at once?

When I extend a type with a new function, I usually want it to be available from both dot-notation and free form. Either can be more readable depending on the situation, and the former helps with IntelliSense while the latter helps with currying.
In C#/VB.net, extension methods do this (although I cannot restrict the function to a static method of the extended static class, as in F#). I can write the function once and then invoke it both ways:
<Extension>
Public Function bounded(s As String, min As UShort, max As UShort) As String
    If min > max Then Throw New ArgumentOutOfRangeException
    If String.IsNullOrEmpty(s) Then Return New String(" "c, min)
    If s.Length < min Then Return s.PadRight(min, " "c)
    If s.Length > max Then Return s.Substring(0, max)
    Return s
End Function

' usage
Dim b1 = bounded("foo", 10, 15)
Dim b2 = "foo".bounded(0, 2)
(That's not quite perfect yet, as I'd like bounded to be a static method of String, but C#/VB.Net can't do that. Point to F# in that regard.)
In F#, on the other hand, I have to declare the function separately from the method:
// works fine
[<AutoOpen>]
module Utilities =
    type List<'T> with
        member this.tryHead = if this.IsEmpty then None else Some this.Head

    module List =
        let tryHead (l : List<'T>) = l.tryHead
Question: Is there a more elegant way to declare both methods at once?
I tried to use:
// doesn't quite work
type List<'T> with
    member this.tryHead = if this.IsEmpty then None else Some this.Head
    static member tryHead(l : List<'T>) = l.tryHead
which at least would let me skip the module declaration, but while the definition compiles, it doesn't quite work - someList.tryHead is OK, but List.tryHead someList results in a Property tryHead is not static error.
Bonus question: As you can see, the static member definition requires a type annotation. However, no other type could have access to the method that was just defined. Why, then, can't the type be inferred?
I don't know of a way to declare both APIs in a single line of code, but you can get rid of the type annotations by making the function the implementation, and then defining the method in terms of the function:
[<AutoOpen>]
module Utilities =
    module List =
        let tryHead l = if List.isEmpty l then None else Some (List.head l)

    type List<'a> with
        member this.tryHead = List.tryHead this
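With the [<AutoOpen>] module in scope, both call styles then work against the single implementation; a small usage sketch:

let xs = [1; 2; 3]
let viaMember   = xs.tryHead       // Some 1, dot-notation
let viaFunction = List.tryHead xs  // Some 1, free form, currying-friendly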

Constructing and deconstructing records

The msdn page documenting Records (F#) details record expressions for record construction and record patterns for deconstruction, the latter without naming them as such.
Here's an example which uses both techniques for an arithmetic operator:
// Simple two-dimensional generic vector definition
type 'a UV =
    { U : 'a; V : 'a }
    static member inline (+) ({ U = au; V = av }, { U = bu; V = bv }) =
        { U = au + bu; V = av + bv }
This appears unwieldy and not very readable. For deconstruction, there are dot-notation or functions as alternatives. Since the dot-notation operator has a special dispensation in section 8.4.2 Name Resolution and Record Field Labels of the spec (an expression’s type may be inferred from a record label), there's normally no need to annotate. Accessor functions like let u { U = u } = u wouldn't give us any advantages then.
For construction, I think a case can be made for a function as record constructor. Access to the original constructor might even be restricted:
type 'a UV =
    internal { U : 'a; V : 'a }

let uv u v = { U = u; V = v }

type 'a UV with
    static member inline (+) (a, b) =
        uv (a.U + b.U) (a.V + b.V)
Is this an idiomatic thing to do? How to package such functions in modules and handle namespace issues?
Short answer: I don't think there is a general convention here at the moment so it will be a personal decision in the end.
To summarise what you get for free with records in F# is:
Construct: { U = u; V = v } (bracket-notation)
Deconstruct: let u = record.U (dot-notation) and let { U = u } = record (pattern matching)
Update: {record with U = u} (bracket-notation)
But you don't get first-class functions for free; if you want them, you can code them by hand.
The following is what I would personally use as convention:
A static member New with curried arguments for record construction.
For update and deconstruction I would use some kind of Lenses abstraction.
Here's an example of the code I would have to add by hand:
// Somewhere as a top level definition or in a base library
type Lens<'T,'U> = {Get: 'T -> 'U; Set: 'U -> 'T -> 'T } with
member l.Update f a = l.Set (f (l.Get a)) a
type UV<'a> = {U : 'a; V : 'a } with
// add these static members to your records
static member New u v : UV<'a> = {U = u; V = v}
static member u = {Get = (fun (x: UV<'a>) -> x.U); Set = fun t x -> {x with U = t}}
static member v = {Get = (fun (x: UV<'a>) -> x.V); Set = fun t x -> {x with V = t}}
let uvRecord = UV.New 10 20
let u = UV.u.Get uvRecord
let uvRecord1 = UV.u.Set (u+1) uvRecord
let uvRecord2 = UV.u.Update ((+)1) uvRecord
This way I have first-class functions not only for construction and deconstruction but also for updates, plus other very interesting lens properties, as you can read in this post.
UPDATE (in response to your comments)
Of course they can be defined later, what does it change?
The same applies for the New constructor, it can be defined later but that's actually a good thing.
The accessor functions you defined can also be defined later, indeed any first-class getter, setter or updater value can be defined later.
Anyway, the answer to your question is "no, there are no conventions"; the rest is a personal decision. This would be my decision, and many Haskellers are also pushing to get some kind of automatic lenses for Haskell records.
Why would I decide to go this way? Because in terms of lines of code the effort of adding a simple accessor function is almost the same as adding a get-Lens, so for the same price I get more functionality.
If you are not happy with the Lenses discussion please tell me, I can delete it and leave the short answer, or I can delete the whole answer too if it's confusing instead of clarifying.
Or maybe I misunderstood your question; to me it was about which convention is generally used to add first-class constructor, getter, and setter values for records.
Composition is not the only advantage of lenses; you can do many things with them. Keep reading about them: they provide a very interesting abstraction, and one not restricted to records.
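For example, composition can be layered on the Lens type above in a couple of lines (a sketch; compose is my own name, not from any library):

let compose (outer: Lens<'a,'b>) (inner: Lens<'b,'c>) : Lens<'a,'c> =
    { Get = outer.Get >> inner.Get
      Set = fun c a -> outer.Update (inner.Set c) a }

// e.g. given a record with a UV<'a> field and a lens for it (a hypothetical
// Outer.uv), compose Outer.uv UV.u focuses on the nested U field.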

Where/how to declare the unique key of variables in a compiler written in OCaml?

I am writing a mini-Pascal compiler in OCaml. I would like my compiler to accept the following code, for instance:
program test;
var
  a, b : boolean;
  n : integer;
begin
  ...
end.
I have difficulties in dealing with the declaration of variables (the part following var). At the moment, the type of variables is defined like this in sib_syntax.ml:
type s_var =
  { s_var_name: string;
    s_var_type: s_type;
    s_var_uniqueId: s_uniqueId (* key *) }
where s_var_uniqueId (rather than s_var_name) is the unique key of a variable. My first question is where and how to implement the mechanism that generates a fresh id (by incrementing the largest id so far) every time a new variable appears. Should it go in sib_parser.mly, which would probably involve a mutable counter cur_id and changes to the binding part of the grammar (I don't know how to realize that in a .mly file either)? Or should it go in the next stage, interpreter.ml? In that case, how do I keep the .mly output consistent with the type s_var, i.e., what s_var_uniqueId should the binding part provide?
Another question is about this part of statement in .mly:
id = IDENT COLONEQ e = expression
  { Sc_assign (Sle_var { s_var_name = id; s_var_type = St_void }, e) }
Here I also need to hand the next stage (interpreter.ml) a variable of which I only know the s_var_name; what should I do about its s_var_type and s_var_uniqueId?
Could anyone help? Thank you very much!
The first question to ask yourself is whether you actually need a unique id. From my experience, they're almost never necessary or even useful. If what you're trying to do is make variables unique through alpha-equivalence, then this should happen after parsing is complete, and will probably involve some form of DeBruijn indices instead of unique identifiers.
Either way, a function which returns a new integer identifier every time it is called is:
let unique =
  let last = ref 0 in
  fun () -> incr last ; !last

let one = unique () (* 1 *)
let two = unique () (* 2 *)
So, you can simply assign { ... ; s_var_uniqueId = unique () } in your Menhir rules.
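For instance, a variable-declaration rule might look like this (a sketch; the IDENT and COLON tokens and the type_expr rule are assumed to match your grammar, and s_uniqueId is assumed to be an int):

var_decl:
  id = IDENT COLON t = type_expr
    { { s_var_name = id; s_var_type = t; s_var_uniqueId = unique () } }
;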
The more important problem you're trying to solve here is that of variable binding. Variable x is defined in one location and used in another, and you need to determine that it happens to be the same variable in both places. There are many ways of doing this, one of them being to delay the binding until the interpreter. I'm going to show you how to deal with this during parsing.
First, I'm going to define a context: it's a set of variables that allows you to easily retrieve a variable based on its name. You might want to create it with hash tables or maps, but to keep things simple I will be using List.assoc here.
type s_context = {
  s_ctx_parent : s_context option ;
  s_ctx_bindings : (string * (int * s_type)) list ;
  s_ctx_size : int ;
}

let empty_context parent = {
  s_ctx_parent = parent ;
  s_ctx_bindings = [] ;
  s_ctx_size = 0 ;
}

let bind v_name v_type ctx =
  try let _ = List.assoc v_name ctx.s_ctx_bindings in
      failwith "Variable is already defined"
  with Not_found ->
    { ctx with
      s_ctx_bindings = (v_name, (ctx.s_ctx_size, v_type))
                       :: ctx.s_ctx_bindings ;
      s_ctx_size = ctx.s_ctx_size + 1 }

let rec find v_name ctx =
  try 0, List.assoc v_name ctx.s_ctx_bindings
  with Not_found ->
    match ctx.s_ctx_parent with
    | Some parent -> let depth, found = find v_name parent in
                     depth + 1, found
    | None -> failwith "Variable is not defined"
So, bind adds a new variable to the current context, find looks for a variable in the current context and its parents, and returns both the bound data and the depth at which it was found. So, you could have all global variables in one context, then all parameters of a function in another context that has the global context as its parent, then all local variables in a function (when you'll have them) in a third context that has the function's main context as the parent, and so on.
So, for instance, find 'x' ctx will return something like 0, (3, St_int) where 0 is the DeBruijn index of the variable, 3 is the position of the variable in the context identified by the DeBruijn index, and St_int is the type.
type s_var = {
  s_var_deBruijn: int;
  s_var_type: s_type;
  s_var_pos: int;
}

let find v_name ctx =
  let deBruijn, (pos, typ) = find v_name ctx in
  { s_var_deBruijn = deBruijn ;
    s_var_type = typ ;
    s_var_pos = pos }
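A small usage sketch of these pieces (assuming your s_type has St_int and St_bool constructors), with a global context holding n and a child context holding a:

let globals = bind "n" St_int  (empty_context None)
let locals  = bind "a" St_bool (empty_context (Some globals))

let v = find "n" locals
(* v = { s_var_deBruijn = 1; s_var_type = St_int; s_var_pos = 0 } *)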
Of course, you need your functions to store their context, and make sure that the first argument is the variable at position 0 within the context:
type s_fun =
  { s_fun_name: string;
    s_fun_type: s_type;
    s_fun_params: s_context;
    s_fun_body: s_block; }

let context_of_paramlist parent paramlist =
  List.fold_left
    (fun ctx (v_name, v_type) -> bind v_name v_type ctx)
    (empty_context parent)
    paramlist
Then, you can change your parser to take into account the context. The trick is that instead of returning an object representing part of your AST, most of your rules will return a function that takes a context as an argument and returns an AST node.
For instance:
int_expression:
  (* Constant: ignore the context *)
  | c = INT { fun _ -> Se_const (Sc_int c) }
  (* Variable: look for the variable inside the context *)
  | id = IDENT { fun ctx -> Se_var (find id ctx) }
  (* Subexpressions: pass the context to both *)
  | e1 = int_expression o = operator e2 = int_expression
    { fun ctx -> Se_binary (o, e1 ctx, e2 ctx) }
;
So, you simply propagate the context "down" recursively through the expressions. The only clever parts are those when new contexts are created (you don't have this syntax yet, so I'm just adding a placeholder):
| function_definition_expression (args, body)
    { fun ctx -> let ctx = context_of_paramlist (Some ctx) args in
                 { s_fun_params = ctx ;
                   s_fun_body = body ctx } }
As well as the global context (the program rule itself does not return a function, but the block rule does, and so a context is created from the globals and provided).
prog:
  PROGRAM IDENT SEMICOLON
  globals = variables
  main = block
  DOT
    { let ctx = context_of_paramlist None globals in
      { globals = ctx ;
        main = main ctx } }
All of this makes the implementation of your interpreter much easier due to the DeBruijn indices: you can have a "stack" which holds your values (of type value) defined as:
type stack = value array list
Then, reading and writing variable x is as simple as:
let read stack x =
  (List.nth stack x.s_var_deBruijn).(x.s_var_pos)

let write stack x value =
  (List.nth stack x.s_var_deBruijn).(x.s_var_pos) <- value
Also, since we made sure that function parameters are in the same order as their position in the function context, if you want to call function f and its arguments are stored in the array args, then constructing the stack is as simple as:
let inner_stack = args :: stack in
(* Evaluate f.s_fun_body with inner_stack here *)
But I'm sure you'll have a lot more questions to ask when you start working on your interpreter ;)
How to create a global id generator:
let unique =
  let counter = ref (-1) in
  fun () -> incr counter; !counter
Test:
# unique ();;
- : int = 0
# unique ();;
- : int = 1
Regarding your more general design question: it seems that your data representation does not faithfully represent the compiler phases. If you must return a type-aware data-type (with this field s_var_type) after the parsing phase, something is wrong. You have two choices:
devise a more precise data representation for the post-parsing AST, that would be different from the post-typing AST, and not have those s_var_type fields. Typing would then be a conversion from the untyped to the typed AST. This is a clean solution that I would recommend.
admit that you must break the data representation semantics because you don't have enough information at this stage, and try to be at peace with the idea of returning garbage such as St_void after the parsing phase, reconstructing the correct information later. This is less typed (you have an implicit assumption on your data which is not apparent in the type), more pragmatic, and ugly but sometimes necessary. I don't think it's the right decision in this case, but you will encounter situations where it's better to be a bit less typed.
I think the specific choice of unique-id handling depends on your position on this more general question, and on your concrete decisions about types. If you choose a finer-typed representation of the post-parsing AST, it's your choice whether to include unique ids in it (I would, because generating a unique id is dead simple, doesn't need a separate pass, and I would rather slightly complexify the grammar productions than the typing phase). If you choose to hack the type field with a dummy value, it's also reasonable to do the same for variable ids, putting 0 as a dummy value and defining the real id later; but even then I would personally generate the ids in the parsing phase.
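A minimal sketch of the first option, with a distinct post-parsing representation (type names here are illustrative, not from your code):

(* after parsing: no types, no unique ids yet *)
type p_var = { p_var_name : string }

(* after typing: the fully-annotated form *)
type t_var = { t_var_name : string ;
               t_var_type : s_type ;
               t_var_uniqueId : int }

The typing phase is then a translation from the first representation to the second, and no field ever has to hold a dummy value.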

How does F# compile functions that can take multiple different parameter types into IL?

I know virtually nothing about F#. I don’t even know the syntax, so I can’t give examples.
It was mentioned in a comment thread that F# can declare functions that can take parameters of multiple possible types, for example a string or an integer. This would be similar to method overloads in C#:
public void Method(string str) { /* ... */ }
public void Method(int integer) { /* ... */ }
However, in CIL you cannot declare a delegate of this form. Each delegate must have a single, specific list of parameter types. Since functions in F# are first-class citizens, however, it would seem that you should be able to pass such a function around, and the only way to compile that into CIL is to use delegates.
So how does F# compile this into CIL?
This question is a little ambiguous, so I'll just ramble about what's true of F#.
In F#, methods can be overloaded, just like C#. Methods are always accessed by a qualified name of the form someObj.MethodName or someType.MethodName. There must be context which can statically resolve the overload at compile-time, just as in C#. Examples:
type T() =
    member this.M(x:int) = ()
    member this.M(x:string) = ()

let t = new T()

// these are all ok, just like C#
t.M(3)
t.M("foo")
let f : int -> unit = t.M
let g : string -> unit = t.M

// this fails, just like C#
let h = t.M // A unique overload for method 'M' could not be determined
            // based on type information prior to this program point.
In F#, let-bound function values cannot be overloaded. So:
let foo(x:int) = ()
let foo(x:string) = () // Duplicate definition of value 'foo'
This means you can never have an "unqualified" identifier foo that has overloaded meaning. Each such name has a single unambiguous type.
Finally, the crazy case which is probably the one that prompts the question. F# can define inline functions which have "static member constraints" which can be bound to e.g. "all types T that have a member property named Bar" or whatnot. This kind of genericity cannot be encoded into CIL. Which is why the functions that leverage this feature must be inline, so that at each call site, the code specific-to-the-type-used-at-that-callsite is generated inline.
let inline crazy(x) = x.Qux(3) // elided: type syntax to constrain x to
                               // require a Qux member that can take an int

// suppose unrelated types U and V have such a Qux method
let u = new U()
crazy(u) // is expanded here into "u.Qux(3)" and then compiled
let v = new V()
crazy(v) // is expanded here into "v.Qux(3)" and then compiled
So this stuff is all handled by the compiler, and by the time we need to generate code, once again, we've statically resolved which specific type we're using at this call site. The "type" of crazy is not a type that can be expressed in CIL; the F# type system just checks each call site to ensure the necessary conditions are met, and inlines the code into that call site, much like how C++ templates work.
(The main purpose/justification for the crazy stuff is for overloaded math operators. Without the inline feature, the + operator, for instance, being a let-bound function type, could either "only work on ints" or "only work on floats" or whatnot. Some ML flavors (F# is a relative of OCaml) do exactly that, where e.g. the + operator only works on ints, and a separate operator, usually named +., works on floats. Whereas in F#, + is an inline function defined in the F# library that works on any type with a + operator member or any of the primitive numeric types. Inlining can also have some potential run-time performance benefits, which is also appealing for some math-y/computational domains.)
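A small sketch of a definition in that style (not the actual library source of +):

let inline add x y = x + y   // inferred static member constraint: (+) must exist
let i = add 1 2              // int
let g = add 1.0 2.5          // float
let s = add "a" "b"          // string

Each call site is inlined with the concrete type's own (+), so no CIL-expressible type for add is ever needed.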
When you're writing C# and you need a function that can take multiple different parameter sets, you just create method overloads:
string f(int x)
{
    return "int " + x;
}

string f(string x)
{
    return "string " + x;
}

void callF()
{
    Console.WriteLine(f(12));
    Console.WriteLine(f("12"));
}

// there's no way to write a function like this:
void call(Func<int|string, string> func)
{
    Console.WriteLine(func(12));
    Console.WriteLine(func("12"));
}
The callF function is trivial, but my made-up syntax for the call function doesn't work.
When you're writing F# and you need a function that can take multiple different parameter sets, you create a discriminated union that can contain all the different parameter sets and you make a single function that takes that union:
type Either = Int of int
            | String of string

let f = function Int x    -> "int " + string x
               | String x -> "string " + x

let callF =
    printfn "%s" (f (Int 12))
    printfn "%s" (f (String "12"))

let call func =
    printfn "%s" (func (Int 12))
    printfn "%s" (func (String "12"))
Being a single function, f can be used like any other value, so in F# we can write callF and call f, and both do the same thing.
So how does F# implement the Either type I created above? Essentially like this:
public abstract class Either
{
    public class Int : Test.Either
    {
        internal readonly int item;
        internal Int(int item);
        public int Item { get; }
    }

    public class String : Test.Either
    {
        internal readonly string item;
        internal String(string item);
        public string Item { get; }
    }
}
The signature of the call function is:
public static void call(FSharpFunc<Either, string> f);
And f looks something like this:
public static string f(Either _arg1)
{
if (_arg1 is Either.Int)
return "int " + ((Either.Int)_arg1).Item;
return "string " + ((Either.String)_arg1).Item;
}
Of course you could implement the same Either type in C# (duh!), but it's not idiomatic, which is why it wasn't the obvious answer to the previous question.
Assuming I understand the question, in F# you can define expressions which depend on the availability of members with particular signatures. For instance
let inline f x a = (^t : (member Method : ^a -> unit)(x,a))
This defines a function f which takes a value x of type ^t and a value a of type ^a where ^t has a method Method taking an ^a to unit (void in C#), and which calls that method. Because this function is defined as inline, the definition is inlined at the point of use, which is the only reason that it can be given such a type. Thus, although you can pass f as a first class function, you can only do so when the types ^t and ^a are statically known so that the method call can be statically resolved and inserted in place (and this is why the type parameters have the funny ^ sigil instead of the normal ' sigil).
Here's an example of passing f as a first-class function:
type T() =
    member x.Method(i) = printfn "Method called with int: %i" i

List.iter (f (new T())) [1; 2; 3]
This runs the method Method against the three values in the list. Because f is inlined, this is basically equivalent to
List.iter ((fun (x:T) a -> x.Method(a)) (new T())) [1; 2; 3]
EDIT
Given the context that seems to have led to this question (C# - How can I “overload” a delegate?), I appear not to have addressed your real question at all. Instead, what Gabe appears to be talking about is the ease with which one can define and use discriminated unions. So the question posed on that other thread might be answered like this using F#:
type FunctionType =
    | NoArgument of (unit -> unit)
    | ArrayArgument of (obj[] -> unit)

let doNothing (arr:obj[]) = ()
let doSomething () = printfn "'doSomething' was called"

let mutable someFunction = ArrayArgument doNothing
someFunction <- NoArgument doSomething

// now call someFunction, regardless of what type of argument it's supposed to take
match someFunction with
| NoArgument f -> f()
| ArrayArgument f -> f [| |] // pass in empty array
At a low level, there's no CIL magic going on here; it's just that NoArgument and ArrayArgument are subclasses of FunctionType which are easy to construct and to deconstruct via pattern matching. The branches of the pattern matching expression are morally equivalent to a type test followed by property accesses, but the compiler makes sure that the cases have 100% coverage and don't overlap. You could encode the exact same operations in C# without any problem, but it would be much more verbose and the compiler wouldn't help you out with exhaustiveness checking, etc.
Also, there is nothing here which is particular to functions; F# discriminated unions make it easy to define types which have a fixed number of named alternatives, each one of which can have data of whatever type you'd like.
I'm not quite sure that I understand your question correctly... The F# compiler uses the FSharpFunc type to represent functions. Usually in F# code you don't deal with this type directly, using the fancy syntactic representation instead, but if you expose any members that return or accept functions and use them from another language, like C#, you will see it.
So instead of using delegates, F# utilizes its own special type with concrete or generic parameters.
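For example, a function-typed parameter compiles to a signature built on FSharpFunc (a sketch; the module and member names here are made up):

module Exposed =
    let twice (f: int -> int) (x: int) = f (f x)

// Seen from C#, this member appears roughly as:
//   public static int twice(FSharpFunc<int, int> f, int x)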
If your question was about things like add something-I-don't-know-what-exactly-but-it-has-an-addition-operator, then you need to use the inline keyword, and the compiler will emit the function body at the call site. kvb's answer describes exactly this case.

Resources