How to use closures in a nested map?

I am trying to make a 2-dimensional matrix from a functor that creates each element, and store it as a flat Vec (each row concatenated).
I used nested map (actually a flat_map and a nested map) to create each row and concatenate it. Here is what I tried:
fn make<T, F>(n: usize, m: usize, f: F) -> Vec<T>
where
    F: Fn(usize, usize) -> T,
{
    (0..m).flat_map(|y| (0..n).map(|x| f(x, y))).collect()
}

fn main() {
    let v = make(5, 5, |x, y| x + y);
    println!("{:?}", v);
}
Unfortunately, I get an error during compilation:
error[E0597]: `y` does not live long enough
--> src/main.rs:5:45
|
5 | (0..m).flat_map(|y| (0..n).map(|x| f(x, y))).collect()
| --- ^ - - borrowed value needs to live until here
| | | |
| | | borrowed value only lives until here
| | borrowed value does not live long enough
| capture occurs here
How does one use closures in nested maps? I worked around this issue by using a single map on 0..n*m, but I'm still interested in the answer.

In your case the inner closure |x| f(x, y) is a borrowing closure, which captures its environment (y and f) by reference.
The way .flat_map(..) works, the iterator returned by the outer closure must not hold a reference to y, which is local to that closure. So we need to make the inner closure capture its environment by value, which is no problem for y: it is a usize, which is Copy:
(0..m).flat_map(|y| (0..n).map(move |x| f(x, y))).collect()
However, now another problem arises:
error[E0507]: cannot move out of captured outer variable in an `FnMut` closure
--> src/main.rs:5:36
|
1 | fn make<T, F>(n: usize, m: usize, f: F) -> Vec<T>
| - captured outer variable
...
5 | (0..m).flat_map(|y| (0..n).map(move |x| f(x,y))).collect()
| ^^^^^^^^ cannot move out of captured outer variable in an `FnMut` closure
Here, we are now trying to move f into the inner closure as well, which is not possible: the outer closure is called once per row and cannot give f away on every iteration (it would only work if m were 1, but the compiler cannot know that).
Since f is a Fn(usize, usize) -> T, we could just as well explicitly pass a & reference to it, and & references are Copy:
fn make<T, F>(n: usize, m: usize, f: F) -> Vec<T>
where
    F: Fn(usize, usize) -> T,
{
    let f_ref = &f;
    (0..m)
        .flat_map(|y| (0..n).map(move |x| f_ref(x, y)))
        .collect()
}
In this case, the inner closure still captures its environment by value, but that environment is now composed of y and f_ref, both of which are Copy, so everything works out.

Adding to Levans's excellent answer, another way of defining the function would be
fn make<T, F>(n: usize, m: usize, f: F) -> Vec<T>
where
    F: Fn(usize, usize) -> T + Copy,
{
    (0..m).flat_map(|y| (0..n).map(move |x| f(x, y))).collect()
}
Since we know that |x, y| x + y is a Copy type, f gets copied into every inner closure that flat_map creates. I would still prefer Levans's way, since copying f can be more expensive than copying a reference when the closure captures a lot of state.

Related

Taking two streams and combining them in OCaml

I want to take two streams of integers in increasing order and combine them into one stream that contains no duplicates and is in increasing order. I have defined the functionality for streams in the following manner:
type 'a susp = Susp of (unit -> 'a)
let force (Susp f) = f()
type 'a str = {hd : 'a ; tl : ('a str) susp }
let merge s1 s2 = (* must implement *)
The Susp constructor suspends a computation by wrapping it in a function, and force evaluates that function and gives me the result of the computation.
I want to emulate the logic of how you go about combining lists, i.e. match on both lists and check which elements are greater, lesser, or equal and then append (cons) the integers such that the resulting list is sorted.
However, I know I cannot just do this with streams of course as I cannot traverse it like a list, so I think I would need to go integer by integer, compare, and then suspend the computation and keep doing this to build the resulting stream.
I am at a bit of a loss how to implement such logic however, assuming it is how I should be going about this, so if somebody could point me in the right direction that would be great.
Thank you!
If the input sequences are sorted, there is not much difference between merging lists and merging sequences. Consider the following merge function on lists:
let rec merge s t =
  match s, t with
  | x :: s, [] | [], x :: s -> x :: s
  | [], [] -> s
  | x :: s', y :: t' ->
    if x < y then
      x :: (merge s' t)
    else if x = y then
      x :: (merge s' t')
    else
      y :: (merge s t')
This function is only using two properties of lists:
the ability to split the potential first element from the rest of the list
the ability to add an element to the front of the list
This suggests that we could rewrite this function as a functor over the signature
module type seq = sig
  type 'a t
  (* if the seq is non-empty we split the seq into head and tail *)
  val next: 'a t -> ('a * 'a t) option
  (* add back to the front *)
  val cons: 'a -> 'a t -> 'a t
end
Then if we replace the pattern matching on the list with a call to next, and the cons operation with a call to cons, the previous function is transformed into:
module Merge(Any_seq: seq) = struct
  open Any_seq
  let rec merge s t =
    match next s, next t with
    | Some (x, s), None | None, Some (x, s) ->
      cons x s
    | None, None -> s
    | Some (x, s'), Some (y, t') ->
      if x < y then
        cons x (merge s' t)
      else if x = y then
        cons x (merge s' t')
      else
        cons y (merge s t')
end
Then, with list, our implementation was:
module List_core = struct
  type 'a t = 'a list
  let cons = List.cons
  let next = function
    | [] -> None
    | a :: q -> Some(a, q)
end

module List_implem = Merge(List_core)
which can be tested with
let test = List_implem.merge [1;5;6] [2;4;9]
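which should evaluate to [1; 2; 4; 5; 6; 9].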
Implementing the same function for your stream type is then just a matter of writing a similar Stream_core module for stream.
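For reference, here is a minimal sketch of what such a Stream_core might look like, assuming the susp and str definitions from the question (the names Stream_core and Stream_implem are just illustrative):
module Stream_core = struct
  type 'a t = 'a str
  (* this stream type has no empty case, so next always succeeds *)
  let next s = Some (s.hd, force s.tl)
  (* wrap an already-computed tail back into a suspension *)
  let cons x s = { hd = x; tl = Susp (fun () -> s) }
end

module Stream_implem = Merge(Stream_core)
One caveat: cons receives an already-evaluated tail, so Stream_implem.merge computes eagerly; for genuinely infinite streams you would want to suspend the recursive merge call itself, e.g. build { hd = x; tl = Susp (fun () -> merge s' t) } directly rather than going through cons.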

How to avoid stack overflow during CPS conversion?

I'm writing a transformation from a Scheme subset to a CPS language. It is implemented in F#. On big input programs the conversion fails with a stack overflow.
I'm using an algorithm along the lines of the one described in Compiling with Continuations.
I've tried increasing the maximum stack size of the worker thread to 50 MB, and then it works.
Is there maybe some way to modify the algorithm so that I won't need to tune the stack size?
For example, the algorithm transforms
(foo (bar 1) (bar 2))
to
(let ((c1 (cont (r1)
            (let ((c2 (cont (r2)
                        (foo halt r1 r2))))
              (bar c2 2)))))
  (bar c1 1))
where halt is a final continuation which finishes the program.
Maybe your actual problem has a simpler solution that avoids heavy stack consumption, so please don't mind adding details. Without more knowledge about your particular code, however, here is a general approach to reducing stack consumption in recursive programs, based on trampolines and continuations.
Walker
Here is a typical recursive function that is not trivially tail-recursive, written in Common Lisp because I don't know F#:
(defun walk (form transform join)
  (typecase form
    (cons (funcall join
                   (walk (car form) transform join)
                   (walk (cdr form) transform join)))
    (t (funcall transform form))))
The code is hopefully quite simple; it walks a tree made of cons cells:
If the form is a cons cell, recursively walk the car (resp. cdr) and join the results.
Otherwise, apply transform to the value.
For example:
(walk '(a (b c d) 3 2 (a 2 1) 0)
      (lambda (u) (and (numberp u) u))
      (lambda (a b) (if a (cons a b) (or a b))))
=> (3 2 (2 1) 0)
The code walks the form, retains only numbers, and preserves (non-empty) nesting.
Calling trace on walk with the above example shows a maximal depth of 8 nested calls.
Continuations and trampoline
Here is an adapted version, called walk/then, that walks a form as previously and, when a result is available, calls then on it. Here then is a continuation.
The function also returns a thunk, i.e. a parameterless closure. What happens is that when we return the closure, the stack is unwound, and when we apply the thunk it starts from a fresh stack, but having advanced in the computation (I usually picture someone walking up an escalator that goes down). The fact that we return a thunk to reduce the number of stack frames is the trampoline part.
The then function takes a value, namely the result that the current walk will eventually return. The result is thus passed down the stack, and what is returned at each step is a thunk function.
Nesting continuations allows us to capture the complex behaviour of transform/join, by pushing the remaining parts of the computation into nested continuations.
(defun walk/then (form transform join then)
  (typecase form
    (cons (lambda ()
            (walk/then (car form) transform join
                       (lambda (v)
                         (walk/then (cdr form) transform join
                                    (lambda (w)
                                      (funcall then (funcall join v w))))))))
    (t (funcall then (funcall transform form)))))
For example, (walk/then (car form) transform join (lambda (v) ...)) reads as follows: walk the car of form with arguments transform and join, and eventually call (lambda (v) ...) on the result; namely, walk down the cdr, and then join both results; eventually, call the input then on the joined result.
What is missing is a way to continually call the returned thunk until exhaustion; here it is done with a loop, but it could easily be a tail-recursive function:
(loop for res = (walk/then '(a (b c d) 3 2 (a 2 1) 0)
                           (lambda (u) (and (numberp u) u))
                           (lambda (a b) (if a (cons a b) (or a b)))
                           #'identity)
      then (typecase res (function (funcall res)) (t res))
      while (functionp res)
      finally (return res))
The above returns (3 2 (2 1) 0), and the depth of the trace never goes over 2 when tracing walk/then.
See Eli Bendersky's article for another take on this, in Python.
I've converted the algorithm to trampoline form. It looks like a finite state machine: there is a loop that looks at the current state, performs some manipulation, and moves to another state. It also uses two stacks for the two different kinds of continuations.
Here is the input language (a subset of the language I used originally):
// Input language consists of only variables and function applications
type Expr =
    | Var of string
    | App of Expr * Expr list
Here is the target language:
// CPS form - each function gets a continuation,
// added continuation definitions and continuation applications
type Norm =
    | LetCont of name : string * args : string list * body : Norm * inner : Norm
    | FuncCall of func : string * cont : string * args : string list
    | ContCall of cont : string * args : string list
Here is the original algorithm:
// Usual way to make CPS conversion.
let rec transform expr cont =
    match expr with
    | App(func, args) ->
        transformMany (func :: args) (fun vars ->
            let func' = List.head vars
            let args' = List.tail vars
            let c = fresh()
            let r = fresh()
            LetCont(c, [r], cont r, FuncCall(func', c, args')))
    | Var(v) -> cont v

and transformMany exprs cont =
    match exprs with
    | e :: rest ->
        transform e (fun e' ->
            transformMany rest (fun rest' ->
                cont (e' :: rest')))
    | _ -> cont []

let transformTop expr =
    transform expr (fun var -> ContCall("halt", [var]))
Here is the modified version:
type Action =
    | ContinuationVar of Expr * (string -> Action)
    | ContinuationExpr of string * (Norm -> Action)
    | TransformMany of string list * Expr list * (string list -> Action)
    | Result of Norm
    | Variable of string

// Make one action at a time and return to the top loop
let rec transform2 expr =
    match expr with
    | App(func, args) ->
        TransformMany([], func :: args, (fun vars ->
            let func' = List.head vars
            let args' = List.tail vars
            let c = fresh()
            let r = fresh()
            ContinuationExpr(r, fun expr ->
                Result(LetCont(c, [r], expr, FuncCall(func', c, args'))))))
    | Var(v) -> Variable(v)
// We have two stacks here:
// contsVar for continuations accepting variables
// contsExpr for continuations accepting expressions
let transformTop2 expr =
    let rec loop contsVar contsExpr action =
        match action with
        | ContinuationVar(expr, cont) ->
            loop (cont :: contsVar) contsExpr (transform2 expr)
        | ContinuationExpr(var, contExpr) ->
            let contVar = List.head contsVar
            let contsVar' = List.tail contsVar
            loop contsVar' (contExpr :: contsExpr) (contVar var)
        | TransformMany(vars, e :: exprs, cont) ->
            loop contsVar contsExpr (ContinuationVar(e, fun var ->
                TransformMany(var :: vars, exprs, cont)))
        | TransformMany(vars, [], cont) ->
            loop contsVar contsExpr (cont (List.rev vars))
        | Result(r) ->
            (match contsExpr with
             | cont :: rest -> loop contsVar rest (cont r)
             | _ -> r)
        | Variable(v) ->
            (match contsVar with
             | cont :: rest -> loop rest contsExpr (cont v)
             | _ -> failwith "must not be empty")
    let initial = ContinuationVar(expr, fun var -> Result(ContCall("halt", [var])))
    loop [] [] initial

Difference between "local" and "let" in SML

I couldn't find a beginner-friendly answer to what the difference between the "local" and "let" keywords in SML is. Could someone please provide a simple example and explain when one is used over the other?
(TL;DR)
Use case ... of ... when you only have one temporary binding.
Use let ... in ... end for very specific helper functions.
Never use local ... in ... end. Use opaque modules instead.
Adding some thoughts on use-cases to sepp2k's fine answer:
(Summary) local ... in ... end is a declaration and let ... in ... end is an expression, so that effectively limits where each can be used: where declarations are allowed (e.g. at the top level or inside a module) for local, and where expressions are allowed (e.g. the body of a val or fun declaration) for let.
But so what? It often seems that either can be used. The Rosetta Code QuickSort, for example, could be structured using either, since the helper functions are only used once:
(* First using local ... in ... end *)
local
  fun par_helper([], x, l, r) = (l, r)
    | par_helper(h::t, x, l, r) =
        if h <= x
        then par_helper(t, x, l @ [h], r)
        else par_helper(t, x, l, r @ [h])

  fun par(l, x) = par_helper(l, x, [], [])
in
  fun quicksort [] = []
    | quicksort (h::t) =
        let
          val (left, right) = par(t, h)
        in
          quicksort left @ [h] @ quicksort right
        end
end
(* Second using let ... in ... end *)
fun quicksort [] = []
  | quicksort (h::t) =
      let
        fun par_helper([], x, l, r) = (l, r)
          | par_helper(h::t, x, l, r) =
              if h <= x
              then par_helper(t, x, l @ [h], r)
              else par_helper(t, x, l, r @ [h])

        fun par(l, x) = par_helper(l, x, [], [])

        val (left, right) = par(t, h)
      in
        quicksort left @ [h] @ quicksort right
      end
So let's focus on when it is particularly useful to use one or the other.
local ... in ... end is mainly used when you have one or more temporary declarations (e.g. helper functions) that you want to hide after they're used, but they should be shared between multiple non-local declarations. E.g.
(* Helper function shared across multiple functions *)
local
  fun par_helper ... = ...
  fun par(l, x) = par_helper(l, x, [], [])
in
  fun quicksort [] = []
    | quicksort (h::t) = ... par(t, h) ...

  fun median ... = ... par(t, h) ...
end
If they weren't shared by multiple declarations, you could have used a let ... in ... end instead.
You can always avoid using local ... in ... end in favor of opaque modules (see below).
let ... in ... end is mainly used when you want to compute temporary results, or deconstruct values of product types (tuples, records), one or more times inside a function. E.g.
fun quicksort [] = []
  | quicksort (x::xs) =
      let
        val (left, right) = List.partition (fn y => y < x) xs
      in
        quicksort left @ [x] @ quicksort right
      end
Here are some of the benefits of let ... in ... end:
A binding is computed once per function call (even when used multiple times).
A binding can simultaneously be deconstructed (into left and right here).
The declaration's scope is limited. (Same argument as for local ... in ... end.)
Inner functions may use the arguments of the outer function, or the outer function itself.
Multiple bindings that depend on each other may neatly be lined up.
And so on... Really, let-expressions are quite nice.
When a helper function is used only once, you might as well nest it inside a let ... in ... end, especially if other reasons for using one apply, too.
Some additional opinions
(case ... of ... is awesome, too.)
When your let ... in ... end contains only a single binding, you can instead write e.g.
fun quicksort [] = []
  | quicksort (x::xs) =
      case List.partition (fn y => y < x) xs of
        (left, right) => quicksort left @ [x] @ quicksort right
These are equivalent. You might like the style of one or the other. The case ... of ... has one advantage, though: it also works for sum types ('a option, 'a list, etc.), e.g.
(* Using case ... of ... *)
fun maxList [] = NONE
  | maxList (x::xs) =
      case maxList xs of
        NONE => SOME x
      | SOME y => SOME (Int.max (x, y))

(* Using let ... in ... end and Option.map *)
fun maxList [] = NONE
  | maxList (x::xs) =
      let
        val y_opt = maxList xs
      in
        Option.map (fn y => Int.max (x, y)) y_opt
      end
The one disadvantage of case ... of ...: the list of match rules has no terminator, so nesting case expressions often requires parentheses. You can also combine the two in different ways, e.g.
fun move p1 (GameState old_p) gameMap =
    let val p' = addp p1 old_p in
      case getMapPos p' gameMap of
        Grass => GameState p'
      | _ => GameState old_p
    end
This isn't so much about not using local ... in ... end, though.
Hiding declarations that won't be used elsewhere is sensible. E.g.
(* if they're overly specific *)
fun handvalue hand =
    let
      fun handvalue' [] = 0
        | handvalue' (c::cs) = cardvalue c + handvalue' cs

      val hv = handvalue' hand
    in
      if hv > 21 andalso hasAce hand
      then handvalue (removeAce hand) + 1
      else hv
    end
(* to cover over multiple arguments, e.g. to achieve tail recursion, *)
(* or because the inner function has dependencies anyway (here: x). *)
fun par(ys, x) =
    let fun par_helper([], l, r) = (l, r)
          | par_helper(h::t, l, r) =
              if h <= x
              then par_helper(t, l @ [h], r)
              else par_helper(t, l, r @ [h])
    in par_helper(ys, [], []) end
And so on. Basically,
If a declaration (e.g. function) will be re-used, don't hide it.
If not, the point of local ... in ... end over let ... in ... end is void.
(local ... in ... end is useless.)
You never want to use local ... in ... end. Since its job is to restrict one set of helper declarations to a subset of your main declarations, it forces you to group those main declarations according to what they depend on, rather than in whatever order you might prefer.
A better alternative is simply to write a structure, give it a signature and make that signature opaque. That way, all internal declarations can be used freely throughout the module without being exported.
One example of this in j4cbo's SML on Stilts web-framework is the module StaticServer: It exports only val server : ..., even though the structure also holds the two declarations structure U = WebUtil and val content_type = ....
structure StaticServer :> sig
  val server : { basepath : string,
                 expires : LargeInt.int option,
                 headers : Web.header list } -> Web.app
end = struct
  structure U = WebUtil

  val content_type = fn
      "png" => "image/png"
    | "gif" => "image/gif"
    | "jpg" => "image/jpeg"
    | "css" => "text/css"
    | "js" => "text/javascript"
    | "html" => "text/html"
    | _ => "text/plain"

  fun server { basepath, expires, headers } (req : Web.request) = ...
end
The short answer is: local is a declaration, let is an expression. Consequently, they are used in different syntactic contexts, and local requires declarations between in and end, while let requires an expression there. It's not much deeper than that.
As @SimonShine mentioned, local is often discouraged in favour of using modules.

Expected concrete lifetime, found bound lifetime parameter when storing a closure [duplicate]

I'm encountering a strange pair of errors while trying to compile my Rust code below. In searching for others with similar problems, I came across another question with the same combination of (seemingly opposing) errors, but couldn't generalize the solution from there to my problem.
Basically, I seem to be missing a subtlety in Rust's ownership system. In trying to compile the (very pared down) code here:
struct Point {
    x: f32,
    y: f32,
}

fn fold<S, T, F>(item: &[S], accum: T, f: F) -> T
where
    F: Fn(T, &S) -> T,
{
    f(accum, &item[0])
}

fn test<'a>(points: &'a [Point]) -> (&'a Point, f32) {
    let md = |(q, max_d): (&Point, f32), p: &'a Point| -> (&Point, f32) {
        let d = p.x + p.y; // Standing in for a function call
        if d > max_d {
            (p, d)
        } else {
            (q, max_d)
        }
    };
    fold(&points, (&Point { x: 0., y: 0. }, 0.), md)
}
I get the following error messages:
error[E0631]: type mismatch in closure arguments
--> src/main.rs:23:5
|
14 | let md = |(q, max_d): (&Point, f32), p: &'a Point| -> (&Point, f32) {
| ---------------------------------------------------------- found signature of `for<'r> fn((&'r Point, f32), &'a Point) -> _`
...
23 | fold(&points, (&Point { x: 0., y: 0. }, 0.), md)
| ^^^^ expected signature of `for<'r> fn((&Point, f32), &'r Point) -> _`
|
= note: required by `fold`
error[E0271]: type mismatch resolving `for<'r> <[closure#src/main.rs:14:14: 21:6] as std::ops::FnOnce<((&Point, f32), &'r Point)>>::Output == (&Point, f32)`
--> src/main.rs:23:5
|
23 | fold(&points, (&Point { x: 0., y: 0. }, 0.), md)
| ^^^^ expected bound lifetime parameter, found concrete lifetime
|
= note: required by `fold`
It seems to me that the function I'm supplying to fold should type-check properly... what am I missing here and how can I go about fixing it?
The short version is that different lifetimes are inferred when the closure is written inline than when it is stored in a variable. Write the closure inline and remove all the extraneous types:
fn test(points: &[Point]) -> (&Point, f32) {
    let init = points.first().expect("No initial");
    fold(&points, (init, 0.), |(q, max_d), p| {
        let d = 12.;
        if d > max_d {
            (p, d)
        } else {
            (q, max_d)
        }
    })
}
If you truly must have the closure out-of-band, review How to declare a lifetime for a closure argument?.
Additionally, I had to pull the first value from the input array — you can't return a reference to a local variable. There's no need for lifetime parameters on the method; they will be inferred.
To actually get the code to compile, you need to provide more information about the fold method. Specifically, you have to indicate that the reference passed to the closure has the same lifetime as the argument passed in. Otherwise, it could just be a reference to a local variable:
fn fold<'a, S, T, F>(item: &'a [S], accum: T, f: F) -> T
where
    F: Fn(T, &'a S) -> T,
{
    f(accum, &item[0])
}
The related Rust issue is #41078.

Would you please explain OCaml functors to me? [duplicate]

Possible Duplicate:
In Functional Programming, what is a functor?
I don't know much about OCaml; I've studied F# for some time and understand it quite well.
They say that F# lacks the functor model, which is present in OCaml. I've tried to figure out what exactly a functor is, but Wikipedia and tutorials didn't help me much.
Could you please illuminate that mystery for me? Thanks in advance :)
EDIT:
I've got the point now, thanks to everyone who helped me. You can close the question as an exact duplicate of: In Functional Programming, what is a functor?
If you come from an OOP universe, then it probably helps to think of a module as analogous to a static class. Similar to .NET static classes, OCaml modules have constructors; unlike .NET, OCaml modules can accept parameters in their constructors. A functor is a scary-sounding name for such a parameterized module: a module that takes another module as an argument.
So using the canonical example of a binary tree, we'd normally write it in F# like this:
type 'a tree =
    | Nil
    | Node of 'a tree * 'a * 'a tree

module Tree =
    let rec insert v = function
        | Nil -> Node(Nil, v, Nil)
        | Node(l, x, r) ->
            if v < x then Node(insert v l, x, r)
            elif v > x then Node(l, x, insert v r)
            else Node(l, x, r)
Fine and dandy. But how does F# know how to compare two objects of type 'a using the < and > operators?
Behind the scenes, it's doing something like this:
> let gt x y = x > y;;
val gt : 'a -> 'a -> bool when 'a : comparison
Alright, well what if you have an object of type Person which doesn't implement that particular interface? What if you wanted to define the sorting function on the fly? One approach is just to pass in the comparer as follows:
let rec insert comparer v = function
    | Nil -> Node(Nil, v, Nil)
    | Node(l, x, r) ->
        if comparer v x = -1 then Node(insert comparer v l, x, r)
        elif comparer v x = 1 then Node(l, x, insert comparer v r)
        else Node(l, x, r)
It works, but if you're writing a module for tree operations with insert, lookup, removal, etc., you require clients to pass in an ordering function every time they call anything.
If F# supported functors, its hypothetical syntax might look like this:
type 'a Comparer =
    abstract Gt : 'a -> 'a -> bool
    abstract Lt : 'a -> 'a -> bool
    abstract Eq : 'a -> 'a -> bool

module Tree (comparer : 'a Comparer) =
    let rec insert v = function
        | Nil -> Node(Nil, v, Nil)
        | Node(l, x, r) ->
            if comparer.Lt v x then Node(insert v l, x, r)
            elif comparer.Gt v x then Node(l, x, insert v r)
            else Node(l, x, r)
Still in the hypothetical syntax, you'd create your module as such:
module PersonTree = Tree (new Comparer<Person>
    {
        member this.Lt x y = x.LastName < y.LastName
        member this.Gt x y = x.LastName > y.LastName
        member this.Eq x y = x.LastName = y.LastName
    })

let people = PersonTree.insert 1 Nil
Unfortunately, F# doesn't support functors, so you have to fall back on some messy workarounds. For the scenario above, I would almost always store the "functor" in my data structure with some auxiliary helper functions to make sure it gets copied around correctly:
type 'a Tree =
    | Nil of ('a -> 'a -> int)
    | Node of ('a -> 'a -> int) * 'a Tree * 'a * 'a Tree

module Tree =
    let comparer = function
        | Nil(f) -> f
        | Node(f, _, _, _) -> f

    let empty f = Nil(f)

    let make (l, x, r) =
        let f = comparer l
        Node(f, l, x, r)

    let rec insert v = function
        | Nil(f) -> make(Nil(f), v, Nil(f))
        | Node(f, l, x, r) ->
            if f v x = -1 then make(insert v l, x, r)
            elif f v x = 1 then make(l, x, insert v r)
            else make(l, x, r)

let people = Tree.empty (fun x y -> x.LastName.CompareTo(y.LastName))
Functors are modules parameterized by modules, i.e. a mapping from modules to modules (an ordinary function is a mapping from values to values; a polymorphic function is a mapping from types to ordinary functions).
See also ocaml-tutorial on modules.
Examples in the manual are helpful too.
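As a small, concrete illustration (not taken from the linked material), here is a sketch of a functor in the spirit of the standard library's Set.Make: it takes any module that provides an ordered type and returns a module of sets over that type. The names ORDERED, MakeSet and StringSet are made up for the example.
(* the signature the functor expects from its argument *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

(* a functor: a module parameterized by another module *)
module MakeSet (Ord : ORDERED) = struct
  type elt = Ord.t
  type t = elt list   (* a sorted list without duplicates *)

  let empty : t = []

  let rec add x = function
    | [] -> [x]
    | y :: ys as s ->
        let c = Ord.compare x y in
        if c = 0 then s
        else if c < 0 then x :: s
        else y :: add x ys

  let rec mem x = function
    | [] -> false
    | y :: ys ->
        let c = Ord.compare x y in
        c = 0 || (c > 0 && mem x ys)
end

(* instantiating the functor: the standard String module already provides t and compare *)
module StringSet = MakeSet (String)
let example = StringSet.add "b" (StringSet.add "a" StringSet.empty)
This plays the same role as the hypothetical Tree (comparer) module in the answer above: the comparison logic is supplied once, when the functor is applied, instead of being passed to every call.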
Check out this data structures in OCaml course:
http://www.cs.cornell.edu/Courses/cs3110/2009fa/lecturenotes.asp
the functor lecture:
http://www.cs.cornell.edu/Courses/cs3110/2009fa/lectures/lec10.html
and the splay tree implementation using a functor:
http://www.cs.cornell.edu/Courses/cs3110/2009fa/recitations/rec-splay.html
