What is "monadic reflection"? - f#

What is "monadic reflection"?
How can I use it in an F# program?
Is the meaning of the term "reflection" here the same as in .NET reflection?

Monadic reflection is essentially a grammar for describing layered monads, or monad layering. In Haskell, describing also means constructing monads. It is a higher-level system: the code looks purely functional, but the result is a composition of monads, meaning that without actual monads (which are not purely functional) there is nothing real or runnable at the end of the day. Filinski introduced it originally to bring a kind of monad emulation to Scheme, but much more to explore theoretical aspects of monads.
Correction from the comments: F# does have a monad equivalent, named "computation expressions".
See Filinski's paper at POPL 2010 (no code, but a lot of theory), and of course his original 1994 paper, Representing Monads. There is also one that has some code: Monad Transformers and Modular Interpreters (1995).
Oh, and for people who like code: Filinski's code is online. I'll list just one; go one step up and see another 7 plus a readme. There is also a bit of F# code which claims to be inspired by Filinski.

I read through the first Google hit, some slides:
http://www.cs.ioc.ee/mpc-amast06/msfp/filinski-slides.pdf
From this, it looks like
This is not the same as .NET reflection. The name seems to refer to turning data into code (and vice-versa, with reification).
The code uses standard pure-functional operations, so implementation should be easy in F# (once you understand it).
I have no idea if this would be useful for implementing an immutable cache for a recursive function. It looks like you can define mutable operations and convert them to equivalent immutable operations automatically? I don't really understand the slides.
Oleg Kiselyov also has an article, but I didn't even try to read it. There's also a paper from Jonathan Sobel (et al). Hit number 5 is this question, so I stopped looking after that.

As the links in the previous answers describe, monadic reflection is a concept that bridges call/cc-style and Church-style programming. To describe these two styles a little more:
F# computation expressions (i.e. monads) are created with a custom builder type.
Don Syme has a good blog post about this. If I write code to use a builder and use syntax like:
attempt { let! n1 = f inp1
          let! n2 = failIfBig inp2
          let sum = n1 + n2
          return sum }
the syntax is translated into a continuation-passing ("call with current continuation") style program:
attempt.Delay(fun () ->
    attempt.Bind(f inp1, (fun n1 ->
        attempt.Bind(failIfBig inp2, (fun n2 ->
            attempt.Let(n1 + n2, (fun sum ->
                attempt.Return(sum))))))))
The last parameter of each call is the continuation: the next command to be executed, all the way to the end.
(Scheme-style programming.)
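The attempt builder itself is not defined in the snippet above. A minimal option-based sketch of such a builder (here f and failIfBig are assumed to be functions returning option values; this is illustrative, not Don Syme's original code):
type AttemptBuilder() =
    member this.Bind(m, f) = Option.bind f m   // sequence two optional steps
    member this.Return(x) = Some x             // wrap a result
    member this.Delay(f) = f ()                // run the delayed body immediately

let attempt = AttemptBuilder()

let f inp = Some inp                                       // always succeeds
let failIfBig inp = if inp > 100 then None else Some inp   // fails on large input

let result =
    attempt {
        let! n1 = f 1
        let! n2 = failIfBig 2
        let sum = n1 + n2
        return sum }                                       // evaluates to Some 3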
F# is based on OCaml.
F# has partial function application, but it is also strongly typed and has a value restriction.
OCaml's value restriction is more relaxed than F#'s.
OCaml can be used for a Church style of programming, where combinator functions are used to construct other functions (or whole programs):
// S K I combinators:
let I x = x
let K x y = x
let S x y z = x z (y z)
//examples:
let seven = S (K) (K) 7
let doubleI = I I //Won't work in F#
// y-combinator to make recursion
let Y = S (K (S I I)) (S (S (K S) K) (K (S I I)))
Church numerals are a way to represent numbers with pure functions.
let zero f x = x
//same as: let zero = fun f -> fun x -> x
let succ n f x = f (n f x)
let one = succ zero
let two = succ (succ zero)
let add n1 n2 f x = n1 f (n2 f x)
let multiply n1 n2 f = n2(n1(f))
let exp n1 n2 = n2(n1)
Here, zero is a function that takes two parameters: f is applied zero times, so this represents the number zero, and x is the value the applications are threaded through in other calculations (like add). The succ function is like plusOne, so one = zero |> plusOne.
To read a result out of a Church numeral, apply it to a concrete successor function and a starting value for x (see the sketch below).
(Haskell-style programming.)
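For example, a minimal sketch assuming the zero/succ/add definitions above (toInt is just an illustrative helper):
let toInt n = n (fun i -> i + 1) 0                           // apply the numeral to (+1) and 0
printfn "%d" (toInt (succ (succ zero)))                      // 2
printfn "%d" (toInt (add (succ zero) (succ (succ zero))))    // 3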
In F#, the value restriction makes this hard (a binding like let one = succ zero will not generalize). Church numerals can be made with the C# 4.0 dynamic keyword (which uses .NET reflection inside). I think there are workarounds for this in F# as well.

Related

F# and lisp-like apply function

For starters, I'm a novice in functional programming and F#, so I don't know if it's possible to do such a thing at all. Let's say we have this function:
let sum x y z = x + y + z
And for some reason, we want to invoke it using the elements of a list as arguments. My first attempt was just to do it like this:
//Seq.fold (fun f arg -> f arg) sum [1;2;3]
let rec apply f args =
    match args with
    | h::hs -> apply (f h) hs
    | [] -> f
...which doesn't compile. It seems impossible to determine the type of f with a static type system. There's an identical question for Haskell, and the only solution uses Data.Dynamic to outfox the type system. I think the closest analogue to it in F# is Dynamitey, but I'm not sure if it fits. This code
let dynsum = Dynamitey.Dynamic.Curry(sum, System.Nullable<int>(3))
produces a dynsum variable of type obj, and objects of this type cannot be invoked; furthermore, sum is not a .NET Delegate. So the question is, how can this be done, with or without that library, in F#?
F# is a statically typed functional language and so the programming patterns that you use with F# are quite different than those that you'd use in LISP (and actually, they are also different from those you'd use in Haskell). So, working with functions in the way you suggested is not something that you'd do in normal F# programming.
If you had some scenario in mind for this function, then perhaps try asking about the original problem and someone will help you find an idiomatic F# approach!
That said, even though this is not recommended, you can implement the apply function using the powerful .NET reflection capabilities. This is slow and unsafe, but it is occasionally useful.
open Microsoft.FSharp.Reflection
let rec apply (f:obj) (args:obj list) =
    let invokeFunc =
        f.GetType().GetMethods()
        |> Seq.find (fun m ->
            m.Name = "Invoke" &&
            m.GetParameters().Length = args.Length)
    invokeFunc.Invoke(f, Array.ofSeq args)
The code looks at the runtime type of the function, finds an Invoke method with the matching number of parameters, and calls it.
let sum x y z = x + y + z
let res = apply sum [1;2;3]
let resNum = res :?> int
At the end, you need to cast the result back to an int, because its type is not statically known (Invoke returns obj).

What is the name of |> in F# and what does it do?

A real F# noob question, but what is |> called and what does it do?
It's called the forward pipe operator. It pipes the result of one function to another.
The Forward pipe operator is simply defined as:
let (|>) x f = f x
And has a type signature:
'a -> ('a -> 'b) -> 'b
Which resolves to: given a generic type 'a, and a function which takes an 'a and returns a 'b, then return the application of the function on the input.
You can read more detail about how it works in an article here.
I usually refer to |> as the pipelining operator, but I'm not sure whether the official name is pipe operator or pipelining operator (though it probably doesn't really matter, as the names are similar enough to avoid confusion :-)).
@LBushkin already gave a great answer, so I'll just add a couple of observations that may also be interesting. Obviously, the pipelining operator got its name because it can be used for creating a pipeline that processes some data in several steps. The typical use is when working with lists:
[0 .. 10]
|> List.filter (fun n -> n % 3 = 0) // Get numbers divisible by three
|> List.map (fun n -> n * n) // Calculate the squares of those numbers
This gives the result [0; 9; 36; 81]. Also, the operator is left-associative which means that the expression input |> f |> g is interpreted as (input |> f) |> g, which makes it possible to sequence multiple operations using |>.
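As a small illustration of that associativity (double and addOne are made-up helpers):
let double n = n * 2
let addOne n = n + 1
let a = addOne (double 5)         // 11
let b = (5 |> double) |> addOne   // 11, explicit grouping
let c = 5 |> double |> addOne     // 11, same thing without parentheses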
Finally, I find it quite interesting that the pipelining operator in many cases corresponds to method chaining in object-oriented languages. For example, the previous list processing example would look like this in C#:
Enumerable.Range(0, 11)
    .Where(n => n % 3 == 0) // Get numbers divisible by three
    .Select(n => n * n) // Calculate the squares of those numbers
This may give you some idea about when the operator can be used if you're coming from an object-oriented background (although it is used in many other situations in F#).
As far as F# itself is concerned, the name is op_PipeRight (although no human would call it that). I pronounce it "pipe", like the unix shell pipe.
The spec is useful for figuring out these kinds of things. Section 4.1 has the operator names.
http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/manual/spec.html
Don't forget to check out the library reference docs:
http://msdn.microsoft.com/en-us/library/ee353754(v=VS.100).aspx
which list the operators.

F# currying efficiency?

I have a function that looks as follows:
let isInSet setElems normalize p =
    normalize p |> (Set.ofList setElems).Contains
This function can be used to quickly check whether an element is semantically part of some set; for example, to check if a file path belongs to an html file:
let getLowerExtension p = (Path.GetExtension p).ToLowerInvariant()
let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension
However, when I use a function such as the above, performance is poor, since evaluation of the function body as written in "isInSet" seems to be delayed until all parameters are known; in particular, invariant bits such as (Set.ofList setElems).Contains are re-evaluated on each execution of isHtmlPath.
How can I best maintain F#'s succinct, readable nature while still getting the more efficient behavior, in which the set construction is pre-evaluated?
The above is just an example; I'm looking for a general approach that avoids bogging me down in implementation details - where possible I'd like to avoid being distracted by details such as the implementation's execution order since that's usually not important to me and kind of undermines a major selling point of functional programming.
As long as F# doesn't differentiate between pure and impure code, I doubt we'll see optimisations of that kind. You can, however, make the currying explicit.
let isInSet setElems =
    let set = Set.ofList setElems
    fun normalize p -> normalize p |> set.Contains
isHtmlPath will now call isInSet only once to obtain the closure, executing Set.ofList at that point.
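A quick check, reusing getLowerExtension from the question (a sketch; the set is built once, when isHtmlPath is created):
let getLowerExtension (p: string) = (System.IO.Path.GetExtension p).ToLowerInvariant()
let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension
printfn "%b" (isHtmlPath "index.HTML")   // true; Set.ofList has already run
printfn "%b" (isHtmlPath "readme.txt")   // false; no set construction on this call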
The answer from Kha shows how to optimize the code manually by using closures directly. If this is a frequent pattern that you need to use often, it is also possible to define a higher-order function that constructs the efficient code from two functions - the first one that does pre-processing of some arguments and a second one which does the actual processing once it gets the remaining arguments.
The code would look like this:
let preProcess finit frun preInput =
    let preRes = finit preInput
    fun input -> frun preRes input

let f : string list -> ((string -> string) * string) -> bool =
    preProcess
        Set.ofList                             // Pre-processing of the first argument
        (fun elemsSet (normalize, p) ->        // Implements the actual work to be
            normalize p |> elemsSet.Contains)  // .. done once we get the last argument
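For example, a usage sketch of f defined this way (toLower stands in for the question's normalizer):
let isHtmlPath = f [ ".htm"; ".html"; ".xhtml" ]   // Set.ofList runs here, once
let toLower (s: string) = s.ToLowerInvariant()
printfn "%b" (isHtmlPath (toLower, ".HTML"))       // true
printfn "%b" (isHtmlPath (toLower, ".txt"))        // false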
It is a question whether this is more elegant though...
Another (crazy) idea is that you could use computation expressions for this. The definition of computation builder that allows you to do this is very non-standard (it is not something that people usually do with them and it isn't in any way related to monads or any other theory). However, it should be possible to write this:
type CurryBuilder() =
    member x.Bind((), f:'a -> 'b) = f
    member x.Return(a) = a

let curry = new CurryBuilder()
In the curry computation, you can use let! to denote that you want to take the next argument of the function (after evaluating the preceding code):
let f : string list -> (string -> string) -> string -> bool = curry {
    let! elems = ()
    let elemsSet = Set.ofList elems
    printf "elements converted"
    let! normalize = ()
    let! p = ()
    printf "calling"
    return normalize p |> elemsSet.Contains }
let ff = f [ "a"; "b"; "c" ] (fun s -> s.ToLower())
// Prints 'elements converted' here
ff "C"
ff "D"
// Prints 'calling' two times
Here are some resources with more information about computation expressions:
The usual way of using computation expressions is described in a free sample chapter of my book: Chapter 12: Sequence Expressions and Alternative Workflows (PDF)
The example above uses some specifics of the translation, which are described in full detail in the F# specification (PDF)
@Kha's answer is spot on. F# cannot rewrite
// effects of g only after both x and y are passed
let f x y =
    let xStuff = g x
    h xStuff y
into
// effects of g once after x passed, returning new closure waiting on y
let f x =
    let xStuff = g x
    fun y -> h xStuff y
unless it knows that g has no effects, and in the .NET Framework today, it's usually impossible to reason about the effects of 99% of all expressions. Which means the programmer is still responsible for explicitly coding evaluation order as above.
Currying does not hurt. Currying sometimes introduces closures, and they are usually efficient too.
Refer to this question I asked before; you can use inline to boost performance if necessary.
However, your performance problem in the example is mainly due to your code:
normalize p |> (Set.ofList setElems).Contains
Here you need to perform Set.ofList setElems even if you curry it; it costs O(n log n) time.
You should change the type of setElems to an F# Set rather than a list (see the sketch below). By the way, for a small set, using lists is faster than sets even for querying.
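A minimal sketch of that change, reusing the names from the question (htmlExtensions is a new illustrative binding):
let isInSet (setElems: Set<string>) normalize p =
    setElems.Contains(normalize p)

let htmlExtensions = Set.ofList [".htm"; ".html"; ".xhtml"]   // converted once, up front
let getLowerExtension (p: string) = (System.IO.Path.GetExtension p).ToLowerInvariant()
let isHtmlPath p = isInSet htmlExtensions getLowerExtension p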

How do I define y-combinator without "let rec"?

In almost all examples, a y-combinator in ML-type languages is written like this:
let rec y f x = f (y f) x
let factorial = y (fun f -> function 0 -> 1 | n -> n * f(n - 1))
This works as expected, but it feels like cheating to define the y-combinator using let rec ....
I want to define this combinator without using recursion, using the standard definition:
Y = λf·(λx·f (x x)) (λx·f (x x))
A direct translation is as follows:
let y = fun f -> (fun x -> f (x x)) (fun x -> f (x x));;
However, F# complains that it can't figure out the types:
let y = fun f -> (fun x -> f (x x)) (fun x -> f (x x));;
--------------------------------^
C:\Users\Juliet\AppData\Local\Temp\stdin(6,33): error FS0001: Type mismatch. Expecting a
'a
but given a
'a -> 'b
The resulting type would be infinite when unifying ''a' and ''a -> 'b'
How do I write the y-combinator in F# without using let rec ...?
As the compiler points out, there is no type that can be assigned to x so that the expression (x x) is well-typed (this isn't strictly true; you can explicitly type x as obj->_ - see my last paragraph). You can work around this issue by declaring a recursive type so that a very similar expression will work:
type 'a Rec = Rec of ('a Rec -> 'a)
Now the Y-combinator can be written as:
let y f =
    let f' (Rec x as rx) = f (x rx)
    f' (Rec f')
Unfortunately, you'll find that this isn't very useful because F# is a strict language,
so any function that you try to define using this combinator will cause a stack overflow.
Instead, you need to use the applicative-order version of the Y-combinator (\f.(\x.f(\y.(x x)y))(\x.f(\y.(x x)y))):
let y f =
    let f' (Rec x as rx) = f (fun y -> x rx y)
    f' (Rec f')
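A quick check of this strict version, using the factorial from the question (a sketch assuming the definitions above):
let factorial = y (fun f -> function 0 -> 1 | n -> n * f (n - 1))
printfn "%d" (factorial 5)   // 120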
Another option would be to use explicit laziness to define the normal-order Y-combinator:
type 'a Rec = Rec of ('a Rec -> 'a Lazy)
let y f =
    let f' (Rec x as rx) = lazy (f (x rx))
    (f' (Rec f')).Value
This has the disadvantage that recursive function definitions now need an explicit force of the lazy value (using the Value property):
let factorial = y (fun f -> function | 0 -> 1 | n -> n * (f.Value (n - 1)))
However, it has the advantage that you can define non-function recursive values, just as you could in a lazy language:
let ones = y (fun ones -> LazyList.consf 1 (fun () -> ones.Value))
As a final alternative, you can try to better approximate the untyped lambda calculus by using boxing and downcasting. This would give you (again using the applicative-order version of the Y-combinator):
let y f =
    let f' (x:obj -> _) = f (fun y -> x x y)
    f' (fun x -> f' (x :?> _))
This has the obvious disadvantage that it will cause unneeded boxing and unboxing, but at least this is entirely internal to the implementation and will never actually lead to failure at runtime.
I would say it's impossible, and asked why, I would handwave and invoke the fact that simply typed lambda calculus has the normalization property. In short, all terms of the simply typed lambda calculus terminate (consequently Y can not be defined in the simply typed lambda calculus).
F#'s type system is not exactly the type system of simply typed lambda calculus, but it's close enough. F# without let rec comes really close to the simply typed lambda calculus -- and, to reiterate, in that language you cannot define a term that does not terminate, and that excludes defining Y too.
In other words, in F#, "let rec" needs to be a language primitive at the very least because even if you were able to define it from the other primitives, you would not be able to type this definition. Having it as a primitive allows you, among other things, to give a special type to that primitive.
EDIT: kvb shows in his answer that type definitions (one of the features absent from the simply typed lambda-calculus but present in let-rec-less F#) allow to get some sort of recursion. Very clever.
Case and let (rec) constructs are what make ML derivatives Turing complete; I believe they're based on System F rather than the simply typed lambda calculus, but the point is the same.
System F cannot assign a type to any fixed-point combinator; if it could, it would not be strongly normalizing.
Strongly normalizing means that every expression has exactly one normal form, where a normal form is an expression that cannot be reduced any further. This differs from the untyped lambda calculus, where every expression has at most one normal form, and may have no normal form at all.
If a typed lambda calculus could construct a fixed-point operator in whatever way, it would be possible for an expression to have no normal form.
Another famous result, the Halting Problem, implies that strongly normalizing languages are not Turing complete: for a Turing-complete language it is impossible to decide (as opposed to prove) which of its programs halt on which inputs. If a language is strongly normalizing, halting is decidable: every program halts. Our algorithm to decide this is the program "true;".
To overcome this, ML derivatives extend System F with case and let (rec). Functions can then refer to themselves in their own definitions, which makes these languages no longer pure lambda calculi: it is no longer possible to rely on anonymous functions alone for all computable functions. They can thus again enter infinite loops, and they regain Turing completeness.
Short answer: You can't.
Long answer:
The simply typed lambda calculus is strongly normalizing. This means it's not Turing equivalent. The reason for this basically boils down to the fact that a Y combinator must either be primitive or defined recursively (as you've found). It simply cannot be expressed in System F (or simpler typed calculi). There's no way around this (it's been proven, after all). The Y combinator you can implement works exactly the way you want, though.
I would suggest you try Scheme if you want a real Church-style Y combinator. Use the applicative version given above, as the other versions won't work unless you explicitly add laziness or use a lazy Scheme interpreter. (Scheme technically isn't completely untyped, but it's dynamically typed, which is good enough for this.)
See this for the proof of strong normalization:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.127.1794
After thinking some more, I'm pretty sure that adding a primitive Y combinator that behaves exactly the way the letrec defined one does makes System F Turing complete. All you need to do to simulate a Turing machine then is implement the tape as an integer (interpreted in binary) and a shift (to position the head).
Simply define a function taking its own type as a record, like in Swift (there it's a struct) :)
Here, Y (uppercase) is semantically defined as a function that can be called with its own type. In F# terms, it is defined as a record containing a function named call, so to call a y defined with this type, you actually have to call y.call :)
type Y = { call: Y -> (int -> int) }
let fibonacci n =
    let makeF f : int -> int =
        fun x ->
            if x = 0 then 0 else if x = 1 then 1 else f(x - 1) + f(x - 2)
    let y = { call = fun y -> fun x -> (makeF (y.call y)) x }
    (y.call y) n
It's not supremely elegant to read but it doesn't resort to recursion for defining a y combinator that is supposed to provide recursion all by itself ^^
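A quick check of the definition above:
printfn "%d" (fibonacci 10)   // 55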

Is it possible to write tacit functions in F#

Tacit or point-free style programming allows one to create functions without regard to their arguments. Can this be done in F#?
Just to go with Chuck's answer and Chris Smith's comment, you could write
let digits = string_of_int >> String.length
digits 9000;; // 4
[1; 10; 100] |> List.map digits;; // [1;2;3]
When you combine those composition & pipeline operators with higher-order functions, you can do complicated stuff very succinctly:
let prodSqrtAbs = Seq.map (abs>>sqrt) >> Seq.reduce (*)
prodSqrtAbs [| -9.0; 4.0 |];; // 6.0
EDIT: I just read about J and its implicit fork operator. That is very powerful. You can build equivalent higher-order operators in F#, but they won't be applied implicitly. So, for example, first define lift (using explicit arguments)
let lift op a b x = op (a x) (b x)
and then apply it explicitly
let avg = lift (/) List.sum List.length
to get something resembling the J example on the Wikipedia page you linked to. But it's not quite "tacit."
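For example (integer division, since List.length returns an int):
printfn "%d" (avg [1; 2; 3])          // 2
printfn "%d" (avg [10; 20; 30; 40])   // 25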
Sure. All you need is function composition and currying, and both of these are possible in F#.
let compose f1 f2 = fun x -> f1 (f2 x);;
let digits = compose String.length string_of_int;;
digits 9000;; // 4

Resources