I use F# a lot. All the basic collections in F# implement the IEnumerable interface, so it is quite natural to access them through the single Seq module. Is this possible in OCaml?
My other question is that 'a seq in F# is lazy; e.g., I can create a sequence from 1 to 100 using {1..100}, or more verbosely:
seq { for i=1 to 100 do yield i }
In OCaml, I find myself using the following two methods to work around the lack of this feature:
generate a list:
let rec range a b =
  if a > b then []
  else a :: range (a+1) b;;
or resort to explicit recursive functions.
The first generates intermediate lists. The second breaks the abstraction, since I want to operate at the sequence level using higher-order functions such as map and fold.
I know that the OCaml library has a Stream module, but its functionality seems quite limited, not as general as 'a seq in F#.
By the way, I have been working through Project Euler problems in OCaml recently, so there are quite a few sequence operations that in an imperative language would be loops with complex bodies.
This OCaml library seems to offer what you are asking for; I've not used it myself, though.
http://batteries.forge.ocamlcore.org/
Check out this module, Enum:
http://batteries.forge.ocamlcore.org/doc.preview:batteries-beta1/html/api/Enum.html
I somehow feel Enum is a much better name than Seq: it eliminates the lowercase/uppercase confusion between seq and Seq.
An enumerator, seen from a functional programming angle, is exactly a fold function. Where a class would implement an Enumerable interface in an object-oriented data structures library, a type comes with a fold function in a functional data structure library.
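As a small OCaml sketch of that correspondence (the tree type and its fold below are purely illustrative, not from any library), the fold plays exactly the role that an Enumerable/enumerator pair plays in an object-oriented library:

type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

(* The fold is the type's "enumerator": it visits every element. *)
let rec fold f acc = function
  | Leaf -> acc
  | Node (l, x, r) -> fold f (f (fold f acc l) x) r

(* "Enumerating" the tree is just folding over it. *)
let to_list t = List.rev (fold (fun acc x -> x :: acc) [] t)
let sum t = fold ( + ) 0 t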
Stream is a slightly quirky imperative lazy list (imperative in that reading an element is destructive). Camlp5 comes with a functional lazy list library, Fstream. The already-cited thread offers some alternatives.
It seems you are looking for something like Lazy lists.
Check out this SO question
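For illustration, a minimal hand-rolled lazy list in plain OCaml might look like this (the lseq type and the helper names are just illustrative, not from any particular library):

(* Each tail is suspended until it is forced. *)
type 'a lseq = Nil | Cons of 'a * 'a lseq Lazy.t

(* Lazily generate the integers from a to b, like {a..b} in F#. *)
let rec range a b =
  if a > b then Nil
  else Cons (a, lazy (range (a + 1) b))

let rec map f = function
  | Nil -> Nil
  | Cons (x, rest) -> Cons (f x, lazy (map f (Lazy.force rest)))

let rec fold_left f acc = function
  | Nil -> acc
  | Cons (x, rest) -> fold_left f (f acc x) (Lazy.force rest)

(* Sum of 1..100, computed without building an intermediate list. *)
let sum = fold_left ( + ) 0 (range 1 100)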
The Batteries library mentioned above also provides the (--) operator:
# 1 -- 100;;
- : int BatEnum.t = <abstr>
The enum is not evaluated until you traverse the items, so it provides a feature similar to your second request. Just beware that Batteries' enums are mutable. Ideally, there would also be a lazy list implementation that all data structures could be converted to/from, but that work has not been done yet.
I wonder whether there is any difference in how the two features (F# code quotations and C# expression trees) are implemented under the hood. That is, aren't code quotations just built on top of the good old expression trees?
Thanks.
The two types are quite similar, but they are represented differently.
Quotations are designed in a more functional way. For example, foo a b would be represented as a series of applications: App(App(foo, a), b).
Quotations can represent some constructs that are available only in F#; using expression trees would hide them. For example, there is Expr.LetRecursive for let rec declarations.
Quotations were introduced first, back in the .NET 3.0 era. At that time, expression trees could only represent C# expressions, so it was not possible to easily capture all F# constructs (quotations can capture any F# expression, including imperative ones).
Quotations are also designed to be easily processed using recursion. The ExprShape module contains patterns that let you handle all possible quotations with just four cases (which is a lot easier than implementing the visitor pattern with tens of methods in C#).
When you have an F# quotation, you can translate it to a C# expression tree using FSharp.Quotations.Evaluator, which is quite useful if you're calling a .NET API that expects expression trees from F#. As far as I know, there is no translation in the other direction.
I've been watching an interesting video in which type classes in Haskell are used to solve the so-called "expression problem". About 15 minutes in, it shows how type classes can be used to "open up" a datatype based on a discriminated union for extension: additional cases can be added separately, without modifying or rebuilding the original definition.
I know type classes aren't available in F#, but is there a way using other language features to achieve this kind of extensibility? If not, how close can we come to solving the expression problem in F#?
Clarification: I'm assuming the problem is defined as described in the previous video in the series: extensibility of the datatype and of the operations on the datatype, with code-level modularization and separate compilation (extensions can be deployed as separate modules without needing to modify or recompile the original code), as well as static type safety.
As Jörg pointed out in a comment, it depends on what you mean by "solve". If you mean a solution that includes some form of type checking that you're not missing an implementation of some function for some case, then F# doesn't give you any elegant way to do it (and I'm not sure the Haskell solution is elegant either). You may be able to encode it using the SML solution mentioned by kvb, or perhaps using one of the OO-based solutions.
In reality, if I were developing a real-world system that needed to solve the problem, I would choose a solution that doesn't give you full checking but is much easier to use.
A sketch would be to use obj as the representation of a type and use reflection to locate the functions that provide implementations for the individual cases. I would probably mark all the parts with attributes to make checking easier. A module adding an application case to an expression type might look like this:
[<Extends("Expr")>] // Specifies that this type should be treated as a case of 'Expr'
type App = App of obj * obj
module AppModule =
[<Implements("format")>] // Specifies that this extends function 'format'
let format (App(e1, e2)) =
// We don't make recursive calls directly, but instead use `invoke` function
// and some representation of the function named `formatFunc`. Alternatively
// you could support 'e1?format' using dynamic invoke.
sprintfn "(%s %s)" (invoke formatFunc e1) (invoke formatFunc e2)
This does not give you any type-checking, but it gives you a fairly elegant solution that is easy to use and not that difficult to implement (using reflection). Checking that you're not missing a case is not done at compile-time, but you can easily write unit tests for that.
See Vesa Karvonen's comment here for one SML solution (albeit cumbersome), which can easily be translated to F#.
I know type classes aren't available in F#, but is there a way using other language features to achieve this kind of extensibility?
I do not believe so, no.
If not, how close can we come to solving the expression problem in F#?
The expression problem is about allowing the user to augment your library code with both new functions and new types without having to recompile your library. In F#, union types make it easy to add new functions (but impossible to add new union cases to an existing union type) and class types make it easy to derive new class types (but impossible to add new methods to an existing class hierarchy). These are the two forms of extensibility required in practice. The ability to extend in both directions simultaneously without sacrificing static type safety is just an academic curiosity, IME.
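As a small illustration of the two directions in OCaml terms (the type and functions below are mine, purely for the example):

type expr = Lit of int | Add of expr * expr

(* Adding a new operation over an existing variant is the easy direction:
   just write another function that matches on expr. *)
let rec eval = function
  | Lit n -> n
  | Add (a, b) -> eval a + eval b

let rec show = function
  | Lit n -> string_of_int n
  | Add (a, b) -> "(" ^ show a ^ " + " ^ show b ^ ")"

(* Adding a new case (say Mul of expr * expr) is the hard direction: it
   means editing the type and every existing match. With a class hierarchy
   the roles are reversed: new subclasses are easy, new operations across
   the whole hierarchy are not. *)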
Incidentally, the most elegant way to provide this kind of extensibility that I have seen is to sacrifice type safety and use so-called "rule-based programming". Mathematica does this. For example, a function to compute the symbolic derivative of an expression that is an integer literal, variable or addition may be written in Mathematica like this:
D[_Integer, _] := 0
D[x_Symbol, x_] := 1
D[_Symbol, _] := 0
D[f_ + g_, x_] := D[f, x] + D[g, x]
We can retrofit support for multiplication like this:
D[f_ g_, x_] := f D[g, x] + g D[f, x]
and we can add a new function to evaluate an expression like this:
E[n_Integer] := n
E[f_ + g_] := E[f] + E[g]
To me, this is far more elegant than any of the solutions written in languages like OCaml, Haskell and Scala but, of course, it is not type safe.
I'm learning F# (I'm new to functional programming in general, though I've used the functional aspects of C# for years; let's face it, that's pretty different), and one of the things I've read is that the F# compiler identifies tail recursion and compiles it into a while loop (see http://thevalerios.net/matt/2009/01/recursion-in-f-and-the-tail-recursion-police/).
What I don't understand is why you would write a recursive function instead of a while loop if that's what it's going to turn into anyway. Especially considering that you need to do some extra work to make your function recursive.
I have a feeling someone might say that the while loop is not particularly functional and you want to act all functional and whatnot, so you use recursion; but then why is it sufficient for the compiler to turn it into a while loop?
Can someone explain this to me?
You could use the same argument for any transformation that the compiler performs. For instance, when you're using C#, do you ever use lambda expressions or anonymous delegates? If the compiler is just going to turn those into classes and (non-anonymous) delegates, then why not just use those constructions yourself? Likewise, do you ever use iterator blocks? If the compiler is just going to turn those into state machines which explicitly implement IEnumerable<T>, then why not just write that code yourself? Or if the C# compiler is just going to emit IL anyway, why bother writing C# instead of IL in the first place? And so on.
One obvious answer to all of these questions is that we want to write code which allows us to express ourselves clearly. Likewise, there are many algorithms which are naturally recursive, and so writing recursive functions will often lead to a clear expression of those algorithms. In particular, it is arguably easier to reason about the termination of a recursive algorithm than a while loop in many cases (e.g. is there a clear base case, and does each recursive call make the problem "smaller"?).
However, since we're writing code and not mathematics papers, it's also nice to have software which meets certain real-world performance criteria (such as the ability to handle large inputs without overflowing the stack). Therefore, the fact that tail recursion is converted into the equivalent of while loops is critical for being able to use recursive formulations of algorithms.
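As a minimal sketch of why this matters (written in OCaml, but the same idea applies in F#; the function names are just for the example):

(* Naturally recursive: not tail recursive, so a very long list
   can overflow the stack. *)
let rec sum = function
  | [] -> 0
  | x :: rest -> x + sum rest

(* Tail recursive: the recursive call is the last thing the function
   does, so the compiler can reuse the stack frame, i.e. compile it
   to a loop. *)
let sum_tail xs =
  let rec go acc = function
    | [] -> acc
    | x :: rest -> go (acc + x) rest
  in
  go 0 xs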
A recursive function is often the most natural way to work with certain data structures (such as trees and F# lists). If the compiler wants to transform my natural, intuitive code into an awkward while loop for performance reasons that's fine, but why would I want to write that myself?
Also, Brian's answer to a related question is relevant here. Higher-order functions can often replace both loops and recursive functions in your code.
The fact that F# performs tail-call optimization is just an implementation detail that allows you to use tail recursion with the same efficiency (and no fear of a stack overflow) as a while loop. But it is just that, an implementation detail; on the surface your algorithm is still recursive and is structured that way, which for many algorithms is the most logical, functional way to represent them.
The same applies to some of the list handling internals as well in F# - internally mutation is used for a more efficient implementation of list manipulation, but this fact is hidden from the programmer.
What it comes down to is how the language allows you to describe and implement your algorithm, not what mechanics are used under the hood to make it happen.
A while loop is imperative by its nature. Most of the time, when using while loops, you will find yourself writing code like this:
let mutable x = ...
...
while someCond do
    ...
    x <- ...
This pattern is common in imperative languages like C, C++ or C#, but not so common in functional languages.
As the other posters have said, some data structures (more precisely, recursive data structures) lend themselves to recursive processing. Since the most common data structure in functional languages is by far the singly linked list, solving problems with lists and recursive functions is common practice.
Another argument in favor of recursive solutions is the tight relation between recursion and induction. Using a recursive solution allows the programmer to think about the problem inductively, which arguably helps in solving it.
Again, as other posters said, the fact that the compiler optimizes tail-recursive functions (obviously, not all functions can benefit from tail-call optimization) is an implementation detail which lets your recursive algorithm run in constant space.
Why is it that functions in F# and OCaml (and possibly other languages) are not by default recursive?
In other words, why did the language designers decide it was a good idea to explicitly make you type rec in a declaration like:
let rec foo ... = ...
and not give the function recursive capability by default? Why the need for an explicit rec construct?
The French and British descendants of the original ML made different choices, and their choices have been inherited through the decades by the modern variants. So this is just legacy, but it does affect idioms in these languages.
Functions are not recursive by default in the French CAML family of languages (including OCaml). This choice makes it easy to supersede function (and variable) definitions using let in those languages, because you can refer to the previous definition inside the body of a new definition. F# inherited this syntax from OCaml.
For example, superseding the function p when computing the Shannon entropy of a sequence in OCaml:
let shannon fold p =
  let p x = p x *. log(p x) /. log 2.0 in
  let p t x = t +. p x in
  -. fold p 0.0
Note how the argument p to the higher-order shannon function is superseded by another p in the first line of the body, and then by yet another p in the second line of the body.
Conversely, the British SML branch of the ML family of languages took the other choice, and SML's fun-bound functions are recursive by default. When most function definitions do not need access to previous bindings of their function name, this results in simpler code. However, superseded functions must be given different names (f1, f2, etc.), which pollutes the scope and makes it possible to accidentally invoke the wrong "version" of a function. And there is now a discrepancy between implicitly recursive fun-bound functions and non-recursive val-bound functions.
Haskell makes it possible to infer the dependencies between definitions by restricting them to be pure. This makes toy samples look simpler but comes at a grave cost elsewhere.
Note that the answers given by Ganesh and Eddie are red herrings. They explained why groups of functions cannot be placed inside a giant let rec ... and ..., because it affects when type variables get generalized. This has nothing to do with rec being the default in SML but not in OCaml.
One crucial reason for the explicit use of rec is to do with Hindley-Milner type inference, which underlies all statically typed functional programming languages (albeit changed and extended in various ways).
If you have a definition let f x = x, you'd expect it to have type 'a -> 'a and to be applicable on different 'a types at different points. But equally, if you write let g x = (x + 1) + ..., you'd expect x to be treated as an int in the rest of the body of g.
The way that Hindley-Milner inference deals with this distinction is through an explicit generalisation step. At certain points when processing your program, the type system stops and says "ok, the types of these definitions will be generalised at this point, so that when someone uses them, any free type variables in their type will be freshly instantiated, and thus won't interfere with any other uses of this definition."
It turns out that the sensible place to do this generalisation is after checking a mutually recursive set of functions. Any earlier, and you'll generalise too much, leading to situations where types could actually collide. Any later, and you'll generalise too little, making definitions that can't be used with multiple type instantiations.
So, given that the type checker needs to know about which sets of definitions are mutually recursive, what can it do? One possibility is to simply do a dependency analysis on all the definitions in a scope, and reorder them into the smallest possible groups. Haskell actually does this, but in languages like F# (and OCaml and SML) which have unrestricted side-effects, this is a bad idea because it might reorder the side-effects too. So instead it asks the user to explicitly mark which definitions are mutually recursive, and thus by extension where generalisation should occur.
There are two key reasons this is a good idea:
First, if you enable recursive definitions then you can't refer to a previous binding of a value of the same name. This is often a useful idiom when you are doing something like extending an existing module.
Second, recursive values, and especially sets of mutually recursive values, are much harder to reason about than definitions that proceed in order, each new definition building on top of what has already been defined. When reading such code, it is nice to have the guarantee that, except for definitions explicitly marked as recursive, new definitions can only refer to previous definitions.
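A tiny OCaml sketch of the shadowing idiom from the first point (the names are illustrative):

let print_item x = print_endline x
(* Without rec, the right-hand side refers to the previous binding,
   so we can wrap the existing function under the same name. *)
let print_item x = print_item ("item: " ^ x)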
Some guesses:
let is used to bind not only functions but also other ordinary values. Most forms of values are not allowed to be recursive; only certain forms (e.g. functions, lazy expressions, etc.) are, so explicit syntax is needed to indicate this.
It might be easier to optimize non-recursive functions.
The closure created for a recursive function needs to include an entry that points to the function itself (so the function can call itself recursively), which makes recursive closures more complicated than non-recursive ones. So it might be nice to be able to create simpler non-recursive closures when you don't need recursion.
It allows you to define a function in terms of a previously defined function or value of the same name, although I think this is bad practice.
Extra safety? It makes sure that you are doing what you intended: if you don't mean the function to be recursive but accidentally use a name inside it that matches the function's own name, the compiler will most likely complain (unless the name has been defined before).
The let construct is similar to the let construct in Lisp and Scheme, which are non-recursive. There is a separate letrec construct in Scheme for recursive lets.
Given this:
let f x = ... and g y = ...;;
Compare:
let f a = f (g a)
With this:
let rec f a = f (g a)
The former redefines f to apply the previously defined f to the result of applying g to a. The latter redefines f to loop forever applying g to a, which is usually not what you want in ML variants.
That said, it's a language designer style thing. Just go with it.
A big part of it is that it gives the programmer more control over the complexity of their local scopes. The spectrum of let, let* and let rec offers an increasing level of both power and cost. let* and let rec are in essence nested versions of the simple let, so using either one is more expensive. This grading allows you to micromanage the optimization of your program, as you can choose the level of let you need for the task at hand. If you don't need recursion or the ability to refer to previous bindings, you can fall back on a simple let to save a bit of performance.
It's similar to the graded equality predicates in Scheme. (i.e. eq?, eqv? and equal?)
I have been trying to explain the difference between switch statements and pattern matching (F#) to a couple of people, but I haven't really been able to explain it well. Most of the time they just look at me and say "so why don't you just use if..then..else".
How would you explain it to them?
Edit: Thanks everyone for the great answers; I really wish I could mark multiple answers as correct.
Having formerly been one of "those people", I don't know that there's a succinct way to sum up why pattern-matching is such tasty goodness. It's experiential.
Back when I had just glanced at pattern-matching and thought it was a glorified switch statement, I think that I didn't have experience programming with algebraic data types (tuples and discriminated unions) and didn't quite see that pattern matching was both a control construct and a binding construct. Now that I've been programming with F#, I finally "get it". Pattern-matching's coolness is due to a confluence of features found in functional programming languages, and so it's non-trivial for the outsider-looking-in to appreciate.
I tried to sum up one aspect of why pattern-matching is useful in the second of a short two-part blog series on language and API design; check out part one and part two.
Patterns give you a small language to describe the structure of the values you want to match. The structure can be arbitrarily deep and you can bind variables to parts of the structured value.
This allows you to write things extremely succinctly. You can illustrate this with a small example, such as a derivative function for a simple type of mathematical expressions:
type expr =
    | Int of int
    | Var of string
    | Add of expr * expr
    | Mul of expr * expr;;

let rec d(f, x) =
    match f with
    | Var y when x=y -> Int 1
    | Int _ | Var _ -> Int 0
    | Add(f, g) -> Add(d(f, x), d(g, x))
    | Mul(f, g) -> Add(Mul(f, d(g, x)), Mul(g, d(f, x)));;
Additionally, because pattern matching is a static construct over static types, the compiler can (i) verify that you covered all cases, (ii) detect redundant branches that can never match any value, and (iii) provide a very efficient implementation (with jumps, etc.).
Excerpt from this blog article:
Pattern matching has several advantages over switch statements and method dispatch:
Pattern matches can act upon ints, floats, strings and other types as well as objects.
Pattern matches can act upon several different values simultaneously: parallel pattern matching. Method dispatch and switch are limited to a single value, e.g. "this".
Patterns can be nested, allowing dispatch over trees of arbitrary depth. Method dispatch and switch are limited to the non-nested case.
Or-patterns allow subpatterns to be shared. Method dispatch only allows sharing when methods are from classes that happen to share a base class. Otherwise you must manually factor out the commonality into a separate function (giving it a name) and then manually insert calls from all appropriate places to your unnecessary function.
Pattern matching provides redundancy checking which catches errors.
Nested and/or parallel pattern matches are optimized for you by the F# compiler. The OO equivalent must be written by hand and constantly reoptimized by hand during development, which is prohibitively tedious and error-prone, so production-quality OO code tends to be extremely slow in comparison.
Active patterns allow you to inject custom dispatch semantics.
Off the top of my head:
The compiler can tell if you haven't covered all possibilities in your matches
You can use a match as an assignment
If you have a discriminated union, each match can have a different 'type'
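A tiny OCaml sketch of the second and third points (the type and names are illustrative): a match is an expression, so its result can be bound directly, and each case can carry data of a different shape.

type shape =
  | Circle of float          (* radius *)
  | Rect of float * float    (* width, height *)

(* The match is an expression, so we can "assign" its result. *)
let area s =
  match s with
  | Circle r -> 3.14159 *. r *. r
  | Rect (w, h) -> w *. h

let description =
  match Circle 1.0 with
  | Circle _ -> "a circle"
  | Rect _ -> "a rectangle"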
Tuples have "," and Variants have Ctor args .. these are constructors, they create things.
Patterns are destructors, they rip them apart.
They're dual concepts.
To put this more forcefully: the notion of a tuple or variant cannot be described merely by its constructor: the destructor is required or the value you made is useless. It is these dual descriptions which define a value.
Generally we think of constructors as data, and destructors as control flow. Variant destructors are alternate branches (one of many), tuple destructors are parallel threads (all of many).
The parallelism is evident in operations like
(f * g) . (h * k) = (f . h) * (g . k)
If you think of control flowing through a function, tuples provide a way to split a calculation into parallel threads of control.
Looked at this way, expressions are ways to compose tuples and variants to make complicated data structures (think of an AST).
And pattern matches are ways to compose the destructors (again, think of an AST).
Switch is the two front wheels.
Pattern-matching is the entire car.
Pattern matches in OCaml, in addition to being more expressive in the several ways described above, also give some very important static guarantees. The compiler will prove for you that the case analysis embodied by your pattern-match statement is:
exhaustive (no cases are missed)
non-redundant (no cases that can never be hit because they are pre-empted by a previous case)
sound (no patterns that are impossible given the datatype in question)
This is a really big deal. It's helpful when you're writing the program for the first time, and enormously useful when your program is evolving. Used properly, match-statements make it easier to change the types in your code reliably, because the type system points you at the broken match statements, which are a decent indicator of where you have code that needs to be fixed.
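A small OCaml sketch of how this plays out in practice (the type is illustrative):

type color = Red | Green | Blue

let to_string = function
  | Red -> "red"
  | Green -> "green"
  | Blue -> "blue"

(* If a new case, say Alpha of float, is added to color later, the compiler
   warns that this match is no longer exhaustive, pointing straight at the
   code that needs to be updated. *)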
If-Else (or switch) statements are about choosing different ways to process a value (input) depending on properties of the value at hand.
Pattern matching is about defining how to process a value given its structure (note also that single-case pattern matches make sense).
Thus pattern matching is more about deconstructing values than about making choices, which makes it a very convenient mechanism for defining (recursive) functions on inductive structures (recursive union types), and explains why pattern matches are so abundantly used in languages like OCaml.
PS: You might know the pattern-match and if-else "patterns" from their ad hoc use in math:
"if x has property A then y else z" (If-Else)
"some term in p1..pn where .... is the prime decomposition of x.." ((single case) pattern match)
Perhaps you could draw an analogy with strings and regular expressions? You describe what you are looking for, and let the compiler figure out how for itself. It makes your code much simpler and clearer.
As an aside: I find that the most useful thing about pattern matching is that it encourages good habits. I deal with the corner cases first, and it's easy to check that I've covered every case.