I have been trying to explain the difference between switch statements and pattern matching (F#) to a couple of people, but I haven't really been able to explain it well. Most of the time they just look at me and say, "so why don't you just use if..then..else?"
How would you explain it to them?
EDIT! Thanks everyone for the great answers. I really wish I could mark multiple right answers.
Having formerly been one of "those people", I don't know that there's a succinct way to sum up why pattern-matching is such tasty goodness. It's experiential.
Back when I had just glanced at pattern-matching and thought it was a glorified switch statement, I think that I didn't have experience programming with algebraic data types (tuples and discriminated unions) and didn't quite see that pattern matching was both a control construct and a binding construct. Now that I've been programming with F#, I finally "get it". Pattern-matching's coolness is due to a confluence of features found in functional programming languages, and so it's non-trivial for the outsider-looking-in to appreciate.
I tried to sum up one aspect of why pattern-matching is useful in the second of a short two-part blog series on language and API design; check out part one and part two.
Patterns give you a small language to describe the structure of the values you want to match. The structure can be arbitrarily deep and you can bind variables to parts of the structured value.
This allows you to write things extremely succinctly. You can illustrate this with a small example, such as a derivative function for a simple type of mathematical expressions:
type expr =
    | Int of int
    | Var of string
    | Add of expr * expr
    | Mul of expr * expr;;

let rec d(f, x) =
    match f with
    | Var y when x = y -> Int 1
    | Int _ | Var _ -> Int 0
    | Add(f, g) -> Add(d(f, x), d(g, x))
    | Mul(f, g) -> Add(Mul(f, d(g, x)), Mul(g, d(f, x)));;
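To make the succinctness concrete, here is a hypothetical F# Interactive usage of the function above (the expression f is just made up for illustration):

// derivative of x*x + 3 with respect to "x"
let f = Add(Mul(Var "x", Var "x"), Int 3);;
let df = d(f, "x");;
// df = Add(Add(Mul(Var "x", Int 1), Mul(Var "x", Int 1)), Int 0)
// i.e. x*1 + x*1 + 0, the expected 2x before simplification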
Additionally, because pattern matching is a static construct for static types, the compiler can (i) verify that you covered all cases, (ii) detect redundant branches that can never match any value, and (iii) provide a very efficient implementation (with jumps etc.).
Excerpt from this blog article:
Pattern matching has several advantages over switch statements and method dispatch:
Pattern matches can act upon ints, floats, strings and other types as well as objects.
Pattern matches can act upon several different values simultaneously: parallel pattern matching. Method dispatch and switch are limited to a single value, e.g. "this".
Patterns can be nested, allowing dispatch over trees of arbitrary depth. Method dispatch and switch are limited to the non-nested case.
Or-patterns allow subpatterns to be shared. Method dispatch only allows sharing when methods are from classes that happen to share a base class. Otherwise you must manually factor out the commonality into a separate function (giving it a name) and then manually insert calls from all appropriate places to your unnecessary function.
Pattern matching provides redundancy checking which catches errors.
Nested and/or parallel pattern matches are optimized for you by the F# compiler. The OO equivalent must be written by hand and constantly reoptimized by hand during development, which is prohibitively tedious and error prone, so production-quality OO code tends to be extremely slow in comparison.
Active patterns allow you to inject custom dispatch semantics.
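To make a couple of those points concrete, here is a small F# sketch (the function and values are made up) showing a parallel match over two values at once, with nesting, a guard and an or-pattern sharing a branch:

let classify a b =
    match a, b with
    | Some 0, _ | _, Some 0 -> "one of them is Some 0"        // or-pattern: one shared branch
    | Some x, Some y when x = y -> "both present and equal"   // guard over both values
    | Some x, None -> sprintf "only the first: %d" x          // nested patterns
    | None, Some y -> sprintf "only the second: %d" y
    | _ -> "something else"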
Off the top of my head:
The compiler can tell if you haven't covered all possibilities in your matches
You can use a match as an assignment
If you have a discriminated union, each match can have a different 'type'
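A quick sketch of the second and third points (the Shape type here is just an illustration): a match is an expression, so its result can be bound directly, and each union case can carry different data:

type Shape =
    | Circle of float
    | Rect of float * float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r   // this case carries one float
    | Rect (w, h) -> w * h                 // this one carries two

let a = area (Circle 1.0)   // the match result flows straight into a binding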
Tuples have "," and variants have constructor arguments: these are constructors; they create things.
Patterns are destructors, they rip them apart.
They're dual concepts.
To put this more forcefully: the notion of a tuple or variant cannot be described merely by its constructor: the destructor is required or the value you made is useless. It is these dual descriptions which define a value.
Generally we think of constructors as data, and destructors as control flow. Variant destructors are alternate branches (one of many), tuple destructors are parallel threads (all of many).
The parallelism is evident in operations like
(f * g) . (h * k) = (f . h * g . k)
If you think of control flowing through a function, tuples provide a way to split up a calculation into parallel threads of control.
Looked at this way, expressions are ways to compose tuples and variants to make complicated data structures (think of an AST).
And pattern matches are ways to compose the destructors (again, think of an AST).
Switch is the two front wheels.
Pattern-matching is the entire car.
Pattern matches in OCaml, in addition to being more expressive in the several ways described above, also give some very important static guarantees. The compiler will prove for you that the case analysis embodied by your pattern-match statement is:
exhaustive (no cases are missed)
non-redundant (no cases that can never be hit because they are pre-empted by a previous case)
sound (no patterns that are impossible given the datatype in question)
This is a really big deal. It's helpful when you're writing the program for the first time, and enormously useful when your program is evolving. Used properly, match-statements make it easier to change the types in your code reliably, because the type system points you at the broken match statements, which are a decent indicator of where you have code that needs to be fixed.
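As a small illustration of those checks in F# (the Color type is made up; the warning text is roughly what the compiler reports):

type Color = Red | Green | Blue

// Leaving out the Blue case:
let name c =
    match c with
    | Red -> "red"
    | Green -> "green"
// warning FS0025: Incomplete pattern matches on this expression.
// For example, the value 'Blue' may indicate a case not covered by the pattern(s).

// Adding a case that can never be reached:
let name2 c =
    match c with
    | Red -> "red"
    | Green -> "green"
    | Blue -> "blue"
    | _ -> "impossible"
// warning FS0026: This rule will never be matched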
If-Else (or switch) statements are about choosing different ways to process a value (input) depending on properties of the value at hand.
Pattern matching is about defining how to process a value given its structure (note also that single-case pattern matches make sense).
Thus pattern matching is more about deconstructing values than making choices. This makes it a very convenient mechanism for defining (recursive) functions on inductive structures (recursive union types), which explains why it is used so abundantly in languages like OCaml.
PS: You might know the pattern-match and If-Else "patterns" from their ad-hoc use in math:
"if x has property A then y else z" (If-Else)
"some term in p1..pn where .... is the prime decomposition of x.." ((single case) pattern match)
Perhaps you could draw an analogy with strings and regular expressions? You describe what you are looking for and let the compiler figure out the "how" for itself. It makes your code much simpler and clearer.
As an aside: I find that the most useful thing about pattern matching is that it encourages good habits. I deal with the corner cases first, and it's easy to check that I've covered every case.
Take the example of writing a parser-unparser pair for a DU.
The unparser function of course uses a match expression, where we can count on the static exhaustiveness check if new cases are added later.
Now, for the parsing side, the obvious route (with e.g. FParsec) is to use a choice parser over a list of the DU case parsers. This works, but has no exhaustiveness checking.
My best solution for this is to create a DU_Case -> parser function that uses a match expression, use runtime reflection to get all DU cases, call the function with each of them, and pass the resulting list to the choice combinator. This works and does have static exhaustiveness checking, BUT it gets pretty clunky fast, especially if the cases have varying fields.
In other words, is there a good pattern other than using reflection for ensuring exhaustiveness when the discriminated union is the output and not the input?
Edit: example on the reflection based solution
By getting clunky, I mean that for every case that has fields, I also have to create the field values dynamically, instead of the simple Array.zeroCreate 0 which is of course always valid. But what goes in the else branch? There needs to be code to dynamically create the correct number of field values, with their correct (possibly complex) constructors and so on. Not impossible with reflection, but clunky.
open Microsoft.FSharp.Reflection

FSharpType.GetUnionCases duType
|> Seq.map (fun duCase ->
    if duCase.GetFields().Length = 0 then
        // field-less case: an empty argument array always works
        let case = FSharpValue.MakeUnion(duCase, Array.zeroCreate 0)
        (*...*)
    else
        (*...*))
When the clauses of a function or match expression aren't mutually exclusive, the order clearly matters. However, when the clauses are mutually exclusive, they can be written in any order. E.g., to find the minimum element in a list, the following are functionally equivalent:
let rec minElt =
    function
    | [] -> failwith "empty list"
    | [x0] -> x0
    | x0::xtl -> min x0 (minElt xtl)
let rec minElt =
    function
    | [x0] -> x0
    | x0::xtl -> min x0 (minElt xtl)
    | [] -> failwith "empty list"
I prefer the first one stylistically, because the patterns are listed in increasing order of size / the base cases come first. However, is there any advantage to the second one? In particular, is the second one more efficient because the exceptional case will never be checked in the course of normal evaluation?
I don't think there is any established idiomatic style. I would focus on making the code readable and understandable first - I think this depends on personal preferences, but I guess you could write:
Special cases first (anything that needs special handling, or handles special but valid values)
Most common cases next (the typical path, such as x::xs for lists)
Exceptional cases (anything that means invalid input)
I guess this is how I generally tend to write pattern matching (because this is the order in which I think about the possible cases).
I would not worry too much about the performance. I tested your function just out of curiosity. I called it 1000 times on lists of length 1 to 100 (so that's 100,000 iterations); the first one took about 895ms and the second one about 878ms, so the difference is roughly 2%. That does not sound like something that would matter more than readability (and this was in F# Interactive, so the difference might be even smaller).
I typically put the finishing case of a recursive function first. This is usually either the 0 case or the 1 case. Next I put the case which recurses (assuming there is only one), followed by any exceptional cases.
The reasoning behind this is that it's how I understand inductive reasoning: i.e. I understand the base case (how the recursion ends), followed by how that base case is used to prove the correctness of the algorithm (how the recursion reaches the end). I put exceptional cases last because they don't add to my understanding of the logic behind the recursion. Recursion is more difficult to understand if you start in the middle of the argument than if you start at the end (and since recursion doesn't have a well-defined beginning point, you can't very well start there).
This reasoning leans heavily on the mathematical foundations of functional programming, so if that isn't how you approach functional programming, then maybe some other ordering would make more sense to you.
Reading about F# today and I'm not clear on one thing:
From: http://msdn.microsoft.com/en-us/library/dd233200.aspx
If you need only one element of the tuple, the wildcard character (the underscore) can be used to avoid creating a new name for a variable that you do not need:
let (a, _) = (1, 2)
I can't think of a time that I've been in this situation. Why would you avoid creating a variable name?
Because you don't need the value. I use this often. It documents the fact that a value is unused and saves naming variables unused, dummy, etc. Great feature if you ask me.
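For example (a small sketch using the standard Int32.TryParse, which F# exposes as returning a bool * int tuple):

let (ok, _) = System.Int32.TryParse "42"     // only care whether parsing succeeded
let (_, value) = System.Int32.TryParse "42"  // only care about the parsed value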
Interesting question. There are many trade-offs involved here.
Your comparisons have been with the Ruby programming language so perhaps the first trade-off you should consider is static typing. If you use the pattern x, _, _ then F# knows you are referring to the first element of a triple of exactly three elements and will enforce this constraint at compile time. Ruby cannot. F# also checks patterns for exhaustiveness and redundancy. Again, Ruby cannot.
Your comparisons have also used only flat patterns. Consider the patterns _, (x, _) or x, None | _, Some x or [] | [_] and so on. These are not so easily translated.
Finally, I'd mention that Standard ML is a programming language related to F#, and it does provide operators called #1 etc. to extract the first element of a tuple with an arbitrary number of elements (see here), so this idea was implemented and discarded decades ago. I believe this is because SML's #n notation leads to incomprehensible error messages within the constraints of the type system. For example, a function that uses #n does not make clear what the arity of the tuple is, but functions cannot be generic over tuple arity, so this must result in an error message saying that you must give more type information, which many users found confusing. With the CAML/OCaml/F# approach there is no such confusion.
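For reference, a tiny sketch showing that the nested and or-patterns mentioned above are ordinary F# (the function names are made up):

let innerFirst (_, (x, _)) = x          // nested tuple pattern

let pick pair =
    match pair with
    | x, None | _, Some x -> x          // or-pattern binding x in both branches

let smallList xs =
    match xs with
    | [] | [_] -> true                  // empty or a single element
    | _ -> false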
The let-binding you've given is an example of a language facility called pattern matching, which can be used to destructure many types, not just tuples. In pattern matches, underscores are the idiomatic way to express that you won't refer to a value.
Directly accessing the elements of a tuple can be more concise, but it's less general. Pattern matching allows you to look at the structure of some data and dispatch to an appropriate handling case.
match x with
| (x, _, 20) -> x
| (_, y, _) -> y
This pattern match will return the first item in x only if the third element is 20. Otherwise it returns the second element. Once you get beyond trivial cases, the underscores are an important readability aid. Compare the above with:
match x with
| (x, y, 20) -> x
| (x, y, z) -> y
In the first code sample, it's much easier to tell which bindings you care about in the pattern.
Sometimes a method will return multiple values but the code you're writing is only interested in a select few (or one) of them. You can use multiple underscores to essentially ignore the values you don't need, rather than having a bunch of variables hanging around in local scope.
I know that in some languages (Haskell?) the aim is to achieve point-free style, that is, to never explicitly refer to function arguments by name. This is a very difficult concept for me to master, but it might help me to understand what the advantages (or maybe even disadvantages) of that style are. Can anyone explain?
The point-free style is considered by some authors as the ultimate functional programming style. To put things simply, a function of type t1 -> t2 describes a transformation from one element of type t1 into another element of type t2. The idea is that "pointful" functions (written using variables) emphasize elements (when you write \x -> ... x ..., you're describing what's happening to the element x), while "point-free" functions (expressed without using variables) emphasize the transformation itself, as a composition of simpler transforms. Advocates of the point-free style argue that transformations should indeed be the central concept, and that the pointful notation, while easy to use, distracts us from this noble ideal.
Point-free functional programming has been available for a very long time. It was already known to the logicians who have studied combinatory logic since Moses Schönfinkel's seminal work in 1924, and it was the basis for the first study of what would become ML type inference, by Robert Feys and Haskell Curry in the 1950s.
The idea to build functions from an expressive set of basic combinators is very appealing and has been applied in various domains, such as the array-manipulation languages derived from APL, or the parser combinator libraries such as Haskell's Parsec. A notable advocate of point-free programming is John Backus. In his 1978 speech "Can Programming Be Liberated From the Von Neumann Style?", he wrote:
The lambda expression (with its substitution rules) is capable of defining all possible computable functions of all possible types and of any number of arguments. This freedom and power has its disadvantages as well as its obvious advantages. It is analogous to the power of unrestricted control statements in conventional languages: with unrestricted freedom comes chaos. If one constantly invents new combining forms to suit the occasion, as one can in the lambda calculus, one will not become familiar with the style or useful properties of the few combining forms that are adequate for all purposes. Just as structured programming eschews many control statements to obtain programs with simpler structure, better properties, and uniform methods for understanding their behavior, so functional programming eschews the lambda expression, substitution, and multiple function types. It thereby achieves programs built with familiar functional forms with known useful properties. These programs are so structured that their behavior can often be understood and proven by mechanical use of algebraic techniques similar to those used in solving high school algebra problems.
So here it is. The main advantage of point-free programming is that it forces a structured combinator style which makes equational reasoning natural. Equational reasoning has been particularly advertised by the proponents of the "Squiggol" movement (see [1] [2]), which indeed uses a fair share of point-free combinators and computation/rewriting/reasoning rules.
[1] "An introduction to the Bird-Merteens Formalism", Jeremy Gibbons, 1994
[2] "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire", Erik Meijer, Maarten Fokkinga and Ross Paterson, 1991
Finally, one cause for the popularity of point-free programming among Haskellites is its relation to category theory. In category theory, morphisms (which could be seen as "transformations between objects") are the basic object of study and computation. While partial results allow reasoning in specific categories to be performed in a pointful style, the common way to build, examine and manipulate arrows is still the point-free style, and other syntaxes such as string diagrams also exhibit this "pointfreeness". There are rather tight links between the people advocating "algebra of programming" methods and users of categories in programming (for example the authors of the banana paper [2] are/were hardcore categorists).
You may be interested in the Pointfree page of the Haskell wiki.
The downside of pointfree style is rather obvious: it can be a real pain to read. The reason why we still love to use variables, despite the numerous horrors of shadowing, alpha-equivalence etc., is that it's a notation that's just so natural to read and think about. The general idea is that a complex function (in a referentially transparent language) is like a complex plumbing system: the inputs are the parameters, they get into some pipes, are applied to inner functions, duplicated (\x -> (x,x)) or forgotten (\x -> (), pipe leading nowhere), etc. And the variable notation is nicely implicit about all that machinery: you give a name to the input and names to the outputs (or auxiliary computations), but you don't have to describe the whole plumbing plan, where the small pipes must run so as not to get in the way of the bigger ones, and so on. The amount of plumbing inside something as short as \(f,x,y) -> ((x,y), f x y) is amazing. You may follow each variable individually, or read each intermediate plumbing node, but you never have to see the whole machinery together. When you use a point-free style, all the plumbing is explicit; you have to write everything down and look at it afterwards, and sometimes it's just plain ugly.
PS: this plumbing vision is closely related to the stack programming languages, which are probably the least pointful programming languages (barely) in use. I would recommend trying to do some programming in them just to get a feeling for it (as I would recommend logic programming). See Factor, Cat or the venerable Forth.
I believe the purpose is to be succinct and to express pipelined computations as a composition of functions rather than thinking of threading arguments through. Simple example (in F#) - given:
let sum = List.sum
let sqr = List.map (fun x -> x * x)
Used like:
> sum [3;4;5]
12
> sqr [3;4;5]
[9;16;25]
We could express a "sum of squares" function as:
let sumsqr x = sum (sqr x)
And use like:
> sumsqr [3;4;5]
50
Or we could define it by piping x through:
let sumsqr x = x |> sqr |> sum
Written this way, it's obvious that x is being passed in only to be "threaded" through a sequence of functions. Direct composition looks much nicer:
let sumsqr = sqr >> sum
This is more concise and it's a different way of thinking of what we're doing; composing functions rather than imagining the process of arguments flowing through. We're not describing how sumsqr works. We're describing what it is.
PS: An interesting way to get your head around composition is to try programming in a concatenative language such as Forth, Joy, Factor, etc. These can be thought of as being nothing but composition (Forth : sumsqr sqr sum ;) in which the space between words is the composition operator.
PPS: Perhaps others could comment on the performance differences. It seems to me that composition may reduce GC pressure by making it more obvious to the compiler that there is no need to produce intermediate values as in pipelining; helping make the so-called "deforestation" problem more tractable.
While I'm attracted to the point-free concept and have used it for some things, and agree with all the positives said before, I found the following things about it negative (some are detailed above):
The shorter notation reduces redundancy, but in a heavily structured composition (ramda.js style, or point-free in Haskell, or whatever concatenative language) reading the code is more complex than linearly scanning through a bunch of const bindings and using a symbol highlighter to see which binding goes into which other downstream calculation. Besides the tree-versus-linear structure, the loss of descriptive symbol names makes the function hard to grasp intuitively. Of course both the tree structure and the loss of named bindings also have plenty of positives: for example, functions feel more general, not bound to some application domain via the chosen symbol names, and the tree structure is semantically present even when the bindings are laid out and can be comprehended sequentially (Lisp let/let* style).
Point-free is simplest when just piping through or composing a series of functions, as this also results in a linear structure that we humans find easy to follow. However, threading some interim calculation through to multiple recipients is tedious. All kinds of wrapping into tuples, lensing and other painstaking mechanisms go into just making some calculation accessible that would otherwise simply be multiple uses of a value binding. Of course the repeated part can be extracted out as a separate function, and maybe that's a good idea anyway, but there are also arguments for keeping some functions non-short; and even if it is extracted, its arguments will have to be somehow threaded through both applications, and then there may be a need to memoize the function so the calculation isn't actually repeated. One ends up using a lot of converge, lens, memoize, useWith etc.
JavaScript-specific: harder to casually debug. With a linear flow of let bindings, it's easy to add a breakpoint wherever you like. With the point-free style, even if a breakpoint is somehow added, the value flow is hard to read; e.g. you can't just query or hover over some variable in the dev console. Also, since point-free is not native in JS, library functions from ramda.js or similar will obscure the stack quite a bit, especially with the obligatory currying.
Code brittleness, especially in nontrivially sized systems and in production. If a new piece of requirement comes in, the above disadvantages come into play (e.g. it's harder to read the code for the next maintainer, who may be yourself a few weeks down the line, and also harder to trace the dataflow for inspection). But most importantly, even a seemingly small and innocent new requirement can necessitate a completely different structuring of the code. It may be argued that this is a good thing, in that the result will be a crystal-clear representation of the new requirement, but rewriting large swaths of point-free code is very time consuming, and we haven't even mentioned testing. So it feels that the looser, less structured, lexical-assignment-based coding can be repurposed more quickly. Especially when the coding is exploratory, and in the domain of human data with weird conventions (time etc.) that can rarely be captured 100% accurately and where there may always be an upcoming request to handle something more accurately or closer to the customer's needs, whichever method allows faster pivoting matters a lot.
Regarding the point-free variant, the concatenative programming languages, I have to add:
I had a little experience with Joy. Joy is a very simple and beautiful concept built around lists. When converting a problem into a Joy function, you have to split your brain into one part for the stack plumbing work and another for the solution in the Joy syntax. The stack is always handled from the back. Since composition is built into Joy itself, there is no computing cost for a composition combinator.
Why is it that functions in F# and OCaml (and possibly other languages) are not by default recursive?
In other words, why did the language designers decide it was a good idea to explicitly make you type rec in a declaration like:
let rec foo ... = ...
and not give the function recursive capability by default? Why the need for an explicit rec construct?
The French and British descendants of the original ML made different choices and their choices have been inherited through the decades to the modern variants. So this is just legacy but it does affect idioms in these languages.
Functions are not recursive by default in the French CAML family of languages (including OCaml). This choice makes it easy to supersede function (and variable) definitions using let in those languages, because you can refer to the previous definition inside the body of a new definition. F# inherited this syntax from OCaml.
For example, superseding the function p when computing the Shannon entropy of a sequence in OCaml:
let shannon fold p =
  let p x = p x *. log(p x) /. log 2.0 in
  let p t x = t +. p x in
  -. fold p 0.0
Note how the argument p to the higher-order shannon function is superseded by another p in the first line of the body, and then by yet another p in the second line of the body.
Conversely, the British SML branch of the ML family of languages took the other choice, and SML's fun-bound functions are recursive by default. When most function definitions do not need access to previous bindings of their function name, this results in simpler code. However, functions that would otherwise be superseded have to be given different names (f1, f2 etc.), which pollutes the scope and makes it possible to accidentally invoke the wrong "version" of a function. And there is now a discrepancy between implicitly-recursive fun-bound functions and non-recursive val-bound functions.
Haskell makes it possible to infer the dependencies between definitions by restricting them to be pure. This makes toy samples look simpler but comes at a grave cost elsewhere.
Note that the answers given by Ganesh and Eddie are red herrings. They explain why groups of functions cannot be placed inside a giant let rec ... and ..., because it affects when type variables get generalized. This has nothing to do with recursion being the default in SML but not in OCaml.
One crucial reason for the explicit use of rec is to do with Hindley-Milner type inference, which underlies all statically typed functional programming languages (albeit changed and extended in various ways).
If you have a definition let f x = x, you'd expect it to have type 'a -> 'a and to be applicable on different 'a types at different points. But equally, if you write let g x = (x + 1) + ..., you'd expect x to be treated as an int in the rest of the body of g.
The way that Hindley-Milner inference deals with this distinction is through an explicit generalisation step. At certain points when processing your program, the type system stops and says "ok, the types of these definitions will be generalised at this point, so that when someone uses them, any free type variables in their type will be freshly instantiated, and thus won't interfere with any other uses of this definition."
It turns out that the sensible place to do this generalisation is after checking a mutually recursive set of functions. Any earlier, and you'll generalise too much, leading to situations where types could actually collide. Any later, and you'll generalise too little, making definitions that can't be used with multiple type instantiations.
So, given that the type checker needs to know about which sets of definitions are mutually recursive, what can it do? One possibility is to simply do a dependency analysis on all the definitions in a scope, and reorder them into the smallest possible groups. Haskell actually does this, but in languages like F# (and OCaml and SML) which have unrestricted side-effects, this is a bad idea because it might reorder the side-effects too. So instead it asks the user to explicitly mark which definitions are mutually recursive, and thus by extension where generalisation should occur.
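In F#, that explicit marking is the rec keyword, with and for mutually recursive groups; a minimal sketch (the isEven/isOdd pair is just the usual toy example):

let rec isEven n = if n = 0 then true  else isOdd  (n - 1)
and     isOdd  n = if n = 0 then false else isEven (n - 1)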
There are two key reasons this is a good idea:
First, if you enable recursive definitions then you can't refer to a previous binding of a value of the same name. This is often a useful idiom when you are doing something like extending an existing module.
Second, recursive values, and especially sets of mutually recursive values, are much harder to reason about than are definitions that proceed in order, each new definition building on top of what has already been defined. It is nice when reading such code to have the guarantee that, except for definitions explicitly marked as recursive, new definitions can only refer to previous definitions.
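A small F# sketch of that ordering guarantee (the names are made up): with non-recursive let, each shadowing binding can only refer to what came before it, so the definitions read strictly top to bottom:

let pipeline input =
    let xs = List.sort input
    let xs = List.filter (fun x -> x > 0) xs   // refers to the previous xs, not itself
    let xs = List.map (fun x -> x * 2) xs      // and again
    xs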
Some guesses:
let is not only used to bind functions, but also other regular values. Most forms of values are not allowed to be recursive. Certain forms of recursive values are allowed (e.g. functions, lazy expressions, etc.), so it needs an explicit syntax to indicate this.
It might be easier to optimize non-recursive functions
The closure created when you create a recursive function needs to include an entry that points to the function itself (so the function can recursively call itself), which makes recursive closures more complicated than non-recursive closures. So it might be nice to be able to create simpler non-recursive closures when you don't need recursion
It allows you to define a function in terms of a previously-defined function or value of the same name; although I think this is bad practice
Extra safety? Makes sure that you are doing what you intended. e.g. If you don't intend it to be recursive but you accidentally used a name inside the function with the same name as the function itself, it will most likely complain (unless the name has been defined before)
The let construct is similar to the let construct in Lisp and Scheme, which are non-recursive; Scheme has a separate letrec construct for recursive lets.
Given this:
let f x = ... and g y = ...;;
Compare:
let f a = f (g a)
With this:
let rec f a = f (g a)
The former redefines f to apply the previously defined f to the result of applying g to a. The latter redefines f to loop forever applying g to a, which is usually not what you want in ML variants.
That said, it's a language designer style thing. Just go with it.
A big part of it is that it gives the programmer more control over the complexity of their local scopes. The spectrum of let, let* and let rec offers an increasing level of both power and cost. let* and let rec are in essence nested versions of the simple let, so using either one is more expensive. This grading allows you to micromanage the optimization of your program, as you can choose which level of let you need for the task at hand. If you don't need recursion or the ability to refer to previous bindings, then you can fall back on a simple let to save a bit of performance.
It's similar to the graded equality predicates in Scheme. (i.e. eq?, eqv? and equal?)