I'm starting to doubt I really understand this topic.
Until now, I understood a continuation as a call to a function with a closure (typically one returned by another function). But MLton seems to have a non-standard, dedicated structure for this (a structure I'm not sure I understand), and some other documents mention special optimizations with continuations (using jumps, as briefly mentioned on page 58, printed page 51) instead of plain calls to functions with closures. Also, function closures are sometimes described as the basis for continuations, but not described as being continuations, while at other times people assert the opposite (that function closures are a special case of continuations, not the other way around).
As an example, how do continuations differ from this, and what would the same thing look like with continuations instead of a function with a closure:
datatype next = Next of (unit -> next)
fun f (i:int): next =
  (print (Int.toString i);
   Next (fn () => f (i + 1)))
val Next g = f 1
val Next g = g ()
val Next g = g ()
val Next g = g ()
…
I wonder about this in the general computer-science context as much as in the practical SML context specifically.
Note: the question may look the same as "difference between closures and continuations", but reading that one did not answer my question, and it does not address a practical case as a basis. It did, however, drive me to add another question: why are continuations said to be more abstract than closures, if in the end continuations are made of closures, as the (to my eyes incomplete) answer in the above link suggests?
Is the difference really important or just a matter of style / syntax / vocabulary?
I feel a similar question arises with monads versus continuations, but that would be too much for a single question post (though if, on the contrary, it can be answered simply along the way, feel free…).
Update
Still from MLton's world, here is some wording which seems to suggest continuations and function closures are the same (unless I'm misunderstanding it).
CommonArg (mlton.org), near the bottom of the page, says:
What I think the common argument optimization shows is that the
dominator analysis does slightly better than the reviewer puts it:
we find more than just constant continuations, we find common
continuations. And I think this is further justified by the fact
that I have observed common argument eliminate some env_X arguments
which would appear to correspond to determining that while the
closure being executed isn’t constant it is at least the same as
the closure being passed elsewhere.
It's talking about the same thing using both words, isn't it?
Similarly, and maybe more explicitly, at the bottom of this page: ReturnStatement (mlton.org).
There too, it seems to be the same. Is it?
It seems there is a terminological confusion. 'Continuation' is an abstract concept: it is the meaning of the context of an expression. A closure is a very particular way to realize values that represent functions (higher-order languages can be implemented without closures at all, for example, using substitution semantics).
A control operator can capture the current continuation and produce a particular representation of it (this is called reification). The particular representation of a captured continuation may indeed be a closure -- or it may not be. For example, in OCaml, the continuations captured by the delimcc library are represented as values of an abstract data type (whose realization is quite different from closures). You might find the introduction part of the following page useful.
Undelimited continuations are not functions
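To make the contrast concrete with the question's own example, here is a minimal sketch (in F# syntax for illustration, since it is close to SML; the names step and stepCps are mine) of the closure-returning version next to a continuation-passing version, where the "rest of the computation" is passed in explicitly. Note that the captured continuation here happens to be represented as a closure, which is exactly the representation point made above.

// Closure-returning version (analogous to the SML example): each step
// returns a thunk, and the *caller* decides when to take the next step.
type Next = Next of (unit -> Next)

let rec step (i: int) : Next =
    printfn "%d" i
    Next (fun () -> step (i + 1))

let (Next g1) = step 1
let (Next g2) = g1 ()

// Continuation-passing version: the continuation k is "what happens after
// this computation", handed to the function as an argument; the *callee*
// invokes it when it is done.
let rec stepCps (i: int) (k: int -> unit) : unit =
    printfn "%d" i
    if i < 3 then stepCps (i + 1) k else k i

stepCps 1 (fun last -> printfn "done at %d" last)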
Related
I'm trying to work out how to use a computation builder to represent a deferred, nested set of steps.
I've got the following so far:
type Entry =
    | Leaf of string * (unit -> unit)
    | Node of string * Entry list * (unit -> unit)

type StepBuilder(desc:string) =
    member this.Zero() = Leaf(desc, id)
    member this.Bind(v:string, f:unit->string) =
        Node(f(), [Leaf(v, id)], id)
    member this.Bind(v:Entry, f:unit->Entry) =
        match f() with
        | Node(label, children, a) -> Node(label, v :: children, a)
        | Leaf(label, a) -> Node(label, [v], a)
let step desc = StepBuilder(desc)
let a = step "a" {
do! step "b" {
do! step "c" {
do! step "c.1" {
// todo: this still evals as it goes; need to find a way to defer
// the inner contents...
printfn "TEST"
}
}
}
do! step "d" {
printfn "d"
}
}
This produces the desired structure:
A(B(C(c.1)), D)
My issue is that the printfn calls are made while the structure is being built up.
Ideally what I want is to be able to retrieve the tree structure, but be able to call some returned function/s that will then execute the inner blocks.
I realise this means that if you have two nested steps with some "normal" code between them, it would need to be able to read the step declarations, and then invoke over it (if that makes sense?).
I know that Delay and Run are things that are used in deferred execution for computation expressions, but I'm not sure if they help me here, as they unfortunately evaluate for everything.
I'm most likely missing something glaringly obvious and very "functional-y" but I just can't seem to make it do what I want.
Clarification
I'm using id here for demonstration; those functions are part of the puzzle, and they are how I imagine I might surface the "invokable" parts of my expression.
As mentioned in the other answer, free monads provide a useful theoretical framework for thinking about this kind of problem - however, I think you do not necessarily need them to get an answer to the specific question you are asking here.
First, I had to add Return to your computation builder to make your code compile. As you never return anything, I just added an overload taking unit which is equivalent to Zero:
member this.Return( () ) = this.Zero()
Now, to answer your question - I think you need to modify your discriminated union to allow delaying of computations that produce Entry - you do have functions unit -> unit in the domain model, but that's not quite enough to delay a computation that will produce a new entry. So, I think you need to extend the type:
type Entry =
    | Leaf of string * (unit -> unit)
    | Node of string * Entry list * (unit -> unit)
    | Delayed of (unit -> Entry)
When you are evaluating Entry, you will now need to handle the Delayed case - which contains a function that might perform side-effects such as printing "TEST".
Now you can add Delay to your computation builder and also implement the missing case for Delayed in Bind like this:
member this.Delay(f) = Delayed(f)
member this.Bind(v:Entry, f:unit->Entry) = Delayed(fun () ->
    let rec loop = function
        | Delayed f -> loop (f())
        | Node(label, children, a) -> Node(label, v :: children, a)
        | Leaf(label, a) -> Node(label, [v], a)
    loop (f()) )
Essentially, Bind will create a new delayed computation that, when forced, evaluates the rest of the computation f() until it finds a node or a leaf (collapsing any intermediate delayed nodes) and then does the same thing as what your code did before.
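To actually force the structure later, a small evaluator can walk the tree and collapse Delayed nodes on demand. This is just a rough sketch of what that could look like (the run function below is my own addition, not something the builder requires):

// Walk the tree, forcing Delayed thunks as they are encountered;
// any side-effects captured in those thunks run only at this point.
let rec run entry =
    match entry with
    | Delayed f -> run (f())
    | Leaf(label, _) -> printfn "leaf: %s" label
    | Node(label, children, _) ->
        printfn "node: %s" label
        children |> List.iter run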
I think this answers your question - but I'd be a bit careful here. I think computation expressions are useful as syntactic sugar, but they are very harmful if you think about them more than you think about the domain of the problem that you are actually solving - in the question, you did not say much about your actual problem. If you did, the answer might be very different.
You wrote:
Ideally what I want is to be able to retrieve the tree structure, but be able to call some returned function/s that will then execute the inner blocks.
This is an almost perfect description of the "free monad", which is basically the functional-programming equivalent of the OOP "interpreter pattern". The basic idea behind the free monad is that you convert imperative-style code into a two-step process. The first step builds up an AST, and the second step executes the AST. That way you can do things in between step 1 and step 2, like analyze the tree structure without executing the code. Then when you're ready, you can run your "execute" function, which takes the AST as input and actually does the steps it represents.
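To give a feel for that two-step shape, here is a very small sketch. It is not a real free monad, just the build-an-AST-then-interpret idea, and names like Say, Stop, Program and interpret are mine for this illustration only:

// Step 1: a tiny command AST. Each Say carries the rest of the program,
// so a whole program is an ordinary value we can inspect before running it.
type Program =
    | Say of string * Program   // "there are more commands after this one"
    | Stop                      // "this is the end of the command list"

let program = Say ("a", Say ("b", Stop))   // nothing is printed yet

// Step 2: an interpreter that walks the AST; only now do the effects run.
let rec interpret prog =
    match prog with
    | Say (msg, rest) ->
        printfn "%s" msg
        interpret rest
    | Stop -> ()

interpret program   // prints "a" then "b"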
I'm not experienced enough with free monads to be able to write a complete tutorial on them, nor to directly answer your question with a step-by-step specific free-monad solution. But I can point you to a few resources that may help you understand the concepts behind them. First, the required Scott Wlaschin link:
https://fsharpforfunandprofit.com/posts/13-ways-of-looking-at-a-turtle-2/#way13
This is the last part of his "13 ways of looking at a turtle" series, where he builds a small LOGO-like turtle-graphics app using many different design styles. In #13, he uses the free-monad style, building it from the ground up so you can see the design decisions that go into that style.
Second, a set of links to Mark Seemann's blog. For the past month or two, Mark Seemann has been writing posts about the free-monad style, though I didn't realize that that's what he was writing about until he was several articles in. There's a terminology difference that may confuse you at first: Scott Wlaschin uses the terms "Stop" and "KeepGoing" for the two possible AST cases ("this is the end of the command list" vs. "there are more commands after this one"). But the traditional names for those two free-monad cases are "Pure" and "Free". IMHO, the names "Pure" and "Free" are too abstract, and I like Scott Wlaschin's "Stop" and "KeepGoing" names better. But I mention this so that when you see "Pure" and "Free" in Mark Seemann's posts, you'll know that it's the same concept as Scott Wlaschin's turtle example.
Okay, with that explanation finished, here are the links to Mark Seemann's posts:
http://blog.ploeh.dk/2017/06/27/pure-times/
http://blog.ploeh.dk/2017/06/28/pure-times-in-haskell/
http://blog.ploeh.dk/2017/07/04/pure-times-in-f/
http://blog.ploeh.dk/2017/07/10/pure-interactions/
http://blog.ploeh.dk/2017/07/11/hello-pure-command-line-interaction/
http://blog.ploeh.dk/2017/07/17/a-pure-command-line-wizard/
http://blog.ploeh.dk/2017/07/24/combining-free-monads-in-haskell/
http://blog.ploeh.dk/2017/07/31/combining-free-monads-in-f/
http://blog.ploeh.dk/2017/08/07/f-free-monad-recipe/
Mark intersperses Haskell examples with F# examples, as you can tell from the URLs. If you are completely unfamiliar with Haskell, you can probably skip those posts as they might confuse you more than they help. But if you've got a passing familiarity with Haskell syntax, seeing the same ideas expressed in both Haskell and F# may help you grasp the concepts better, so I've included the Haskell posts as well as the F# posts.
As I said, I'm not quite familiar enough with free monads to be able to give you a specific answer to your question. But hopefully these links will give you some background knowledge that can help you implement what you're looking for.
I'm learning Elixir and wonder why it has two types of function definitions:
functions defined in a module with def, called using myfunction(param1, param2)
anonymous functions defined with fn, called using myfn.(param1, param2)
Only the second kind of function seems to be a first-class object and can be passed as a parameter to other functions. A function defined in a module needs to be wrapped in a fn. There's some syntactic sugar which looks like otherfunction(&myfunction(&1, &2)) in order to make that easy, but why is it necessary in the first place? Why can't we just do otherfunction(myfunction)? Is it only to allow calling module functions without parentheses like in Ruby? It seems to have inherited this characteristic from Erlang, which also has module functions and funs, so does it actually come from how the Erlang VM works internally?
Is there any benefit to having two types of functions and converting from one type to the other in order to pass them to other functions? Is there a benefit to having two different notations for calling functions?
Just to clarify the naming, they are both functions. One is a named function and the other is an anonymous one. But you are right, they work somewhat differently and I am going to illustrate why they work like that.
Let's start with the second, fn. fn is a closure, similar to a lambda in Ruby. We can create it as follows:
x = 1
fun = fn y -> x + y end
fun.(2) #=> 3
A function can have multiple clauses too:
x = 1
fun = fn
  y when y < 0 -> x - y
  y -> x + y
end
fun.(2)  #=> 3
fun.(-2) #=> 3
Now, let's try something different. Let's try to define different clauses expecting a different number of arguments:
fn
  x, y -> x + y
  x -> x
end
** (SyntaxError) cannot mix clauses with different arities in function definition
Oh no! We get an error! We cannot mix clauses that expect a different number of arguments. A function always has a fixed arity.
Now, let's talk about the named functions:
def hello(x, y) do
  x + y
end
As expected, they have a name and they can also receive some arguments. However, they are not closures:
x = 1
def hello(y) do
  x + y
end
This code will fail to compile because every time you see a def, you get an empty variable scope. That is an important difference between them. I particularly like the fact that each named function starts with a clean slate and you don't get the variables of different scopes all mixed up together. You have a clear boundary.
We could retrieve the named hello function above as an anonymous function. You mentioned it yourself:
other_function(&hello(&1))
And then you asked: why can't I simply pass it as hello, as in other languages? That's because functions in Elixir are identified by name and arity. So a function that expects two arguments is a different function from one that expects three, even if they have the same name. So if we simply passed hello, we would have no idea which hello you actually meant. The one with two, three or four arguments? This is exactly the same reason why we can't create an anonymous function with clauses of different arities.
Since Elixir v0.10.1, we have a syntax to capture named functions:
&hello/1
That will capture the local named function hello with arity 1. Throughout the language and its documentation, it is very common to identify functions in this hello/1 syntax.
This is also why Elixir uses a dot for calling anonymous functions. Since you can't simply pass hello around as a function but instead need to explicitly capture it, there is a natural distinction between named and anonymous functions, and a distinct syntax for calling each makes everything a bit more explicit (Lispers would be familiar with this due to the Lisp 1 vs. Lisp 2 discussion).
Overall, those are the reasons why we have two kinds of functions and why they behave differently.
I don't know how useful this will be to anyone else, but the way I finally wrapped my head around the concept was to realize that elixir functions aren't Functions.
Everything in elixir is an expression. So
MyModule.my_function(foo)
is not a function but the expression returned by executing the code in my_function. There is actually only one way to get a "Function" that you can pass around as an argument and that is to use the anonymous function notation.
It is tempting to refer to the fn or & notation as a function pointer, but it is actually much more. It's a closure of the surrounding environment.
If you ask yourself:
Do I need an execution environment or a data value in this spot?
And if you need execution, use fn; then most of the difficulties become much clearer.
I may be wrong since nobody mentioned it, but I was also under the impression that the reason for this is also the ruby heritage of being able to call functions without brackets.
Arity is obviously involved, but let's put it aside for a while and use functions without arguments. In a language like javascript where brackets are mandatory, it is easy to tell the difference between passing a function as an argument and calling the function. You call it only when you use the brackets.
my_function // argument
(function() {}) // argument
my_function() // function is called
(function() {})() // function is called
As you can see, naming it or not does not make a big difference. But elixir and ruby allow you to call functions without the brackets. This is a design choice which I personally like, but it has the side effect that you cannot use just the name without the brackets, because it could mean you want to call the function. This is what the & is for. If you leave arity apart for a second, prepending your function name with & means that you explicitly want to use this function as an argument, not what this function returns.
Now the anonymous function is a bit different in that it is mainly used as an argument. Again this is a design choice, but the rationale behind it is that it is mainly used by iterator-like functions which take functions as arguments. So obviously you don't need to use & because they are already considered arguments by default. It is their purpose.
Now the last problem is that sometimes you have to call them in your code, because they are not always used with an iterator-like function, or you might be coding an iterator yourself. For the little story, since ruby is object oriented, the main way to do it was to use the call method on the object. That way, you could keep the non-mandatory brackets behaviour consistent.
my_lambda.call
my_lambda.call()
my_lambda_with_arguments.call :h2g2, 42
my_lambda_with_arguments.call(:h2g2, 42)
Now somebody came up with a shortcut which basically looks like a method with no name.
my_lambda.()
my_lambda_with_arguments.(:h2g2, 42)
Again, this is a design choice. Now elixir is not object oriented and therefore cannot use the first form for sure. I can't speak for José, but it looks like the second form was used in elixir because it still looks like a function call with an extra character. It's close enough to a function call.
I did not think about all the pros and cons, but it looks like in both languages you could get away with just the brackets as long as you make brackets mandatory for anonymous functions. It seems like it comes down to:
Mandatory brackets VS Slightly different notation
In both cases you make an exception because you make both behave differently. Since there is a difference, you might as well make it obvious and go for the different notation. The mandatory brackets would look natural in most cases but very confusing when things don't go as planned.
Here you go. Now this might not be the best explanation in the world because I simplified most of the details. Also, most of these are design choices and I tried to give a reason for them without judging them. I love elixir, I love ruby, I like the function calls without brackets, but like you, I find the consequences quite misleading once in a while.
And in elixir, it is just this extra dot, whereas in ruby you have blocks on top of this. Blocks are amazing and I am surprised how much you can do with just blocks, but they only work when you need just one anonymous function which is the last argument. Then since you should be able to deal with other scenarios, here comes the whole method/lambda/proc/block confusion.
Anyway... this is out of scope.
I've never understood why explanations of this are so complicated.
It's really just an exceptionally small distinction combined with the realities of Ruby-style "function execution without parens".
Compare:
def fun1(x, y) do
  x + y
end
To:
fun2 = fn
  x, y -> x + y
end
While both of these are just identifiers...
fun1 is an identifier that describes a named function defined with def.
fun2 is an identifier that describes a variable (that happens to contain a reference to function).
Consider what that means when you see fun1 or fun2 in some other expression. When evaluating that expression, do you call the referenced function or do you just reference a value out of memory?
There's no good way to know at compile time. Ruby has the luxury of introspecting the variable namespace to find out if a variable binding has shadowed a function at some point in time. Elixir, being compiled, can't really do this. That's what the dot notation does: it tells Elixir that the identifier should contain a function reference and that it should be called.
And this is really hard. Imagine that there wasn't a dot notation. Consider this code:
val = 5
if :rand.uniform < 0.5 do
  val = fn -> 5 end
end
IO.puts val     # Does this work?
IO.puts val.()  # Or maybe this?
Given the above code, I think it's pretty clear why you have to give Elixir the hint. Imagine if every variable de-reference had to check for a function? Alternatively, imagine what heroics would be necessary to always infer that variable dereference was using a function?
There's an excellent blog post about this behavior: link
Two types of functions
If a module contains this:
fac(0) -> 1;
fac(N) when N > 0 -> N * fac(N-1).
You can’t just cut and paste this into the shell and get the same
result.
It’s because there is a bug in Erlang. Modules in Erlang are sequences
of FORMS. The Erlang shell evaluates a sequence of
EXPRESSIONS. In Erlang FORMS are not EXPRESSIONS.
double(X) -> 2*X. in an Erlang module is a FORM
Double = fun(X) -> 2*X end. in the shell is an EXPRESSION
The two are not the same. This bit of silliness has been Erlang
forever but we didn’t notice it and we learned to live with it.
Dot in calling fn
iex> f = fn(x) -> 2 * x end
#Function<erl_eval.6.17052888>
iex> f.(10)
20
In school I learned to call functions by writing f(10) not f.(10) -
this is “really” a function with a name like Shell.f(10) (it’s a
function defined in the shell) The shell part is implicit so it should
just be called f(10).
If you leave it like this expect to spend the next twenty years of
your life explaining why.
Elixir has optional parentheses for function calls, including calls to functions of arity 0. Let's see an example of why this makes a separate calling syntax important:
defmodule Insanity do
  def dive(), do: fn() -> 1 end
end
Insanity.dive
# #Function<0.16121902/0 in Insanity.dive/0>
Insanity.dive()
# #Function<0.16121902/0 in Insanity.dive/0>
Insanity.dive.()
# 1
Insanity.dive().()
# 1
Without making a difference between 2 types of functions, we can't say what Insanity.dive means: getting a function itself, calling it, or also calling the resulting anonymous function.
fn -> syntax is for using anonymous functions. Doing var.() is just telling elixir that I want you to take that var with a func in it and run it instead of referring to the var as something just holding that function.
Elixir has this common pattern where, instead of having logic inside of a function to decide how something should execute, we pattern match across different function clauses based on what kind of input we have. I assume this is why we refer to things by arity in the function_name/1 sense.
It's kind of weird to get used to doing shorthand function definitions (func(&1), etc), but handy when you're trying to pipe or keep your code concise.
In elixir we use def simply to define a function, like we do in other languages.
fn creates an anonymous function; refer to this for more clarification.
Only the second kind of function seems to be a first-class object and can be passed as a parameter to other functions. A function defined in a module needs to be wrapped in a fn. There's some syntactic sugar which looks like otherfunction(&myfunction(&1, &2)) in order to make that easy, but why is it necessary in the first place? Why can't we just do otherfunction(myfunction)?
You can do otherfunction(&myfunction/2)
Since elixir can execute functions without the brackets (like myfunction), using otherfunction(myfunction) would try to execute myfunction/0.
So, you need to use the capture operator and specify the function, including arity, since you can have different functions with the same name. Thus, &myfunction/2.
I was reading this and now wonder: what is the evaluation order in F#?
Obviously ; makes effects happen in a sequential fashion. But what about things like function calls or applications, order of evaluation for operators, and the like.
I've glanced at the F# spec, but there is no mention of that. Thanks for any insight!
I found some emails where we fixed the implementation to have a rigid application order. The code
open System

let f a =
    Console.WriteLine "app1";
    fun b ->
        Console.WriteLine "app2";
        ()

(Console.WriteLine "f"; f) (Console.WriteLine "arg1") (Console.WriteLine "arg2")
will print "f", "arg1", "arg2", "app1", "app2". However this didn't make it into the spec. I'll file a spec bug.
(Some other portions of the spec are already more explicit, e.g.
6.9.6 Evaluating Method Applications
For elaborated applications of methods, the elaborated form of the expression will be either expr.M(args) or M(args).
The (optional) expr and args are evaluated in left-to-right order and the body of the member is evaluated in an environment with formal parameters that are mapped to corresponding argument values.
If expr evaluates to null then NullReferenceException is raised.
If the method is a virtual dispatch slot (that is, a method that is declared abstract) then the body of the member is chosen according to the dispatch maps of the value of expr.
That said, some experts believe that you will live a longer, happier life if you do not rely on evaluation order. :) )
(Possibly see also
http://blogs.msdn.com/ericlippert/archive/2009/11/19/always-write-a-spec-part-one.aspx
http://blogs.msdn.com/ericlippert/archive/2009/11/23/always-write-a-spec-part-two.aspx
for more on how easy it is to screw things up with evaluation order.)
I have been trying to explain the difference between switch statements and pattern matching (F#) to a couple of people, but I haven't really been able to explain it well. Most of the time they just look at me and say "so why don't you just use if..then..else".
How would you explain it to them?
EDIT! Thanks everyone for the great answers, I really wish I could mark multiple right answers.
Having formerly been one of "those people", I don't know that there's a succinct way to sum up why pattern-matching is such tasty goodness. It's experiential.
Back when I had just glanced at pattern-matching and thought it was a glorified switch statement, I think that I didn't have experience programming with algebraic data types (tuples and discriminated unions) and didn't quite see that pattern matching was both a control construct and a binding construct. Now that I've been programming with F#, I finally "get it". Pattern-matching's coolness is due to a confluence of features found in functional programming languages, and so it's non-trivial for the outsider-looking-in to appreciate.
I tried to sum up one aspect of why pattern-matching is useful in the second of a short two-part blog series on language and API design; check out part one and part two.
Patterns give you a small language to describe the structure of the values you want to match. The structure can be arbitrarily deep and you can bind variables to parts of the structured value.
This allows you to write things extremely succinctly. You can illustrate this with a small example, such as a derivative function for a simple type of mathematical expressions:
type expr =
    | Int of int
    | Var of string
    | Add of expr * expr
    | Mul of expr * expr;;

let rec d(f, x) =
    match f with
    | Var y when x=y -> Int 1
    | Int _ | Var _ -> Int 0
    | Add(f, g) -> Add(d(f, x), d(g, x))
    | Mul(f, g) -> Add(Mul(f, d(g, x)), Mul(g, d(f, x)));;
Additionally, because pattern matching is a static construct for static types, the compiler can (i) verify that you covered all cases (ii) detect redundant branches that can never match any value (iii) provide a very efficient implementation (with jumps etc.).
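For example (a small sketch of my own, not from the original answer), if the Mul case were dropped from the derivative function above, the F# compiler would flag the match as incomplete at compile time:

// Hypothetical incomplete variant of d: the Mul case has been left out.
// The compiler warns "Incomplete pattern matches on this expression",
// suggesting a value such as Mul (_, _) that is not covered.
let rec dIncomplete (f, x) =
    match f with
    | Var y when x = y -> Int 1
    | Int _ | Var _ -> Int 0
    | Add(f, g) -> Add(dIncomplete(f, x), dIncomplete(g, x))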
Excerpt from this blog article:
Pattern matching has several advantages over switch statements and method dispatch:
Pattern matches can act upon ints, floats, strings and other types as well as objects.
Pattern matches can act upon several different values simultaneously: parallel pattern matching. Method dispatch and switch are limited to a single value, e.g. "this".
Patterns can be nested, allowing dispatch over trees of arbitrary depth. Method dispatch and switch are limited to the non-nested case.
Or-patterns allow subpatterns to be shared. Method dispatch only allows sharing when methods are from classes that happen to share a base class. Otherwise you must manually factor out the commonality into a separate function (giving it a name) and then manually insert calls from all appropriate places to your unnecessary function.
Pattern matching provides redundancy checking which catches errors.
Nested and/or parallel pattern matches are optimized for you by the F# compiler. The OO equivalent must be written by hand and constantly reoptimized by hand during development, which is prohibitively tedious and error prone, so production-quality OO code tends to be extremely slow in comparison.
Active patterns allow you to inject custom dispatch semantics.
Off the top of my head:
The compiler can tell if you haven't covered all possibilities in your matches
You can use a match as an assignment
If you have a discriminated union, each match can have a different 'type'
Tuples have "," and Variants have Ctor args .. these are constructors, they create things.
Patterns are destructors, they rip them apart.
They're dual concepts.
To put this more forcefully: the notion of a tuple or variant cannot be described merely by its constructor: the destructor is required or the value you made is useless. It is these dual descriptions which define a value.
Generally we think of constructors as data, and destructors as control flow. Variant destructors are alternate branches (one of many), tuple destructors are parallel threads (all of many).
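To make the duality concrete, here is a small F# sketch (my own illustration; the Shape type and names are made up for it): the constructors build a tuple and a variant, and the corresponding patterns, the destructors, take them apart again, the variant as one-of-many branches and the tuple as all-of-many components.

// Constructors create values.
type Shape =
    | Circle of float
    | Rect of float * float

let shape = Circle 2.0        // variant constructor: one alternative
let pair = (1, "one")         // tuple constructor: several components at once

// Destructors (patterns) rip them apart again.
let area s =
    match s with              // variant destructor: control picks one branch
    | Circle r -> System.Math.PI * r * r
    | Rect (w, h) -> w * h

let (number, name) = pair     // tuple destructor: both components bound at once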
The parallelism is evident in operations like
(f * g) . (h * k) = (f . h * g . k)
If you think of control flowing through a function, tuples provide a way to split up a calculation into parallel threads of control.
Looked at this way, expressions are ways to compose tuples and variants to make complicated data structures (think of an AST).
And pattern matches are ways to compose the destructors (again, think of an AST).
Switch is the two front wheels.
Pattern-matching is the entire car.
Pattern matches in OCaml, in addition to being more expressive as mentioned in several ways that have been described above, also give some very important static guarantees. The compiler will prove for you that the case-analysis embodied by your pattern-match statement is:
exhaustive (no cases are missed)
non-redundant (no cases that can never be hit because they are pre-empted by a previous case)
sound (no patterns that are impossible given the datatype in question)
This is a really big deal. It's helpful when you're writing the program for the first time, and enormously useful when your program is evolving. Used properly, match-statements make it easier to change the types in your code reliably, because the type system points you at the broken match statements, which are a decent indicator of where you have code that needs to be fixed.
If-Else (or switch) statements are about choosing different ways to process a value (input) depending on properties of the value at hand.
Pattern matching is about defining how to process a value given its structure (also note that single-case pattern matches make sense).
Thus pattern matching is more about deconstructing values than making choices, this makes them a very convenient mechanism for defining (recursive) functions on inductive structures (recursive union types), which explains why they are so abundantly used in languages like Ocaml etc.
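As a tiny F# illustration of that point (mine, not the answerer's; the Point type is made up for it), a single-case match makes no choice at all, it only deconstructs:

// A single-case union: matching on it decides nothing,
// it only exposes the structure of the value.
type Point = Point of x: int * y: int

let norm p =
    match p with
    | Point (x, y) -> sqrt (float (x * x + y * y))

// The same pattern can sit directly in a let binding.
let (Point (px, py)) = Point (3, 4)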
PS: You might know the pattern-match and If-Else "patterns" from their ad-hoc use in math;
"if x has property A then y else z" (If-Else)
"some term in p1..pn where .... is the prime decomposition of x.." ((single case) pattern match)
Perhaps you could draw an analogy with strings and regular expressions? You describe what you are looking for, and let the compiler figure out how for itself. It makes your code much simpler and clearer.
As an aside: I find that the most useful thing about pattern matching is that it encourages good habits. I deal with the corner cases first, and it's easy to check that I've covered every case.
I've read through some of the posts on here about closures and currying but I feel like I didn't find the answer. So what are the differences, and possibly the similarities, of closures and currying? Thanks for the help :)
Currying is really a mathematical concept first and foremost. It's just the observation that for any n-ary function f: S1 × ... × Sn → R, you can define a new function f′ with n−1 parameters in which the first parameter is replaced by a constant. So, if you have a function add(a,b), you can define a new function add1(b) as
add1(b) ::= add(1, b)
...reading "::=" as "is defined to be."
A closure is more of a programming concept. (Of course, everything in programming is a mathematical concept as well, but closures became interesting because of programming.) When you construct a closure, you bind one or more variables; you're creating a chunk of code that has some variables tied to it.
The relationship is that you can use a closure in order to implement currying: you could build your add1 function above by making a closure in which that first parameter is bound to 1.
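As a quick sketch of that relationship (in F#, my own illustration of the point above), the closure returned by the partially-applied function is exactly what carries the bound first argument around:

// A two-argument function.
let add a b = a + b

// "Currying by hand": returning a closure that has captured the value 1.
let add1 = fun b -> add 1 b

// In F# every function is already curried, so partial application
// yields the same kind of closure directly.
let add1' = add 1

printfn "%d %d" (add1 2) (add1' 2)  // 3 3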