I could not find an object choice in the standard libraries, that allows me to write
let safeDiv (numer : Choice<Exception, int>) (denom : Choice<Exception, int>) =
    choice {
        let! n = numer
        let! d = denom
        return! if d = 0
                then Choice1Of2 (new DivideByZeroException())
                else Choice2Of2 (n / d)
    }
like in Haskell. Did I miss anything, or is there a third-party library for writing this kind of thing, or do I have to re-invent this wheel?
There is no built-in computation expression for the Choice<'a,'b> type. In general, F# does not have built-in computation expressions for the commonly used monads, but it does offer a fairly simple way to create them yourself: Computation Builders. This series is a good tutorial on how to implement them yourself. The F# library often has a bind function defined that can be used as the basis of a Computation Builder, but it doesn't have one for the Choice type (I suspect because there are so many variations of Choice).
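For illustration, a minimal sketch of such a builder, following the question's convention that Choice1Of2 carries the failure and Choice2Of2 the result:

type ChoiceBuilder() =
    member this.Bind(m : Choice<'e, 'a>, f : 'a -> Choice<'e, 'b>) : Choice<'e, 'b> =
        match m with
        | Choice1Of2 e -> Choice1Of2 e // propagate the failure
        | Choice2Of2 x -> f x          // continue with the value
    member this.Return(x) = Choice2Of2 x
    member this.ReturnFrom(m : Choice<'e, 'a>) = m

let choice = ChoiceBuilder()

With these members in place, the safeDiv example above should compile as written.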
Based on the example you provided, I suspect the F# Result<'a, 'error> type would actually be a better fit for your scenario. There's a code-review from a few months ago where a user posted an Either Computation Builder, and the accepted answer has a fairly complete implementation if you'd like to leverage it.
It is worth noting that, unlike in Haskell, using exceptions is a perfectly acceptable way to handle exceptional situations in F#. The language and the runtime both have first-class support for exceptions, and there is nothing wrong with using them.
I understand that your safeDiv function is for illustration rather than a real-world problem, so there is no need to show how to write it using exceptions.
In more realistic scenarios:
If the exception happens only when something actually goes wrong (network failure, etc.) then I would just let the system throw an exception and handle that using try ... with at the point where you need to restart the work or notify the user.
If the exception represents something expected (e.g. invalid user input) then you'll probably get more readable code if you define a custom data type to represent the wrong states (rather than using Choice<'a, exn> which has no semantic meaning).
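As a hypothetical sketch of that second point (names invented for illustration), a domain-specific error type paired with the Result type mentioned above might look like:

type AgeError =
    | EmptyInput
    | NotANumber of string

let parseAge (s : string) : Result<int, AgeError> =
    match System.Int32.TryParse s with
    | true, n -> Ok n
    | _ when s = "" -> Error EmptyInput
    | _ -> Error (NotANumber s)

A match on AgeError then documents exactly which wrong states exist, which Choice<'a, exn> cannot express.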
It is also worth noting that computation expressions are only useful if you need to mix your special behaviour (exception propagation) with ordinary computation. I think it's often desirable to avoid that as much as possible (because it interleaves effects with pure computations).
For example, if you were doing input validation, you could define something like:
let result = validateAll [ condition1; condition2; condition3 ]
I would prefer that over a computation expression:
let result = validate {
    do! condition1
    do! condition2
    do! condition3 }
That said, if you are absolutely certain that a custom computation builder for error propagation is what you need, then Aaron's answer has all the information you need.
The language suggestion states that the advantages are detailed in the linked paper. I had a quick skim through and I can't see it spelled out explicitly.
Is the advantage just that each statement gets executed in parallel so I might get a speed boost?
Or is there some kind of logic it caters for, that is not convenient using the usual monadic let!?
I understand that, this being applicative, it comes with the limitation that I can't use earlier expressions to determine the logic of subsequent expressions. So does that mean the trade-off is flexibility for efficiency?
The way I understood Don Syme's description when I read it a while ago was that every step in the let! ... and! ... chain will be executed, unlike when you use let! ... let! .... Say, for instance, that you use the option CE. Then if you write
option {
    let! a = parseInt("a")
    let! b = parseInt("b")
    return a + b
}
only the first let! will be executed when parseInt("a") returns None, as the CE short-circuits as soon as it meets a None. Writing let! a = ... and! b = ... instead will try to parse both strings, though not necessarily in parallel, as I understand it.
I see a huge benefit in this, when parsing data from external sources. Consider for instance running
parse {
    let! name = tryParse<Name>("John Doe")
    and! email = tryParse<EMail>("johndoe#")
    and! age = tryParse<uint>("abc")
    return (name, email, age)
}
and getting an Error(["'johndoe#' isn't a valid e-mail"; "'abc' isn't a valid uint"]) in return, instead of only the first of the errors. That's pretty neat to me.
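For reference, F# 5 wires let! ... and! to two builder methods: MergeSources (which pairs the sources) and BindReturn. A minimal sketch of an applicative builder for option, enough to run a small example:

type OptionApplicativeBuilder() =
    // MergeSources : option<'a> * option<'b> -> option<'a * 'b>
    member _.MergeSources(a : option<'a>, b : option<'b>) =
        match a, b with
        | Some a, Some b -> Some (a, b)
        | _ -> None
    // BindReturn : option<'a> * ('a -> 'b) -> option<'b>
    member _.BindReturn(x, f) = Option.map f x

let optionAp = OptionApplicativeBuilder()

let sum =
    optionAp {
        let! a = Some 1
        and! b = Some 2
        return a + b
    } // evaluates to Some 3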
I am excited about this for two reasons: potentially better performance, and the fact that some things which can't fit into the Monad pattern can fit into the Applicative Functor pattern.
In order to support the already existing let!, one needs to implement Bind and Return, i.e. the Monad pattern.
let! ... and! requires Apply and Pure (I'm not exactly sure what the F# equivalents are named), i.e. the Applicative Functor pattern.
Applicative Functors are less powerful than Monads but still very useful, for example in parsers.
As mentioned by Philip Carter, Applicative Functors can be made more efficient than Monads because it's possible to cache them.
The reason is the signature of Bind: M<'T> -> ('T -> M<'U>) -> M<'U>.
One typical way to implement Bind is to "execute" the first argument to produce a value of 'T, pass that value to the second argument, and then execute the resulting next step M<'U> as well.
As the second argument is a function that can return any M<'U> it means caching is not possible.
With Apply the signature is: M<'T -> 'U> -> M<'T> -> M<'U>. As neither of the two inputs is a bare function, you can potentially cache them or do precomputation.
This is useful for parsers; in FParsec, for example, using Bind is not recommended, for performance reasons.
In addition, a "drawback" of Bind is that if the first input doesn't produce a value of 'T, there is no way to get the second computation M<'U>. If you are building some kind of validation monad, this means that as soon as a valid value can't be produced because the input is invalid, the validation stops and you won't get a report on the remaining input.
With Applicative Functor you can execute all validators to produce a validation report for the entire document.
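To make that concrete, here is a small sketch (all names illustrative, not from any particular library) of a validation type where Apply can merge error reports while Bind must stop at the first failure:

type Validation<'a> =
    | Valid of 'a
    | Invalid of string list

// Apply runs both sides, so two failures merge into one report.
let apply vf vx =
    match vf, vx with
    | Valid f, Valid x -> Valid (f x)
    | Invalid e1, Invalid e2 -> Invalid (e1 @ e2)
    | Invalid e, _ | _, Invalid e -> Invalid e

// Bind has no choice: without a value there is nothing to pass to f,
// so the first failure necessarily ends the computation.
let bind vx f =
    match vx with
    | Valid x -> f x
    | Invalid e -> Invalid e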
So I think it's pretty cool!
I'm trying to work out how to use a computation builder to represent a deferred, nested set of steps.
I've got the following so far:
type Entry =
    | Leaf of string * (unit -> unit)
    | Node of string * Entry list * (unit -> unit)

type StepBuilder(desc:string) =
    member this.Zero() = Leaf(desc, id)
    member this.Bind(v:string, f:unit->string) =
        Node(f(), [Leaf(v, id)], id)
    member this.Bind(v:Entry, f:unit->Entry) =
        match f() with
        | Node(label, children, a) -> Node(label, v :: children, a)
        | Leaf(label, a) -> Node(label, [v], a)
let step desc = StepBuilder(desc)
let a = step "a" {
    do! step "b" {
        do! step "c" {
            do! step "c.1" {
                // todo: this still evals as it goes; need to find a way to defer
                // the inner contents...
                printfn "TEST"
            }
        }
    }
    do! step "d" {
        printfn "d"
    }
}
This produces the desired structure:
A(B(C(c.1)), D)
My issue is that the printfn calls are made while the structure is being built up.
Ideally what I want is to be able to retrieve the tree structure, but be able to call some returned function/s that will then execute the inner blocks.
I realise this means that if you have two nested steps with some "normal" code between them, it would need to be able to read the step declarations, and then invoke over it (if that makes sense?).
I know that Delay and Run are things that are used in deferred execution for computation expressions, but I'm not sure if they help me here, as they unfortunately evaluate for everything.
I'm most likely missing something glaringly obvious and very "functional-y" but I just can't seem to make it do what I want.
Clarification
I'm using id for demonstration; the functions are part of the puzzle, and represent how I imagine I might surface the "invokable" parts of my expression.
As mentioned in the other answer, free monads provide a useful theoretical framework for thinking about this kind of problem; however, I think you do not necessarily need them to answer the specific question you are asking here.
First, I had to add Return to your computation builder to make your code compile. As you never return anything, I just added an overload taking unit which is equivalent to Zero:
member this.Return( () ) = this.Zero()
Now, to answer your question - I think you need to modify your discriminated union to allow delaying of computations that produce Entry - you do have functions unit -> unit in the domain model, but that's not quite enough to delay a computation that will produce a new entry. So, I think you need to extend the type:
type Entry =
    | Leaf of string * (unit -> unit)
    | Node of string * Entry list * (unit -> unit)
    | Delayed of (unit -> Entry)
When you are evaluating Entry, you will now need to handle the Delayed case, which contains a function that might perform a side-effect such as printing "TEST".
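For example, a hypothetical evaluator (the label printing is purely for demonstration) might force Delayed nodes as it walks the tree:

let rec evaluate entry =
    match entry with
    | Delayed f -> evaluate (f ()) // run the deferred block, then continue
    | Leaf (label, action) ->
        printfn "leaf: %s" label
        action ()
    | Node (label, children, action) ->
        printfn "node: %s" label
        children |> List.iter evaluate
        action ()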
Now you can add Delay to your computation builder and also implement the missing case for Delayed in Bind like this:
member this.Delay(f) = Delayed(f)
member this.Bind(v:Entry, f:unit->Entry) = Delayed(fun () ->
    let rec loop = function
        | Delayed f -> loop (f())
        | Node(label, children, a) -> Node(label, v :: children, a)
        | Leaf(label, a) -> Node(label, [v], a)
    loop (f()) )
Essentially, Bind will create a new delayed computation that, when called, evaluates the entry v until it finds a node or a leaf (collapsing all other delayed nodes) and then does the same thing as what your code did before.
I think this answers your question - but I'd be a bit careful here. I think computation expressions are useful as a syntactic sugar, but they are very harmful if you think about them more than you think about the domain of the problem that you are actually solving - in the question, you did not say much about your actual problem. If you did, the answer might be very different.
You wrote:
Ideally what I want is to be able to retrieve the tree structure, but be able to call some returned function/s that will then execute the inner blocks.
This is an almost perfect description of the "free monad", which is basically the functional-programming equivalent of the OOP "interpreter pattern". The basic idea behind the free monad is that you convert imperative-style code into a two-step process. The first step builds up an AST, and the second step executes the AST. That way you can do things in between step 1 and step 2, like analyze the tree structure without executing the code. Then when you're ready, you can run your "execute" function, which takes the AST as input and actually does the steps it represents.
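To make that two-step split concrete, here is a minimal sketch using the "Stop"/"KeepGoing" naming discussed below; the names and the command set are purely illustrative, not a full free-monad implementation:

type Command =
    | Print of string

type Program =
    | Stop
    | KeepGoing of Command * (unit -> Program)

// Step 1: build the AST; nothing executes here.
let program =
    KeepGoing (Print "hello", fun () ->
        KeepGoing (Print "world", fun () -> Stop))

// Step 2: interpret the AST, actually performing the effects.
let rec run p =
    match p with
    | Stop -> ()
    | KeepGoing (Print s, next) ->
        printfn "%s" s
        run (next ())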
I'm not experienced enough with free monads to be able to write a complete tutorial on them, nor to directly answer your question with a step-by-step specific free-monad solution. But I can point you to a few resources that may help you understand the concepts behind them. First, the required Scott Wlaschin link:
https://fsharpforfunandprofit.com/posts/13-ways-of-looking-at-a-turtle-2/#way13
This is the last part of his "13 ways of looking at a turtle" series, where he builds a small LOGO-like turtle-graphics app using many different design styles. In #13, he uses the free-monad style, building it from the ground up so you can see the design decisions that go into that style.
Second, a set of links to Mark Seemann's blog. For the past month or two, Mark Seemann has been writing posts about the free-monad style, though I didn't realize that that's what he was writing about until he was several articles in. There's a terminology difference that may confuse you at first: Scott Wlaschin uses the terms "Stop" and "KeepGoing" for the two possible AST cases ("this is the end of the command list" vs. "there are more commands after this one"). But the traditional names for those two free-monad cases are "Pure" and "Free". IMHO, the names "Pure" and "Free" are too abstract, and I like Scott Wlaschin's "Stop" and "KeepGoing" names better. But I mention this so that when you see "Pure" and "Free" in Mark Seemann's posts, you'll know that it's the same concept as Scott Wlaschin's turtle example.
Okay, with that explanation finished, here are the links to Mark Seemann's posts:
http://blog.ploeh.dk/2017/06/27/pure-times/
http://blog.ploeh.dk/2017/06/28/pure-times-in-haskell/
http://blog.ploeh.dk/2017/07/04/pure-times-in-f/
http://blog.ploeh.dk/2017/07/10/pure-interactions/
http://blog.ploeh.dk/2017/07/11/hello-pure-command-line-interaction/
http://blog.ploeh.dk/2017/07/17/a-pure-command-line-wizard/
http://blog.ploeh.dk/2017/07/24/combining-free-monads-in-haskell/
http://blog.ploeh.dk/2017/07/31/combining-free-monads-in-f/
http://blog.ploeh.dk/2017/08/07/f-free-monad-recipe/
Mark intersperses Haskell examples with F# examples, as you can tell from the URLs. If you are completely unfamiliar with Haskell, you can probably skip those posts as they might confuse you more than they help. But if you've got a passing familiarity with Haskell syntax, seeing the same ideas expressed in both Haskell and F# may help you grasp the concepts better, so I've included the Haskell posts as well as the F# posts.
As I said, I'm not quite familiar enough with free monads to be able to give you a specific answer to your question. But hopefully these links will give you some background knowledge that can help you implement what you're looking for.
I've been watching an interesting video in which type classes in Haskell are used to solve the so-called "expression problem". About 15 minutes in, it shows how type classes can be used to "open up" a datatype based on a discriminated union for extension -- additional discriminators can be added separately without modifying / rebuilding the original definition.
I know type classes aren't available in F#, but is there a way using other language features to achieve this kind of extensibility? If not, how close can we come to solving the expression problem in F#?
Clarification: I'm assuming the problem is defined as described in the previous video
in the series -- extensibility of the datatype and operations on the datatype with the features of code-level modularization and separate compilation (extensions can be deployed as separate modules without needing to modify or recompile the original code) as well as static type safety.
As Jörg pointed out in a comment, it depends on what you mean by "solve". If you mean a solution including some form of type-checking that you're not missing an implementation of some function for some case, then F# doesn't give you any elegant way (and I'm not sure the Haskell solution is elegant). You may be able to encode it using the SML solution mentioned by kvb, or maybe using one of the OO-based solutions.
In reality, if I was developing a real-world system that needs to solve the problem, I would choose a solution that doesn't give you full checking, but is much easier to use.
A sketch would be to use obj as the representation of a type and use reflection to locate functions that provide implementation for individual cases. I would probably mark all parts using some attribute to make checking easier. A module adding application to an expression might look like this:
[<Extends("Expr")>] // Specifies that this type should be treated as a case of 'Expr'
type App = App of obj * obj

module AppModule =
    [<Implements("format")>] // Specifies that this extends the function 'format'
    let format (App(e1, e2)) =
        // We don't make recursive calls directly, but instead use the `invoke` function
        // and some representation of the function named `formatFunc`. Alternatively,
        // you could support 'e1?format' using dynamic invoke.
        sprintf "(%s %s)" (invoke formatFunc e1) (invoke formatFunc e2)
This does not give you any type-checking, but it gives you a fairly elegant solution that is easy to use and not that difficult to implement (using reflection). Checking that you're not missing a case is not done at compile-time, but you can easily write unit tests for that.
See Vesa Karvonen's comment here for one SML solution (albeit cumbersome), which can easily be translated to F#.
I know type classes aren't available in F#, but is there a way using other language features to achieve this kind of extensibility?
I do not believe so, no.
If not, how close can we come to solving the expression problem in F#?
The expression problem is about allowing the user to augment your library code with both new functions and new types without having to recompile your library. In F#, union types make it easy to add new functions (but impossible to add new union cases to an existing union type) and class types make it easy to derive new class types (but impossible to add new methods to an existing class hierarchy). These are the two forms of extensibility required in practice. The ability to extend in both directions simultaneously without sacrificing static type safety is just an academic curiosity, IME.
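A small sketch of those two axes, with illustrative types:

// Union type: a new *function* over existing cases needs no changes to Shape...
type Shape =
    | Circle of float
    | Square of float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Square s -> s * s
// ...but a new case (say Triangle) means editing Shape and every match on it.

// Class hierarchy: a new *type* needs no changes to existing code...
[<AbstractClass>]
type ShapeClass() =
    abstract member Area : float

type CircleClass(r : float) =
    inherit ShapeClass()
    override this.Area = System.Math.PI * r * r
// ...but a new member on ShapeClass means editing every existing subclass.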
Incidentally, the most elegant way to provide this kind of extensibility that I have seen is to sacrifice type safety and use so-called "rule-based programming". Mathematica does this. For example, a function to compute the symbolic derivative of an expression that is an integer literal, variable or addition may be written in Mathematica like this:
D[_Integer, _] := 0
D[x_Symbol, x_] := 1
D[_Symbol, _] := 0
D[f_ + g_, x_] := D[f, x] + D[g, x]
We can retrofit support for multiplication like this:
D[f_ g_, x_] := f D[g, x] + g D[f, x]
and we can add a new function to evaluate an expression like this:
E[n_Integer] := n
E[f_ + g_] := E[f] + E[g]
To me, this is far more elegant than any of the solutions written in languages like OCaml, Haskell and Scala but, of course, it is not type safe.
I have been trying to explain the difference between switch statements and pattern matching (F#) to a couple of people, but I haven't really been able to explain it well. Most of the time they just look at me and say "so why don't you just use if..then..else".
How would you explain it to them?
EDIT! Thanks everyone for the great answers, I really wish I could mark multiple right answers.
Having formerly been one of "those people", I don't know that there's a succinct way to sum up why pattern-matching is such tasty goodness. It's experiential.
Back when I had just glanced at pattern-matching and thought it was a glorified switch statement, I think that I didn't have experience programming with algebraic data types (tuples and discriminated unions) and didn't quite see that pattern matching was both a control construct and a binding construct. Now that I've been programming with F#, I finally "get it". Pattern-matching's coolness is due to a confluence of features found in functional programming languages, and so it's non-trivial for the outsider-looking-in to appreciate.
I tried to sum up one aspect of why pattern-matching is useful in the second of a short two-part blog series on language and API design; check out part one and part two.
Patterns give you a small language to describe the structure of the values you want to match. The structure can be arbitrarily deep and you can bind variables to parts of the structured value.
This allows you to write things extremely succinctly. You can illustrate this with a small example, such as a derivative function for a simple type of mathematical expressions:
type expr =
    | Int of int
    | Var of string
    | Add of expr * expr
    | Mul of expr * expr;;

let rec d(f, x) =
    match f with
    | Var y when x=y -> Int 1
    | Int _ | Var _ -> Int 0
    | Add(f, g) -> Add(d(f, x), d(g, x))
    | Mul(f, g) -> Add(Mul(f, d(g, x)), Mul(g, d(f, x)));;
Additionally, because pattern matching is a static construct for static types, the compiler can (i) verify that you covered all cases (ii) detect redundant branches that can never match any value (iii) provide a very efficient implementation (with jumps etc.).
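As a quick illustration of point (i) using the expr type above, leaving out the Mul case draws warning FS0025 (incomplete pattern matches) at compile time:

let describe f =
    match f with
    | Int _ -> "constant"
    | Var _ -> "variable"
    | Add _ -> "sum"
    // warning FS0025: the value 'Mul (_, _)' may indicate a case not covered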
Excerpt from this blog article:
Pattern matching has several advantages over switch statements and method dispatch:
- Pattern matches can act upon ints, floats, strings and other types as well as objects.
- Pattern matches can act upon several different values simultaneously: parallel pattern matching. Method dispatch and switch are limited to a single value, e.g. "this".
- Patterns can be nested, allowing dispatch over trees of arbitrary depth. Method dispatch and switch are limited to the non-nested case.
- Or-patterns allow subpatterns to be shared. Method dispatch only allows sharing when methods are from classes that happen to share a base class. Otherwise you must manually factor out the commonality into a separate function (giving it a name) and then manually insert calls from all appropriate places to your unnecessary function.
- Pattern matching provides redundancy checking which catches errors.
- Nested and/or parallel pattern matches are optimized for you by the F# compiler. The OO equivalent must be written by hand and constantly reoptimized by hand during development, which is prohibitively tedious and error prone, so production-quality OO code tends to be extremely slow in comparison.
- Active patterns allow you to inject custom dispatch semantics.
Off the top of my head:
- The compiler can tell if you haven't covered all possibilities in your matches
- You can use a match as an assignment
- If you have a discriminated union, each match can have a different 'type'
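A small sketch touching all three points (names illustrative):

type Contact =
    | Email of string // each case carries a different payload type
    | Phone of int64

let render contact =
    // the whole match is an expression; its value is bound to 'text'
    let text =
        match contact with
        | Email e -> sprintf "mailto:%s" e
        | Phone p -> sprintf "tel:%d" p
    // dropping either case above would draw the incompleteness warning
    text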
Tuples have "," and variants have constructor arguments: these are constructors; they create things.
Patterns are destructors; they rip them apart.
They're dual concepts.
To put this more forcefully: the notion of a tuple or variant cannot be described merely by its constructor: the destructor is required or the value you made is useless. It is these dual descriptions which define a value.
Generally we think of constructors as data, and destructors as control flow. Variant destructors are alternate branches (one of many), tuple destructors are parallel threads (all of many).
The parallelism is evident in operations like
(f * g) . (h * k) = (f . h * g . k)
If you think of control flowing through a function, tuples provide a way to split a calculation into parallel threads of control.
Looked at this way, expressions are ways to compose tuples and variants to make complicated data structures (think of an AST).
And pattern matches are ways to compose the destructors (again, think of an AST).
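As a hypothetical F# rendering of the law above (cross is an assumed helper, not a standard library function):

// 'cross' pairs two functions so they run on the two halves of a tuple.
let cross f g (x, y) = (f x, g y)

// (f * g) . (h * k) = (f . h * g . k), checked on a sample input:
let lhs = cross ((+) 1) ((*) 2) << cross ((-) 10) ((/) 100)
let rhs = cross (((+) 1) << ((-) 10)) (((*) 2) << ((/) 100))
// lhs (3, 5) = (8, 40) = rhs (3, 5)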
Switch is the two front wheels.
Pattern-matching is the entire car.
Pattern matches in OCaml, in addition to being more expressive in the ways described above, also give some very important static guarantees. The compiler will prove for you that the case analysis embodied by your pattern-match statement is:
- exhaustive (no cases are missed)
- non-redundant (no cases that can never be hit because they are pre-empted by a previous case)
- sound (no patterns that are impossible given the datatype in question)
This is a really big deal. It's helpful when you're writing the program for the first time, and enormously useful when your program is evolving. Used properly, match-statements make it easier to change the types in your code reliably, because the type system points you at the broken match statements, which are a decent indicator of where you have code that needs to be fixed.
If-Else (or switch) statements are about choosing different ways to process a value (input) depending on properties of the value at hand.
Pattern matching is about defining how to process a value given its structure (note also that single-case pattern matches make sense).
Thus pattern matching is more about deconstructing values than making choices; this makes it a very convenient mechanism for defining (recursive) functions on inductive structures (recursive union types), which explains why pattern matches are so abundantly used in languages like OCaml etc.
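For instance, a single-case pattern match used purely to deconstruct a value (a hypothetical Point type):

type Point = Point of float * float

let (Point (x, y)) = Point (3.0, 4.0) // binds x = 3.0 and y = 4.0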
PS: You might know the pattern-match and if-else "patterns" from their ad-hoc use in math:
"if x has property A then y else z" (If-Else)
"some term in p1..pn where .... is the prime decomposition of x.." ((single case) pattern match)
Perhaps you could draw an analogy with strings and regular expressions? You describe what you are looking for, and let the compiler figure out how for itself. It makes your code much simpler and clearer.
As an aside: I find that the most useful thing about pattern matching is that it encourages good habits. I deal with the corner cases first, and it's easy to check that I've covered every case.
Given a grammar and the attached action code, are there any standard solution for deducing what type each production needs to result in (and consequently, what type the invoking production should expect to get from it)?
I'm thinking of an OO program and action code that employs something like C#'s var syntax (but I'm not looking for something that is C#-specific).
This would be fairly simple if it were not for function overloading and recursive grammars.
The issue arises with cases like this:
Foo ::=
    Bar Baz { return Fig(Bar, Baz); }
    Foo Pit { return Pop(Foo, Pit); } // typeof(foo) = fn(typeof(Foo))
If you are writing code in a functional language it is easy: standard Hindley-Milner type inference works great. Do not do this. In my EBNF parser generator (never released, but source code available on request), which supports Icon, C, and Standard ML, I actually implemented the idea you are asking about for the Standard ML back end: all the types were inferred. The resulting grammars were nearly impossible to debug.
If you throw overloading into the mix, the results are only going to be harder to debug. (That's right! This just in! Harder than impossible! Greater than infinity! Past my bedtime!) If you really want to try it yourself you're welcome to my code. (You don't; there's a reason I never released it.)
The return value of a grammar action is really no different from a local variable, so you should be able to use C# type inference to do the job. See this paper for some insight into how C# type inference is implemented.
The standard way of doing type inference is the Hindley-Milner algorithm, but that will not handle overloading out-of-the-box.
Note that even parser generators for type-inferencing languages don't typically infer the types of grammar actions. For example, ocamlyacc requires type annotations. The Happy parser generator for Haskell can infer types, but seems to discourage the practice. This might indicate that inferring types in grammars is difficult, a bad idea, or both.
[UPDATE] Very much pwned by Norman Ramsey, who has the benefit of bitter experience.