Why are mutables allowed in F#? When are they essential?

Coming from C#, trying to get my head around the language.
From what I understand one of the main benefits of F# is that you ditch the concept of state, which should (in many cases) make things much more robust.
If this is the case (and correct me if it's not), why allow us to break this principle with mutables? To me it feels like they don't belong in the language. I understand you don't have to use them, but they give you the tools to go off track and think in an OOP manner.
Can anyone provide an example of where a mutable value is essential?

Current compilers for declarative (stateless) code are not very smart. This results in lots of memory allocations and copy operations, which are rather expensive. Mutating some property of an object lets you reuse the object in its new state, which is much faster.
Imagine you make a game with 10000 units moving around at 60 ticks a second. You can do this in F#, including collisions with a mutable quad- or octree, on a single CPU core.
Now imagine the units and quadtree are immutable. The compiler would have no better option than to allocate and construct 600,000 units per second and build 60 new trees per second, and that is before any changes to other management structures. In a real-world case with complex units, this kind of solution will be too slow.
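To make the difference concrete, here is a minimal F# sketch (the unit types and fields are hypothetical) of the two update styles:
// Mutation: the same record is reused each tick, with no per-tick allocation.
type MutableUnit = { mutable X : float; mutable Y : float }

let moveInPlace (u : MutableUnit) dx dy =
    u.X <- u.X + dx
    u.Y <- u.Y + dy

// Immutability: every tick produces a fresh record via copy-and-update.
type ImmutableUnit = { X : float; Y : float }

let move (u : ImmutableUnit) dx dy =
    { u with X = u.X + dx; Y = u.Y + dy }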
F# is a multi-paradigm language that enables the programmer to write functional, object-oriented, and, to an extent, imperative programs. Currently, each variant has its valid uses. Maybe, at some point in the future, better compilers will allow for better optimization of declarative programs, but right now, we have to fall back to imperative programming when performance becomes an issue.

Having the ability to use mutable state is often important for performance reasons, among other things.
Consider implementing the API List.take: count : int -> list : 'a list -> 'a list which returns a list consisting of only the first count elements from the input list.
If you are bound by immutability, Lists can only be built up back-to-front. Implementing take then boils down to
Build up result list back-to-front with first count guys from input: O(count)
Reverse that result and return it: O(count)
The F# runtime, for performance reasons, has the magic special ability to build Lists front-to-back when needed (i.e. to mutate the tail of the last guy to point to a new tail element). The basic algorithm used for List.take is:
Build up result list front-to-back with first count guys from input: O(count)
Return the result
Same asymptotic performance, but in practical terms it's twice as fast to use mutation in this case.
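For illustration, a purely immutable take might be sketched like this (a simplified version, without the argument validation of the real library function):
// Build the result back-to-front, then reverse it: O(count) + O(count).
let takeImmutable count (source : 'a list) =
    let rec loop n acc rest =
        match n, rest with
        | 0, _ -> List.rev acc
        | _, x :: xs -> loop (n - 1) (x :: acc) xs
        | _, [] -> invalidArg "count" "not enough elements"
    loop count [] source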
Pervasive mutable state can be a nightmare as code is difficult to reason about. But if you factor your code so that mutable state is tightly encapsulated (e.g. in implementation details of List.take), then you can enjoy its benefits where it makes sense. So making immutability the default, but still allowing mutability, is a very practical and useful feature of the language.

First of all, what makes F# powerful is, in my opinion, not just the immutability by default, but a whole mix of features like: immutability by default, type inference, lightweight syntax, sum (DUs) and product types (tuples), pattern matching and currying by default. Possibly more.
These make F# very functional by default and they make you program in a certain way. In particular they make you feel uncomfortable when you use mutable state, as it requires the mutable keyword. Uncomfortable in this sense means more careful. And that is exactly what you should be.
Mutable state is not forbidden or evil per se, but it should be controlled. The need to explicitly use mutable is like a warning sign making you aware of danger. One good way to control it is to use it only internally, within a function. That way you can have your own internal mutable state and still be perfectly thread-safe because you don't have shared mutable state. In fact, your function can still be referentially transparent even if it uses mutable state internally.
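For example, a function can use a mutable accumulator internally and still be referentially transparent from the caller's point of view (a minimal sketch):
// Externally pure: the same input always yields the same output,
// and the mutable accumulator never escapes the function.
let sumOfSquares (xs : int list) =
    let mutable acc = 0
    for x in xs do
        acc <- acc + x * x
    acc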
As for why F# allows mutable state: it would be very difficult to write typical real-world code without it. In Haskell, for instance, something as simple as generating a random number cannot be done the way it is in F#; the state has to be threaded through explicitly (or hidden behind a monad).
When I write applications, I tend to have about 95% of the code base in a very functional style that would be pretty much 1:1 portable to say Haskell without any trouble. But then at the system boundaries or at some performance-critical inner loop mutable state is used. That way you get the best of both worlds.

Related

How to guarantee referential transparency in F# applications?

So I'm trying to learn FP and to get my head around referential transparency and side effects.
I have learned that making all effects explicit in the type system is the only way to guarantee referential transparency:
The idea of “mostly functional programming” is unfeasible. It is impossible to make imperative programming languages safer by only partially removing implicit side effects. Leaving one kind of effect is often enough to simulate the very effect you just tried to remove. On the other hand, allowing effects to be “forgotten” in a pure language also causes mayhem in its own way.
Unfortunately, there is no golden middle, and we are faced with a classic dichotomy: the curse of the excluded middle, which presents the choice of either (a) trying to tame effects using purity annotations, yet fully embracing the fact that your code is still fundamentally effectful; or (b) fully embracing purity by making all effects explicit in the type system and being pragmatic - Source
I have also learned that not-pure FP languages like Scala or F# cannot guarantee referential transparency:
The ability to enforce referential transparency is pretty much incompatible with Scala's goal of having a class/object system that is interoperable with Java. - Source
And that in not-pure FP it is up to the programmer to ensure referential transparency:
In impure languages like ML, Scala or F#, it is up to the programmer to ensure referential transparency, and of course in dynamically typed languages like Clojure or Scheme, there is no static type system to enforce referential transparency. - Source
I'm interested in F# because I have a .NET background, so my next question is:
What can I do to guarantee referential transparency in an F# application if it is not enforced by the F# compiler?
The short answer to this question is that there is no way to guarantee referential transparency in F#. One of the big advantages of F# is that it has fantastic interop with other .NET languages but the downside of this, compared to a more isolated language like Haskell, is that side-effects are there and you will have to deal with them.
How you actually deal with side effects in F# is a different question entirely.
There is actually nothing to stop you from bringing effects into the type system in F# in very much the same way as you might in Haskell although effectively you are 'opting in' to this approach rather than it being enforced upon you.
All you really need is some infrastructure like this:
/// A value of type IO<'a> represents an action which, when performed (e.g. by calling the IO.run function), does some I/O which results in a value of type 'a.
type IO<'a> =
    private
    | Return of 'a
    | Delay of (unit -> 'a)

/// Pure IO functions
module IO =
    /// Runs the IO action and evaluates the result
    let run io =
        match io with
        | Return a -> a
        | Delay a -> a ()

    /// Returns a value as an IO action
    let return' x = Return x

    /// Creates an IO action from an effectful computation; this simply takes a side-effecting function and brings it into IO
    let fromEffectful f = Delay f

    /// Monadic bind for IO actions; used to combine and sequence IO actions
    let bind x f =
        match x with
        | Return a -> f a
        | Delay g -> Delay (fun _ -> run << f <| g ())
return' brings a value within IO.
fromEffectful takes a side-effecting function unit -> 'a and brings it within IO.
bind is the monadic bind function and lets you sequence effects.
run runs the IO to perform all of the enclosed effects. This is like unsafePerformIO in Haskell.
You could then define a computation expression builder using these primitive functions and give yourself lots of nice syntactic sugar.
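A minimal builder over the IO type above might look like this (a sketch; the IOBuilder name and the Delay member are my own assumptions, not part of a published library):
type IOBuilder() =
    member _.Return x = IO.return' x
    member _.ReturnFrom (io : IO<_>) = io
    member _.Bind (x, f) = IO.bind x f
    member _.Delay f = IO.fromEffectful (fun () -> IO.run (f ()))

let io = IOBuilder()

// Effects are described here and only performed when IO.run is called at the edge.
let program =
    io {
        let! line = IO.fromEffectful (fun () -> System.Console.ReadLine())
        return! IO.fromEffectful (fun () -> printfn "You typed: %s" line)
    }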
Another worthwhile question to ask is, is this useful in F#?
A fundamental difference between F# and Haskell is that F# is an eager by default language while Haskell is lazy by default. The Haskell community (and I suspect the .NET community, to a lesser extent) has learnt that when you combine lazy evaluation and side-effects/IO, very bad things can happen.
When you work in the IO monad in Haskell, you are (generally) guaranteeing something about the sequential nature of IO and ensuring that one piece of IO is done before another. You are also guaranteeing something about how often and when effects can occur.
One example I like to pose in F# is this one:
let rnd = System.Random()
let randomSeq = Seq.init 4 (fun _ -> rnd.Next())
let sortedSeq = Seq.sort randomSeq
printfn "Sorted: %A" sortedSeq
printfn "Random: %A" randomSeq
At first glance, this code might appear to generate a sequence, sort the same sequence and then print the sorted and unsorted versions.
It doesn't. It generates two sequences, one of which is sorted and one of which isn't. They can, and almost certainly do, have completely distinct values.
This is a direct consequence of combining side effects and lazy evaluation without referential transparency. You could gain back some control by using Seq.cache which prevents repeat evaluation but still doesn't give you control over when, and in what order, effects occur.
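For example, with Seq.cache the values are generated only once, so both prints observe the same data (a sketch, reusing the rnd defined above):
let cachedSeq = Seq.init 4 (fun _ -> rnd.Next()) |> Seq.cache
let sortedCached = Seq.sort cachedSeq
printfn "Sorted: %A" sortedCached   // sorted view of the same four numbers
printfn "Random: %A" cachedSeq      // the original, unsorted four numbers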
By contrast, when you're working with eagerly evaluated data structures, the consequences are generally less insidious so I think the requirement for explicit effects in F# is vastly reduced compared to Haskell.
That said, a large advantage of making all effects explicit within the type system is that it helps to enforce good design. The likes of Mark Seemann will tell you that the best strategy for designing a robust system, whether it's object-oriented or functional, involves isolating side effects at the edge of your system and relying on a referentially transparent, highly unit-testable core.
If you are working with explicit effects and IO in the type system and all of your functions are ending up being written in IO, that's a strong and obvious design smell.
Going back to the original question of whether this is worthwhile in F#, though, I still have to answer "I don't know". I have been working on a library for referentially transparent effects in F# to explore this possibility myself. There is more material on this subject, as well as a much fuller implementation of IO, in that library if you are interested.
Finally, I think it's worth remembering that the Curse of the Excluded Middle is probably targeted at programming language designers more than your typical developer.
If you are working in an impure language, you will need to find a way of coping with and taming your side effects. The precise strategy you follow to do this is open to interpretation and to whatever best suits the needs of you and/or your team, but I think F# gives you plenty of tools to do it.
On balance, my pragmatic and experienced view of F# tells me that, actually, "mostly functional" programming is still a big improvement over its competition almost all of the time.
I think you need to read the source article in an appropriate context - it is an opinion piece coming from a specific perspective and it is intentionally provocative - but it is not a hard fact.
If you are using F#, you will get referential transparency by writing good code. That means writing most logic as a sequence of transformations, performing effects to read the data before running the transformations, and running effects to write the results somewhere afterwards. (Not all programs fit this pattern, but those that can be written in a referentially transparent way generally do.)
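As a rough sketch of that shape (the file paths and the transformation here are hypothetical):
open System.IO

// Pure core: a referentially transparent transformation.
let transform (lines : string list) =
    lines
    |> List.filter (fun l -> l.Trim() <> "")
    |> List.map (fun l -> l.ToUpperInvariant())

// Impure edges: read the input, run the pure core, write the results.
let runPipeline inputPath outputPath =
    let lines = File.ReadAllLines inputPath |> List.ofArray
    File.WriteAllLines(outputPath, transform lines)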
In my experience, you can live perfectly happily in the "middle". That means, write referentially transparent code most of the time, but break the rules when you need to for some practical reason.
To respond to some of the specific points in the quotes:
It is impossible to make imperative programming languages safer by only partially removing implicit side effects.
I would agree it is impossible to make them "safe" (if by safe we mean they have no side-effects), but you can make them safer by removing some side effects.
Leaving one kind of effect is often enough to simulate the very effect you just tried to remove.
Yes, but simulating an effect to provide a theoretical proof is not what programmers do. If achieving the effect is sufficiently discouraged, you'll tend to write code in other (safer) ways.
I have also learned that not-pure FP languages like Scala or F# cannot guarantee referential transparency:
Yes, that's true - but "referential transparency" is not what functional programming is about. For me, it is about having better ways to model my domain and having tools (like the type system) that guide me along the "happy path". Referential transparency is one part of that, but it is not a silver bullet that will magically solve all your problems.
As Mark Seemann has confirmed in the comments: "Nothing in F# can guarantee referential transparency. It's up to the programmer to think about this."
I have done some searching online and found that "discipline is your best friend", along with some recommendations for keeping the level of referential transparency in your F# applications as high as possible:
Don't use mutable, for or while loops, ref keywords, etc.
Stick with purely immutable data structures (discriminated unions, lists, tuples, maps, etc.).
If you need to do IO at some point, architect your program so that the effects are separated from your purely functional code. Don't forget functional programming is all about limiting and isolating side effects.
Use algebraic data types (ADTs), a.k.a. discriminated unions, instead of objects.
Learn to love laziness.
Embrace the monad.

Why does F# Set need IComparable?

So I am trying to use the F# Set as a hash table, but my element type doesn't implement the IComparable interface (although it implements IEquatable). I got an error saying the construction is not allowed because of a comparison constraint. Through some further reading, I discovered that the F# Set is implemented as a binary tree, which makes insertion O(log n). This looks weird to me; why is the Set structure designed this way?
Edit: So I learned that Set in F# is effectively a sorted set. I guess the question becomes: why is a sorted set preferable to a general hash set as an immutable/functional data structure?
There are two important points that should help you understand how sets in F# (and in functional languages in general) work and how they are used:
Implementing immutable hashtables (like .NET HashSet) is hard - when you remove or add elements, you want to avoid copying everything in the data structure and (as far as I know) there is no general way of doing that (you would end up copying too much, so it would be inefficient).
For this reason, most functional sets are implemented as some form of balanced tree, which requires comparison to keep the tree sorted. The nice property of balanced trees is that removing and adding elements does not have to copy everything in the tree, so even the worst-case scenario is reasonably efficient (though a mutable hashtable is still faster).
Now, F# is functional-first, which means that immutable structures are preferred, but it is perfectly fine to use mutable data structures (especially if you limit the usage to some well defined and restricted scope). For this reason, F# programmers often use Dictionary or HashSet, especially when this is only within the scope of a single function.
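For example (a hedged sketch), a mutable HashSet can be used inside a single function while the function itself remains observably pure:
open System.Collections.Generic

// Uses a mutable HashSet internally; callers only ever see a fresh immutable list.
let distinctInOrder (xs : 'a list) =
    let seen = HashSet<'a>()
    [ for x in xs do
        if seen.Add x then yield x ]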

F# tail recursion and why not write a while loop?

I'm learning F# (I'm new to functional programming in general, though I've used functional aspects of C# for years; let's face it, that's pretty different), and one of the things that I've read is that the F# compiler identifies tail recursion and compiles it into a while loop (see http://thevalerios.net/matt/2009/01/recursion-in-f-and-the-tail-recursion-police/).
What I don't understand is why you would write a recursive function instead of a while loop if that's what it's going to turn into anyway. Especially considering that you need to do some extra work to make your function recursive.
I have a feeling someone might say that the while loop is not particularly functional and you want to act all functional and whatnot so you use recursion but then why is it sufficient for the compiler to turn it into a while loop?
Can someone explain this to me?
You could use the same argument for any transformation that the compiler performs. For instance, when you're using C#, do you ever use lambda expressions or anonymous delegates? If the compiler is just going to turn those into classes and (non-anonymous) delegates, then why not just use those constructions yourself? Likewise, do you ever use iterator blocks? If the compiler is just going to turn those into state machines which explicitly implement IEnumerable<T>, then why not just write that code yourself? Or if the C# compiler is just going to emit IL anyway, why bother writing C# instead of IL in the first place? And so on.
One obvious answer to all of these questions is that we want to write code which allows us to express ourselves clearly. Likewise, there are many algorithms which are naturally recursive, and so writing recursive functions will often lead to a clear expression of those algorithms. In particular, it is arguably easier to reason about the termination of a recursive algorithm than a while loop in many cases (e.g. is there a clear base case, and does each recursive call make the problem "smaller"?).
However, since we're writing code and not mathematics papers, it's also nice to have software which meets certain real-world performance criteria (such as the ability to handle large inputs without overflowing the stack). Therefore, the fact that tail recursion is converted into the equivalent of while loops is critical for being able to use recursive formulations of algorithms.
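To make that concrete (a minimal sketch), this sum is written with an accumulator so the recursive call is in tail position; the compiler turns it into a loop that runs in constant stack space:
let sum (xs : int list) =
    // The recursive call is the last thing loop does, so it is a tail call.
    let rec loop acc rest =
        match rest with
        | [] -> acc
        | x :: tail -> loop (acc + x) tail
    loop 0 xs

// sum [1 .. 1000000] completes without overflowing the stack.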
A recursive function is often the most natural way to work with certain data structures (such as trees and F# lists). If the compiler wants to transform my natural, intuitive code into an awkward while loop for performance reasons that's fine, but why would I want to write that myself?
Also, Brian's answer to a related question is relevant here. Higher-order functions can often replace both loops and recursive functions in your code.
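For instance (a sketch), a sum over a list can be written with a higher-order function instead of either an explicit loop or hand-written recursion:
// List.fold threads the accumulator through the list for you.
let sumWithFold xs = List.fold (+) 0 xs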
The fact that F# performs tail-call optimization is just an implementation detail that allows you to use tail recursion with the same efficiency (and no fear of a stack overflow) as a while loop. But it is just that - an implementation detail - on the surface your algorithm is still recursive and is structured that way, which for many algorithms is the most logical, functional way to represent them.
The same applies to some of the list handling internals as well in F# - internally mutation is used for a more efficient implementation of list manipulation, but this fact is hidden from the programmer.
What it comes down to is how the language allows you to describe and implement your algorithm, not what mechanics are used under the hood to make it happen.
A while loop is imperative by its nature. Most of the time, when using while loops, you will find yourself writing code like this:
let mutable x = ...
...
while someCond do
    ...
    x <- ...
This pattern is common in imperative languages like C, C++ or C#, but not so common in functional languages.
As the other posters have said, some data structures - more precisely, recursive data structures - lend themselves to recursive processing. Since the most common data structure in functional languages is by far the singly linked list, solving problems by using lists and recursive functions is a common practice.
Another argument in favor of recursive solutions is the tight relation between recursion and induction. Using a recursive solution allows the programmer to think about the problem inductively, which arguably helps in solving it.
Again, as other posters said, the fact that the compiler optimizes tail-recursive functions (obviously, not all functions can benefit from tail-call optimization) is an implementation detail which lets your recursive algorithm run in constant space.

Does functional programming avoid state?

According to wikipedia: functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. (emphasis mine).
Is this really true? My personal understanding is that it makes the state more explicit, in the sense that programming is essentially applying functions (transforms) to a given state to get a transformed state. In particular, constructs like monads let you carry the state explicitly through functions. I also don't think that any programming paradigm can avoid state altogether.
So, is the Wikipedia definition right or wrong? And if it is wrong what is a better way to define functional programming?
Edit: I guess a central point in this question is what is state? Do you understand state to be variables or object attributes (mutable data) or is immutable data also state? To take an example (in F#):
let x = 3
let double n = 2 * n
let y = double x
printfn "%A" y
would you say this snippet contains a state or not?
Edit 2: Thanks to everyone for participating. I now understand the issue to be more of a linguistic discrepancy, with the use of the word state differing from one community to another, as Brian mentions in his comment. In particular, many in the functional programming community (mainly Haskellers) interpret state as carrying some sense of dynamism, like a signal varying with time. Other uses of state, in things like finite state machines, Representational State Transfer, and stateless network protocols, may mean different things.
I think you're just using the term "state" in an unusual way. If you consider adding 1 and 1 to get 2 as being stateful, then you could say functional programming embraces state. But when most people say "state," they mean storing and changing values, such that calling a function might leave things different than before the function was called, and calling the function a second time with the same input might not have the same result.
Essentially, stateless computations are things like this:
The result of 1 + 1
The string consisting of 's' prepended to "pool"
The area of a given rectangle
Stateful computations are things like this:
Increment a counter
Remove an element from the array
Set the width of a rectangle to be twice what it is now
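In F# terms the contrast looks roughly like this (a sketch; the names are made up):
// Stateless: the result depends only on the arguments.
let area width height = width * height

// Stateful: each call reads and changes hidden state,
// so the same call can give different results over time.
let mutable counter = 0
let nextId () =
    counter <- counter + 1
    counter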
Rather than "avoids state", think of it like this:
It avoids changing state - hence that word "mutable".
Think in terms of a C# or Java object. Normally you would call a method on the object and you might expect it to modify its internal state as a result of that method call.
With functional programming, you still have data, but it's just passed through each function, creating output that corresponds to the input and the operation.
At least in theory. In reality, not everything you do actually works functionally so you frequently end up hiding state to make things work.
Edit:
Of course, hiding state also frequently leads to some spectacular bugs, which is why you should only use functional programming languages for purely functional situations. I've found that the best languages are the ones that are object oriented and functional, like Python or C#, giving you the best of both worlds and the freedom to move between each as necessary.
The Wikipedia definition is right. Functional programming avoids state. Functional programming means that you apply a function to a given input and get a result, and it is guaranteed that your input will not be modified in any way. It doesn't mean that you cannot have state at all. Monads are a perfect example of that.
The Wikipedia definition is correct. It may seem puzzling at first, but if you start working with, say, Haskell, you'll notice that you don't have any variables lying around containing values.
The state can still be sort of represented using state monads.
At some point, all apps must have state (some data that changes). Otherwise, you would start the app up, it wouldn't be able to do anything, and the app would finish.
You can have immutable data structures as part of your state, but you will swap those for other instances of other immutable data structures.
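A small sketch of what that swapping looks like in F# (the AppState type here is hypothetical):
type AppState = { Count : int; Log : string list }

// "Updating" the state means producing a new value; the old one is untouched.
let logMessage msg state =
    { state with Count = state.Count + 1; Log = msg :: state.Log }

let s0 = { Count = 0; Log = [] }
let s1 = logMessage "started" s0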
The point of immutability is not that data does not change over time. The point is that you can create pure functions that don't have side effects.

Is there an idiomatic way to order function arguments in Erlang?

Seems like it's inconsistent in the lists module. For example, split has the number as the first argument and the list as the second, but sublist has the list as the first argument and the length as the second.
OK, a little history as I remember it and some principles behind my style.
As Christian has said the libraries evolved and tended to get the argument order and feel from the impulses we were getting just then. So for example the reason why element/setelement have the argument order they do is because it matches the arg/3 predicate in Prolog; logical then but not now. Often we would have the thing being worked on first, but unfortunately not always. This is often a good choice as it allows "optional" arguments to be conveniently added to the end; for example string:substr/2/3. Functions with the thing as the last argument were often influenced by functional languages with currying, for example Haskell, where it is very easy to use currying and partial evaluation to build specific functions which can then be applied to the thing. This is very noticeable in the higher order functions in lists.
The only influence we didn't have was from the OO world. :-)
Usually we at least managed to be consistent within a module, but not always. See lists again. We did try to have some consistency, so the argument order in the higher order functions in dict/sets match those of the corresponding functions in lists.
The problem was also aggravated by the fact that we, especially me, had a rather cavalier attitude to libraries. I just did not see them as a selling point for the language, so I wasn't that worried about it. "If you want a library which does something then you just write it" was my motto. This meant that my libraries were structured, just not always with the same structure. :-) That was how many of the initial libraries came about.
This, of course, creates unnecessary confusion and breaks the law of least astonishment, but we have not been able to do anything about it. Any suggestions of revising the modules have always been met with a resounding "no".
My own personal style is usually structured, though I don't know if it conforms to any written guidelines or standards.
I generally have the thing or things I am working on as the first arguments, or at least very close to the beginning; the order depends on what feels best. If there is a global state which is chained through the whole module, which there usually is, it is placed as the last argument and given a very descriptive name like St0, St1, ... (I belong to the church of short variable names). Arguments which are chained through functions (both input and output) I try to keep the same argument order as return order. This makes it much easier to see the structure of the code. Apart from that I try to group together arguments which belong together. Also, where possible, I try to preserve the same argument order throughout a whole module.
None of this is very revolutionary, but I find if you keep a consistent style then it is one less thing to worry about and it makes your code feel better and generally be more readable. Also I will actually rewrite code if the argument order feels wrong.
A small example which may help:
fubar({f,A0,B0}, Arg2, Ch0, Arg4, St0) ->
    {A1,Ch1,St1} = foo(A0, Arg2, Ch0, St0),
    {B1,Ch2,St2} = bar(B0, Arg4, Ch1, St1),
    Res = baz(A1, B1),
    {Res,Ch2,St2}.
Here Ch is a local chained-through variable, while St is a more global state. Check out the code on GitHub for LFE, especially the compiler, if you want a longer example.
This became much longer than it should have been, sorry.
P.S. I used the word thing instead of object to avoid confusion about what I was talking about.
No, there is no consistently-used idiom in the sense that you mean.
However, there are some useful relevant hints that apply especially when you're going to be making deeply recursive calls. For instance, keeping whichever arguments will remain unchanged during tail calls in the same order/position in the argument list allows the virtual machine to make some very nice optimizations.
