What is the difference between a monad and a closure?

I am somewhat confused reading the definitions of the two. Can they actually intersect in terms of definition, or am I completely lost? Thanks.

Closures, as the word tends to be used, are just functions (or blocks of code, if you like) that you can treat like a piece of data and pass to other functions, etc. (the "closed" bit is that wherever you eventually call it, it behaves just as it would if you called it where it was originally defined). A monad is (roughly) more like a context in which functions can be chained together sequentially, and controls how data is passed from one function to the next.
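To make the closure half of that concrete, here is a minimal sketch in F# (purely illustrative; the names makeAdder and addTen are mine):

let makeAdder n =
    fun x -> x + n              // this lambda "closes over" the local value n

let addTen = makeAdder 10       // a function value you can pass around like data
printfn "%d" (addTen 32)        // prints 42, wherever you happen to call it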

They're quite different, although monads will often use closures to capture logic.
Personally I would try to get solid on the definition of closures (essentially a piece of logic which also captures its environment, i.e. local variables etc) before worrying about monads. They can come later :)
There are various questions about closures on Stack Overflow - the best one to help you will depend on what platform you're working on. For instance, there's:
What are closures in .NET?
Function pointers, closures and lambda
Personally I'm only just beginning to "grok" monads (thanks to the book I'm helping out on). One day I'll get round to writing an article about them, when I feel I understand them well enough :)

A "closure" is an object comprising 1) a function, and 2) the values of its free variables where it's constructed.
A "monad" is a class of functions that can be composed in a certain way, i.e. by using associated bind and return higher-order function operators, to produce other functions.

I think monads are a little more complicated than closures, because closures are just blocks of code that remember something from the point of their definition, while monads are a construct for "twisting" the usual function composition operation.

Related

Are partial valued functions closures?

My understanding is that closures are basically functions (and by that I mean pieces of code) using variables that are bound to some values.
The partial valuation of a function, on the other hand, is nothing but a new function obtained from another one by binding some of its variables/arguments.
It seems to me that the two concepts are basically the same: indeed, one could regard (i.e. implement) closures as partial valuations of functions that use additional arguments for the variables to be bound in the closure, and on the other hand a partial valuation seems to be just the closure of a function in which some of its variables/arguments are bound to values.
Is this line of thought correct? Are these two concepts really the same? And if not, what are the differences between them?
Thanks in advance for any answer.
I wouldn't say they are the same thing. They are two concepts that have the same power, but that does not mean they are identical.
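A small F# sketch of the difference (names are illustrative): both produce a function of one argument, but one does it by fixing an argument and the other by capturing a local variable.

let add x y = x + y
let addFive = add 5                 // partial application: fix the first argument

let makeShift () =
    let offset = 5
    fun y -> offset + y             // a closure: captures the local variable offset

let shiftFive = makeShift ()
printfn "%d %d" (addFive 3) (shiftFive 3)   // 8 8: same power, different mechanism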

Swift 3: Difference between DispatchQueue.main.async{} and DispatchQueue.main.async(execute:{})?

There's a very narrow semantic difference between the two, and I find myself wondering why both options exist. Are they in any way different functionally, or is one likely just an alias of the other?
There is no difference at all. They are, in fact, the very same method.
To the compiler,
myQueue.async(execute: { foo() })
is exactly the same as
myQueue.async {
    foo()
}
When the last argument of any function or method is a function, you can pass that argument as a trailing closure instead of passing it inside the argument list. This is done in order to make higher-order functions such as DispatchQueue.async feel more like part of the language, reduce syntactic overhead and ease the creation of domain-specific languages.
There's documentation on trailing closure syntax here.
And by the way, the idiomatic way to write my first example would be:
myQueue.async(execute: foo)
What you're referring to is called trailing closure syntax. It's syntactic sugar for making closures easier to work with.
There are many other kinds of syntactic sugar features that pertain to closures, which I cover in my answer here.
As always, I highly recommend the Swift Language guide, which does a great job at explaining the basics like this.

F# tail recursion and why not write a while loop?

I'm learning F# (I'm new to functional programming in general, though I've used functional aspects of C# for years; let's face it, that's pretty different), and one of the things I've read is that the F# compiler identifies tail recursion and compiles it into a while loop (see http://thevalerios.net/matt/2009/01/recursion-in-f-and-the-tail-recursion-police/).
What I don't understand is why you would write a recursive function instead of a while loop if that's what it's going to turn into anyway. Especially considering that you need to do some extra work to make your function recursive.
I have a feeling someone might say that the while loop is not particularly functional, and that you want to act all functional and whatnot, so you use recursion; but then why is it acceptable for the compiler to turn it into a while loop?
Can someone explain this to me?
You could use the same argument for any transformation that the compiler performs. For instance, when you're using C#, do you ever use lambda expressions or anonymous delegates? If the compiler is just going to turn those into classes and (non-anonymous) delegates, then why not just use those constructions yourself? Likewise, do you ever use iterator blocks? If the compiler is just going to turn those into state machines which explicitly implement IEnumerable<T>, then why not just write that code yourself? Or if the C# compiler is just going to emit IL anyway, why bother writing C# instead of IL in the first place? And so on.
One obvious answer to all of these questions is that we want to write code which allows us to express ourselves clearly. Likewise, there are many algorithms which are naturally recursive, and so writing recursive functions will often lead to a clear expression of those algorithms. In particular, it is arguably easier to reason about the termination of a recursive algorithm than a while loop in many cases (e.g. is there a clear base case, and does each recursive call make the problem "smaller"?).
However, since we're writing code and not mathematics papers, it's also nice to have software which meets certain real-world performance criteria (such as the ability to handle large inputs without overflowing the stack). Therefore, the fact that tail recursion is converted into the equivalent of while loops is critical for being able to use recursive formulations of algorithms.
A recursive function is often the most natural way to work with certain data structures (such as trees and F# lists). If the compiler wants to transform my natural, intuitive code into an awkward while loop for performance reasons that's fine, but why would I want to write that myself?
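For example, a direct F# sketch of summing a list follows the shape of the data itself (this version is deliberately not tail recursive):

let rec sum xs =
    match xs with
    | [] -> 0
    | x :: rest -> x + sum rest     // reads like the definition of the problem

printfn "%d" (sum [1; 2; 3; 4])     // 10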
Also, Brian's answer to a related question is relevant here. Higher-order functions can often replace both loops and recursive functions in your code.
The fact that F# performs tail-call optimization is just an implementation detail that allows you to use tail recursion with the same efficiency (and no fear of a stack overflow) as a while loop. But it is just that - an implementation detail - on the surface your algorithm is still recursive and is structured that way, which for many algorithms is the most logical, functional way to represent them.
The same applies to some of the list handling internals as well in F# - internally mutation is used for a more efficient implementation of list manipulation, but this fact is hidden from the programmer.
What it comes down to is how the language allows you to describe and implement your algorithm, not what mechanics are used under the hood to make it happen.
A while loop is imperative by its nature. Most of the time, when using while loops, you will find yourself writing code like this:
let mutable x = ...
...
while someCond do
    ...
    x <- ...
This pattern is common in imperative languages like C, C++ or C#, but not so common in functional languages.
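The functional counterpart of that pattern is usually a tail-recursive helper with an accumulator playing the role of the mutable variable; a sketch (sumTo and loop are just illustrative names):

let sumTo limit =
    let rec loop acc i =
        if i > limit then acc
        else loop (acc + i) (i + 1)   // tail call: nothing left to do after it returns
    loop 0 1

printfn "%d" (sumTo 1000000)          // constant stack space, no overflow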
As the other posters have said, some data structures, more precisely recursive data structures, lend themselves to recursive processing. Since the most common data structure in functional languages is by far the singly linked list, solving problems by using lists and recursive functions is a common practice.
Another argument in favor of recursive solutions is the tight relation between recursion and induction. Using a recursive solution allows the programmer to think about the problem inductively, which arguably helps in solving it.
Again, as other posters said, the fact that the compiler optimizes tail-recursive functions (obviously, not all functions can benefit from tail-call optimization) is an implementation detail which lets your recursive algorithm run in constant space.

Is a Foreach loop a type of block?

In thinking about blocks, I've always wondered why the
thingers.each { |thing|
example is actually interesting (since there's another, built-in way to do it). In most modern languages there's a way to iterate through a collection and apply some inline code to it. But then I thought that maybe the for (Thing thing : things) { syntax is really a type of block, even in languages like Java-sans-Groovy where there are none.
So the question is: Is a for-each loop a type of block, albeit with a fixed syntax?
Edit: This question is confusing but has gotten some attention, so I can't delete it. Anyway... by blocks I mean "closures" and not just code blocks. As for why I tagged this as language-agnostic: I'm not interested in how a for-each is implemented under the hood. I'm more interested in whether, from a programmer's perspective, a for-each could be considered a freebie closure in languages that don't have them, like Java.
This is most definitely not language agnostic (as it is tagged).
For example, in Ruby, you are passing a closure or block to the .each method. The semantics are defined by the language.
In C#, foreach compiles to a call to IEnumerable.GetEnumerator and the use of the IEnumerator.MoveNext method. It just depends on the language.
EDIT:
Edit: This question is confusing but has gotten some attention, so I can't delete it. Anyway... by blocks I mean "closures" and not just code blocks. As for why I tagged this as language-agnostic: I'm not interested in how a for-each is implemented under the hood. I'm more interested in whether, from a programmer's perspective, a for-each could be considered a freebie closure in languages that don't have them, like Java.
No, you are thinking of only one use case for a closure. You can't pass a control structure to a method the way you can pass a closure in some languages. A control structure is not a first-class data type like a closure is in some languages. In the case of "run this code on each object in a collection", yes, they are semantically similar. However, that is only one example of what you can do with a closure.
As Swangren says, this question is not language agnostic as tagged. Not unless there's some general "IT theory" definition of a "block" that I'm not aware of.
In Java, a "block" is a series of statements enclosed in { }. So a FOR statement normally operates on a block. The purpose of a block is to delimit what is included within certain language constructs, like IF statements and loops.
I believe C and C++ have similar definitions of a "block".
Most languages have a similar concept.
Not sure if this helps.
foreach ($foo as $bar) {
    $baz = 'Hello, World!';
}
echo $baz; // 'Hello, World!', "leaked" from the "closure"
Nope, sorry, foreach loops are not lambdas, closures or functions in languages where they aren't explicitly such.
Please elaborate. My definition of a "block" is a section of code that's grouped together. In this case there's a lot of stuff in Java that can be a "block".
If you are talking about ruby-style blocks, which are really closures, then Java doesn't support them (not in any sane fashion anyways). foreach is a control structure, and behaves differently from closures.
On your updated question: It's easy to use closures as control structures, but the other way around is harder, if not impossible. A closure is a first class function, which allows you to do some pretty cool stuff with it - not just iterating through elements.
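To illustrate that asymmetry, here is a small F# sketch (the name times is mine): a closure can be handed to a higher-order function that acts as a custom control structure, but a built-in foreach statement cannot be stored in a value and passed around like that.

let times n (action: int -> unit) =
    for i in 1 .. n do action i        // a higher-order function acting like a loop

let greeting = "hello"
times 3 (fun i -> printfn "%s %d" greeting i)   // the closure captures greeting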

Is there an idiomatic way to order function arguments in Erlang?

Seems like it's inconsistent in the lists module. For example, split has the number as the first argument and the list as the second, but sublist has the list as the first argument and the length as the second argument.
OK, a little history as I remember it and some principles behind my style.
As Christian has said the libraries evolved and tended to get the argument order and feel from the impulses we were getting just then. So for example the reason why element/setelement have the argument order they do is because it matches the arg/3 predicate in Prolog; logical then but not now. Often we would have the thing being worked on first, but unfortunately not always. This is often a good choice as it allows "optional" arguments to be conveniently added to the end; for example string:substr/2/3. Functions with the thing as the last argument were often influenced by functional languages with currying, for example Haskell, where it is very easy to use currying and partial evaluation to build specific functions which can then be applied to the thing. This is very noticeable in the higher order functions in lists.
The only influence we didn't have was from the OO world. :-)
Usually we at least managed to be consistent within a module, but not always. See lists again. We did try to have some consistency, so the argument order in the higher order functions in dict/sets match those of the corresponding functions in lists.
The problem was also aggravated by the fact that we, especially me, had a rather cavalier attitude to libraries. I just did not see them as a selling point for the language, so I wasn't that worried about it. "If you want a library which does something then you just write it" was my motto. This meant that my libraries were structured, just not always with the same structure. :-) That was how many of the initial libraries came about.
This, of course, creates unnecessary confusion and breaks the law of least astonishment, but we have not been able to do anything about it. Any suggestions of revising the modules have always been met with a resounding "no".
My own personal style is usually structured, though I don't know if it conforms to any written guidelines or standards.
I generally have the thing or things I am working on as the first arguments, or at least very close to the beginning; the order depends on what feels best. If there is a global state which is chained through the whole module, which there usually is, it is placed as the last argument and given a very descriptive name like St0, St1, ... (I belong to the church of short variable names). Arguments which are chained through functions (both input and output) I try to keep the same argument order as return order. This makes it much easier to see the structure of the code. Apart from that I try to group together arguments which belong together. Also, where possible, I try to preserve the same argument order throughout a whole module.
None of this is very revolutionary, but I find if you keep a consistent style then it is one less thing to worry about and it makes your code feel better and generally be more readable. Also I will actually rewrite code if the argument order feels wrong.
A small example which may help:
fubar({f,A0,B0}, Arg2, Ch0, Arg4, St0) ->
    {A1,Ch1,St1} = foo(A0, Arg2, Ch0, St0),
    {B1,Ch2,St2} = bar(B0, Arg4, Ch1, St1),
    Res = baz(A1, B1),
    {Res,Ch2,St2}.
Here Ch is a local chained through variable while St is a more global state. Check out the code on github for LFE, especially the compiler, if you want a longer example.
This became much longer than it should have been, sorry.
P.S. I used the word thing instead of object to avoid confusion about what I was talking about.
No, there is no consistently-used idiom in the sense that you mean.
However, there are some useful relevant hints that apply especially when you're going to be making deeply recursive calls. For instance, keeping whichever arguments will remain unchanged during tail calls in the same order/position in the argument list allows the virtual machine to make some very nice optimizations.

Resources