Check if function is declared recursive - F#

Is it possible to check if a function was declared as recursive, i.e. with let rec?
I have a memoize function, but it doesn't work with arbitrary recursive functions. I would like to give an error if the user calls it with such a function. (Feel free to tell me if this is a bad way to deal with errors in F#)
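For reference, my memoize is essentially the standard dictionary-based one; I've re-sketched it here from memory, so this is approximate rather than my exact code:
open System.Collections.Generic

// Plain memoizer: caches results of f keyed by its argument.
let memoize (f: 'a -> 'b) =
    let cache = Dictionary<'a, 'b>()
    fun x ->
        match cache.TryGetValue x with
        | true, v -> v
        | _ ->
            let v = f x
            cache.[x] <- v
            v

// Works fine for non-recursive functions, but for a function defined
// with let rec, the internal recursive calls bypass the cache:
let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
let fibMemo = memoize fib   // only the outermost call is cached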

F# code is compiled to .NET Common Intermediate Language. The F# rec keyword is just a hint to the F# compiler that makes the identifier being defined available within the scope of the function itself. So I believe there is no way to find out at runtime whether a function was declared as recursive.
However, you could use the System.Diagnostics.StackTrace class (https://msdn.microsoft.com/en-us/library/system.diagnostics.stacktrace.aspx) to get and analyze stack frames at runtime. But accessing stack information has a significant performance impact (I'm assuming your memoization function is there to speed the program up), and the stack may also look different in Debug and Release builds because the compiler can optimize the code.
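For completeness, a runtime check along those lines might look roughly like this (only a sketch: it detects that the current call is nested inside another call to the same method, not that the function was declared with let rec, and all of the cost and Debug/Release caveats above apply):
open System.Diagnostics

// Rough idea: if the same method occurs more than once on the current
// stack, we are inside a recursive call chain.
let insideRecursiveCall () =
    StackTrace().GetFrames()
    |> Array.choose (fun frame -> frame.GetMethod() |> Option.ofObj)
    |> Array.countBy id
    |> Array.exists (fun (_, occurrences) -> occurrences > 1)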

I don't think it can be done in a sensible and comprehensive manner.
You might want to reevaluate your problem though. I assume your function "doesn't work" with recursive functions, because it only memoizes the first call, and all the recursive calls go to the non-memoized function? Depending on your scenario, it may not be a bad thing.
Memoization trades off memory for speed. In a large system, you might not want to pay the cost of memoizing intermediate results if you only care about the final one. All those dictionaries add up over time.
If you however particularly care about memoizing a recursive function, you can explicitly structure it in a way that would lend itself to memoization. See linked threads.
So my answer would be that a memoize function shouldn't be concerned about recursion at all. If the user wants to memoize a recursive function, it falls to him to understand the trade-offs involved, and structure the function in a way that would allow intermediate calls to be memoized if that's what he requires.
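To illustrate that restructuring (just a sketch, and the names fibStep and memoizeRec are made up for this answer): write the function in "open recursion" style, taking the recursive call as a parameter, and let the memoizer tie the knot so that every call, including the inner ones, goes through the cache:
open System.Collections.Generic

// The recursive step is abstracted out as a parameter...
let fibStep recurse n =
    if n < 2 then n else recurse (n - 1) + recurse (n - 2)

// ...so the memoizer can intercept every call, not just the first one.
let memoizeRec step =
    let cache = Dictionary<_, _>()
    let rec wrapped x =
        match cache.TryGetValue x with
        | true, v -> v
        | _ ->
            let v = step wrapped x
            cache.[x] <- v
            v
    wrapped

let fib = memoizeRec fibStep   // intermediate results are now cached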

Related

Why does Erlang if statement support only specific functions in its guard?

Why does the Erlang if statement support only specific functions in its guard?
For example:
ok(A) ->
    if
        whereis(abc) =:= undefined ->
            register(abc, A);
        true -> exit(already_registered)
    end.
In this case we get an "illegal guard" error.
What would be the best practice to use function's return values as conditions?
Coming from other programming languages, Erlang's if seems strangely restrictive, and in fact, isn't used very much, with most people opting to use case instead. The distinction between the two is that while case can test any expression, if can only use valid Guard Expressions.
As explained in the Erlang documentation on guard sequences, Guard Expressions are limited to known functions that are guaranteed to be free of side effects. There are a number of reasons for this, most of which boil down to code predictability and inspectability. For instance, since matching is done top-down, guard expressions that don't match will keep being evaluated until one is found that does. If those expressions had side effects, that could easily lead to unpredictable and confusing outcomes during debugging. While you can still accomplish that with case expressions, if you see an if you know no side effects are being introduced in the test, without needing to check.
One last, but important thing, is that guards have to terminate. If they did not, the reduction of a function call could go on forever, and as the scheduler is based around reductions, that would be very bad indeed, with little to go on when things went badly.
As a counter-example, you can starve the scheduler in Go for exactly this reason. Because Go's scheduler (like all micro-process schedulers) is co-operatively multitasked, it has to wait for a goroutine to yield before it can schedule another one. Much like in Erlang, it waits for a function to finish what it's currently doing before it can move on. The difference is that Erlang has no loop-alike. To accomplish looping, you recurse, which necessitates a function call/reduction, and allows a point for the scheduler to intervene. In Go, you have C-style loops, which do not require a function call in their body, so code akin to for { i = i+1 } will starve the scheduler. Not that such loops without function calls in their body are super-common, but this issue does exist.
By contrast, in Erlang it's extremely difficult to starve the scheduler like this without setting out to do so explicitly. But if guards could contain code that doesn't terminate, it would become trivial.
Check this question: About the usage of "if" in Erlang language
In short:
Only a limited number of functions are allowed in guard sequences, and whereis is not one of them
Use case instead.

Erlang: Compute data structure literal (constant) at compile time?

This may be a naive question, and I suspect the answer is "yes," but I had no luck searching here and elsewhere on terms like "erlang compiler optimization constants" etc.
At any rate, can (will) the Erlang compiler create a data structure that is constant or literal at compile time, and use that instead of emitting code that builds the data structure over and over again? Here is a simple toy example.
test() -> sets:from_list([usd, eur, yen, nzd, peso]).
Can (will) the compiler simply embed the already-built set as the function's return value, instead of computing it every time?
The reason I ask is that I want to have a lookup table in a program I'm developing. The table is just constants that can be calculated (at least theoretically) at compile time. I'd like to compute the table once and not have to compute it every time. I know I could do this in other ways, such as computing the thing and storing it in the process dictionary (or perhaps in an ETS or Mnesia table). But I always start simple, and to me the simplest solution is to do it like the toy example above, if the compiler optimizes it.
If that doesn't work, is there some other way to achieve what I want? (I guess I could look into parse transforms if they would work for this, but that's getting more complicated than I would like.)
THIS JUST IN. I used compile:file/2 with the 'S' option to produce the following. I'm no Erlang assembly expert, but it looks like the optimization isn't performed:
{function, test, 0, 5}.
{label,4}.
{func_info,{atom,exchange},{atom,test},0}.
{label,5}.
{move,{literal,[usd,eur,yen,nzd,peso]},{x,0}}.
{call_ext_only,1,{extfunc,sets,from_list,1}}.
No, the Erlang compiler doesn't perform partial evaluation of calls to external modules, which is what sets is. You can use the ct_expand module from the well-known parse_trans library to achieve this effect.
Given that a set is not a native datatype in Erlang and is (as a matter of fact) just a library written in Erlang, I don't think it's feasible for the compiler to create the set at compile time.
As you can see, calls to sets are not optimized away (and neither are calls to any other library written in Erlang).
The way to solve your problem is to compute the set once and pass it as a parameter to your functions, or to use ETS/Mnesia.

F# tail recursion and why not write a while loop?

I'm learning F# (I'm new to functional programming in general, though I've used the functional aspects of C# for years, but let's face it, that's pretty different), and one of the things I've read is that the F# compiler identifies tail recursion and compiles it into a while loop (see http://thevalerios.net/matt/2009/01/recursion-in-f-and-the-tail-recursion-police/).
What I don't understand is why you would write a recursive function instead of a while loop if that's what it's going to turn into anyway. Especially considering that you need to do some extra work to make your function recursive.
I have a feeling someone might say that the while loop is not particularly functional and you want to act all functional and whatnot so you use recursion but then why is it sufficient for the compiler to turn it into a while loop?
Can someone explain this to me?
You could use the same argument for any transformation that the compiler performs. For instance, when you're using C#, do you ever use lambda expressions or anonymous delegates? If the compiler is just going to turn those into classes and (non-anonymous) delegates, then why not just use those constructions yourself? Likewise, do you ever use iterator blocks? If the compiler is just going to turn those into state machines which explicitly implement IEnumerable<T>, then why not just write that code yourself? Or if the C# compiler is just going to emit IL anyway, why bother writing C# instead of IL in the first place? And so on.
One obvious answer to all of these questions is that we want to write code which allows us to express ourselves clearly. Likewise, there are many algorithms which are naturally recursive, and so writing recursive functions will often lead to a clear expression of those algorithms. In particular, it is arguably easier to reason about the termination of a recursive algorithm than a while loop in many cases (e.g. is there a clear base case, and does each recursive call make the problem "smaller"?).
However, since we're writing code and not mathematics papers, it's also nice to have software which meets certain real-world performance criteria (such as the ability to handle large inputs without overflowing the stack). Therefore, the fact that tail recursion is converted into the equivalent of while loops is critical for being able to use recursive formulations of algorithms.
A recursive function is often the most natural way to work with certain data structures (such as trees and F# lists). If the compiler wants to transform my natural, intuitive code into an awkward while loop for performance reasons that's fine, but why would I want to write that myself?
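For instance (a toy sketch), a function over a tree reads almost exactly like the definition of the type itself, which is hard to say of a hand-written loop over an explicit stack:
type Tree =
    | Leaf of int
    | Node of Tree * Tree

// The function mirrors the shape of the data: one case per constructor.
let rec sumTree tree =
    match tree with
    | Leaf n -> n
    | Node (left, right) -> sumTree left + sumTree right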
Also, Brian's answer to a related question is relevant here. Higher-order functions can often replace both loops and recursive functions in your code.
The fact that F# performs tail optimization is just an implementation detail that allows you to use tail recursion with the same efficiency (and no fear of a stack overflow) as a while loop. But it is just that - an implementation detail - on the surface your algorithm is still recursive and is structured that way, which for many algorithms is the most logical, functional way to represent it.
The same applies to some of the list handling internals as well in F# - internally mutation is used for a more efficient implementation of list manipulation, but this fact is hidden from the programmer.
What it comes down to is how the language allows you to describe and implement your algorithm, not what mechanics are used under the hood to make it happen.
A while loop is imperative by its nature. Most of the time, when using while loops, you will find yourself writing code like this:
let mutable x = ...
...
while someCond do
    ...
    x <- ...
This pattern is common in imperative languages like C, C++ or C#, but not so common in functional languages.
As the other posters have said, some data structures, more exactly recursive data structures, lend themselves to recursive processing. Since the most common data structure in functional languages is by far the singly linked list, solving problems by using lists and recursive functions is a common practice.
Another argument in favor of recursive solutions is the tight relation between recursion and induction. Using a recursive solution allows the programmer to think about the problem inductively, which arguably helps in solving it.
Again, as other posters said, the fact that the compiler optimizes tail-recursive functions (obviously, not all functions can benefit from tail-call optimization) is an implementation detail which lets your recursive algorithm run in constant space.
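As a contrived sketch of the two styles side by side (the names are made up), here is the same sum written with the mutable while pattern above and as a tail-recursive function; the compiler turns the second into an equivalent loop, but no mutable state leaks into the surrounding code:
// Imperative: mutable locals plus a while loop.
let sumToWhile n =
    let mutable acc = 0
    let mutable i = 1
    while i <= n do
        acc <- acc + i
        i <- i + 1
    acc

// Functional: the accumulator is threaded through tail calls.
let sumTo n =
    let rec go acc i =
        if i > n then acc else go (acc + i) (i + 1)
    go 0 1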

Is there an idiomatic way to order function arguments in Erlang?

Seems like it's inconsistent in the lists module. For example, split has the number as the first argument and the list as the second, but sublist has the list as the first argument and the length as the second argument.
OK, a little history as I remember it and some principles behind my style.
As Christian has said the libraries evolved and tended to get the argument order and feel from the impulses we were getting just then. So for example the reason why element/setelement have the argument order they do is because it matches the arg/3 predicate in Prolog; logical then but not now. Often we would have the thing being worked on first, but unfortunately not always. This is often a good choice as it allows "optional" arguments to be conveniently added to the end; for example string:substr/2/3. Functions with the thing as the last argument were often influenced by functional languages with currying, for example Haskell, where it is very easy to use currying and partial evaluation to build specific functions which can then be applied to the thing. This is very noticeable in the higher order functions in lists.
The only influence we didn't have was from the OO world. :-)
Usually we at least managed to be consistent within a module, but not always. See lists again. We did try to have some consistency, so the argument order in the higher order functions in dict/sets match those of the corresponding functions in lists.
The problem was also aggravated by the fact that we, especially me, had a rather cavalier attitude to libraries. I just did not see them as a selling point for the language, so I wasn't that worried about it. "If you want a library which does something then you just write it" was my motto. This meant that my libraries were structured, just not always with the same structure. :-) That was how many of the initial libraries came about.
This, of course, creates unnecessary confusion and breaks the law of least astonishment, but we have not been able to do anything about it. Any suggestions of revising the modules have always been met with a resounding "no".
My own personal style is usually structured, though I don't know if it conforms to any written guidelines or standards.
I generally have the thing or things I am working on as the first arguments, or at least very close to the beginning; the order depends on what feels best. If there is a global state which is chained through the whole module, which there usually is, it is placed as the last argument and given a very descriptive name like St0, St1, ... (I belong to the church of short variable names). Arguments which are chained through functions (both input and output) I try to keep the same argument order as return order. This makes it much easier to see the structure of the code. Apart from that I try to group together arguments which belong together. Also, where possible, I try to preserve the same argument order throughout a whole module.
None of this is very revolutionary, but I find if you keep a consistent style then it is one less thing to worry about and it makes your code feel better and generally be more readable. Also I will actually rewrite code if the argument order feels wrong.
A small example which may help:
fubar({f,A0,B0}, Arg2, Ch0, Arg4, St0) ->
    {A1,Ch1,St1} = foo(A0, Arg2, Ch0, St0),
    {B1,Ch2,St2} = bar(B0, Arg4, Ch1, St1),
    Res = baz(A1, B1),
    {Res,Ch2,St2}.
Here Ch is a local chained through variable while St is a more global state. Check out the code on github for LFE, especially the compiler, if you want a longer example.
This became much longer than it should have been, sorry.
P.S. I used the word thing instead of object to avoid confusion about what I was talking about.
No, there is no consistently-used idiom in the sense that you mean.
However, there are some useful relevant hints that apply especially when you're going to be making deeply recursive calls. For instance, keeping whichever arguments will remain unchanged during tail calls in the same order/position in the argument list allows the virtual machine to make some very nice optimizations.

What is the difference between a monad and a closure?

I'm kind of confused reading the definitions of the two. Can they actually intersect in terms of definition, or am I completely lost? Thanks.
Closures, as the word tends to be used, are just functions (or blocks of code, if you like) that you can treat like a piece of data and pass to other functions, etc. (the "closed" bit is that wherever you eventually call it, it behaves just as it would if you called it where it was originally defined). A monad is (roughly) more like a context in which functions can be chained together sequentially, and controls how data is passed from one function to the next.
They're quite different, although monads will often use closures to capture logic.
Personally I would try to get solid on the definition of closures (essentially a piece of logic which also captures its environment, i.e. local variables etc) before worrying about monads. They can come later :)
There are various questions about closures on Stack Overflow - the best one to help you will depend on what platform you're working on. For instance, there's:
What are closures in .NET?
Function pointers, closures and lambda
Personally I'm only just beginning to "grok" monads (thanks to the book I'm helping out on). One day I'll get round to writing an article about them, when I feel I understand them well enough :)
A "closure" is an object comprising 1) a function, and 2) the values of its free variables where it's constructed.
A "monad" is a class of functions that can be composed in a certain way, i.e. by using associated bind and return higher-order function operators, to produce other functions.
I think monads are a little more complicated than closures because closures are just blocks of code that remember something from the point of their definitions and monads are a construct for "twisting" the usual function composition operation.
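To make that concrete, here is a minimal F# sketch (names invented for this answer): the closure is just a function value that captures a local variable, while the "monadic" part is the bind/return pair that decides how the steps are chained (here over Option, stopping at the first None):
// Closure: the returned function captures the local ref cell 'count'
// and keeps it alive between calls.
let makeCounter () =
    let count = ref 0
    fun () ->
        count.Value <- count.Value + 1
        count.Value

// Monad-style composition over Option: bind controls how one step's
// result is fed to the next step.
let bind f opt = match opt with Some x -> f x | None -> None
let ret x = Some x

let tryDivide a b = if b = 0 then None else Some (a / b)

let result =
    tryDivide 100 5
    |> bind (fun x -> tryDivide x 2)
    |> bind (fun x -> ret (x + 1))   // Some 11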

Resources