Does Rascal do tail-call optimization? In particular, if I write code using tail recursion instead of the built-in loop constructs, should I expect an efficiency penalty?
Yes, you should expect an efficiency penalty with the current implementation.
We do expect the compiler (which is being written now) to do tail-call optimization in the future.
It seems that define-funs-rec is a strict superset of what define-fun can do, according to the SMT-LIB standard. If so, is there a reason for not always using define-funs-rec, except perhaps for syntactic simplicity?
Strictly speaking, no. But define-fun-rec is rather new (as opposed to good old define-fun), so if you want greater portability and have no need for a recursive definition, you should stick to define-fun.
It is conceivable that define-fun-rec might also bring heavier machinery into a solver that is not really needed unless you actually have a recursive definition, such as a well-foundedness checker. This might end up costing some performance cycles, though I doubt it would be too much of a concern.
Is it possible to check if a function was declared as recursive, i.e. with let rec?
I have a memoize function, but it doesn't work with arbitrary recursive functions. I would like to give an error if the user calls it with such a function. (Feel free to tell me if this is a bad way to deal with errors in F#)
F# code is compiled to .NET Common Intermediate Language. The F# rec keyword is just a hint to the compiler that makes the identifier being defined available in the scope of the function itself. So I believe there is no way to find out at runtime whether a function was declared as recursive.
However, you could use the System.Diagnostics.StackTrace class (https://msdn.microsoft.com/en-us/library/system.diagnostics.stacktrace.aspx) to get and analyze stack frames at runtime. But accessing stack information has a significant performance impact (and I'm assuming your memoization function exists to speed the program up). Also, stack information can differ between Debug and Release builds of your program, since the compiler may optimize calls away.
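For illustration, a minimal sketch of that approach (the function name whoCalledMe is invented for this example; StackTrace, GetFrame and GetMethod are the real System.Diagnostics/reflection APIs):

open System.Diagnostics

// Inspect the call stack at runtime. This is slow, and unreliable when
// the JIT inlines methods or optimizes frames away (e.g. Release builds).
let whoCalledMe () =
    let trace = StackTrace()
    let frame = trace.GetFrame 1   // frame 0 is whoCalledMe itself
    frame.GetMethod().Name

Even then, nothing in the frames tells you whether the target function was defined with let rec, which was the original question.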
I don't think it can be done in a sensible and comprehensive manner.
You might want to reevaluate your problem, though. I assume your function "doesn't work" with recursive functions because it only memoizes the first call, while all the recursive calls go to the non-memoized function? Depending on your scenario, that may not be a bad thing.
Memoization trades off memory for speed. In a large system, you might not want to pay the cost of memoizing intermediate results if you only care about the final one. All those dictionaries add up over time.
If, however, you particularly care about memoizing a recursive function, you can explicitly structure it in a way that lends itself to memoization. See linked threads.
So my answer would be that a memoize function shouldn't be concerned about recursion at all. If the user wants to memoize a recursive function, it falls to him to understand the trade-offs involved, and structure the function in a way that would allow intermediate calls to be memoized if that's what he requires.
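For illustration, one common way to do that restructuring is open recursion: the function takes "itself" as a parameter, so a memoizing wrapper can intercept every recursive call. A minimal sketch (memoizeRec and fibOpen are invented names, not from the question):

open System.Collections.Generic

// Memoize a function written in open-recursive style: f receives the
// function to use for its recursive calls, and we hand it the cached
// version of itself.
let memoizeRec (f : ('a -> 'b) -> 'a -> 'b) : 'a -> 'b =
    let cache = Dictionary<'a, 'b>()
    let rec g x =
        match cache.TryGetValue x with
        | true, v -> v
        | _ ->
            let v = f g x          // recursive calls come back through g
            cache.[x] <- v
            v
    g

// The recursive function, written against an abstract "self".
let fibOpen self n =
    if n < 2 then n else self (n - 1) + self (n - 2)

let fib = memoizeRec fibOpen       // intermediate results are now cached too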
I'm playing with F# and the compiler warns me if I don't use some result (same problem described here). Since F# even has the function ignore for exactly this, it seems to be somewhat important, but I don't really understand why - why doesn't C# care about it, while F# does?
One fundamental difference between C# and F# is that in F# everything is an expression (as opposed to a mix of expressions and statements). This includes things that in C-style languages are statements, like control flow constructs.
When programming in a functional way, you want to have small pieces of referentially transparent code that you can compose together. The fact that everything is an expression plays right into that.
On the other hand, when you do something that gives you a value, and you just leave it there, you are going against that mindset. You are either doing it for some side-effect or you simply have a piece of left-over code somewhere. In either case it's fair game to warn you that you're doing something atypical.
F# discourages, but doesn't disallow, side effects, and lets you have (potentially side-effecting) expressions executed in a sequence, as long as the intermediate ones are of type unit. And this is what ignore does - it takes an argument and returns unit.
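For illustration, a minimal sketch of the warning and the idiom (the list is invented for the example):

let xs = [3; 1; 2]

List.sort xs               // compiler warning: the resulting list is implicitly ignored
List.sort xs |> ignore     // explicit: discard the value; the expression has type unit
let sorted = List.sort xs  // or bind the result if you actually need it

Here the warning is genuinely useful: List.sort returns a new list rather than mutating xs, so the first line almost certainly indicates a mistake.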
In F#, almost everything is an expression with a value.
If you neglect the value of an expression in F#, by neither binding it nor returning it, then it feels like you're making a mistake. Ignoring the value of an expression is an indication that you're depending on the side effect of an operation, and in F# you should be eschewing side effects.
In general, I want to ask:
If a problem can be solved in both the imperative way and the functional way, would the functional solution waste memory, or at least not save any, compared to the imperative one? After all, functional languages rely heavily on recursion, and recursion pushes a lot of frames onto the call stack.
And, following the question above, from a memory-optimization point of view: if a job can be done in an imperative language, will it be at least no worse (memory-wise) than doing it in a functional language?
These questions actually come from an algorithm question:
reverse a stack without using additional space:
#include <stdlib.h>

/* Minimal linked-list helpers assumed by the original snippet. */
typedef struct node { int data; struct node *next; } node;

int isempty(node *stack) { return stack == NULL; }

void push(node **stack, int data)
{
    node *n = malloc(sizeof *n);
    n->data = data; n->next = *stack; *stack = n;
}

int pop(node **stack)
{
    node *top = *stack; int data = top->data;
    *stack = top->next; free(top);
    return data;
}

/* Parks one element per call on the C call stack until the bottom is reached. */
void insert_at_bottom(node **stack, int data)
{
    if (isempty(*stack)) { push(stack, data); return; }
    int temp = pop(stack);
    insert_at_bottom(stack, data);
    push(stack, temp);
}

/* Pops every element onto the C call stack, then re-inserts each at the bottom. */
void rev_stack(node **stack)
{
    if (isempty(*stack)) return;
    int temp = pop(stack);
    rev_stack(stack);
    insert_at_bottom(stack, temp);
}
The above problem can be solved with two recursions; in my opinion, even though the code doesn't allocate additional memory explicitly, it actually "hides" that additional space in the call stack.
Of course, my question is more general; you don't have to focus on this specific example.
Thank you for your thoughtful advice!
In a theoretical sense, no. You can always transform an iterative algorithm into a recursive one and vice versa. Assuming the same algorithm, implemented with tail-call optimization, the big-O memory consumption would be exactly the same.
In a practical sense, maybe. The style of using immutable data structures in functional programming can take up a lot of memory.
IMO, using functional vs imperative programming is a matter of style. Use whichever one suits the code the best. And, if you need every last ounce of performance out of your machine, you can always write hand-optimized assembly.
It's not that easy.
First, recursion, and tail recursion in particular, need not use more memory than loops. The same holds for tail calls in general: a tail call, whether it is recursive or not, can always be compiled to a jump/branch instruction if the target machine language allows this. Hence, a program in a functional language does not, by necessity, need bigger stacks than a comparable program in an imperative language.
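For illustration, a sketch in F# (the language most of this thread is about; isEven/isOdd are invented names): the two functions call each other only in tail position, so the compiler can emit jumps/.NET tail calls instead of growing the stack. This relies on tail calls being enabled, which is the default in F# Release builds:

let rec isEven n = if n = 0 then true else isOdd (n - 1)
and isOdd n = if n = 0 then false else isEven (n - 1)

printfn "%b" (isEven 1000000)   // true; runs in constant stack space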
On the other hand, in order to minimize side effects, functional programming prefers to deal with immutable data; in fact, pure functional programming allows only immutable data. Hence functional data structures tend to be more memory-costly than the mutable ones found in imperative languages.
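A small illustration of that cost (the values are invented for the example): "replacing" one element of an immutable F# list allocates a fresh cell for every element, while a mutable array writes in place:

let xs = [1; 2; 3; 4]
let ys = List.mapi (fun i x -> if i = 2 then 99 else x) xs   // builds a whole new list

let arr = [| 1; 2; 3; 4 |]
arr.[2] <- 99                                                // in place, no allocation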
I'm learning F# (I'm new to functional programming in general, though I've used functional aspects of C# for years; let's face it, that's pretty different) and one of the things I've read is that the F# compiler identifies tail recursion and compiles it into a while loop (see http://thevalerios.net/matt/2009/01/recursion-in-f-and-the-tail-recursion-police/).
What I don't understand is why you would write a recursive function instead of a while loop if that's what it's going to turn into anyway, especially considering that you need to do some extra work to make your function recursive.
I have a feeling someone might say that the while loop is not particularly functional and you want to act all functional and whatnot so you use recursion but then why is it sufficient for the compiler to turn it into a while loop?
Can someone explain this to me?
You could use the same argument for any transformation that the compiler performs. For instance, when you're using C#, do you ever use lambda expressions or anonymous delegates? If the compiler is just going to turn those into classes and (non-anonymous) delegates, then why not just use those constructions yourself? Likewise, do you ever use iterator blocks? If the compiler is just going to turn those into state machines which explicitly implement IEnumerable<T>, then why not just write that code yourself? Or if the C# compiler is just going to emit IL anyway, why bother writing C# instead of IL in the first place? And so on.
One obvious answer to all of these questions is that we want to write code which allows us to express ourselves clearly. Likewise, there are many algorithms which are naturally recursive, and so writing recursive functions will often lead to a clear expression of those algorithms. In particular, it is arguably easier to reason about the termination of a recursive algorithm than a while loop in many cases (e.g. is there a clear base case, and does each recursive call make the problem "smaller"?).
However, since we're writing code and not mathematics papers, it's also nice to have software which meets certain real-world performance criteria (such as the ability to handle large inputs without overflowing the stack). Therefore, the fact that tail recursion is converted into the equivalent of while loops is critical for being able to use recursive formulations of algorithms.
A recursive function is often the most natural way to work with certain data structures (such as trees and F# lists). If the compiler wants to transform my natural, intuitive code into an awkward while loop for performance reasons that's fine, but why would I want to write that myself?
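For instance, here is a sketch of that naturalness (the type and names are invented for the example): a binary tree is defined recursively, and a function over it follows the same shape, whereas a while-loop version would need an explicitly managed stack of pending subtrees:

type Tree =
    | Leaf of int
    | Node of Tree * Tree

let rec sumTree t =
    match t with
    | Leaf v      -> v
    | Node (l, r) -> sumTree l + sumTree r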
Also, Brian's answer to a related question is relevant here. Higher-order functions can often replace both loops and recursive functions in your code.
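For example, a fold expresses "combine all the elements" without writing either a loop or an explicit recursive function:

let total = List.fold (+) 0 [1; 2; 3; 4]   // total = 10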
The fact that F# performs tail-call optimization is just an implementation detail that allows you to use tail recursion with the same efficiency (and no fear of a stack overflow) as a while loop. But it is just that - an implementation detail - on the surface your algorithm is still recursive and is structured that way, which for many algorithms is the most logical, functional way to represent it.
The same applies to some of F#'s list-handling internals: mutation is used internally for a more efficient implementation of list manipulation, but this fact is hidden from the programmer.
What it comes down to is how the language allows you to describe and implement your algorithm, not what mechanics are used under the hood to make it happen.
A while loop is imperative by its nature. Most of the time, when using while loops, you will find yourself writing code like this:
let mutable x = ...
...
while someCond do
    ...
    x <- ...
This pattern is common in imperative languages like C, C++ or C#, but not so common in functional languages.
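To make the contrast concrete (sumWhile and sumRec are invented names): the same computation written with a mutable while loop, and as a tail-recursive function that threads the state through arguments instead:

let sumWhile n =
    let mutable acc = 0
    let mutable i = 1
    while i <= n do
        acc <- acc + i
        i <- i + 1
    acc

let sumRec n =
    let rec go acc i =
        if i > n then acc
        else go (acc + i) (i + 1)   // tail call: compiled to a jump, like the loop
    go 0 1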
As the other posters have said, some data structures - more exactly, recursive data structures - lend themselves to recursive processing. Since the most common data structure in functional languages is by far the singly linked list, solving problems using lists and recursive functions is common practice.
Another argument in favor of recursive solutions is the tight relation between recursion and induction. Using a recursive solution allows the programmer to think about the problem inductively, which arguably helps in solving it.
Again, as other posters said, the fact that the compiler optimizes tail-recursive functions (obviously, not all functions can benefit from tail-call optimization) is an implementation detail which lets your recursive algorithm run in constant space.