Does using a lot of tail-recursion in Erlang slow it down? - erlang

I've been reading about Erlang lately and how tail-recursion is so heavily used, due to the difficulty of using iterative loops.
Doesn't this high use of recursion slow it down, what with all the function calls and the effect they have on the stack? Or does the tail recursion negate most of this?

The point is that Erlang optimizes tail calls (not only tail recursion). Optimizing tail calls is quite simple: if the return value is computed by a call to another function, then that function is not pushed onto the call stack on top of the calling function; instead, the stack frame of the current function is replaced by that of the called function. This means that tail calls don't add to the stack size.
So, no, using tail recursion doesn't slow Erlang down, nor does it pose a risk of stack overflow.
With tail call optimization in place, you can not only use simple tail recursion, but also mutual tail recursion of several functions (a tail-calls b, which tail-calls c, which tail-calls a ...). This can sometimes be a good model of computation.
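For example, here is a minimal Erlang sketch of mutual tail recursion (the module and function names are made up for illustration); is_even/1 and is_odd/1 call each other in tail position, so the stack stays flat no matter how large N is:

-module(pingpong).
-export([is_even/1]).

%% Each clause ends in a tail call, so no stack frames accumulate,
%% even for very large N.
is_even(0) -> true;
is_even(N) when N > 0 -> is_odd(N - 1).

is_odd(0) -> false;
is_odd(N) when N > 0 -> is_even(N - 1).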

Tail recursion is generally implemented using tail call elimination, which basically transforms the recursive call into a simple loop.
C# example:
uint FactorialAccum(uint n, uint accum) {
    if (n < 2) return accum;
    return FactorialAccum(n - 1, n * accum);
}

uint Factorial(uint n) {
    return FactorialAccum(n, 1);
}
to
uint FactorialAccum(uint n, uint accum) {
start:
    if (n < 2) return accum;
    accum *= n;
    n -= 1;
    goto start;
}

uint Factorial(uint n) {
    return FactorialAccum(n, 1);
}
or even better:
uint Factorial(uint n) {
    uint accum = 1;
start:
    if (n < 2) return accum;
    accum *= n;
    n -= 1;
    goto start;
}
This C# example is not real tail recursion, because the return value is modified after the recursive call returns; most compilers won't break this down into a loop:
int Power(int number, uint power) {
    if (power == 0) return 1;
    if (power == 1) return number;
    return number * Power(number, --power);
}
to
int Power(int number, uint power) {
    int result = number;
start:
    if (power == 0) return 1;
    if (power == 1) return result;
    result *= number;
    power--;
    goto start;
}

It should not affect performance in most cases. What you're looking for is not just tail calls, but tail call optimization (or tail call elimination). Tail call optimization is a compiler or runtime technique that recognizes when the last thing a function does is call another function, so that instead of pushing a new frame it can reuse the current one and 'pop the stack' straight back to the proper caller. Generally, tail call optimization can only be done when the recursive call is the last operation in the function, so you have to be careful.
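As a small illustration of that last point, here is an Erlang sketch (not from the original post): in sum/1 the addition still has to run after the recursive call returns, so the call is not in tail position and every call needs its own frame; in sum/2 the recursive call is the last operation, so the current frame can be reused:

%% Body-recursive: '+' runs after sum(T) returns, so the stack grows with the list.
sum([]) -> 0;
sum([H|T]) -> H + sum(T).

%% Tail-recursive: the recursive call is the final operation; the frame is reused.
sum([], Acc) -> Acc;
sum([H|T], Acc) -> sum(T, Acc + H).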

There is a problem pertaining to tail recursion, but it is not related to performance: Erlang's tail-recursion optimisation also eliminates stack frames from the stack trace you get when debugging.
For instance see Point 9.13 of the Erlang FAQ:
Why doesn't the stack backtrace show the right functions for this code:
-module(erl).
-export([a/0]).
a() -> b().
b() -> c().
c() -> 3 = 4. %% will cause badmatch
The stack backtrace only shows function c(), rather than a(), b() and c().
This is because of last-call-optimisation; the compiler knows it does not need
to generate a stack frame for a() or b() because the last thing it did was call another function, hence the stack frame does not appear in the stack backtrace.
This can be a bit of a pain when you hit a crash (but it does kinda go with the territory of functional programming...)

A similar optimization that separates the function calls written in the program text from the function calls made by the implementation is 'inlining'. In modern/thoughtful languages, source-level function calls often have little relation to machine-level function calls.
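For instance, the Erlang compiler can be asked to inline small functions, so a call you write in the source may not exist as a call at run time at all. A minimal sketch (module and function names are made up for illustration):

-module(inline_demo).
-export([area/1]).

%% Ask the compiler to inline pi/0 at its call sites.
-compile({inline, [pi/0]}).

pi() -> 3.14159.

%% After inlining, area/1 uses the literal directly; no pi/0 call is made at run time.
area(R) -> pi() * R * R.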

Related

Dart: What exactly is the behavior of a synchronous generator function to generate a result "lazily"?

https://dart.dev/guides/language/language-tour#generators
When you need to lazily produce a sequence of values, consider using a
generator function. Dart has built-in support for two kinds of
generator functions:
As explained above, what exactly does "lazily" mean for a synchronous generator function?
Is there a difference in behavior from a normal function?
void main() {
  Iterable<int> list1 = naturalsTo(3);
  print(list1.toList()); // [0, 1, 2]
  print(naturalsTo(3).toList()); // [0, 1, 2]
}

Iterable<int> naturalsTo(int n) sync* {
  int k = 0;
  while (k < n) yield k++;
}

Iterable<int> naturalsTo2(int n) {
  int k = 0;
  List<int> resultList = [];
  for (k; k < n; k++) {
    resultList.add(k);
  }
  return resultList;
}
The results of the above code both return the same list, but I'm not sure how they behave differently.
What exactly does "lazily" mean?
Lazy evaluation is a general term to describe behavior where a result isn't computed until it's needed. It defers computation to avoid computing results that won't be used.
Your example (even after correcting it to not call the same function for both cases) does not demonstrate the difference. Instead consider a situation where:
Each loop iteration has an observable side-effect. In the example below, we'll use print, but in practice that observable side-effect usually is that each loop iteration takes some measurable amount of time.
We don't need to iterate over all elements.
void main() {
  print(naturalsTo(3).take(2).toList());
  print(naturalsTo2(3).take(2).toList());
}

Iterable<int> naturalsTo(int n) sync* {
  int k = 0;
  while (k < n) {
    print(k);
    yield k++;
  }
}

Iterable<int> naturalsTo2(int n) {
  int k = 0;
  List<int> resultList = [];
  for (k; k < n; k++) {
    print(k);
    resultList.add(k);
  }
  return resultList;
}
The first case (which uses the synchronous generator) prints:
0
1
[0, 1]
The second case (which uses a normal function) prints:
0
1
2
[0, 1]
Lazily means it calculates the values as you need them.
The thing is, though, when you say toList() you actually say "I need them all now", so you lose all the benefits of the lazy calculation.
To demonstrate how it works, let's say this is the function; notice the print:
Iterable<int> naturalsTo(int n) sync* {
  int k = 0;
  while (k < n) {
    print(k);
    yield k++;
  }
}
Then execute this code:
var a = naturalsTo(10).toList();
print("=====");
var b = naturalsTo(10);
print("-----");
var c = b.elementAt(5);
You will see it generates this output:
0
1
2
3
4
5
6
7
8
9
=====
-----
0
1
2
3
4
5
So, just creating the iterable doesn't execute any of the code in it. Converting it to a list generates all the values. And requesting a single value from an iterable only generates the values up to that value.
This can be beneficial if the calculations take a long time to process, or even when the iterable generates an infinite sequence. It's often used in ListViews so that only the rows shown on screen are calculated.

How to do 'function pointers' in Rascal?

Does Rascal support function pointers or something like this to do this like Java Interfaces?
Essentially I want to extract specific (changing) logic from a common logic block into separate functions. The function to be used is passed to the common block, which then calls it. In C we can do this with function pointers, or with interfaces in Java.
First, I want to know what this general concept is called in the language-design world.
I checked the Rascal Function Helppage, but it provides no clarification on this aspect.
So e.g. I have:
int getValue(str input) {
    ....
}
int getValue2(str input) {
    ...
}
Now I want to say:
WhatDatatype? func = getValue2; // how to do this?
Now I can pass this to an another function and then:
int val = invoke_function(func,"Hello"); // how to invoke?, and pass parameters and get ret value
Tx,
Jos
This page in the tutor has an example of using higher-order functions, which are the Rascal feature closest to function pointers:
http://tutor.rascal-mpl.org/Rascal/Rascal.html#/Rascal/Concepts/Functions/Functions.html
You can define anonymous (unnamed) functions, called closures in Java; assign them to variables; pass them as arguments to functions (higher-order functions); etc. Here is an example:
rascal>myfun = int(int x) { return x + 1; };
int (int): int (int);
rascal>myfun;
int (int): int (int);
rascal>myfun(3);
int: 4
rascal>int applyIntFun(int(int) f, int x) { return f(x); }
int (int (int), int): int applyIntFun(int (int), int);
rascal>applyIntFun(myfun,10);
int: 11
The first command defines an increment function, int(int x) { return x + 1; }, and assigns this to variable myfun. The rest of the code would work the same if instead this was
int myfun(int x) { return x + 1; }
The second command just shows the type, which is a function that takes and returns int. The third command calls the function with value 3, returning 4. The fourth command then shows a function which takes a function as a parameter. This function parameter, f, will then be called with argument x. The final command just shows an example of using it.

Tail recursive mergesort algorithm

I've implemented a recursive mergesort algorithm:
-module(ms).
-import(lists, [sublist/3, delete/2, min/1, reverse/1]).
-export([mergesort/1]).

mergesort([]) ->
    [];
mergesort([N]) ->
    [N];
mergesort(L) ->
    mergesort(split(1, L), split(2, L), []).

mergesort(L1, L2, []) ->
    case {sorted(L1), sorted(L2)} of
        {true, true} ->
            merge(L1, L2, []);
        {true, false} ->
            merge(L1, mergesort(split(1, L2), split(2, L2), []), []);
        {false, true} ->
            merge(mergesort(split(1, L1), split(2, L1), []), L2, []);
        {false, false} ->
            merge(mergesort(split(1, L1), split(2, L1), []), mergesort(split(1, L2), split(2, L2), []), [])
    end.

merge([], [], R) ->
    reverse(R);
merge(L, [], R) ->
    merge(delete(min(L), L), [], [min(L)|R]);
merge([], L, R) ->
    merge([], delete(min(L), L), [min(L)|R]);
merge([H1|T1], [H2|T2], R) when H1 < H2 ->
    merge(T1, [H2|T2], [H1|R]);
merge([H1|T1], [H2|T2], R) when H1 >= H2 ->
    merge([H1|T1], T2, [H2|R]).

split(1, L) ->
    sublist(L, 1, ceiling(length(L)/2));
split(2, L) ->
    sublist(L, ceiling(length(L)/2 + 1), length(L)).

%% sorted/1 is assumed by mergesort/3 but was not shown in the post;
%% a straightforward definition:
sorted([]) -> true;
sorted([_]) -> true;
sorted([A, B|T]) when A =< B -> sorted([B|T]);
sorted(_) -> false.

ceiling(X) when X < 0 ->
    trunc(X);
ceiling(X) ->
    T = trunc(X),
    case X - T == 0 of
        true -> T;
        false -> T + 1
    end.
However, I'm irked by the fact that mergesort/3 is not tail recursive (TR), and that it is verbose.
I guess the problem here is that I'm not aware of the TR 'template' I would use. I understand how I would implement a TR function that can be defined in terms of a series, for example: that would just move the arguments of the function up the series. But for the case in which we merge a sublist conditionally on the natural recursion of the rest of the list, I'm at a loss.
Therefore, I would like to ask:
1) How can I make mergesort/3 TR?
2) What resources can I use to understand Erlang tail recursion in depth?
Your merge sort is not tail recursive because the last function called in mergesort/3 is merge/3. You call mergesort/3 in the arguments of merge/3, so the stack has to grow: the outer mergesort/3 call is not yet finished, and its stack frame can't be reused.
To write it in a TR style, you need to think of it as imperatively as you can. Every TR function is easily rewritten as an iterative while loop. Consider:
loop(Arg) ->
    NewArg = something_happens_to(Arg),
    loop(NewArg) or return NewArg.
And:
data = something;
while (1) {
    ...
    break the loop or modify data
    ...
} // data equals NewArg at the end of an iteration
Here is my TR merge-sort example. It's a bottom-up way of thinking. I used the merge/3 function from your module.
ms(L) ->
    ms_iteration([[N] || N <- L], []).

ms_iteration([], []) ->          % nothing to do
    [];
ms_iteration([], [OneSortedList]) ->   % nothing left to do
    OneSortedList;
ms_iteration([], MergedLists) ->
    ms_iteration(MergedLists, []);     % next merging iteration
ms_iteration([L], MergedLists) ->      % can't be merged yet but it's sorted
    ms_iteration([], [L | MergedLists]);
ms_iteration([L1, L2 | ToMergeTail], MergedLists) ->   % merging two sorted lists
    ms_iteration(ToMergeTail, [merge(L1, L2, []) | MergedLists]).
It's nicely explained here: http://learnyousomeerlang.com/recursion .

F# using accumulator, still getting stack overflow exception

In the following function, I've attempted to set up tail recursion via the usage of an accumulator. However, I'm getting stack overflow exceptions, which leads me to believe that the way I'm setting up my function isn't enabling tail recursion correctly.
// F# attempting to make a tail recursive call via accumulator
let rec calc acc startNum =
    match startNum with
    | d when d = 1 -> List.rev (d::acc)
    | e when e % 2 = 0 -> calc (e::acc) (e/2)
    | _ -> calc (startNum::acc) (startNum * 3 + 1)
It is my understanding that using the acc would allow the compiler to see that there is no need to keep all the stack frames around for every recursive call, since it can stuff the result of each pass in acc and return from each frame. There is obviously something I don't understand about how to use the accumulator value correctly so the compiler does tail calls.
Stephen Swensen was correct in noting as a comment to the question that if you debug, VS has to disable the tail calls (else it wouldn't have the stack frames to follow the call stack). I knew that VS did this but just plain forgot.
After getting bitten by this one, I wonder if it is possible for the runtime or compiler to throw a better exception. Since the compiler knows both that you are debugging and that you wrote a recursive function, it seems to me that it could give you a hint such as:
'Stack Overflow Exception: a recursive function does not tail call by default when in debug mode'
It does appear that this is properly getting converted into a tail call when compiling with .NET Framework 4. Notice that in Reflector it translates your function into a while(true) as you'd expect the tail functionality in F# to do:
[CompilationArgumentCounts(new int[] { 1, 1 })]
public static FSharpList<int> calc(FSharpList<int> acc, int startNum)
{
    while (true)
    {
        int num = startNum;
        switch (num)
        {
            case 1:
            {
                int d = num;
                return ListModule.Reverse<int>(FSharpList<int>.Cons(d, acc));
            }
        }
        int e = num;
        if ((e % 2) == 0)
        {
            startNum = e / 2;
            acc = FSharpList<int>.Cons(e, acc);
        }
        else
        {
            startNum = (startNum * 3) + 1;
            acc = FSharpList<int>.Cons(startNum, acc);
        }
    }
}
Your issue isn't stemming from it not being a tail call (if you are using F# 2.0, I don't know what the results will be). How exactly are you using this function? (Input parameters.) Once I get a better idea of what the function does, I can update my answer to hopefully solve it.

"int -> int -> int" What does this mean in F#?

I wonder what this means in F#.
“a function taking an integer, which returns a function which takes an integer and returns an integer.”
But I don't understand this well.
Can anyone explain it clearly?
[Update]:
> let f1 x y = x+y ;;
val f1 : int -> int -> int
What does this mean?
F# types
Let's begin from the beginning.
F# uses the colon (:) notation to indicate types of things. Let's say you define a value of type int:
let myNumber = 5
F# Interactive will understand that myNumber is an integer, and will tell you this by:
myNumber : int
which is read as
myNumber is of type int
F# functional types
So far so good. Let's introduce something else: functional types. A functional type is simply the type of a function. F# uses -> to denote a functional type. This arrow symbolizes that what is written on its left-hand side is transformed into what is written on its right-hand side.
Let's consider a simple function, that takes one argument and transforms it into one output. An example of such a function would be:
isEven : int -> bool
This introduces the name of the function (on the left of the :), and its type. This line can be read in English as:
isEven is of type function that transforms an int into a bool.
Note that to correctly interpret what is being said, you should make a short pause just after the part "is of type", and then read the rest of the sentence at once, without pausing.
In F# functions are values
In F#, functions are (almost) no more special than ordinary types. They are things that you can pass around to functions, return from functions, just like bools, ints or strings.
So if you have:
myNumber : int
isEven : int -> bool
You should consider int and int -> bool as two entities of the same kind: types. Here, myNumber is a value of type int, and isEven is a value of type int -> bool (this is what I'm trying to symbolize when I talk about the short pause above).
Function application
Values of types that contain -> also happen to be called functions, and they have special powers: you can apply a function to a value. So, for example,
isEven myNumber
means that you are applying the function called isEven to the value myNumber. As you can expect by inspecting the type of isEven, it will return a boolean value. If you have correctly implemented isEven, it would obviously return false.
A function that returns a value of a functional type
Let's define a generic function to determine whether an integer is a multiple of some other integer. We can imagine that our function's type will be (the parentheses are here to help you understand; they may or may not be written, as explained below):
isMultipleOf : int -> (int -> bool)
As you can guess, this is read as:
isMultipleOf is of type (PAUSE) function that transforms an int into (PAUSE) function that transforms an int into a bool.
(here the (PAUSE) denote the pauses when reading out loud).
We will define this function later. Before that, let's see how we can use it:
let isEven = isMultipleOf 2
F# interactive would answer:
isEven : int -> bool
which is read as
isEven is of type int -> bool
Here, isEven has type int -> bool, since we have just given the value 2 (int) to isMultipleOf, which, as we have already seen, transforms an int into an int -> bool.
We can view this function isMultipleOf as a sort of function creator.
Definition of isMultipleOf
So now let's define this mystical function-creating function.
let isMultipleOf n x =
    (x % n) = 0
Easy, huh?
If you type this into F# Interactive, it will answer:
isMultipleOf : int -> int -> bool
Where are the parenthesis?
Note that there are no parentheses. This is not particularly important for you now. Just remember that the arrows are right associative. That is, if you have
a -> b -> c
you should interpret it as
a -> (b -> c)
The right in right associative means that you should interpret as if there were parenthesis around the rightmost operator. So:
a -> b -> c -> d
should be interpreted as
a -> (b -> (c -> d))
Usages of isMultipleOf
So, as you have seen, we can use isMultipleOf to create new functions:
let isEven = isMultipleOf 2
let isOdd = not << isEven
let isMultipleOfThree = isMultipleOf 3
let endsWithZero = isMultipleOf 10
F# Interactive would respond:
isEven : int -> bool
isOdd : int -> bool
isMultipleOfThree : int -> bool
endsWithZero : int -> bool
But you can use it differently. If you don't want to (or need to) create a new function, you can use it as follows:
isMultipleOf 10 150
This would return true, as 150 is a multiple of 10. This is exactly the same as creating the function endsWithZero and then applying it to the value 150.
Actually, function application is left associative, which means that the line above should be interpreted as:
(isMultipleOf 10) 150
That is, you put the parentheses around the leftmost function application.
Now, if you can understand all this, your example (which is the canonical CreateAdder) should be trivial!
Some time ago someone asked this question, which deals with exactly the same concept, but in JavaScript. In my answer I give two canonical examples (CreateAdder, CreateMultiplier) in JavaScript that are somewhat more explicit about returning functions.
I hope this helps.
The canonical example of this is probably an "adder creator" - a function which, given a number (e.g. 3) returns another function which takes an integer and adds the first number to it.
So, for example, in pseudo-code
x = CreateAdder(3)
x(5) // returns 8
x(10) // returns 13
CreateAdder(20)(30) // returns 50
I'm not quite comfortable enough in F# to try to write it without checking it, but the C# would be something like:
public static Func<int, int> CreateAdder(int amountToAdd)
{
    return x => x + amountToAdd;
}
Does that help?
EDIT: As Bruno noted, the example you've given in your question is exactly the example I've given C# code for, so the above pseudocode would become:
let x = f1 3
x 5 // Result: 8
x 10 // Result: 13
f1 20 30 // Result: 50
It's a function that takes an integer and returns a function that takes an integer and returns an integer.
This is functionally equivalent to a function that takes two integers and returns an integer. This way of treating functions that take multiple parameters is common in functional languages and makes it easy to partially apply a function on a value.
For example, assume there's an add function that takes two integers and adds them together:
let add x y = x + y
You have a list and you want to add 10 to each item. You'd partially apply the add function to the value 10. That binds one of the parameters to 10 and leaves the other argument unbound.
let list = [1; 2; 3; 4]
let listPlusTen = List.map (add 10) list
This trick makes composing functions very easy and makes them very reusable. As you can see, you don't need to write another function that adds 10 to the list items to pass it to map. You have just reused the add function.
You usually interpret this as a function that takes two integers and returns an integer.
You should read about currying.
a function taking an integer, which returns a function which takes an integer and returns an integer
The last part of that:
a function which takes an integer and returns an integer
should be rather simple, C# example:
public int Test(int takesAnInteger) { return 0; }
So we're left with
a function taking an integer, which returns (a function like the one above)
C# again:
public int Test1(int takesAnInteger) { return 0; }
public int Test2(int takesAnInteger) { return 1; }

public Func<int, int> Test(int takesAnInteger) {
    if (takesAnInteger == 0) {
        return Test1;
    } else {
        return Test2;
    }
}
You may want to read
F# function types: fun with tuples and currying
In F# (and many other functional languages), there's a concept called curried functions. This is what you're seeing. Essentially, every function takes one argument and returns one value.
This seems a bit confusing at first, because you can write let add x y = x + y and it appears to add two arguments. But actually, the original add function only takes the argument x. When you apply it, it returns a function that takes one argument (y) and has the x value already filled in. When you then apply that function, it returns the desired integer.
This is shown in the type signature. Think of the arrow in a type signature as meaning "takes the thing on my left side and returns the thing on my right side". In the type int -> int -> int, this means that it takes an argument of type int — an integer — and returns a function of type int -> int — a function that takes an integer and returns an integer. You'll notice that this precisely matches the description of how curried functions work above.
Example:
let f b a = pown a b // f b a = a^b (b is the exponent)
is a function that takes an int (the exponent) and returns a function that raises its argument to that exponent, like
let sqr = f 2
or
let tothepowerofthree = f 3
so
sqr 5 = 25
tothepowerofthree 3 = 27
The concept is called a higher-order function and is quite common in functional programming.
Functions themselves are just another kind of data. Hence you can write functions that return other functions. Of course you can still have a function that takes an int as a parameter and returns something else. Combine the two and consider the following example (in Python):
def mult_by(a):
    def _mult_by(x):
        return x * a
    return _mult_by

mult_by_3 = mult_by(3)
print(mult_by_3(3))
9
(sorry for using Python, but I don't know F#)
There are already lots of answers here, but I'd like to offer another take. Sometimes explaining the same thing in lots of different ways helps you to 'grok' it.
I like to think of functions as "you give me something, and I'll give you something else back"
So a Func<int, string> says "you give me an int, and I'll give you a string".
I also find it easier to think in terms of 'later': "When you give me an int, I'll give you a string". This is especially important when you see things like myfunc = x => y => x + y ("When you give myfunc an x, you get back something which, when you give it a y, will return x + y").
(By the way, I'm assuming you're familiar with C# here)
So we could express your int -> int -> int example as Func<int, Func<int, int>>.
Another way that I look at int -> int -> int is that you peel away each element from the left by providing an argument of the appropriate type. And when you have no more ->'s, you're out of 'laters' and you get a value.
(Just for fun) you can transform a function which takes all its arguments in one go into one which takes them 'progressively' (the official term for applying them progressively is 'partial application'); this is called 'currying':
static void Main()
{
    // define a simple add function
    Func<int, int, int> add = (a, b) => a + b;
    // curry so we can apply one parameter at a time
    var curried = Curry(add);
    // 'build' an incrementer out of our add function
    var inc = curried(1); // (var inc = Curry(add)(1) works here too)
    Console.WriteLine(inc(5)); // returns 6
    Console.ReadKey();
}

static Func<T, Func<T, T>> Curry<T>(Func<T, T, T> f)
{
    return a => b => f(a, b);
}
Here are my 2 cents. By default, F# functions enable partial application, or currying. This means that when you define this:
let adder a b = a + b;;
You are defining a function that takes an integer and returns a function that takes an integer and returns an integer, or int -> int -> int. Currying then allows you to partially apply a function to create another function:
let twoadder = adder 2;;
//val it: int -> int
The above code predefines a to 2, so that whenever you call twoadder 3 it will simply add two to the argument.
The syntax where the function parameters are separated by space is equivalent to this lambda syntax:
let adder = fun a -> fun b -> a + b;;
This is a more readable way to see that the two functions are actually chained.
