I am getting familiar with Sequential Erlang (and with functional programming thinking) now. I want to implement the following two functions without the help of BIFs. One is leftrotate (which I have come up with a solution for) and the other is rightrotate (which I am asking about here).
-export([leftrotate/2, rightrotate/2]).
%%(1) left rotate a list
leftrotate(List, 0) ->
    List;
leftrotate([Head | Tail], Times) ->
    List = append(Tail, Head),
    leftrotate(List, Times - 1).

append([], Elem) ->
    [Elem];
append([H|T], Elem) ->
    [H | append(T, Elem)].
%%right rotate a list, how?
%%
I don't want to use BIFs in this exercise. How can I achieve the right rotation?
A related, and slightly more important, question: how can I tell whether one of my implementations is efficient or not (i.e., whether it avoids unnecessary recursion compared to implementing the same thing with the help of a BIF, and so on)?
I think BIFs are provided to offer efficient implementations of things that functional programming is not good at (or that would not perform optimally if done in a 'functional way').
The efficiency problem you mention has nothing to do with excessive recursion (function calls are cheap), and everything to do with walking and rebuilding the list. Every time you add something to the end of a list you have to walk and copy the entire list, as is obvious from your implementation of append. So, to rotate a list N steps requires us to copy the entire list out N times. We can use lists:split (as seen in one of the other answers) to do the entire rotate in one step, but what if we don't know in advance how many steps we need to rotate?
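For reference, a one-step rotation built on lists:split might look like this (just a sketch; it assumes 0 =< N =< length(List)):
rotate_left_split(List, N) ->
    %% Split off the first N elements and move them to the back in one step.
    {Front, Back} = lists:split(N, List),
    Back ++ Front.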
A list really isn't the ideal data structure for this task. Let's say that instead we use a pair of lists, one for the head and one for the tail; then we can rotate easily by moving elements from one list to the other.
So, carefully avoiding calling anything from the standard library, we have:
rotate_right(List, N) ->
    to_list(n_times(N, fun rotate_right/1, from_list(List))).

rotate_left(List, N) ->
    to_list(n_times(N, fun rotate_left/1, from_list(List))).

from_list(Lst) ->
    {Lst, []}.

to_list({Left, Right}) ->
    Left ++ reverse(Right).

n_times(0, _, X) -> X;
n_times(N, F, X) -> n_times(N - 1, F, F(X)).

rotate_right({[], []}) ->
    {[], []};
rotate_right({[H|T], Right}) ->
    {T, [H|Right]};
rotate_right({[], Right}) ->
    rotate_right({reverse(Right), []}).

rotate_left({[], []}) ->
    {[], []};
rotate_left({Left, [H|T]}) ->
    {[H|Left], T};
rotate_left({Left, []}) ->
    rotate_left({[], reverse(Left)}).

reverse(Lst) ->
    reverse(Lst, []).

reverse([], Acc) ->
    Acc;
reverse([H|T], Acc) ->
    reverse(T, [H|Acc]).
The module queue provides a data structure something like this. I've written this without reference to that though, so theirs is probably more clever.
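For comparison, a single right-rotation step (moving the last element to the front) built on the standard queue module might look like this sketch, using queue:from_list/1, queue:out_r/1, queue:in_r/2 and queue:to_list/1:
rotate_right_queue(List) ->
    Q = queue:from_list(List),
    case queue:out_r(Q) of
        %% Take the last element off the rear and push it onto the front.
        {{value, Last}, Q2} -> queue:to_list(queue:in_r(Last, Q2));
        {empty, _} -> []
    end.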
First, your implementation is a bit buggy (try it with the empty list...)
Second, I would suggest something like:
-module(foo).
-export([left/2, right/2]).
left(List, Times) ->
    left(List, Times, []).

%% the empty list would otherwise loop forever below, so handle it first
left([], _Times, []) ->
    [];
left([], Times, Acc) when Times > 0 ->
    left(reverse(Acc), Times, []);
left(List, 0, Acc) ->
    List ++ reverse(Acc);
left([H|T], Times, Acc) ->
    left(T, Times-1, [H|Acc]).

right(List, Times) ->
    reverse(foo:left(reverse(List), Times)).

reverse(List) ->
    reverse(List, []).

reverse([], Acc) ->
    Acc;
reverse([H|T], Acc) ->
    reverse(T, [H|Acc]).
Third, for benchmarking your functions, you can do something like:
test(Params) ->
    {Time1, _} = timer:tc(?MODULE, function1, Params),
    {Time2, _} = timer:tc(?MODULE, function2, Params),
    {{solution1, Time1}, {solution2, Time2}}.
I didn't test the code, so look at it critically, just get the idea.
Moreover, you might want to implement your own "reverse" function. It is trivial using tail recursion. Why not try?
If you're trying to think in functional terms then perhaps consider implementing right rotate in terms of your left rotate:
rightrotate( List, 0 ) ->
    List;
rightrotate( List, Times ) ->
    lists:reverse( leftrotate( lists:reverse( List ), Times ) ).
Not saying this is the best idea or anything :)
Your implementation will not be efficient since the list is not the correct representation to use if you need to change item order, as in a rotation. (Imagine a round-robin scheduler with many thousands of jobs, taking the front job and placing it at the end when done.)
So we're actually just asking ourselves what the least-overhead way to do this on lists would be anyway. But then, what qualifies as overhead that we want to get rid of? One can often save a bit of computation by consing (allocating) more objects, or the other way around. One can also often keep a larger-than-needed live set during the computation and save allocation that way.
first_last([First|Tail]) ->
    put_last(First, Tail).

put_last(Item, []) ->
    [Item];
put_last(Item, [H|Tl]) ->
    [H|put_last(Item, Tl)].
Ignoring corner cases with empty lists and such, the above code conses up the final resulting list directly. Very little garbage is allocated. The final list is built as the stack unwinds. The cost is that we need memory for both the entire input list and the list under construction during this operation, but it is a short, transient thing. My damage from Java and Lisp makes me reach for optimizing away excess consing, but in Erlang you don't risk the global full GC that kills every dream of real-time properties. Anyway, I generally like the above approach.
last_first(List) ->
    last_first(List, []).

last_first([Last], Rev) ->
    [Last|lists:reverse(Rev)];
last_first([H|Tl], Rev) ->
    last_first(Tl, [H|Rev]).
This approach uses a temporary list called Rev that is disposed of after we have passed it to lists:reverse/1 (which calls the BIF lists:reverse/2, but that is not doing anything interesting). By creating this temporary reversed list, we avoid having to traverse the list twice: once to build a list containing everything but the last item, and once more to get the last item.
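For contrast, here is a sketch of the two-pass version this avoids, with two hypothetical helpers: one traversal to find the last element and another to rebuild everything before it:
last_first_two_pass(List) ->
    [last_of(List) | all_but_last(List)].

%% First traversal: find the last element.
last_of([X]) -> X;
last_of([_|T]) -> last_of(T).

%% Second traversal: rebuild the list without its last element.
all_but_last([_]) -> [];
all_but_last([H|T]) -> [H | all_but_last(T)].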
One quick comment on your code: I would change the name of the function you call append. In a functional context, append usually means adding a new list to the end of a list, not just one element. No sense in adding confusion.
As mentioned, lists:split is not a BIF; it is a library function written in Erlang. What exactly counts as a BIF is not properly defined anyway.
The split and split-like solutions look quite nice. As someone has already pointed out, a list is not really the best data structure for this type of operation. It depends, of course, on what you are using it for.
Left:
lrl([], _N) ->
    [];
lrl(List, N) ->
    lrl2(List, List, [], 0, N).

% no more rotation needed, return head + rotated list reversed
lrl2(_List, Head, Tail, _Len, 0) ->
    Head ++ lists:reverse(Tail);
% list is apparently shorter than N, start again with N rem Len
lrl2(List, [], _Tail, Len, N) ->
    lrl2(List, List, [], 0, N rem Len);
% rotate one
lrl2(List, [H|Head], Tail, Len, N) ->
    lrl2(List, Head, [H|Tail], Len+1, N-1).
Right:
lrr([], _N) ->
    [];
lrr(List, N) ->
    L = erlang:length(List),
    R = N rem L,                       % check if rotation is more than length
    {H, T} = lists:split(L - R, List), % cut off the tail of the list
    T ++ H.                            % swap tail and head
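For example, assuming both functions live in a module called foo:
1> foo:lrl([1,2,3,4,5], 2).
[3,4,5,1,2]
2> foo:lrr([1,2,3,4,5], 2).
[4,5,1,2,3]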
I am working on simple list functions in Erlang to learn the syntax.
Everything was looking very similar to code I wrote for the Prolog version of these functions until I got to an implementation of 'intersection'.
The cleanest solution I could come up with:
myIntersection([],_) -> [];
myIntersection([X|Xs],Ys) ->
    UseFirst = myMember(X,Ys),
    myIntersection(UseFirst,X,Xs,Ys).

myIntersection(true,X,Xs,Ys) ->
    [X|myIntersection(Xs,Ys)];
myIntersection(_,_,Xs,Ys) ->
    myIntersection(Xs,Ys).
To me, this feels slightly like a hack. Is there a more canonical way to handle this? By 'canonical', I mean an implementation true to the spirit of Erlang's design.
Note: the essence of this question is conditional handling of user-defined predicate functions. I am not asking for someone to point me to a library function. Thanks!
I like this one:
inter(L1,L2) -> inter(lists:sort(L1),lists:sort(L2),[]).
inter([H1|T1],[H1|T2],Acc) -> inter(T1,T2,[H1|Acc]);
inter([H1|T1],[H2|T2],Acc) when H1 < H2 -> inter(T1,[H2|T2],Acc);
inter([H1|T1],[_|T2],Acc) -> inter([H1|T1],T2,Acc);
inter([],_,Acc) -> Acc;
inter(_,_,Acc) -> Acc.
it gives the exact intersection:
inter("abcd","efgh") -> []
inter("abcd","efagh") -> "a"
inter("abcd","efagah") -> "a"
inter("agbacd","eafagha") -> "aag"
If you want each value to appear only once, simply replace one of the lists:sort/1 calls with lists:usort/1.
Edit
As #9000 says, one clause is useless:
inter(L1,L2) -> inter(lists:sort(L1),lists:sort(L2),[]).
inter([H1|T1],[H1|T2],Acc) -> inter(T1,T2,[H1|Acc]);
inter([H1|T1],[H2|T2],Acc) when H1 < H2 -> inter(T1,[H2|T2],Acc);
inter([H1|T1],[_|T2],Acc) -> inter([H1|T1],T2,Acc);
inter(_,_,Acc) -> Acc.
gives the same result, and
inter(L1,L2) -> inter(lists:usort(L1),lists:sort(L2),[]).
inter([H1|T1],[H1|T2],Acc) -> inter(T1,T2,[H1|Acc]);
inter([H1|T1],[H2|T2],Acc) when H1 < H2 -> inter(T1,[H2|T2],Acc);
inter([H1|T1],[_|T2],Acc) -> inter([H1|T1],T2,Acc);
inter(_,_,Acc) -> Acc.
removes any duplicate in the output.
If you know that there are no duplicate values in the input list, I think that
inter(L1,L2) -> [X || X <- L1, Y <- L2, X == Y].
is the shortest solution in terms of code, but much slower (1 second to evaluate the intersection of 2 lists of 10,000 elements, compared to 16 ms for the previous solution, with O(n²) complexity comparable to #David Varela's proposal; the ratio is 70 s compared to 280 ms with 2 lists of 100,000 elements! And I guess there is a very high risk of running out of memory with bigger lists.)
The canonical way ("canonical" as in "SICP") is to use an accumulator.
myIntersection(A, B) -> myIntersectionInner(A, B, []).

myIntersectionInner([], _, Acc) -> Acc;
myIntersectionInner(_, [], Acc) -> Acc;
myIntersectionInner([A|As], Bs, Acc) ->
    case myMember(A, Bs) of
        true ->
            myIntersectionInner(As, Bs, [A|Acc]);
        false ->
            myIntersectionInner(As, Bs, Acc)
    end.
This implementation of course produces duplicates if duplicates are present in both inputs. This can be fixed at the expense of also calling myMember(A, Acc) and only adding A if the result is negative.
My apologies for the approximate syntax.
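A sketch of that duplicate-free variant, reusing the myMember/2 function assumed from the question:
myIntersectionUnique(Xs, Ys) -> myIntersectionUniqueInner(Xs, Ys, []).

myIntersectionUniqueInner([], _, Acc) -> Acc;
myIntersectionUniqueInner(_, [], Acc) -> Acc;
myIntersectionUniqueInner([A|As], Bs, Acc) ->
    %% Only keep A if it is in Bs and not already in the accumulator.
    case myMember(A, Bs) andalso not myMember(A, Acc) of
        true  -> myIntersectionUniqueInner(As, Bs, [A|Acc]);
        false -> myIntersectionUniqueInner(As, Bs, Acc)
    end.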
Although I appreciate the efficient implementations suggested, my intention was to better understand Erlang's implementation. As a beginner, I think #7stud's comment, particularly http://erlang.org/pipermail/erlang-questions/2009-December/048101.html, was the most illuminating. In essence, 'case' and pattern matching in functions use the same mechanism under the hood, although functions should be preferred for clarity.
In a real system, I would go with one of #Pascal's implementations; depending on whether 'intersect' did any heavy lifting.
I was reading Learn You Some Erlang and I came upon this example in the Recursion chapter.
tail_sublist(_, 0, SubList) -> SubList;
tail_sublist([], _, SubList) -> SubList;
tail_sublist([H|T], N, SubList) when N > 0 ->
    tail_sublist(T, N-1, [H|SubList]).
The author goes on to explain that there is a fatal flaw in this code: the sublists it produces come out reversed, and we would have to reverse them again to get the correct output. In contrast, what I did was use the ++ operator to avoid reversing the list later.
sublist_tail([],_,Acc) -> Acc;
sublist_tail(_,0,Acc) -> Acc;
sublist_tail([H|T],N,Acc) -> sublist_tail(T,N-1,Acc++[H]).
My question is, is the ++ operator more expensive than the | operator? And if it is, would my solution (using ++ operator) still be slow compared to the author's solution (including reversing the list to get the correct output)?
You might want to read about this issue in the Erlang efficiency guide, which says that building the list via | and then reversing the result is more efficient than using the appending ++ operator. If you want to know the performance difference, use timer:tc:
1> timer:tc(fun() -> lists:reverse(lists:foldl(fun(V, Acc) -> [V|Acc] end, [], lists:seq(1,1000))) end).
{1627,
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,
23,24,25,26,27|...]}
2> timer:tc(fun() -> lists:foldl(fun(V, Acc) -> Acc++[V] end, [], lists:seq(1,1000)) end).
{6216,
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,
23,24,25,26,27|...]}
Both approaches create lists of 1000 integers, but these measurements based on Erlang/OTP 17.5 show the prepending/reversing version is roughly 4x faster than the appending version (YMMV of course).
is the ++ operator more expensive than the | operator?
That depends. If you use it correctly, then no. ++ is only dangerous when you have a big left-hand-side operand.
Each time a "++"-operator is invoked on a left-hand List (like: List1 ++ List2), you are creating a new List, that is a copy of your left-hand operand (List1). Each copy operation then has a runtime, that is dependent on the length of your List1 (which keeps growing with your iterations).
So, if you prepend your values 'head first', you don't have to perform a copy-operation over the whole list in each step. This also means, accumulation with ++ at the head of the List wouldn't be so bad either, since only the "H"-value is copied once in each iteration:
sublist_tail([H|T],N,Acc) -> sublist_tail(T,N-1,[H]++Acc).
But if you are already accumulating head-first (and thus have to reverse later anyhow), you can do it with the cons-operator (|)
sublist_tail([H|T],N,Acc) -> sublist_tail(T,N-1,[H|Acc]).
This is the 'proper' way, since (please correct me if I am wrong) ++ is only syntactic sugar and is implemented internally with a cons-operator (|).
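As a rough illustration of that view (not the actual implementation), an append written in terms of the cons operator would look like this:
%% Rebuild the left operand element by element, ending in the right operand.
my_append([], Ys) -> Ys;
my_append([X|Xs], Ys) -> [X | my_append(Xs, Ys)].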
I'm completely new to Erlang. As an exercise to learn the language, I'm trying to implement the function sublist using tail recursion and without using reverse. Here's the function that I took from this site: http://learnyousomeerlang.com/recursion:
tail_sublist(L, N) -> reverse(tail_sublist(L, N, [])).
tail_sublist(_, 0, SubList) -> SubList;
tail_sublist([], _, SubList) -> SubList;
tail_sublist([H|T], N, SubList) when N > 0 ->
    tail_sublist(T, N-1, [H|SubList]).
It seems the use of reverse in erlang is very frequent.
In Mozart/Oz, it's very easy to create such a function using unbound variables:
proc {Sublist Xs N R}
if N>0 then
case Xs
of nil then
R = nil
[] X|Xr then
Unbound
in
R = X|Unbound
{Sublist Xr N-1 Unbound}
end
else
R=nil
end
end
Is it possible to create a similar code in erlang? If not, why?
Edit:
I want to clarify something about the question. The function in Oz doesn't use any auxiliary functions (no append, no reverse, nothing external and no BIFs). It's also built using tail recursion.
When I ask if it's possible to create something similar in erlang, I'm asking if it's possible to implement a function or set of functions in erlang using tail recursion, and iterating over the initial list only once.
At this point, after reading your comments and answers, I'm doubtful that it can be done, because Erlang doesn't seem to support unbound variables. It seems that all variables need to be assigned a value.
Short Version
No, you can't have similar code in Erlang. The reason is that in Erlang, variables are single-assignment variables.
Unbound Variables are simply not allowed in Erlang.
Long Version
I can't imagine a tail recursive function similar to the one you are presenting above, due to differences at the paradigm level between the two languages you are trying to compare.
But nevertheless, it also depends on what you mean by similar code.
So, correct me if I am wrong, but the following
R = X|Unbound
{Sublist Xr N-1 Unbound}
means that the assignment (R = X|Unbound) will not be executed until the recursive call returns the value of Unbound.
This to me looks a lot like the following:
sublist(_,0) -> [];
sublist([],_) -> [];
sublist([H|T],N) when is_integer(N) ->
    NewTail = sublist(T,N-1),
    [H|NewTail].

%% or
%% sublist([H|T],N) when is_integer(N) -> [H|sublist(T,N-1)].
But this code isn't tail recursive.
Here's a version that uses appends along the way instead of a reverse at the end.
subl(L, N) -> subl(L, N, []).
subl(_, 0, Accumulator) ->
    Accumulator;
subl([], _, Accumulator) ->
    Accumulator;
subl([H|T], N, Accumulator) ->
    subl(T, N-1, Accumulator ++ [H]).
I would not say that "the use of reverse in Erlang is very frequent". I would say that the use of reverse is very common in toy problems in functional languages where lists are a significant data type.
I'm not sure how close to your Oz code you're trying to get with your "is it possible to create a similar code in Erlang? If not, why?" They are two different languages and have made many different syntax choices.
It is easy to implement the algorithm (finding the largest element of a list) using a single process; however, how can I use multiple processes to do the job?
Here is what I have done so far.
find_largest([H], _) -> H;
find_largest([H, Q | T], R) ->
    if H > Q -> find_largest([H | T], [Q | R]);
       true -> find_largest([Q | T], [H | R])
    end.
Thanks
Given how Erlang represents lists, this is probably not a good idea to try and do in parallel. Partitioning the list implies a lot of copying (since they are linked lists) and so does sending these partitions to other processes. I expect the comparison to be far cheaper than copying everything twice and then combining the results.
The implementation is also not correct; you can find a good one in lists.erl as max/1:
%% max(L) -> returns the maximum element of the list L
-spec max([T,...]) -> T.
max([H|T]) -> max(T, H).
max([H|T], Max) when H > Max -> max(T, H);
max([_|T], Max) -> max(T, Max);
max([], Max) -> Max.
If by some chance your data are already in separate processes, simply get the lists:max/1 of each of the lists and send them to a single place, and then take the lists:max/1 of the resulting list. You could also do the comparison as you receive the results, to avoid building this intermediate list.
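A sketch of that receive-and-compare idea, assuming each process reports its local maximum as a {max, Pid, LocalMax} message (a made-up format):
%% Seed the comparison with the first reply, then fold in the rest as they arrive.
collect_max([Pid | Pids]) ->
    receive {max, Pid, First} -> collect_max(Pids, First) end.

collect_max([Pid | Pids], Best) ->
    receive {max, Pid, LocalMax} -> collect_max(Pids, max(LocalMax, Best)) end;
collect_max([], Best) ->
    Best.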
The single process version of your code should be replaced by lists:max/1. A useful function for parallelizing code is as follows:
pmap(Fun, List) ->
    Parent = self(),
    P = fun(Elem) ->
            Ref = make_ref(),
            spawn_link(fun() -> Parent ! {Ref, Fun(Elem)} end),
            Ref
        end,
    Refs = [P(Elem) || Elem <- List],
    lists:map(fun(Ref) -> receive {Ref, Elem} -> Elem end end, Refs).
pmap/2 applies Fun to each member of List in parallel and collects the results in input order. To use pmap with this problem, you would need to segment your original list into a list of lists and pass that to pmap. e.g. lists:max(pmap(fun lists:max/1, ListOfLists)). Of course, the act of segmenting the lists would be more expensive than simply calling lists:max/1, so this solution would require that the list be pre-segmented. Even then, it's likely that the overhead of copying the lists outweighs any benefit of parallelization - especially on a single node.
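For completeness, a hypothetical segmenting helper could look like the sketch below; note that the splitting itself already walks and copies the whole list, which is exactly the overhead discussed above:
%% chunks(N, List) splits List into sublists of at most N elements, e.g.
%% lists:max(pmap(fun lists:max/1, chunks(1000, List))).
chunks(_N, []) -> [];
chunks(N, List) ->
    {Chunk, Rest} = take(N, List, []),
    [Chunk | chunks(N, Rest)].

take(0, Rest, Acc) -> {lists:reverse(Acc), Rest};
take(_N, [], Acc) -> {lists:reverse(Acc), []};
take(N, [H|T], Acc) -> take(N - 1, T, [H|Acc]).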
The inherent problem with your situation is that the computation of each sub-task is tiny when compared with the overhead of managing the data. Tasks which are more computationally intensive, (e.g. factoring a list of large numbers), are more easily parallelized.
This isn't to say that finding a max value can't be parallelized, but I believe it would require that your data be pre-segmented or segmented in a way that didn't require iterating over every value.
I've got a coding problem in Erlang that is probably a common design pattern, but I can't find any info on how to resolve it.
I've got a list L. I want to apply a function f to every element in L, and have it run across all elements in L concurrently. Each call to f(Element) will either succeed or fail; in the majority of cases it will fail, but occasionally it will succeed for a specific Element within L.
If/when a f(Element) succeeds, I want to return "success" and terminate all invocations of f for other elements in L - the first "success" is all I'm interested in. On the other hand, if f(Element) fails for every element in L, then I want to return "fail".
As a trivial example, suppose L is a list of integers, and F returns {success} if an element in L is 3, or {fail} for any other value. I want to find as quickly as possible if there are any 3s in L; I don't care how many 3s there are, just whether at least one 3 exists or not. f could look like this:
f(Int) ->
    case Int of
        3 -> {success};
        _ -> {fail}
    end.
How can I iterate through a list of Ints to find out if the list contains at least one 3, and return as quickly as possible?
Surely this is a common functional design pattern, and I'm just not using the right search terms within Google...
There are basically two different ways of doing this. Either write your own function which iterates over the list, returning true or false depending on whether it finds a 3:
contains_3([3|_]) -> true;
contains_3([_|T]) -> contains_3(T);
contains_3([]) -> false.
The second is to use an already-defined function to do the actual iteration until a test on the list elements is true, and provide it with the test. lists:any returns true or false depending on whether the test succeeds for at least one element:
contains_3(List) -> lists:any(fun (E) -> E =:= 3 end, List).
will do the same thing. Which you choose is up to you. The second one would probably be closer to a design pattern but I feel that even if you use it you should have an idea of how it works internally. In this case it is trivial and very close to the explicit case.
It is a very common thing to do, but whether it would classify as a design pattern I don't know. It seems so basic and in a sense "trivial" that I would hesitate to call it a design pattern.
It has been a while since I did any erlang, so I'm not going to attempt to provide you with syntax, however erlang and the OTP have the solution waiting for you.
Spawn one process representing the function; have it iterate over the list, spawning off as many processes as you feel is appropriate to perform the per-element calculation efficiently.
Link every process to the function-process, and have the function process terminate after it returns the first result.
Let erlang/otp clean up the rest of the processes.
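A rough, untested sketch of that recipe (all the names here are made up, and worker crashes are not handled):
first_success(F, L) ->
    Caller = self(),
    Coord = spawn(fun() -> coordinator(Caller, F, L) end),
    receive {Coord, Answer} -> Answer end.

coordinator(Caller, F, L) ->
    Me = self(),
    %% One linked worker per element; each reports its result back to us.
    [spawn_link(fun() -> Me ! F(X) end) || X <- L],
    collect(Caller, length(L)).

collect(Caller, 0) ->
    Caller ! {self(), fail};
collect(Caller, N) ->
    receive
        %% First success: reply, then exit abnormally so the exit signal
        %% takes down all remaining linked workers.
        {success} -> Caller ! {self(), success}, exit(done);
        _ -> collect(Caller, N - 1)
    end.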
As has already been answered your solution is to use lists:any/2.
Seeing that you want a concurrent version of it:
any(F, List) ->
    Parent = self(),
    Pid = spawn(fun() -> spawner(Parent, F, List) end),
    receive
        {Pid, Result} -> Result
    end.

spawner(Parent, F, List) ->
    Spawner = self(),
    S = spawn_link(fun() -> wait_for_result(Spawner, Parent, length(List)) end),
    [spawn_link(fun() -> run(S, F, X) end) || X <- List],
    receive after infinity -> ok end.

wait_for_result(Spawner, Parent, 0) ->
    Parent ! {Spawner, false},
    exit(have_result);
wait_for_result(Spawner, Parent, Children) ->
    receive
        true -> Parent ! {Spawner, true}, exit(have_result);
        false -> wait_for_result(Spawner, Parent, Children - 1)
    end.

run(S, F, X) ->
    case catch F(X) of
        true -> S ! true;
        _ -> S ! false
    end.
Note that all the children (the "run" processes) will die when the "wait_for_result" process does an exit(have_result).
Completely untested... Ah, what the heck. I'll do an example:
4> play:any(fun(A) -> A == a end, [b,b,b,b,b,b,b,b]).
false
5> play:any(fun(A) -> A == a end, [b,b,b,b,b,b,a,b]).
true
There could still be bugs (and there probably are).
You might want to look at the plists module: http://code.google.com/p/plists/ Though I don't know if plists:any handles
(a) on the 1st {success} received, telling the other sub-processes to stop processing and exit ASAP