I'm trying to understand request order in erlang, but I can't seem to grasp it very well. Here's the example code:
test() ->
    Server = start(),
    spawn(fun() -> write(Server, 2),
                   io:format("Child 1 read ~p~n", [read(Server)]) end),
    write(Server, 3),
    write2(Server, 5),
    spawn(fun() -> write2(Server, 1),
                   io:format("Child 2 read ~p~n", [read(Server)]) end),
    io:format("Parent read ~p~n", [read(Server)]).
And here's the server:
start() ->
    spawn(fun() -> init() end).

init() -> loop(0).

loop(N) ->
    receive
        {read, Pid} ->
            Pid ! {value, self(), N},
            loop(N);
        {write, Pid, M} ->
            Pid ! {write_reply, self()},
            loop(M);
        {write2, _Pid, M} -> loop(M)
    end.

read(Serv) ->
    Serv ! {read, self()},
    receive {value, Serv, N} -> N end.

write(Serv, N) ->
    Serv ! {write, self(), N},
    receive {write_reply, Serv} -> ok end.

write2(Serv, N) ->
    Serv ! {write2, self(), N},
    ok.
I understand that different values could be printed by the three different processes created in test/0, but I'm trying to figure out the lowest and highest values that could be printed by those Parent, Child1 and Child2 processes. The answer states:
Parent: lowest 1, highest 5
Child1: lowest 1, highest 5
Child2: lowest 1, highest 2
Can somebody explain this?
Keep in mind that Erlang guarantees message order only from one process to another. If process A sequentially sends message 1 and then message 2 to process B, then B will receive them in that order. But Erlang guarantees no specific ordering of messages arriving at B if multiple concurrent processes are sending them. In this example, Parent, Child1, and Child2 all run concurrently and all send messages concurrently to Server.
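As a quick, self-contained illustration of that rule (this is not part of the question's code; the function and atom names are made up):

order_demo() ->
    Collector = self(),
    %% Each spawned process sends two messages to Collector. Within one
    %% sender, a1 always arrives before a2 and b1 before b2, but the two
    %% senders' messages may interleave in any way.
    spawn(fun() -> Collector ! a1, Collector ! a2 end),
    spawn(fun() -> Collector ! b1, Collector ! b2 end),
    [receive M -> M end || _ <- lists:seq(1, 4)].
    %% Possible results: [a1,a2,b1,b2], [a1,b1,a2,b2], [b1,a1,b2,a2], ...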
The Parent process performs the following sequential steps:
1. Spawns the Server process. This eventually sets the initial value in the Server loop to 0.
2. Spawns Child1. This eventually writes the value 2 to Server, then reads from Server and prints the result.
3. Uses write/2 to send the value 3 to Server. The write case in the loop/1 function first replies to the caller and then installs the value 3 for its next iteration.
4. Uses write2/2 to send 5 to Server. The write2/2 function just sends a message to Server and does not await a reply. The write2 case in the loop/1 function just installs the value 5 for its next iteration.
5. Spawns Child2, which eventually calls write2/2 with the value 1, then reads Server and prints the result.
6. Reads Server and prints the result.
For Parent, step 3 sends the value 3 to Server, so as far as Parent is concerned, Server now has the value 3. In step 4, Parent calls write2/2 to send 5 to the server, and that message must arrive at Server sometime after the message sent in step 3. In step 6, Parent performs a read, but all we know is that this message has to arrive at Server after the write message in step 4. This message ordering means the highest value Parent can see is 5.
The lowest value Parent can see is 1 because if Child2 gets its write message of 1 to Server after the Parent write message of 5 but before the final Parent read message, then Parent will see the 1.
For Child1, the highest value it can see is 5 because it runs concurrently with Parent and the two messages sent by Parent could arrive at Server before its write message of 2. The lowest Child1 can see is 1 because the Child2 write message of 1 can arrive before the Child1 read message.
For Child2, the lowest value it can see is its own write of 1. The highest value it can see is 2 from the write of Child1 because the Parent writes of 3 and 5 occur before Child2 is spawned, thus Child1 is the only process writing concurrently and so only it has a chance of interleaving its write message between the Child2 write and read messages.
I am writing a program that solves the producers-consumers problem using Erlang multiprocessing, with one process responsible for handling the buffer to/from which I produce/consume, and many producer and consumer processes. To simplify, I assume a producer/consumer does not know that its operation has failed (that it is impossible to produce or consume because of buffer constraints), but the server is prepared to handle this.
My code is:
Server code
server(Buffer, Capacity, CountPid) ->
    receive
        %% PRODUCER
        {Pid, produce, InputList} ->
            NumberProduce = lists:flatlength(InputList),
            case canProduce(Buffer, NumberProduce, Capacity) of
                true ->
                    NewBuffer = append(InputList, Buffer),
                    CountPid ! lists:flatlength(InputList),
                    Pid ! ok,
                    server(NewBuffer, Capacity, CountPid);
                false ->
                    Pid ! tryagain,
                    server(Buffer, Capacity, CountPid)
            end;
        %% CONSUMER
        {Pid, consume, Number} ->
            case canConsume(Buffer, Number) of
                true ->
                    Data = lists:sublist(Buffer, Number),
                    NewBuffer = lists:subtract(Buffer, Data),
                    Pid ! {ok, Data},
                    server(NewBuffer, Capacity, CountPid);
                false ->
                    Pid ! tryagain,
                    server(Buffer, Capacity, CountPid)
            end
    end.
Producer and consumer
producer(ServerPid) ->
    X = rand:uniform(9),
    ToProduce = [rand:uniform(500) || _ <- lists:seq(1, X)],
    ServerPid ! {self(), produce, ToProduce},
    producer(ServerPid).

consumer(ServerPid) ->
    X = rand:uniform(9),
    ServerPid ! {self(), consume, X},
    consumer(ServerPid).
Starting and auxiliary functions (I enclose as I don't know where exactly my problem is)
spawnProducers(Number, ServerPid) ->
    case Number of
        0 -> io:format("Spawned producers");
        N ->
            spawn(zad2, producer, [ServerPid]),
            spawnProducers(N - 1, ServerPid)
    end.

spawnConsumers(Number, ServerPid) ->
    case Number of
        0 -> io:format("Spawned producers");
        N ->
            spawn(zad2, consumer, [ServerPid]),
            spawnProducers(N - 1, ServerPid)
    end.

start(ProdsNumber, ConsNumber) ->
    CountPid = spawn(zad2, count, [0, 0]),
    ServerPid = spawn(zad2, server, [[], 20, CountPid]),
    spawnProducers(ProdsNumber, ServerPid),
    spawnConsumers(ConsNumber, ServerPid).

canProduce(Buffer, Number, Capacity) ->
    lists:flatlength(Buffer) + Number =< Capacity.

canConsume(Buffer, Number) ->
    lists:flatlength(Buffer) >= Number.

append([H|T], Tail) ->
    [H|append(T, Tail)];
append([], Tail) ->
    Tail.
I am trying to count the number of produced elements using the following process; the server sends it a message whenever elements are produced.
count(N, ThousandsCounter) ->
    receive
        X ->
            if
                N >= 1000 ->
                    io:format("Yeah! We have produced ~p elements!~n", [ThousandsCounter]),
                    count(0, ThousandsCounter + 1000);
                true -> count(N + X, ThousandsCounter)
            end
    end.
I expect this program to work properly, meaning: it keeps producing elements, the number of produced elements grows roughly linearly with time (f(t) = kt, with k constant), and the more processes I have, the faster production is.
ACTUAL QUESTION
I launch the program:
erl
c(zad2)
zad2:start(5,5)
How the program behaves:
The longer production lasts, the fewer elements per unit of time are produced (e.g. 10000 in the first second, 5000 in the next, 1000 in the 10th second, etc.).
The more processes I have, the slower production is: with start(10,10) I need to wait about a second for the first thousand, whereas with start(2,2) 20000 appear almost immediately.
start(100,100) made me restart my computer (I work on Ubuntu), as the whole CPU was used and there was no memory left even to open a terminal and terminate the Erlang VM.
Why does my program not behave like I expect? Am I doing something wrong with my Erlang programming, or is this a matter of the OS or something else?
The producer/1 and consumer/1 functions as written above never wait for anything: they just loop, bombarding the server with messages. The server's message queue fills up very quickly, the Erlang VM grows as much as it can and eats all your memory, and the looping processes consume all available CPU time on all cores.
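One way to fix this is to make each client wait for the server's reply before looping again. A minimal sketch of the producer (my own rework, not the original poster's code; the 100 ms back-off on tryagain is an arbitrary choice):

producer(ServerPid) ->
    X = rand:uniform(9),
    ToProduce = [rand:uniform(500) || _ <- lists:seq(1, X)],
    ServerPid ! {self(), produce, ToProduce},
    receive
        ok       -> ok;                 % server accepted the batch
        tryagain -> timer:sleep(100)    % buffer full: back off briefly
    end,
    producer(ServerPid).

The consumer can be reworked in the same way, waiting for either {ok, Data} or tryagain before looping.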
Hi, I am doing some Erlang training by working through some basic Erlang server modules. However, I am stuck on this one. For this module I am supposed to determine the lowest and highest possible values that can be printed by each of the 3 processes (the parent and 2 child processes), because depending on the particular order of execution, different values may get printed.
I executed the test and got 2 for the Parent and 3 for both of the child processes, but I don't know how they got those values. Can someone explain the process to me? Appreciated.
Here is the module:
-module(p4).
-export([start/0, init/0, read/1, incr/1, reset/1, test/0]).

start() ->
    spawn(fun() -> init() end).

init() -> loop(0).

loop(N) ->
    receive
        {read, Pid} ->
            Pid ! {value, self(), N},
            loop(N);
        {incr, Pid} ->
            Pid ! {incr_reply, self()},
            loop(N+1);
        {reset, Pid} ->
            Pid ! {reset_reply, self()},
            loop(0)
    end.

read(Serv) ->
    Serv ! {read, self()},
    receive {value, Serv, N} -> N end.

incr(Serv) ->
    Serv ! {incr, self()},
    receive {incr_reply, Serv} -> ok end.

reset(Serv) ->
    Serv ! {reset, self()},
    receive {reset_reply, Serv} -> ok end.

test() ->
    Server = start(),
    spawn(fun() -> incr(Server),
                   io:format("Child 1 read ~p~n", [read(Server)]) end),
    incr(Server),
    spawn(fun() -> incr(Server),
                   io:format("child 2 read ~p~n", [read(Server)]) end),
    io:format("Parent read ~p~n", [read(Server)]).
One point that helps in understanding Lukasz's answer is that all Server interfaces (read, incr, reset) are synchronous: they wait for an answer from the server. This means that a process which uses these interfaces cannot do anything until the Server completes the request. This is the key to justifying that child 2 cannot read less than 2.
Two sequence diagrams (not reproduced here) visualize the processes.
Try printing every message the server receives (or use debugger tracing) for better understanding, and you should get something like:
Server state was: 0 and received: {incr,<0.59.0>}
Server state was: 1 and received: {incr,<0.109.0>}
Server state was: 2 and received: {read,<0.59.0>}
Server state was: 2 and received: {incr,<0.110.0>}
Parent read 2
Server state was: 3 and received: {read,<0.109.0>}
Server state was: 3 and received: {read,<0.110.0>}
Child 1 read 3
child 2 read 3
<0.59.0> is parent process, <0.109.0> is child 1 and <0.110.0> is child 2.
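For reference, a trace like the one above can be obtained by printing each incoming message at the top of loop/1. A minimal sketch (a rewrite of the module's loop/1, not the original code):

loop(N) ->
    receive
        Msg ->
            %% print the current state and the raw message before handling it
            io:format("Server state was: ~p and received: ~p~n", [N, Msg]),
            case Msg of
                {read, Pid}  -> Pid ! {value, self(), N}, loop(N);
                {incr, Pid}  -> Pid ! {incr_reply, self()}, loop(N + 1);
                {reset, Pid} -> Pid ! {reset_reply, self()}, loop(0)
            end
    end.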
It means that the parent's read message was delivered before the reads of the child processes but after the first child's incr. But it doesn't have to be like this; it depends on process scheduling. The only guarantee you have is that messages sent from process A to process B will be delivered in the order they were sent.
Due to the synchronous nature of incr(Server) and read(Server), it doesn't really matter. Because each of these processes runs incr(Server) before read(Server), each must read at least 1. But note that child 2 was spawned after the parent executed incr(Server), which is a synchronous operation, so the counter must already be at least 1 when child 2 runs its own incr(Server); therefore, when it reads, it must see at least 2. The maximum value for each of them is 3 (the overall number of incr(Server) calls), which happens when a process's read(Server) is delayed until after all the increments.
Summary of possible printed values: Parent: 1,2,3; Child 1: 1,2,3; Child 2: 2,3
Simplified execution orders:
Your case (parent gets 2 and both children get 3):
parent: spawn server
parent: spawn child 1
parent: incr(Server)
child 1: incr(Server)
parent: spawn child 2
parent: io:format("parent read ~p~n",[read(Server)]) % prints 2
child 2: incr(Server)
child 1: io:format("child 1 read ~p~n",[read(Server)]) % prints 3
child 2: io:format("child 2 read ~p~n",[read(Server)]) % prints 3
minimum case for child 1:
parent: spawn server
parent: spawn child 1
child 1: incr(Server)
child 1: io:format("child 1 read ~p~n",[read(Server)]) % prints 1
...
maximum case for parent:
parent: spawn server
parent: spawn child 1
parent: incr(Server)
parent: spawn child 2
child 1: incr(Server)
child 2: incr(Server)
parent: io:format("Parent read ~p~n",[read(Server)]) % prints 3
...
I've created a massive test which spawned 100000 processes running test/0 simultaneously, plus an external stat_server which receives and counts every process's read(Server) result, and here it is:
[{{child1,1},2}, % child 1 reads 1 only twice. Sometimes it's 1 sometimes it's 0, it varies
{{child1,2},53629},
{{child1,3},46369},
{{child2,2},107},
{{child2,3},99893},
{{parent,1},855},
{{parent,2},99112},
{{parent,3},33}]
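The stat_server itself is not shown above; a minimal counting process along these lines would do (the message formats {report, Who, Value} and {dump, Pid} are my own invention, not the original harness):

stat_server(Counts) ->
    receive
        {report, Who, Value} ->
            %% e.g. Who = child1, Value = the result of read(Server)
            Key = {Who, Value},
            stat_server(maps:update_with(Key, fun(C) -> C + 1 end, 1, Counts));
        {dump, Pid} ->
            Pid ! lists:sort(maps:to_list(Counts)),
            stat_server(Counts)
    end.

It would be spawned with stat_server(#{}), and each test process would send {report, parent, read(Server)} (and similarly for the children) instead of printing.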
I am developing a simple framework in Erlang to handle 2-player turn-based games. The code is the following:
-module(game).
-export([start_server/0, generate_server/0, add_player/0, remove_player/0]).

generate_server() ->
    Table_num = 0,
    Player_num = 0,
    io:format("Server generated...~n", []),
    io:format("The current number of tables is ~w~n", [Table_num]),
    io:format("The current number of players is ~w~n", [Player_num]),
    receive
        login ->
            io:format("A new player has connected!~n", []),
            New = Player_num + 1,
            io:format("The current number of players is ~w~n", [New]);
        logout ->
            io:format("You have been successfully disconnected~n", [])
    end.

start_server() ->
    io:format("Welcome player!~nInitializing game...~n", []),
    io:format("Generating server...~n", []),
    register(server, spawn(game, generate_server, [])).

add_player() ->
    server ! login.

remove_player() ->
    server ! logout.
There are two main problems when I run this code:
When I execute add_player() and then remove_player(), the second call crashes with an exception.
If I launch the program in one terminal window and then execute add_player() in a second terminal window, I get an error. What should I do to be able to run it from more than one terminal window?
Any help would be highly appreciated.
1/
There is no loop in your server. When you start it, after some printing, it waits on the receive statement.
When it receives the login message, it executes the operations, and then the server code is finished; the process dies and is unregistered. All variables disappear and the VM cleans up the memory...
So, later on, any process sending a message to the server will crash, because you are using a name which is no longer registered.
To make it work, you should keep a list of connected players and call the server loop again with this list as a parameter.
generate_server(Tlist, Plist) ->
    io:format("The current number of tables is ~w~n", [length(Tlist)]),
    io:format("The current number of players is ~w~n", [length(Plist)]),
    receive
        {login, Name} ->
            io:format("A new player ~p has connected!~n", [Name]),
            generate_server(Tlist, [Name|Plist]);
        {logout, Name} ->
            io:format("You have been successfully disconnected~n", []),
            generate_server(Tlist, lists:delete(Name, Plist))
    end.
and the call to generate_server is done by
register(server,spawn(game, generate_server, [[],[]]))
2/
In order to use Erlang messages between 2 different nodes, you need to:
share the same Erlang cookie
discover the nodes (using net_adm for example)
get the server pid or use a globally registered name
see example at http://learnyousomeerlang.com/distribunomicon#alone-in-the-dark
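As a rough sketch of what that looks like (the node names, the cookie, and the registered name server are assumptions, matching the {login, Name} messages of the modified generate_server above):

%% Start the two shells with names and a shared cookie, e.g.:
%%   erl -sname game   -setcookie secret     (this one runs game:start_server())
%%   erl -sname player -setcookie secret     (the second terminal)
%% Then, from the 'player' node:
connect_and_login(GameNode) ->               % e.g. GameNode = 'game@myhost'
    pong = net_adm:ping(GameNode),           % discover/connect to the node
    {server, GameNode} ! {login, "alice"},   % message a registered name on a remote node
    ok.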
You left out a loop body for the server. Your program crashes because the server receives only one message and then exits. Consider another version of the server below:
generate_server() ->
    Table_num = 0,
    Player_num = 0,
    io:format("Server generated...~n", []),
    io:format("The current number of tables is ~w~n", [Table_num]),
    io:format("The current number of players is ~w~n", [Player_num]),
    loop([]).

loop(Players) ->
    receive
        {From, {login, PlayerId}} ->
            io:format("A new player has connected!~n", []),
            NewPlayers = case lists:member(PlayerId, Players) of
                             true ->
                                 From ! {login_failed, exists},
                                 Players;
                             false ->
                                 From ! {login_success, true},
                                 [PlayerId|Players]
                         end,
            io:format("The current number of players is ~w~n", [length(NewPlayers)]),
            loop(NewPlayers);
        {From, {logout, PlayerId}} ->
            NewPlayers = case lists:member(PlayerId, Players) of
                             true ->
                                 From ! {logout, ok},
                                 Players -- [PlayerId];
                             false ->
                                 From ! {logout, failed},
                                 Players
                         end,
            loop(NewPlayers);
        _ -> loop(Players)
    end.
There; that looks much better.
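Note that add_player/0 and remove_player/0 from the question would also need to change to match the {From, {login, PlayerId}} protocol used by this loop. A possible sketch (the 5-second timeout is an arbitrary choice, not part of the original answer):

add_player(PlayerId) ->
    server ! {self(), {login, PlayerId}},
    receive Reply -> Reply after 5000 -> {error, timeout} end.

remove_player(PlayerId) ->
    server ! {self(), {logout, PlayerId}},
    receive Reply -> Reply after 5000 -> {error, timeout} end.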
In most applications, it's hard to avoid the need to query large amounts of information which a user wants to browse through. This is what led me to cursors. With mnesia, cursors are implemented using qlc:cursor/1 or qlc:cursor/2. After working with them for a while and facing this problem many times,
11> qlc:next_answers(QC,3).
** exception error: {qlc_cursor_pid_no_longer_exists,<0.59.0>}
in function qlc:next_loop/3 (qlc.erl, line 1359)
12>
It has occurred to me that the whole cursor operation has to happen within one mnesia transaction: it executes as a whole, once, like this below:
E:\>erl
Eshell V5.9 (abort with ^G)
1> mnesia:start().
ok
2> rd(obj,{key,value}).
obj
3> mnesia:create_table(obj,[{attributes,record_info(fields,obj)}]).
{atomic,ok}
4> Write = fun(Obj) -> mnesia:transaction(fun() -> mnesia:write(Obj) end) end.
#Fun<erl_eval.6.111823515>
5> [Write(#obj{key = N,value = N * 2}) || N <- lists:seq(1,100)],ok.
ok
6> mnesia:transaction(fun() ->
QC = cursor_server:cursor(qlc:q([XX || XX <- mnesia:table(obj)])),
Ans = qlc:next_answers(QC,3),
io:format("\n\tAns: ~p~n",[Ans])
end).
Ans: [{obj,20,40},{obj,21,42},{obj,86,172}]
{atomic,ok}
7>
When you attempt to call, say, qlc:next_answers/2 outside a mnesia transaction, you will get an exception. And not only outside the transaction: even if that call is made by a DIFFERENT process than the one which created the cursor, problems are bound to happen.
Another interesting finding is that, as soon as you get out of a mnesia transaction, one of the processes involved in a mnesia cursor (apparently mnesia spawns a process in the background) exits, causing the cursor to become invalid. Look at this below:
-module(cursor_server).
-compile(export_all).

cursor(Q) ->
    case mnesia:is_transaction() of
        false ->
            F = fun(QH) -> qlc:cursor(QH, []) end,
            mnesia:activity(transaction, F, [Q], mnesia_frag);
        true -> qlc:cursor(Q, [])
    end.
%% --- End of module -------------------------------------------
Then, in the shell, I use that function:
7> QC = cursor_server:cursor(qlc:q([XX || XX <- mnesia:table(obj)])).
{qlc_cursor,{<0.59.0>,<0.30.0>}}
8> erlang:is_process_alive(list_to_pid("<0.59.0>")).
false
9> erlang:is_process_alive(list_to_pid("<0.30.0>")).
true
10> self().
<0.30.0>
11> qlc:next_answers(QC,3).
** exception error: {qlc_cursor_pid_no_longer_exists,<0.59.0>}
in function qlc:next_loop/3 (qlc.erl, line 1359)
12>
So this makes it extremely hard to build a web application in which a user needs to browse a particular set of results group by group, say: give him/her the first 20, then the next 20, etc. This involves getting the first results, sending them to the web page, then waiting for the user to click NEXT, then asking qlc:next_answers/2 for the next 20, and so on. These operations cannot be done while hanging inside a mnesia transaction! The only possible way is to spawn a process which hangs there, receiving next_answers requests as messages and sending back the next answers as messages, like this:
-define(CURSOR_TIMEOUT, timer:hours(1)).

%% initial request is made here below
request(PageSize) ->
    Me = self(),
    CursorPid = spawn(?MODULE, cursor_pid, [Me, PageSize]),
    receive
        {initial_answers, Ans} ->
            %% find a way of hiding the Cursor Pid
            %% in the page so that the subsequent requests
            %% come along with it
            {Ans, pid_to_list(CursorPid)}
    after ?CURSOR_TIMEOUT -> timedout
    end.

cursor_pid(ParentPid, PageSize) ->
    F = fun(Pid, N) ->
            QC = cursor_server:cursor(qlc:q([XX || XX <- mnesia:table(obj)])),
            Ans = qlc:next_answers(QC, N),
            Pid ! {initial_answers, Ans},
            receive
                {From, {next_answers, Num}} ->
                    From ! {next_answers, qlc:next_answers(QC, Num)};
                    %% Problem here ! how to loop back
                    %% check: Erlang Y-Combinator
                delete ->
                    %% it could have died already, so we be careful here !
                    try qlc:delete_cursor(QC) of
                        _ -> ok
                    catch
                        _:_ -> ok
                    end,
                    exit(normal)
            after ?CURSOR_TIMEOUT -> exit(normal)
            end
        end,
    mnesia:activity(transaction, F, [ParentPid, PageSize], mnesia_frag).

next_answers(CursorPid, PageSize) ->
    list_to_pid(CursorPid) ! {self(), {next_answers, PageSize}},
    receive
        {next_answers, Ans} ->
            {Ans, pid_to_list(CursorPid)}
    after ?CURSOR_TIMEOUT -> timedout
    end.
That would create a more complex problem of managing process exits, tracking / monitoring, etc. I wonder why the mnesia implementers did not see this!
Now, that brings me to my questions. I have been searching the web for solutions, and please check out these links from which the questions arise: mnemosyne, Ulf Wiger's Solution to Cursor Problems, AMNESIA - an RDBMS implementation of mnesia.
1. Does anyone have an idea on how to handle mnesia query cursors in a different way from what is documented that is worth sharing?
2. What are the reasons why the mnesia implementers decided to force cursors to live within a single transaction: even the calls to next_answers?
3. Is there anything, from what I have presented, that I do not understand clearly (other than my bad buggy illustration code - please ignore that)?
4. AMNESIA (in section 4.7 of the link I gave above) has a good implementation of cursors, because the subsequent calls to next_answers do not need to be in the same transaction, NOR made by the same process. Would you advise anyone to switch from mnesia to amnesia because of this, and is this library still supported?
5. Ulf Wiger (the author of many Erlang libraries, esp. GPROC) suggests the use of mnesia:select/4. How would I use it to solve cursor problems in a web application? NOTE: Please do not advise me to leave mnesia and use something else, because I want to use mnesia for this specific problem. I appreciate your time reading through this whole question.
The motivation is that a transaction grabs (in your case) read locks.
Locks cannot be kept outside of transactions.
If you want, you can run it in a dirty context, but you lose the transactional properties, i.e. the table may change between invocations.
make_cursor() ->
    QD = qlc:sort(mnesia:table(person, [{traverse, select}])),
    mnesia:activity(async_dirty, fun() -> qlc:cursor(QD) end, mnesia_frag).

get_next(Cursor) ->
    Get = fun() -> qlc:next_answers(Cursor, 5) end,
    mnesia:activity(async_dirty, Get, mnesia_frag).

del_cursor(Cursor) ->
    qlc:delete_cursor(Cursor).
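A possible way to use those three helpers from calling code (my own illustration, not part of the answer):

browse() ->
    Cursor = make_cursor(),
    Page1 = get_next(Cursor),   % first 5 answers
    Page2 = get_next(Cursor),   % next 5 answers, fetched in a separate async_dirty activity
    del_cursor(Cursor),
    {Page1, Page2}.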
I think this may help you:
use async_dirty instead of transaction
{Record,Cont} = mnesia:activity(async_dirty, fun mnesia:select/4, [md,[{Match_head,[Guard],[Result]}],Limit,read])
then read the next Limit records:
mnesia:activity(async_dirty, fun mnesia:select/1, [Cont])
full code:
-record(md, {id, name}).

batch_delete(Id, Limit) ->
    Match_head = #md{id='$1', name='$2'},
    Guard = {'<', '$1', Id},
    Result = '$_',
    {Record, Cont} = mnesia:activity(async_dirty, fun mnesia:select/4, [md, [{Match_head, [Guard], [Result]}], Limit, read]),
    delete_next({Record, Cont}).

delete_next('$end_of_table') ->
    over;
delete_next({Record, Cont}) ->
    delete(Record),
    delete_next(mnesia:activity(async_dirty, fun mnesia:select/1, [Cont])).

delete(Records) ->
    io:format("delete(~p)~n", [Records]),
    F = fun() ->
            [mnesia:delete_object(O) || O <- Records]
        end,
    mnesia:transaction(F).
Remember you cannot use a cursor outside of a single transaction.
new to Erlang and just having a bit of trouble getting my head around the new paradigm!
OK, so I have this internal function within an OTP gen_server:
my_func() ->
    Result = ibrowse:send_req(?ROOTPAGE, [{"User-Agent", ?USERAGENT}], get),
    case Result of
        {ok, "200", _, Xml} -> %<<do some stuff that won't interest you>>
            ok;
        {error, {conn_failed, {error, nxdomain}}} -> <<what the heck do I do here?>>
    end.
If I leave out the clause handling the connection failure, then I get an exit signal propagated to the supervisor, and it gets shut down along with the server.
What I want to happen (at least I think this is what I want) is that on a connection failure I'd like to pause and then retry send_req, say 10 times, and only at that point should the supervisor give up.
If I do something ugly like this...
{error,{conn_failed,{error,nxdomain}}} -> stop()
it shuts down the server process and, yes, I get to use my (try 10 times within 10 seconds) restart strategy until it fails, which is also the desired result. However, the return value from the server to the supervisor is 'ok', when I would really like to return {error,error_but_please_dont_fall_over_mr_supervisor}.
I strongly suspect in this scenario that I'm supposed to handle all the business stuff like retrying failed connections within 'my_func' rather than trying to get the process to stop and then having the supervisor restart it in order to try it again.
Question: what is the 'Erlang way' in this scenario?
I'm new to Erlang too, but how about something like this?
The code is long just because of the comments. My solution (I hope I've understood your question correctly) receives the maximum number of attempts and then makes a tail-recursive call that stops by pattern-matching the max number of attempts against the current one. It uses timer:sleep/1 to pause, to keep things simple.
%% @doc Instead of having my_func/0, you have
%% my_func/1, so we can "inject" the max number of
%% attempts. This one will call your tail-recursive
%% one.
my_func(MaxAttempts) ->
    my_func(MaxAttempts, 0).

%% @doc This one will match when the maximum number
%% of attempts has been reached, terminating the
%% tail recursion.
my_func(MaxAttempts, MaxAttempts) ->
    {error, too_many_retries};

%% @doc Here's where we do the work, by having
%% an accumulator that is incremented with each
%% failed attempt.
my_func(MaxAttempts, Counter) ->
    io:format("Attempt #~B~n", [Counter]),
    % Simulating the error here.
    Result = {error, {conn_failed, {error, nxdomain}}},
    case Result of
        {ok, "200", _, Xml} -> ok;
        {error, {conn_failed, {error, nxdomain}}} ->
            % Wait, then make the tail-recursive call.
            timer:sleep(1000),
            my_func(MaxAttempts, Counter + 1)
    end.
EDIT: If this code is in a supervised process, I think it's better to use a simple_one_for_one supervisor, where you can dynamically add whatever workers you need; this avoids delaying initialization due to timeouts (in a one_for_one, the workers are started in order, and having sleeps at that point will stop the other processes from initializing).
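For illustration, a minimal simple_one_for_one supervisor might look like this (the module and worker names are made up; the worker is assumed to be a gen_server-style module exporting start_link/1):

-module(retry_sup).
-behaviour(supervisor).
-export([start_link/0, start_worker/1, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Children are added on demand, so one worker sleeping in a retry loop
%% does not delay the startup of its siblings.
start_worker(Args) ->
    supervisor:start_child(?MODULE, [Args]).

init([]) ->
    {ok, {{simple_one_for_one, 10, 10},
          [{retry_worker, {retry_worker, start_link, []},
            transient, 5000, worker, [retry_worker]}]}}.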
EDIT2: Added an example shell execution:
1> c(my_func).
my_func.erl:26: Warning: variable 'Xml' is unused
{ok,my_func}
2> my_func:my_func(5).
Attempt #0
Attempt #1
Attempt #2
Attempt #3
Attempt #4
{error,too_many_retries}
With 1s delays between each printed message.