I am trying to register a couple of processes with atom names created dynamically, like so:
keep_alive(Name, Fun) ->
    register(Name, Pid = spawn(Fun)),
    on_exit(Pid, fun(_Why) -> keep_alive(Name, Fun) end).
monitor_some_processes(N) ->
    %% create N processes that restart automatically when killed
    for(1, N, fun(I) ->
                  Mesg = io_lib:format("I'm process ~p~n", [I]),
                  Name = list_to_atom(io_lib:format("zombie~p", [I])),
                  keep_alive(Name, fun() -> zombie(Mesg) end)
              end).
for(N, N, Fun) -> [Fun(N)];
for(I, N, Fun) -> [Fun(I)|for(I+1, N, Fun)].
zombie(Mesg) ->
    io:format(Mesg),
    timer:sleep(3000),
    zombie(Mesg).
That list_to_atom/1 call though is resulting in an error:
43> list_to_atom(io_lib:format("zombie~p", [1])).
** exception error: bad argument
in function list_to_atom/1
called as list_to_atom([122,111,109,98,105,101,"1"])
What am I doing wrong?
Also, is there a better way of doing this?
TL;DR
You should not dynamically generate atoms. From what your code snippet indicates you are probably trying to find some way to flexibly name processes, but atoms are not it. Use a K/V store of some type instead of register/2.
Discussion
Atoms are restrictive for a reason. They should represent something about the eternal structure of your program, not the current state of it. Atoms are so restrictive that I imagine what you really want to be able to do is register a process using any arbitrary Erlang value, not just atoms, and reference them more freely.
If that is the case, pick from one of the following four approaches:
Keep Key/Value pairs somewhere to act as your own registry. This could be a separate process or a list/tree/dict/map handler to store key/value pairs of #{Name => Pid}.
Use the global module (which, like gproc below, has features that work across a cluster).
Use a registry solution like Ulf Wiger's nice little project gproc. It is awesome for the times when you actually need it (which are, honestly, not as often as I see it used). Here is a decent blog post about its use and why it works the way it does: http://blog.rusty.io/2009/09/16/g-proc-erlang-global-process-registry/. An added advantage of gproc is that nearly every Erlanger you'll meet is at least passingly familiar with it.
A variant on the first option, structure your program as a tree of service managers and workers (as in the "Service -> Worker Pattern"). A side effect of this pattern is that very often the service manager winds up needing to monitor its process for one reason or another if you're doing anything non-trivial, and that makes it an ideal candidate for a place to keep a Key/Value registry of Pids. It is quite common for this sort of pattern to wind up emerging naturally as a program matures, especially if that program has high robustness requirements. Structuring it as a set of semi-independent services with an abstract management interface at the top of each from the outset is often a handy evolutionary shortcut.
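As a minimal sketch of the first option (the module, function, and message names here are all mine, not from any standard library), a registry process can hold a map of Name => Pid:

```erlang
-module(name_registry).
-export([start/0, register_name/3, whereis_name/2]).

%% Start a registry process whose state is an empty map.
start() ->
    spawn(fun() -> loop(#{}) end).

%% Register any Erlang term -- not just an atom -- as a name.
register_name(Reg, Name, Pid) ->
    Reg ! {register, Name, Pid},
    ok.

%% Look a name up; returns the pid or undefined.
whereis_name(Reg, Name) ->
    Reg ! {whereis, self(), Name},
    receive {found, Result} -> Result after 1000 -> undefined end.

loop(Map) ->
    receive
        {register, Name, Pid} ->
            loop(Map#{Name => Pid});
        {whereis, From, Name} ->
            From ! {found, maps:get(Name, Map, undefined)},
            loop(Map)
    end.
```

A production version would also monitor the registered pids and remove entries when they die, which is essentially the service that global and gproc provide for you.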
io_lib:format returns a potentially "deep list" (i.e. it may contain other lists), while list_to_atom requires a "flat list". You can wrap the io_lib:format call in a call to lists:flatten:
list_to_atom(lists:flatten(io_lib:format("zombie~p", [1]))).
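For completeness, a sketch of the fix wrapped in a module (the module name atom_names is mine). Keep the caveat above in mind: the atom table is limited and never garbage collected, so only do this for a small, bounded set of names:

```erlang
-module(atom_names).
-export([flat_name/1]).

%% io_lib:format/2 returns a potentially deep list; flatten it
%% before handing it to list_to_atom/1.
%% Caveat from the answer above: never create atoms from unbounded input.
flat_name(I) ->
    list_to_atom(lists:flatten(io_lib:format("zombie~p", [I]))).
```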
Related
As part of solving a computationally intensive task, I wish to have 1000 gen_servers doing small tasks and updating a global database. How can I achieve this in Erlang/OTP? In most of the examples the supervisor supervises only a single gen_server. Can a supervisor supervise more than a thousand instances of the same gen_server?
e.g. Say I want to find the maximum of an extremely long array; each gen_server instance should work on a part of the array and update the global maximum.
Like Pascal said, it is possible to start a set number of children, but the use case you described would probably work better with a simple_one_for_one strategy, as all children are the same. This lets you add as many of the same type of children as needed at a smaller cost. gen_servers have overhead, and even though it's not too big, when you're talking about 1000 processes crunching numbers it makes a difference.
If your processes will be doing something very simple and you want it to be fast, I would consider not using gen_servers, but instead just spawning processes. For real power, you would have to spawn processes on different nodes using spawn/4 to make use of more cores. If you are using machines in different locations you can also use a message bus as a load balancer to distribute the work between nodes. It all depends on how much work you need done.
Also keep in mind that Erlang is not the best for crunching numbers. You could use C code to do the crunching and have each Erlang process you spawn/each child call a NIF.
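To make the spawn-based suggestion concrete, here is a sketch of the questioner's maximum example using plain processes (the module name and the chunking helper are mine, not from the answer):

```erlang
-module(par_max).
-export([max_of/2]).

%% Split the list into chunks, compute each chunk's maximum in its own
%% process, then combine the partial maxima in the caller. The order in
%% which the partial results arrive does not matter.
max_of(List, ChunkSize) ->
    Parent = self(),
    Chunks = split(List, ChunkSize),
    [spawn(fun() -> Parent ! {chunk_max, lists:max(C)} end) || C <- Chunks],
    lists:max([receive {chunk_max, M} -> M end || _ <- Chunks]).

split([], _) -> [];
split(List, N) ->
    {Chunk, Rest} = lists:split(min(N, length(List)), List),
    [Chunk | split(Rest, N)].
```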
Is it possible? Yes. For example you can create a pool of 1000 processes with the following supervisor:
-module(big_supervisor).
-behaviour(supervisor).

-export([start_link/0]).
-export([init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, {}).

%% @private
init({}) ->
    Children = create_child_specs(1000),
    RestartStrategy = {one_for_one, 5, 10},
    {ok, {RestartStrategy, Children}}.

create_child_specs(Number) ->
    [{{child_process, X},
      {child_process, start_link, []},
      permanent, 5000, worker, [child_process]}
     || X <- lists:seq(1, Number)].
Is it a good architecture? I don't know. Until now I have found two kinds of architectures:
One with a limited and well-identified (by role) set of children
One with a kind of process factory, creating dynamically as many children as needed on demand, using the simple_one_for_one strategy and the start_child/2 and terminate_child/2 functions.
Note also that supervisors are not mandatory if you want to spawn processes. From your explanation it seems that the processes could be created for a very limited time, in order to compute something in parallel. Two remarks in this case:
it is not worth spawning more processes than the number of threads that will effectively run in parallel on your VM.
one exception is if the work done in each process has to wait for external information, for example the return of an external database. In this case it may be interesting to spawn more processes, the optimal number depending on the external access limits.
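A sketch of the second, simple_one_for_one kind of architecture mentioned above (worker_process is a hypothetical gen_server module; the child spec uses the classic tuple format, matching the supervisor shown earlier):

```erlang
-module(worker_sup).
-behaviour(supervisor).
-export([start_link/0, start_worker/1]).
-export([init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Each call adds one more worker; with simple_one_for_one the extra
%% args are appended to the child spec's start args.
start_worker(Args) ->
    supervisor:start_child(?MODULE, [Args]).

init([]) ->
    {ok, {{simple_one_for_one, 5, 10},
          [{worker_process,
            {worker_process, start_link, []},
            temporary, 5000, worker, [worker_process]}]}}.
</imports>
```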
E.g.
make_orphan() ->
    P = spawn(...),
    ok.
Is there a way for P to receive a message some time after make_orphan returns? Or is P destined to haunt the system (using up precious resources) for all eternity, unless it exits on its own?
A straightforward way to:
receive a message some time after make_orphan returns
is with a monitor.
make_orphan() ->
    Parent = self(),
    P = spawn(fun() -> monitor(process, Parent), ... end),
    ok.
P will then get a {'DOWN', Ref, process, Parent, Reason} message when Parent dies. Even if Parent exits before monitor/2 is called, the message will contain the reason noproc.
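A self-contained sketch of that noproc case (the module name is mine): the parent is already dead by the time monitor/2 runs, yet the 'DOWN' message still arrives:

```erlang
-module(orphan_demo).
-export([demo/0]).

%% Monitor a process that has already exited: monitor/2 immediately
%% delivers a 'DOWN' message with reason noproc.
demo() ->
    Parent = spawn(fun() -> ok end),   % dies immediately
    timer:sleep(50),                   % make sure it is gone
    monitor(process, Parent),
    receive
        {'DOWN', _Ref, process, Parent, Reason} -> Reason
    after 1000 ->
        timeout
    end.
```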
Communicate P to some process somewhere, register P in some way (register, global, gproc, pg2, some homebrew solution, etc.), have someone monitor it, etc. So sure, several ways. But a fundamental principle of an OTP program is that every process belongs to a supervision tree somewhere, so this becomes less of a problem.
Unless you are modeling a system that falls way outside the assumptions of OTP (like peer supervision among cellular automata) then you don't ever want to create the opportunity for orphans to exist. Orphan processes are the Erlang equivalent to memory leaks -- and that is never a good thing.
For some background information on some of the implications of writing OTP processes versus raw Erlang stuff where you're much more likely to leak processes, read the documentation for proc_lib and the "Sys and Proc_Lib" chapter of the OTP Design Principles docs.
Is there an Erlang/OTP pattern/library for the following problem (before I hack my own)?
At the highest level, imagine there are three components(or processes?) such that A->B->C where -> means sends a message to.
B, in terms of architecture, is a composite process. It is composed of many unit processes. Sometimes the message chain goes from B1->B2->B3->C and sometimes it goes from B1->B4->B5->B6->B3->C.
What I would like to do is:
B can only accept the next message when all its child processes are done, i.e. B receives a message I1 and, depending on the message, chooses one flow, and finally C gets a message O1. Until that happens, B should not accept the message I2. This is to ensure ordering of messages, so that O2 of I2 does not reach C before O1 of I1.
This has a few names. One is "dataflow" (as in "reactive programming" -- which is sort of an overblown ball of buzzwords if you look it up) and another is "signal simulation" (as in simulation of electrical signal switches). I am not aware of a framework for this in Erlang, because it is very straightforward to implement directly.
The issue of message ordering can be made to take care of itself, depending on how you want to write things. Erlang guarantees the ordering of messages between two processes, so as long as messages travel in well-defined channels, this system-wide promise can be made to work for you. If you need more interesting signal paths than straight lines you can force synchronous communication; though all Erlang messages are asynchronous, you can introduce synchronous blocking on receive wherever you want.
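A small sketch of that pairwise ordering guarantee (the module name is mine): one process sends a numbered sequence, and the receiver always sees it in send order:

```erlang
-module(order_demo).
-export([demo/0]).

%% Messages between a single sender and a single receiver are
%% delivered in send order, so this always collects [1,2,3,4,5].
demo() ->
    Self = self(),
    spawn(fun() -> [Self ! {seq, N} || N <- lists:seq(1, 5)] end),
    collect(5).

collect(0) -> [];
collect(N) ->
    receive {seq, X} -> [X | collect(N - 1)] end.
```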
If you want the "B constellation" to pass a message to C but only after its signal processing has completely run its route through the B's, you can make a signal manager which sends a message to B1 and blocks until it receives the output from B3, whence it passes the completed message on to C and checks its mailbox for the next thing from A:
a_loop(B) ->
    receive {in, Data} -> B ! Data end,
    a_loop(B).

% Note the two receives here -- we are blocking for the end of processing based
% on the known Ref we send out and expect to receive back in a message match.
b_manager(B1, C) ->
    Ref = make_ref(),
    receive Data -> B1 ! {Ref, Data} end,
    receive {Ref, Result} -> C ! Result end,
    b_manager(B1, C).

b_1(B2) ->
    receive
        {Ref, Data} ->
            Mod1 = do_processing(Data),
            B2 ! {Ref, Mod1}
    end,
    b_1(B2).

% Here you have as many "b_#" processes as you need...

b_2(B) ->
    receive
        {Ref, Data} ->
            Result = do_other_processing(Data),
            B ! {Ref, Result}
    end,
    b_2(B).

c_loop() ->
    receive Result -> stuff(Result) end,
    c_loop().
Obviously I drastically simplified things -- as in this obviously doesn't include any concept of supervision -- I didn't even address how you would want to link these together (and with this little checking for liveness, you would need to spawn_link them so if anything dies they all die -- which is probably exactly what you want with the B subset anyway, so you can treat it as a single unit). Also, you may wind up needing a throttle in there somewhere (like at/before A, or in B). But basically speaking, this is a way of passing messages through in a way that makes B block until its segment of processing is finished.
There are other ways, like gen_event, but I find them less flexible than writing an actual simulation of a processing pipeline. As far as how to implement this goes, I would make it a combination of OTP supervisors and gen_fsm, as these two components represent a nearly perfect parallel to signal-processing components, which your system seems to be aimed at mimicking.
To discover what states you need in your gen_fsms and how you want to clump them together, I would probably prototype in a very simplistic fashion in pure Erlang for a few hours, just to make sure I actually understand the problem, and then write my proper OTP supervisors and gen_fsms. This makes sure I don't get invested in some temple of gen_foo behaviors instead of getting invested in actually solving my problem (you're going to have to write it at least twice before it's right anyway...).
Hopefully this gives you at least a place to start tackling your problem. In any case, this is a very natural sort of thing to do in Erlang -- and is close enough to the way the language and the problem work that it should be pretty fun to work on.
I have seen people use dict, orddict, and records for maintaining state in many blogs that I have read. I find it a very vital concept.
Generally I understand the meaning of maintaining state and recursion, but when it comes to Erlang I am a little vague about how it is handled.
Any help?
State is the present arrangement of data. It is sometimes hard to remember this for two reasons:
State means both the data in the program and the program's current point of execution and "mode".
We build this up to be some magical thing unnecessarily.
Consider this:
"What is the process's state?" is asking about the present value of variables.
"What state is the process in?" usually refers to the mode, options, flags or present location of execution.
If you are a Turing machine then these are the same question; we have separated the ideas to give us handy abstractions to build on (like everything else in programming).
Let's think about state variables for a moment...
In many older languages you can alter state variables from whatever context you like, whether the modification of state is appropriate or not, because you manage this directly. In more modern languages this is a bit more restricted by imposing type declarations, scoping rules and public/private context to variables. This is really a rules arms-race, each language finding more ways to limit when assignment is permitted. If scheduling is the Prince of Frustration in concurrent programming, assignment is the Devil Himself. Hence the various cages built to manage him.
Erlang restricts the situations that assignment is permitted in a different way by setting the basic rule that assignment is only once per entry to a function, and functions are themselves the sole definition of procedural scope, and that all state is purely encapsulated by the executing process. (Think about the statement on scope to understand why many people feel that Erlang macros are a bad thing.)
These rules on assignment (use of state variables) encourage you to think of state as discrete slices of time. Every entry to a function starts with a clean slate, whether the function is recursive or not. This is a fundamentally different situation from the ongoing chaos of in-place modifications made from anywhere to anywhere in most other languages. In Erlang you never ask "what is the value of X right now?" because it can only ever be what it was initially assigned to be in the context of the current run of the current function. This significantly limits the chaos of state changes within functions and processes.
The details of those state variables and how they are assigned is incidental to Erlang. You already know about lists, tuples, ETS, DETS, mnesia, db connections, etc. Whatever. The core idea to understand about Erlang's style is how assignment is managed, not the incidental details of this or that particular data type.
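A tiny illustration of state as discrete slices of time (the module and function names are mine): the "variable update" is really a fresh call with a fresh binding:

```erlang
-module(slices).
-export([count_to/2]).

%% X is bound exactly once per invocation. "Incrementing" X means
%% entering a new slice of time: a new call with a new binding.
count_to(Limit, X) when X >= Limit -> X;
count_to(Limit, X) -> count_to(Limit, X + 1).
```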
What about "modes" and execution state?
If we write something like:
has_cheeseburger(BurgerName) ->
    receive
        {From, ask, burger_name} ->
            From ! {ok, BurgerName},
            has_cheeseburger(BurgerName);
        {From, new_burger, _SomeBurger} ->
            From ! {error, already_have_a_burger},
            has_cheeseburger(BurgerName);
        {From, eat_burger} ->
            From ! {ok, {ate, BurgerName}},
            lacks_cheeseburger()
    end.

lacks_cheeseburger() ->
    receive
        {From, ask, burger_name} ->
            From ! {error, no_burger},
            lacks_cheeseburger();
        {From, new_burger, BurgerName} ->
            From ! {ok, thanks},
            has_cheeseburger(BurgerName);
        {From, eat_burger} ->
            From ! {error, no_burger},
            lacks_cheeseburger()
    end.
What are we looking at? A loop. Conceptually it's just one loop. Quite often a programmer would choose to write just one loop in code, add an argument like IsHoldingBurger to it, and check that after each message in the receive clause to determine what action to take.
Above, though, the idea of two operating modes is both more explicit (it's baked into the structure, not arbitrary procedural tests) and less verbose. We have separated the contexts of execution by writing basically the same loop twice, once for each condition we might be in: either having a burger or lacking one. This is at the heart of how Erlang deals with a concept called "finite state machines", and it's really useful. OTP includes a tool built around this idea in the gen_fsm module. You can write your own FSMs by hand as I did above or use gen_fsm; either way, when you identify you have a situation like this, writing code in this style makes reasoning much easier. (For anything but the most trivial FSM you will really appreciate gen_fsm.)
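To make the two-state loop runnable as-is, here is a self-contained module wrapping it with a small driver (the driver and module name are mine; the loop bodies are reproduced from above so the module compiles on its own):

```erlang
-module(burger_demo).
-export([demo/0, has_cheeseburger/1, lacks_cheeseburger/0]).

%% Spawn the FSM in its "lacking" state, hand it a burger,
%% then ask which burger it holds.
demo() ->
    Pid = spawn(?MODULE, lacks_cheeseburger, []),
    Pid ! {self(), new_burger, bacon_burger},
    receive {ok, thanks} -> ok end,
    Pid ! {self(), ask, burger_name},
    receive {ok, Name} -> Name end.

has_cheeseburger(BurgerName) ->
    receive
        {From, ask, burger_name} ->
            From ! {ok, BurgerName},
            has_cheeseburger(BurgerName);
        {From, new_burger, _SomeBurger} ->
            From ! {error, already_have_a_burger},
            has_cheeseburger(BurgerName);
        {From, eat_burger} ->
            From ! {ok, {ate, BurgerName}},
            lacks_cheeseburger()
    end.

lacks_cheeseburger() ->
    receive
        {From, ask, burger_name} ->
            From ! {error, no_burger},
            lacks_cheeseburger();
        {From, new_burger, BurgerName} ->
            From ! {ok, thanks},
            has_cheeseburger(BurgerName);
        {From, eat_burger} ->
            From ! {error, no_burger},
            lacks_cheeseburger()
    end.
```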
Conclusion
That's it for state handling in Erlang. The chaos of untamed assignment is rendered impotent by the basic rules of single-assignment and absolute data encapsulation within each process (this implies that you shouldn't write gigantic processes, by the way). The supremely useful concept of a limited set of operating modes is abstracted by the OTP module gen_fsm or can be rather easily written by hand.
Since Erlang does such a good job limiting the chaos of state within a single process and makes the nightmare of concurrent scheduling among processes entirely invisible, that only leaves one complexity monster: the chaos of interactions among loosely coupled actors. In the mind of an Erlanger this is where the complexity belongs. The hard stuff should generally wind up manifesting there, in the no-man's-land of messages, not within functions or processes themselves. Your functions should be tiny, your needs for procedural checking relatively rare (compared to C or Python), and your need for mode flags and switches almost nonexistent.
Edit
To reiterate Pascal's answer, in a super limited way:
loop(State) ->
    receive
        {async, Message} ->
            NewState = do_something_with(Message),
            loop(NewState);
        {sync, From, Message} ->
            NewState = do_something_with(Message),
            Response = process_some_response_on(NewState),
            From ! {ok, Response},
            loop(NewState);
        shutdown ->
            exit(shutdown);
        Any ->
            io:format("~p: Received: ~tp~n", [self(), Any]),
            loop(State)
    end.
Re-read tkowal's response for the most minimal version of this. Re-read Pascal's for an expansion of the same idea to include servicing messages. Re-read the above for a slightly different style of the same pattern of state handling, with the addition of outputting unexpected messages. Finally, re-read the two-state loop I wrote above and you'll see it's actually just another expansion on this same idea.
Remember, you can't re-assign a variable within the same iteration of a function but the next call can have different state. That is the extent of state handling in Erlang.
These are all variations on the same thing. I think you're expecting there to be something more, a more expansive mechanism or something. There is not. Restricting assignment eliminates all the stuff you're probably used to seeing in other languages. In Python you do somelist.append(NewElement) and the list you had has now changed. In Erlang you do NewList = lists:append(SomeList, [NewElement]) and SomeList is still exactly the same as it used to be, and a new list has been returned that includes the new element. Whether this actually involves copying in the background is not your problem. You don't handle those details, so don't think about them. This is how Erlang is designed, and that leaves single assignment and making fresh function calls to enter a fresh slice of time where the slate has been wiped clean again.
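A compilable sketch of that append example (the module name is mine):

```erlang
-module(immutability).
-export([demo/0]).

%% "Appending" returns a brand new list; the original binding is untouched.
demo() ->
    SomeList = [1, 2],
    NewList = lists:append(SomeList, [3]),
    {SomeList, NewList}.
```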
The easiest way to maintain state is using the gen_server behaviour. You can read more on Learn You Some Erlang and in the docs.
A gen_server is a process that can be:
initialised with a given state,
given synchronous and asynchronous callbacks (synchronous for querying the data in "request-response" style and asynchronous for changing the state in "fire and forget" style)
It also has a couple of nice OTP mechanisms:
it can be supervised
it gives you basic logging
its code can be upgraded while the server is running without losing the state
and so on...
Conceptually, gen_server is an endless loop that looks like this:
loop(State) ->
    NewState = handle_requests(State),
    loop(NewState).
where handle_requests receives messages. This way all requests are serialised, so there are no race conditions. Of course it is a little bit more complicated, to give you all the goodies that I described.
You can choose whatever data structure you want for State. It is common to use records because they have named fields, but since Erlang 17 maps can also come in handy. It depends on what you want to store.
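As a minimal sketch of the ideas above, here is a gen_server holding a map as its state (the module and API names are mine):

```erlang
-module(kv_server).
-behaviour(gen_server).
-export([start_link/0, put/2, get/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, #{}, []).

%% Asynchronous state change: "fire and forget".
put(Key, Value) ->
    gen_server:cast(?MODULE, {put, Key, Value}).

%% Synchronous query: "request-response".
get(Key) ->
    gen_server:call(?MODULE, {get, Key}).

init(State) -> {ok, State}.

handle_call({get, Key}, _From, State) ->
    {reply, maps:get(Key, State, undefined), State}.

handle_cast({put, Key, Value}, State) ->
    {noreply, State#{Key => Value}}.
```

Note that "changing" the state is again just recursing the internal loop with a new map; gen_server does the looping for you.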
Variables are not mutable, so when you want state to evolve, you create a new variable and then call the same function again with this new state as a parameter.
This structure is meant for processes like servers; there is no base condition as in the usual factorial example. Generally there is a specific message to stop the server smoothly.
loop(State) ->
    receive
        {add, Item} ->
            NewState = [Item | State],  % create a new variable
            loop(NewState);             % recall loop with the new variable
        {remove, Item} ->
            NewState = lists:filter(fun(X) -> X /= Item end, State), % create a new variable
            loop(NewState);             % recall loop with the new variable
        {items, Pid} ->
            Pid ! {items, State},
            loop(State);
        stop ->
            stopped;                    % this will be the stop condition
        _ ->
            loop(State)                 % ignoring other messages may be interesting in a never-ending loop
    end.
My task is to process files inside a zip file. So I write a bunch of independent functions and compose them to get the desired result. That's one way of doing things. Now, instead of having it all written as functions, I write some of them as processes with selective receives and all, and everything is cool. But then, pondering on this a bit further, I'm thinking: do we need functions at all? Couldn't I replace or convert all those functions into processes that communicate with themselves and with other processes? So there lies my doubt. When to use functions and when to use processes? Is there any advantage from a performance standpoint in using functions (like caching)? Don't code blocks in processes get cached similarly?
So in our example, what's the standard idiom to proceed with? Current pseudocode below.
start() ->
    FL = extract("..path"),
    FPids = lists:map(open_file, FL), % get file Pids
    lists:foreach(fun(FPid) ->
                      CPid = spawn_compute_process(),
                      rpc(CPid, {compute, FPid})
                  end, FPids).

compute() ->
    receive
        {Pid, {..}} ->
            Line = read_line(..),
            TL = tidy_line(Line), % an independent function. But couldn't it be a guard within this process?
            ..
    end.

extract(FilePath) -> FilesList.

read_line(FPid) -> line.
So how do you actually write code? Like, write smaller independent functions first and then wrap them up inside processes?
Thanks.
The short answer is that you use processes to exploit concurrency. Replacing functions with processes, where you sequentially run one process, then send its value to another process which then does its work and sends its result to the next process, etc., each process terminating after it has done its bit, is the wrong use of processes. Here you are just evaluating something sequentially by sending data from one process to another instead of calling functions.
If, however, you intend this chain of processes to be able to process multiple sequences of "calls" concurrently then it is a different matter. Then you are using the processes for concurrency. The more general way of doing this in erlang is to create a separate process for each sequence and exploit the concurrency in that manner.
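A classic sketch of that "separate process per sequence" idea is a parallel map (this implementation is mine, not from the answer): each element is handled by its own process, and results are gathered back in order via unique refs:

```erlang
-module(pmap).
-export([pmap/2]).

%% Spawn one worker per element; collect results in the original order
%% by matching on each worker's unique ref.
pmap(Fun, List) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, Fun(X)} end),
                Ref
            end || X <- List],
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```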
Another use of processes is to manage state.