If an Erlang application, myapp, requires mnesia to run, then mnesia should be included in its application resource file, under key applications, so that if myapp is started, mnesia would get started automatically. Its node type by default is opt_disc (OTP 18).
What if I want a disc node? I know I can set -mnesia schema_location disc on the command line, but this only works if a schema already exists, which means I would have to perform some initialization before starting myapp. Is there an "OTP-ful" way, without removing mnesia from applications, to avoid this initialization? The main objective is to turn "init-then-start" into just "start".
This part of your post is not correct:
... mnesia should be included in its application resource file, under key applications, so that if myapp is started, mnesia would get started automatically.
The applications you list as the value of the applications key in the .app file are not started automatically; the entry says that they must already be started before your application is started.
Imagine that we want to create a foo application which depends on mnesia with some customization. One way is to start it in the foo_app.erl file:
-module(foo_app).
-behaviour(application).

-export([start/2, stop/1]).

start(_Type, _Args) ->
    mnesia:start(),
    mnesia:change_table_copy_type(schema, node(), disc_copies),
    %% configure mnesia
    %% create your tables
    %% ...
    foo_sup:start_link().

stop(_State) ->
    ok.
This way it creates a disc schema whether or not one was created before.
Note: In this solution, if you list mnesia as a dependency under the applications key in your foo.app.src file (which at compile time produces foo.app), then when starting the foo application you get {error, {not_started, mnesia}}. So you must not do that; let your application start it in its foo_app:start/2 function.
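For illustration, the foo.app.src could then look roughly like this (a sketch; the description and version are placeholders, and mnesia is deliberately absent from applications):

%% foo.app.src -- sketch: mnesia is NOT listed under applications,
%% because foo_app:start/2 starts and configures it itself.
{application, foo,
 [{description, "foo"},
  {vsn, "0.1.0"},
  {registered, []},
  {mod, {foo_app, []}},
  {applications, [kernel, stdlib]},  %% note: no mnesia here
  {env, []}]}.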
Any process's modification of the cwd is global:
iex(1)> File.cwd
{:ok, "/home/hentioe"}
iex(2)> spawn fn -> File.cd("/home") end
#PID<0.105.0>
iex(3)> File.cwd
{:ok, "/home"}
Is there a way to isolate the current working directory (cwd) between processes?
There is a concept of a file server in the Erlang VM, and the underlying :file.set_cwd/1 (which File.cd/1 delegates to) is explicitly documented as setting the working directory of the file server.
The file server on a different node is always a different one; also, there are several functions one might call to bypass the file server (grep the :file documentation for "file server").
It is unclear why you would need a different current directory for a different process, and it all smells like an XY problem, but the generic answer to your question would be:
→ no, all the processes on the same node are using the same file server and hence have the same working directory across processes.
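If something like a per-process working directory is genuinely needed, one workaround (not part of the answer above, just a sketch in Erlang; in Elixir the equivalent would use Path.join/2) is to keep a base directory in each process's own state and resolve relative paths against it, never touching the shared cwd:

%% Sketch: each process carries its own "virtual cwd" (Base) and resolves
%% relative paths against it instead of using the node-wide cwd.
-module(percwd).
-export([read_relative/2]).

read_relative(Base, RelPath) ->
    file:read_file(filename:join(Base, RelPath)).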
I'd like to use Erlang's OTP supervisor in a distributed application I'm building. But I'm having trouble figuring out how a supervisor of this kind can monitor a process running on a remote Node. Unlike Erlang's spawn_link function, start_child has no parameter for specifying the Node on which the child will be spawned.
Is it possible for an OTP supervisor to monitor a remote child, and if not, how can I achieve this in Erlang?
supervisor:start_child/2 can be used across nodes.
The reason for your confusion is just a mix-up regarding the context of execution (which is admittedly a bit hard to keep straight sometimes). There are three processes involved in any OTP spawn:
The Requestor
The Supervisor
The Spawned Process
The context of the Requestor is the one in which supervisor:start_child/2 is called -- not the context of the supervisor itself. You would normally provide a supervisor interface by exporting a function that wraps the call to supervisor:start_child/2:
do_some_crashable_work(Data) ->
    supervisor:start_child(sooper_dooper_sup, [Data]).
That might be defined and exported from the supervisor module, be defined internally within a "manager" sort of process according to the "service manager/supervisor/workers" idiom, or whatever. In all cases, though, some process other than the supervisor is making this call.
Now look carefully at the Erlang docs for supervisor:start_child/2 again (on erlang.org, or an R19.1 doc mirror, since erlang.org sometimes has a hard time for some reason). Note that the type sup_ref() can be a registered name, a pid(), a {global, Name}, or a {Name, Node} tuple. The requestor may be on any node calling a supervisor on any other node when calling with a pid(), {global, Name}, or {Name, Node} tuple.
The supervisor doesn't just randomly kick things off, though. It has a child_spec() it is going off of, and the spec tells the supervisor what to call to start that new process. This first call into the child module is made in the context of the supervisor and is a custom function. Though we typically name it something like start_link/N, it can do whatever we want as a part of startup, including declare a specific node on which to spawn. So now we wind up with something like this:
%% Usually defined in the requestor or supervisor module
do_some_crashable_work(SupNode, WorkerNode, Data) ->
    supervisor:start_child({sooper_dooper_sup, SupNode}, [WorkerNode, Data]).
With a child spec of something like:
%% Usually in the supervisor code
SooperWorker = {sooper_worker,
                {sooper_worker, start_link, []},
                temporary,
                brutal_kill,
                worker,
                [sooper_worker]},
Which indicates that the first call would be to sooper_worker:start_link/2:
%% The exported start_link function in the worker module
%% Called in the context of the supervisor
start_link(Node, Data) ->
    Pid = proc_lib:spawn_link(Node, ?MODULE, init, [self(), Data]),
    {ok, Pid}.

%% The first thing the newly spawned process will execute
%% in its own context, assuming here it is going to be a gen_server.
init(_Parent, Data) ->
    Debug = sys:debug_options([]),
    {ok, State} = initialize_some_state(Data),
    gen_server:enter_loop(?MODULE, Debug, State).
You might be wondering what all that mucking about with proc_lib was for. It turns out that while it is possible to call for a spawn from anywhere within a multi-node system to anywhere else within that system, it just isn't a very useful way of doing business, and so the gen_* behaviors and even proc_lib:start_link/N don't have a way of declaring the node on which to spawn a new process.
What you ideally want is nodes that know how to initialize themselves and join the cluster once they are running. Whatever services your system provides are usually best replicated on the other nodes within the cluster, and then you only have to write a way of picking a node, which lets you factor the business of startup out entirely since it is now node-local in every case. In this case, whatever your ordinary manager/supervisor/worker code does doesn't have to change -- stuff just happens, and it doesn't matter that the requestor's PID happens to be on another node, even if that PID is the address to which results must be returned.
Stated another way, we don't really want to spawn workers on arbitrary nodes; what we really want to do is step up to a higher level and request that some work get done by another node, without caring how that happens. Remember, to spawn a particular function based on an {M,F,A} call, the node you are calling must have access to the target module and function -- and if it has a copy of the code already, why isn't it a duplicate of the calling node?
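As a rough sketch of that higher-level approach (module and registered names here are hypothetical, not from the answer above): pick a node from the connected cluster and ask the manager registered there to do the work, letting that node's own supervisor spawn the worker locally:

%% Hypothetical sketch: ask a remote node's locally registered manager to do
%% the work; its own supervisor spawns the worker node-locally.
-module(work_dispatch).
-export([request_work/1]).

request_work(Data) ->
    %% Naive node selection: pick any connected node, or fall back to self.
    Node = case nodes() of
               []    -> node();
               Nodes -> lists:nth(rand:uniform(length(Nodes)), Nodes)
           end,
    %% sooper_manager is assumed to be registered locally on every node.
    gen_server:call({sooper_manager, Node}, {do_work, Data}).

The requestor neither knows nor cares where the worker process ends up; the reply simply comes back to its own PID.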
Hopefully this answer explained more than it confused.
I am following the "REST API with yaws" tutorial from the book 'Building Web Applications with Erlang'.
I get the following error when starting yaws ($ yaws):
file:path_eval([".","/Users/<uername>"],".erlang"): error on line 3: 3: evaluation failed with reason error:{undefined_record,airport} and stacktrace [{erl_eval,exprs,2,[]}]
.erlang file:
application:start(mnesia).
mnesia:create_table(airport,[{attributes, record_info(fields, airport)}, {index, [country]}]).
rest.erl file can be found here.
How can I define a record? I tried to add rd(airport, {code, city, country, name}). without success.
The record 'airport' is defined in the module rest, and all functions in 'rest' know about the record 'airport'. But when you start your application, Erlang evaluates the .erlang file, which has nothing to do with the module rest. So Erlang simply has no idea what the record airport is or where to find it.
The easiest workaround, I believe, is to define a function (for instance 'init') in the module rest that contains everything you currently have in the .erlang file, export it, and in the .erlang file just invoke rest:init().
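A sketch of that workaround, assuming rest.erl already defines -record(airport, {code, city, country, name}):

%% Added to rest.erl, where the airport record definition is visible:
-export([init/0]).

init() ->
    application:start(mnesia),
    mnesia:create_table(airport,
                        [{attributes, record_info(fields, airport)},
                         {index, [country]}]).

The .erlang file then only needs the single line rest:init().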
The main thing I'm confused about here (I think) is what the arguments to the qfun are supposed to be and what the return value should be. The README basically doesn't say anything about this and the example it gives throws away the second and third args.
Right now I'm only trying to understand the arguments and not using Riak for anything practical. Eventually I'll be trying to rebuild our (slow, MySQL-based) financial reporting system with it. So ignoring the pointlessness of my goal here, why does the following give me a badfun exception?
The data is just tuples (pairs) of Names and Ages, with the keys being the name. I'm not doing any conversion to JSON or such before inserting the data from the Erlang console.
Now with some {Name, Age} pairs stored in <<"people">> I want to use MapReduce (for no other reason than to understand "how") to get the values back out, unchanged in this first use.
riakc_pb_socket:mapred(
    Pid, <<"people">>,
    [{map, {qfun, fun(Obj, _, _) -> [Obj] end}, none, true}]).
This just gives me a badfun, however:
{error,<<"{\"phase\":0,\"error\":\"{badfun,#Fun<erl_eval.18.17052888>}\",\"input\":\"{ok,{r_object,<<\\\"people\\\">>,<<\\\"elaine\\\">"...>>}
How do I just pass the data through my map function unchanged? Is there any better documentation of the Erlang client than what is in the README? That README seems to assume you already know what the inputs are.
There are 2 Riak Erlang clients that serve different purposes.
The first one is the internal Riak client that is included in the riak_kv module (riak_client.erl and riak_object.erl). This can be used if you are attached to the Riak console or if you are writing a MapReduce function or a commit hook. As it is run from within a Riak node it works quite well with qfuns.
The other client is the official Riak client for Erlang that is used by external applications and connects to Riak through the protocol buffers interface. This is what you are using in your example above. As this connects through protocol buffers, it is usually recommended that MapReduce functions in Erlang are compiled and deployed on the nodes of the cluster as named functions. This will also make them accessible from other client libraries.
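As an illustration of that recommendation (a sketch with hypothetical names; the module must be compiled and loaded on every Riak node), the map function becomes a named, exported function:

-module(app_name_mapreduce).
-export([map_identity/3]).

%% Same shape as the qfun in the question: pass each object through unchanged.
map_identity(Obj, _KeyData, _Arg) ->
    [Obj].

The phase spec then references it with {modfun, Module, Function} instead of a qfun:

riakc_pb_socket:mapred(
    Pid, <<"people">>,
    [{map, {modfun, app_name_mapreduce, map_identity}, none, true}]).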
I think my code is actually correct and my problem lies in the fact that I'm trying to use the shell to execute the code. I need to actually compile the code before it can be run in Riak. This is a limitation of the Erlang shell and the way it compiles funs.
After a few days of playing around, here's a neat trick that makes development easier: exploit Erlang's RPC support and runtime code loading to distribute your code across all the Riak nodes:
%% Call this somewhere during your app's initialization routine.
%% Assumes you have a list of available Riak nodes in your app's env.
load_mapreduce_in_riak() ->
    load_mapreduce_in_riak(application:get_env(app_name, riak_nodes, [])).

load_mapreduce_in_riak([]) ->
    ok;
load_mapreduce_in_riak([{Node, Cookie}|Tail]) ->
    erlang:set_cookie(Node, Cookie),
    case net_adm:ping(Node) of
        pong ->
            {Mod, Bin, Path} = code:get_object_code(app_name_mapreduce),
            rpc:call(Node, code, load_binary, [Mod, Path, Bin]);
        pang ->
            io:format("Riak node ~p down! (ping <-> pang)~n", [Node])
    end,
    load_mapreduce_in_riak(Tail).
Now you can refer to any of the functions in the module app_name_mapreduce and they'll be visible to the Riak cluster. The code can be removed again with code:delete/1, if needed.
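A matching sketch for removal, pushing code:delete/1 (and a purge) to a node over rpc:

%% Sketch: remove app_name_mapreduce from a Riak node we loaded it into.
unload_mapreduce_from_riak(Node) ->
    rpc:call(Node, code, delete, [app_name_mapreduce]),  %% mark current code as old
    rpc:call(Node, code, purge,  [app_name_mapreduce]).  %% drop the old code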
Setting:
I want to start a unique global registered gen_server process in an erlang cluster. If the process is stopped or the node running it goes down, the process is to be started on one of the other nodes.
The process is part of a supervisor. The problem is that starting the supervisor on a second node fails because the gen_server is already running and registered globally from the first node.
Question(s):
Is it ok to check if the process is already globally registered inside gen_server's start_link function and in this case return {ok, Pid} of the already running process instead of launching a new gen_server instance?
Is it correct that this way the one process would be part of multiple supervisors, and that if it goes down, all supervisors on all the other nodes would try to restart it? The first supervisor would create a new gen_server process and the other supervisors would all link to that one process again.
Should I use some sort of global:trans() inside the gen_server's start_link function?
Example Code:
start_link() ->
    global:trans({?MODULE, ?MODULE}, fun() ->
        case gen_server:start_link({global, ?MODULE}, ?MODULE, [], []) of
            {ok, Pid} ->
                {ok, Pid};
            {error, {already_started, Pid}} ->
                link(Pid),
                {ok, Pid};
            Else -> Else
        end
    end).
If you return {ok, Pid} for something you don't link to, it will confuse a supervisor that relies on the return value. If you're not going to have a supervisor use this as a start_link function, you can get away with it.
Your approach seems like it should work, as each node will try to start a new instance if the global one dies. You may find that you need to increase the MaxR value in your supervisor setup, as you'll get process exits every time the membership of the cluster changes.
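For instance (purely illustrative values and names, not from the question), the supervisor's restart intensity could be raised along these lines:

%% In the supervisor's init/1: allow up to 20 restarts in 10 seconds, since
%% cluster membership changes will trigger extra restart attempts.
init([]) ->
    Singleton = {singleton_server,
                 {singleton_server, start_link, []},
                 permanent, 5000, worker, [singleton_server]},
    {ok, {{one_for_one, 20, 10}, [Singleton]}}.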
One way I've created global singletons in the past is to run the process on all the nodes, but have one of them (the one that wins the global registration race) be the master. The other processes monitor the master and when the master exits, try to become the master. (And again, if they don't win the registration race then they monitor the pid of the one that did). If you do this, you have to handle the global name registration yourself (i.e. don't use the gen_server:start({global, ... functionality) because you want the process to start whether or not it wins the registration, it will simply behave differently in each case.
The process itself must be more complicated (it has to run in both master and non-master modes), but it stabilizes quickly and doesn't produce a lot of log spam with supervisor start attempts.
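A greatly simplified sketch of that idea (a plain process rather than a gen_server, with made-up module and function names; a real version needs more care around registration races):

%% Every node starts one of these; whoever wins global registration is master.
-module(singleton).
-export([start_link/0, init/0]).

start_link() ->
    {ok, proc_lib:spawn_link(?MODULE, init, [])}.

init() ->
    try_become_master().

try_become_master() ->
    case global:register_name(?MODULE, self()) of
        yes ->
            master_loop();                     %% we won the race: act as master
        no ->
            case global:whereis_name(?MODULE) of
                undefined ->
                    try_become_master();       %% master vanished already: retry
                Master ->
                    MRef = erlang:monitor(process, Master),
                    standby_loop(MRef)         %% we lost: watch the master
            end
    end.

master_loop() ->
    receive
        Msg ->
            %% the singleton's real work goes here
            io:format("master got ~p~n", [Msg]),
            master_loop()
    end.

standby_loop(MRef) ->
    receive
        {'DOWN', MRef, process, _Pid, _Reason} ->
            try_become_master();               %% master died: race to take over
        _Other ->
            standby_loop(MRef)
    end.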
My method usually requires a few rounds of revision to shake out the corner cases, but is to my mind less hassle than writing an OTP Distributed Application. This method has another advantage over distributed applications in that you don't have to statically configure the list of nodes involved in your cluster - any node can be a candidate for running the master copy of the process. Your approach has this same property.
How about turning the gen_server into an application and using distributed applications?