Any modification of the cwd by one process is global:
iex(1)> File.cwd
{:ok, "/home/hentioe"}
iex(2)> spawn fn -> File.cd("/home") end
#PID<0.105.0>
iex(3)> File.cwd
{:ok, "/home"}
Is there a way to isolate the current working directory (cwd) between processes?
There is a concept of a file server in the Erlang VM, and the underlying :file.set_cwd/1, which File.cd/1 delegates to, is explicitly defined to set the working directory of that file server.
The file server always differs between nodes, and there are several functions one might call to bypass the file server (grep the :file documentation for “file server”).
It is unclear why you would need a different current directory for a particular process, and it all smells like an XY problem, but the generic answer to your question is:
→ no, all processes on the same node use the same file server and hence share the same working directory.
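If the goal is to keep one process's file operations independent of another process changing the cwd, a common workaround is to stop depending on the cwd altogether and resolve every path against an explicit, per-process base directory. A minimal sketch in Erlang (the module and function names are invented here; filename:join/2 and file:read_file/1 are standard library functions):
-module(cwd_free).
-export([read_relative/2]).

%% Resolve Path against the caller-supplied BaseDir instead of the
%% node-global working directory held by the file server.
read_relative(BaseDir, Path) ->
    file:read_file(filename:join(BaseDir, Path)).
Each process then carries its own BaseDir as ordinary state, so no process can affect another's notion of "current directory".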
I have a rebar3 project. In this project, the supervisor should spawn a bunch of Erlang nodes across multiple machines. I found that the nodes were never getting brought up because of an error in the log:
sh: no such file or directory h/mberns01/..../prod
Where only the leading slash of the path is missing and the rest of the command is correct.
Where in this project is this command generated and why might it be missing the leading slash? I'm not even sure what other information I can provide here to be helpful -- let me know.
Cheers.
EDIT: So it looks like init:get_argument(progname) returns the wrong program (no leading /). Not sure why...
It appears as though the problem can be sidestepped by using slave:start/5, which allows the user to specify what Prog they want to run on the remote host:
spawn(slave, start, [Host, 'node', [], self(), "erl"])
But it does not answer the question of why there is a missing /.
init:get_argument(progname) is supposed to return {ok,[["name"]]}, not a directory, so there is no leading /.
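For reference, on a node started plainly with erl the value is just the program name (output shown here is illustrative):
1> init:get_argument(progname).
{ok,[["erl"]]}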
Also, out of curiosity, how are you spawning new nodes? Are you using slave, pool or something else? If slave, what arguments are you passing it?
I have an application written in Erlang. I added a supervisor for distribution, and now, after parsing configFile.cfg in the supervisor, I want to pass the configuration on to my old application.
I have something like this now:
-module(supervisor_sup).

start() ->
    application_app:start().
What I want is:
-module(supervisor_sup).
-record(config, {param1, param2}).

% After parsing configFile.cfg
Conf = #config{param1 = Param1,
               param2 = Param2},

start(Conf) ->
    application_app:start(Conf).
It is uncommon to start applications from supervisors or from modules running under supervisors. The preferred way is to use application dependencies to make sure all applications are started, and in the right order.
However, if you want some configuration to be available to several different applications without having to parse it more than once, maybe the gproc library is what you are looking for?
https://github.com/uwiger/gproc
gproc can be used to efficiently set global configuration and much more, even in a distributed environment.
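A minimal sketch of the idea, assuming the gproc application is already started and using {n, l, app_config} as an illustrative local name (gproc:reg/2 and gproc:lookup_value/1 are part of gproc's documented API):
%% The supervisor (or any long-lived process) registers the parsed
%% config once; other processes on the node can then read it without
%% re-parsing the file.
store_config(Conf) ->
    true = gproc:reg({n, l, app_config}, Conf).

fetch_config() ->
    gproc:lookup_value({n, l, app_config}).
Note that a gproc name is tied to the process that registered it, so the registration disappears if that process dies.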
If an Erlang application, myapp, requires mnesia to run, then mnesia should be included in its application resource file, under the key applications, so that if myapp is started, mnesia gets started automatically - its node type by default is opt_disc (OTP 18).
What if I want a disc node? I know I can set -mnesia schema_location disc on the command line, but this only works if a schema already exists, which means I would have to perform some initialization before starting myapp. Is there an "OTP-ful" way, without removing mnesia from applications, to avoid this initialization? The main objective is to turn "init-then-start" into just "start".
This part of your post is not correct:
... mnesia should be included in its application resource file, under key applications, so that if myapp is started, mnesia would get started automatically.
The applications you list as the value of the applications key in the .app file do not start automatically; the key only declares that they must be started before your application is started.
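A quick shell illustration of the difference, assuming a myapp that lists mnesia under applications (application:start/1 and application:ensure_all_started/1 are standard OTP functions; output is illustrative):
1> application:start(myapp).
{error,{not_started,mnesia}}
2> application:ensure_all_started(myapp).
{ok,[mnesia,myapp]}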
Imagine that we want to create a foo application which depends on mnesia with some customization. One way is to start mnesia in the foo_app.erl file:
-module(foo_app).
-behaviour(application).

-export([start/2, stop/1]).

start(_Type, _Args) ->
    mnesia:start(),
    mnesia:change_table_copy_type(schema, node(), disc_copies),
    %% configure mnesia
    %% create your tables
    %% ...
    foo_sup:start_link().

stop(_State) ->
    ok.
This way the disc schema is created whether or not it existed before.
Note: in this solution, if you list mnesia as a dependency under the applications key in your foo.app.src file (which at compile time produces foo.app), you get {error, {not_started, mnesia}} when starting the foo application. So you must not list it there; let your application start mnesia in its foo_app:start/2 function instead.
The main thing I'm confused about here (I think) is what the arguments to the qfun are supposed to be and what the return value should be. The README basically says nothing about this, and the example it gives throws away the second and third arguments.
Right now I'm only trying to understand the arguments and not using Riak for anything practical. Eventually I'll be trying to rebuild our (slow, MySQL-based) financial reporting system with it. So ignoring the pointlessness of my goal here, why does the following give me a badfun exception?
The data is just tuples (pairs) of Names and Ages, with the keys being the name. I'm not doing any conversion to JSON or such before inserting the data from the Erlang console.
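For context, a sketch of how such a pair might be stored from the console (the key and value are illustrative; term_to_binary/1 is one common way to store a native Erlang term, and riakc_obj:new/3 and riakc_pb_socket:put/2 are the client's documented store functions):
Obj = riakc_obj:new(<<"people">>, <<"elaine">>, term_to_binary({<<"elaine">>, 34})),
riakc_pb_socket:put(Pid, Obj).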
Now with some {Name, Age} pairs stored in <<"people">> I want to use MapReduce (for no other reason than to understand "how") to get the values back out, unchanged in this first use.
riakc_pb_socket:mapred(
Pid, <<"people">>,
[{map, {qfun, fun(Obj, _, _) -> [Obj] end}, none, true}]).
This just gives me a badfun, however:
{error,<<"{\"phase\":0,\"error\":\"{badfun,#Fun<erl_eval.18.17052888>}\",\"input\":\"{ok,{r_object,<<\\\"people\\\">>,<<\\\"elaine\\\">"...>>}
How do I just pass the data through my map function unchanged? Is there any better documentation of the Erlang client than what is in the README? That README seems to assume you already know what the inputs are.
There are two Riak Erlang clients that serve different purposes.
The first one is the internal Riak client, included in the riak_kv application (riak_client.erl and riak_object.erl). It can be used if you are attached to the Riak console, or if you are writing a MapReduce function or a commit hook. As it runs from within a Riak node, it works quite well with qfuns.
The other client is the official Riak client for Erlang, which is used by external applications and connects to Riak through the protocol buffers interface. This is what you are using in your example above. As it connects through protocol buffers, it is usually recommended that MapReduce functions in Erlang be compiled and deployed on the nodes of the cluster as named functions. This also makes them accessible from other client libraries.
I think my code is actually correct, and my problem lies in the fact that I'm trying to use the shell to execute it. I need to actually compile the code before it can be run in Riak. This is a limitation of the Erlang shell and the way it compiles funs.
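To turn the pass-through map phase into a compiled, named function, the fun can be moved into a module that is deployed on the Riak nodes. A sketch, with the module and function names invented for illustration (riak_object:get_value/1 is riak_kv's accessor for an object's value):
-module(people_mapred).
-export([map_identity/3]).

%% Same contract as the qfun: takes the object, the key data and the
%% phase argument, and returns a list of results.
map_identity(Obj, _KeyData, _Arg) ->
    [riak_object:get_value(Obj)].
With the module loaded on the cluster, the phase spec in the riakc_pb_socket:mapred/3 call becomes {map, {modfun, people_mapred, map_identity}, none, true}.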
After a few days of playing around, here's a neat trick that makes development easier: exploit Erlang's RPC support and the fact that it has runtime code loading to distribute your code across all the Riak nodes:
%% Call this somewhere during your app's initialization routine.
%% Assumes you have a list of available Riak nodes in your app's env.
load_mapreduce_in_riak() ->
    load_mapreduce_in_riak(application:get_env(app_name, riak_nodes, [])).

load_mapreduce_in_riak([]) ->
    ok;
load_mapreduce_in_riak([{Node, Cookie}|Tail]) ->
    erlang:set_cookie(Node, Cookie),
    case net_adm:ping(Node) of
        pong ->
            {Mod, Bin, Path} = code:get_object_code(app_name_mapreduce),
            rpc:call(Node, code, load_binary, [Mod, Path, Bin]);
        pang ->
            io:format("Riak node ~p down! (ping <-> pang)~n", [Node])
    end,
    load_mapreduce_in_riak(Tail).
Now you can refer to any of the functions in the module app_name_mapreduce and they'll be visible to the Riak cluster. The code can be removed again with code:delete/1, if needed.
I have several nodes running in an erlang cluster, each using the same magic cookie and trusted by each other. I want to have one master node send code and modules to the other nodes. How can I do this?
Use nl(module_name). in the Erlang shell to load a module on all connected nodes.
Check out my etest project for an example of programmatically injecting a set of modules on all nodes and then starting it.
The core of this is pretty much the following code:
{Mod, Bin, File} = code:get_object_code(Mod),
{_Replies, _} = rpc:multicall(Nodes, code, load_binary,
                              [Mod, File, Bin]),
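Wrapped up as a self-contained helper (a sketch; the module name and the final ok are choices made here, while code:get_object_code/1, rpc:multicall/4 and code:load_binary/3 are standard OTP functions):
-module(code_pusher).
-export([load_on/2]).

%% Fetch the module's object code locally, then load it on every node.
load_on(Mod, Nodes) ->
    {Mod, Bin, File} = code:get_object_code(Mod),
    {_Replies, _BadNodes} = rpc:multicall(Nodes, code, load_binary,
                                          [Mod, File, Bin]),
    ok.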