I need to debug my ejabberd server and I want to use pman for this purpose. But I only have access via ssh, and the server runs inside screen.
I do:
ssh mydomain@example.com
erl -sname test@localhost
(test@localhost)1> pman:start().
<0.123.0>
and it works, but I need to get access to the 'ejabberd@localhost' node from the same machine.
Now I press Ctrl+G:
--> r 'ejabberd@localhost'
--> c
(ejabberd@localhost)1> pman:start().
** exited: {startup_timeout,pman} **
And my question is - how do I run pman properly?
Pman needs access to the screen on which it runs. I understand that you are running distributed Erlang on both nodes, and that they are connected and know of each other. The easiest way is then to run pman locally on your node with pman:start(). There is a Nodes menu which should contain all known nodes; if you pick ejabberd@localhost you should see all the processes on that node.
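A sketch of what that could look like (assuming X forwarding so pman can open its window, and that both nodes share the same cookie; <ejabberd_cookie> is a placeholder for whatever cookie the ejabberd node uses):
ssh -X mydomain@example.com
erl -sname test@localhost -setcookie <ejabberd_cookie>
(test@localhost)1> net_adm:ping('ejabberd@localhost').
pong
(test@localhost)2> pman:start().
<0.123.0>
%% In the pman window, open the Nodes menu and pick ejabberd@localhost
%% to list the processes running on that node.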
Not sure about pman, but if you want to monitor a remote node, I've created entop for that purpose. It might not work exactly like pman but should be close enough.
https://github.com/mazenharake/entop
I have a Docker image that needs to be run in an environment where I have no admin privileges, using Slurm 17.11.8 in RHEL. I am using udocker to run the container.
In this container, there are two applications that need to run:
[1] ROS simulation (there is a rosnode that is a TCP client talking to [2])
[2] An executable (TCP server)
So [1] and [2] need to run together, and they share some common files as well. Usually, I run them in separate terminals. But I have no idea how to do this with Slurm.
Possible Solution:
(A) Use two containers of the same image, but their files will be stored locally. I could use volumes instead, but this requires me to change my code significantly and may break compatibility when I am not running it as containers (e.g. in Eclipse).
(B) Use a bash script to launch two terminals and run [1] and [2]. Then srun this script.
I am looking at (B) but have no idea how to approach it. I looked into other approaches, but they address sequential execution of multiple processes; I need these to run concurrently.
If it helps, I am using xfce-terminal, though I can switch to other terminals such as GNOME Terminal or Konsole.
This is a shot in the dark since I don't work with udocker.
In your Slurm submit script, to be submitted with sbatch, you could allocate enough resources for both jobs to run on the same node (so you just need to reference localhost for your client/server). Start your first process in the background with something like:
udocker container_name container_args &
The & should start the first container in the background.
You would then start the second container:
udocker 2nd_container_name more_args
This would run without & to keep the process in the foreground. Ideally, when the second container completes, the script would complete and Slurm cleanup would kill the first container. If both containers come to an end cleanly, you can put a wait at the end of the script. (A sketch combining these pieces follows the caveats below.)
Caveats:
Depending on how Slurm is configured, processes may not be properly cleaned up at the end. You may need to capture the PID of the first udocker as a variable and kill it before you exit.
The first container may still be processing when the second completes. You may need to add a sleep command at the end of your submission script to give it time to finish.
Any number of other gotchas may exist that you will need to find and hopefully work around.
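Putting those pieces and caveats together, a minimal submit-script sketch could look like the following (container names, resource requests, paths, and the exact udocker invocation are assumptions; adapt them to your setup):
#!/bin/bash
#SBATCH --job-name=ros_sim
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --time=01:00:00

# Start the TCP server container [2] in the background and remember its PID
udocker run server_container server_args &
SERVER_PID=$!

# Give the server a moment to start listening (assumed to be long enough)
sleep 5

# Run the ROS simulation / TCP client [1] in the foreground
udocker run client_container client_args

# Clean up: stop the background server if it is still running, then wait for it
kill "$SERVER_PID" 2>/dev/null
wait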
The title is what appears in my ejabberd log files after an unsuccessful start. I did some googling and it seems it is probably related to my cookie.
Running get_cookie() in the Erlang shell prints 'nocookie'. Is this typical for a fresh install of Erlang? How should I ideally go about setting the cookie?
nocookie would likely mean that the node hasn't been started in distributed mode. All distributed nodes otherwise have a cookie.
You will have to make sure your node is started in distributed mode (through -name or -sname arguments) for things to work.
In case the node is already started and you have access to a shell, you can start the distributed kernel by hand with net_kernel:start([Name, shortnames]) or net_kernel:start([Name, longnames]).
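For example, a short session along those lines (node name, host, and cookie value are placeholders):
$ erl -sname foo -setcookie mysecret
(foo@myhost)1> erlang:get_cookie().
mysecret
Or, from a shell that was started without -name/-sname:
1> net_kernel:start([foo, shortnames]).
{ok,<0.123.0>}
2> erlang:set_cookie(node(), mysecret).
true
3> erlang:get_cookie().
mysecret
The persistent way to set the cookie is the ~/.erlang.cookie file, which Erlang reads (or creates) when a distributed node starts.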
I'm trying to start epmd separately from the Erlang VM, in order to do monitoring on the connection handling.
This works fine, except for cases when the vm starts up before epmd.
Is there a way to make the Erlang VM start without it starting epmd on its own?
As of Erlang/OTP 19.0, there is a -start_epmd command line option which can be set to true (the default) or false.
If you pass -start_epmd false on the command line and epmd is running, the Erlang node starts as usual. If epmd is not running, the Erlang node fails to start with this message:
$ erl -start_epmd false -sname foo
Protocol 'inet_tcp': register/listen error: econnrefused
If the Erlang node is not started as a distributed node (i.e., without passing -name or -sname), it neither starts epmd nor attempts to connect to it, regardless of the -start_epmd setting.
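Conversely, if epmd is already running, for example because you started it by hand, the node comes up as usual (hostname in the prompt is a placeholder):
$ epmd -daemon
$ erl -start_epmd false -sname foo
(foo@myhost)1>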
Possible helpful questions/answers:
Is there a way to stop Erlang servers from automatically starting epmd?
Ensure epmd started
In line with those questions/answers, I'd suggest making the Erlang VM service depend on epmd (which should be a separate service of its own). Also, if you run epmd as one of the very first services, it should be possible to make it start before Erlang every time. How to do this will depend on your operating system and deployment details.
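For example, on a systemd-based system that dependency could be sketched roughly like this (unit names, paths, and the node's start command are assumptions; your distribution may already ship an epmd unit):
# epmd.service
[Unit]
Description=Erlang Port Mapper Daemon

[Service]
# epmd stays in the foreground when started without -daemon
ExecStart=/usr/bin/epmd

[Install]
WantedBy=multi-user.target

# my-erlang-node.service
[Unit]
Description=My Erlang node
Requires=epmd.service
After=epmd.service

[Service]
ExecStart=/usr/local/bin/start_my_node

[Install]
WantedBy=multi-user.target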
Also, a not-so-elegant solution would be to change your init script so that it waits for epmd to start manually. Your mileage may vary; a very naive approach (but useful as an example) would be something like:
while true; do
    pid=$(pidof epmd)
    if [ -z "$pid" ]; then
        sleep 1   # epmd not up yet, wait a bit more
    else
        break
    fi
done
# Continue initialization
Note that the code should allow for a maximum number of attempts, pidof only works on Linux, etc. I'm not sure I like this solution, but it can do the job.
As an even less elegant solution, you could replace the epmd binary that Erlang runs with one of your own that does whatever you need (like faking the epmd start, or starting your own epmd as in the code above).
Hope it helps!
I have an application which has the following requirement.
While my Erlang app is running, I need to start, on the fly, one or more remote nodes on either the local host or a remote host.
I have looked at the following options:
1) For starting a remote node on the local host, either use the slave module or the net_kernel:start() API. However, with the latter there seems to be no way to specify options like the boot script file name, etc.
2) In any case, I don't need the slave configuration, as I need to mimic the same behaviour for nodes spawned on local as well as remote hosts. In my current setup I don't have permission to rsh to the remote host. The workaround I can think of is to have a default node running on the remote host, so as to enable remote node creation through a combination of spawn or rpc:async_call and os:cmd.
Is there any other API to start erl?
I am not sure this is the best or cleanest way to solve this problem, and I would like to know the Erlang approach to it.
Thanks in advance
There is the pool module, which might help you; however, it relies on the slave module (and therefore on rsh).
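For reference, a minimal sketch of using pool (hostnames are placeholders; it needs a .hosts.erlang file and, as noted, rsh access to the listed hosts, which the question rules out):
%% .hosts.erlang, in the current or home directory:
%% 'host1.example.com'.
%% 'host2.example.com'.

1> pool:start(mypool).                      % starts one slave node per listed host
['mypool@host1.example.com','mypool@host2.example.com']
2> pool:get_node().                         % picks the node with the lowest expected load
'mypool@host1.example.com'
3> pool:pspawn(io, format, ["hello from a pool node~n"]).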
I'm running a distributed Erlang system with one node per machine.
Since DNS is not available, I start them all with the same -sname parameter, e.g.
erl -sname foo ...
An operating-system daemon has the feature of executing shell (/bin/sh) commands when a certain event occurs (when a USB stick is plugged into the system).
I'm looking for a simple way to call a function on the Erlang node local to this machine from that shell command (to take further action after the USB stick has been detected and mounted).
I was thinking of calling erl -sname bar from the shell and running some code that looks like:
[_,Host] = string:tokens(atom_to_list(node()), "#"),
The_node = list_to_atom("foo#" ++ Host),
spawn(The_node, My_fun).
Is this the way to go? Or is starting a whole new Erlang node overkill (I won't be doing it often, though)?
Or is it better to talk to a socket opened by gen_tcp, or to read from a named pipe?
Or any other suggestions?
BTW this is running on a Unix system.
What you want to use is actually erl_call, an application that lets you contact running distributed nodes and run arbitrary code.
erl_call makes it possible to start and/or communicate with a distributed Erlang node. It is built upon the erl_interface library as an example application. Its purpose is to use a Unix shell script to interact with a distributed Erlang node.
You can either give it commands, an escript, or pretty much just code to evaluate, and it will do it. You can find more details and actual examples in its documentation.
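A sketch of what the USB-event hook could run (the node name foo matches the question; my_handler:usb_mounted/1 and the mount path are hypothetical; add -c <cookie> if your nodes use a non-default cookie):
#!/bin/sh
# Apply a function on the already-running node started with -sname foo
erl_call -sname foo -a 'my_handler usb_mounted ["/media/usb"]'

# Or pipe arbitrary expressions to be evaluated on that node
echo 'my_handler:usb_mounted("/media/usb").' | erl_call -sname foo -e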