How Can an Erlang Virtual Machine be run as a Daemon?

I would like to run the Erlang VM as a daemon on a UNIX server, in non-interactive mode.

The simplest thing is to give erl the -detached flag.
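For example, to start a detached named node (the node name is just an illustration):
erl -sname mynode -detached
The VM starts in the background with no attached console; you can connect to it later with a remote shell (erl -sname other -remsh mynode@host).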
There are, however, many helpers out there for doing this; check out rebar's release handling, erlrc, and run_erl.

Also, rebar can generate a node that can be started as a daemon (with start, stop, and restart commands).
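With a generated node, control looks roughly like this (the node name and path are illustrative, and the exact layout depends on the rebar version):
rel/mynode/bin/mynode start
rel/mynode/bin/mynode restart
rel/mynode/bin/mynode stop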

Related

Services in CentOS 7 Docker image without systemd

I'm trying to create a Docker container based on CentOS 7 that will host R, shiny-server, and rstudio-server, but I need systemd in order for the services to start. I can use the systemd-enabled centos image as a basis, but then I need to run the container in privileged mode and allow access to /sys/fs/cgroup on the host. I might be able to tolerate the less secure situation, but then I'm not able to share the container with users running Docker on Windows or Mac.
I found this question but it is 2 years old and doesn't seem to have any resolution.
Any tips or alternatives are appreciated.
UPDATE: SUCCESS!
Here's what I found: for shiny-server, I only needed to execute shiny-server with the appropriate parameters from the command line. I captured the appropriate call in a script file and call that from the final CMD line in my Dockerfile.
rstudio-server was trickier. First, I needed to install initscripts to get the dependencies in place so that some of the rstudio scripts would work. After this, executing rstudio-server start would essentially do nothing and give no error. I traced the call through the various links and found myself in /usr/lib/rstudio-server/bin/rstudio-server. The daemonCmd() function checks cat /proc/1/comm to determine how to start the server. For some reason this was failing, but looking at the script, it seems clear that it needs to execute /etc/init.d/rstudio-server start. If I do that manually or in a Docker CMD line, it seems to work.
I've taken those two CMD line requirements and put them into an sh script that gets called from a CMD line in the Dockerfile.
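Roughly, the script and the CMD line look like this (start-services.sh is just my name for it, and the paths are simplified):
#!/bin/sh
# start-services.sh: start rstudio-server in the background, then run shiny-server in the foreground
/etc/init.d/rstudio-server start
exec shiny-server
and in the Dockerfile:
CMD ["/usr/bin/start-services.sh"]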
A bit of a hack, but not bad. I'm happy to hear any other suggestions.
You don't necessarily need to use an init system like systemd.
Essentially, you need to start multiple services, and there are existing patterns for this. Check out this page about how to use supervisord to achieve the same thing: https://docs.docker.com/engine/admin/using_supervisord/
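A minimal sketch of that approach, assuming the services from the question (the paths and the rserver flag are illustrative and may differ on your system):
supervisord.conf:
[supervisord]
nodaemon=true                  ; keep supervisord in the foreground so the container stays up

[program:shiny-server]
command=/usr/bin/shiny-server

[program:rstudio-server]
command=/usr/lib/rstudio-server/bin/rserver --server-daemonize=0
Dockerfile:
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
Note that each command must run in the foreground; supervisord treats a process that daemonizes and exits as dead.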

Alternative to supervisord for docker

Supervisord is a really great tool, even in a Docker environment. It helps a lot with stderr redirection and signal forwarding. But it has a couple of disadvantages:
It doesn't support delayed startup. It could be useful to delay some agent's startup until the main app has initialized; priority doesn't solve this issue.
If some app enters the FATAL state, supervisord just logs it but continues to work, so you can't see it until you look at the container's logs. It would be much friendlier if supervisord just stopped, because in that case you would see the problem with docker ps -a.
So what is the best alternative to supervisord?
In response to the "PID1 zombie reaping" issue, I previously recommended (in "Use of Supervisor in docker") using runit instead of supervisord.
Runit uses less memory than Supervisord because Runit is written in C and Supervisord in Python.
And in some use cases, process restarts in the container are preferable over whole-container restarts.
See the phusion/baseimage-docker image for more.
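With runit, each service is simply a directory with an executable run script; a minimal sketch in the phusion/baseimage layout (myapp is a placeholder, and it must not daemonize itself):
mkdir -p /etc/service/myapp
cat > /etc/service/myapp/run <<'EOF'
#!/bin/sh
exec 2>&1
exec /usr/bin/myapp
EOF
chmod +x /etc/service/myapp/run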
As noted by Torsten Bronger in the comments:
Runit is not there to solve the reaping problem.
Rather, it's to support multiple processes. Multiple processes are encouraged for security (through process and user isolation).
Since Docker 1.13 (early 2017), you can specify an init process to be used as PID 1 in the container, with docker run --init.
The default init process used is the first docker-init executable found in the system path of the Docker daemon process.
This docker-init binary, included in the default installation, is backed by tini.
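Usage is a single flag, for example (image and command are placeholders):
docker run --init -d myimage mycommand
With --init, docker-init (tini) runs as PID 1, forwards signals, and reaps zombie children.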
For any new visitor:
If you are new to containerization, it is highly recommended to run one process per container; it makes everyone's life easier.
Without any hacks, all the other tools will work perfectly, and it improves observability as well.
Don't treat containers like VMs.
Not sure why you'd need supervisord if you have Docker. Docker does everything supervisord does: start/stop/manage services, keep the stdout/stderr stored so you can read it later, and restart everything after a reboot.
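For example (image and names are placeholders):
docker run -d --restart unless-stopped --name myservice myimage
docker logs myservice
The restart policy brings the service back after a crash or a daemon restart, and docker logs gives you the stored stdout/stderr.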

How to stop an IPython cluster without the ipcluster command

I don't start my IPython cluster with the ipcluster command but with the individual commands ipcontroller and ipengine because I use several machines over a network. When starting the cluster with the ipcluster command, stopping the cluster is rather straightforward:
ipcluster stop
However, I haven't been able to find the procedure when using the individual commands separately.
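For reference, I start everything roughly like this (addresses and paths elided):
ipcontroller --ip='*'
ipengine --file=ipcontroller-engine.json
with ipcontroller on one machine and ipengine on each worker, after copying over the connection file.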
Thanks for your help
The easiest way is by connecting a Client and issuing a shutdown command:
import ipyparallel as ipp
c = ipp.Client()
c.shutdown(hub=True)
Client.shutdown() shuts down engines; adding hub=True tells it to bring down the central controller process as well.

Using SSH (Scripts, Plugins, etc) to start processes

I'm trying to finish a remote deployment by restarting the two processes that make my Python App work. Like so
process-one &
process-two &
I've tried to "Execute a Shell Script" by doing this
ssh -i ~/.ssh/id_... user@xxx.xxx ./startup.sh
I've tried using the Jenkins SSH Plugin and the Publish Over SSH Plugin and doing the same thing. All of the previous steps (stopping the processes, restarting other services, pulling in new code) work fine. But when I get to the part where I start the services, it executes those two lines and then none of the plugins, nor the default script execution, can get off of the server. They all either hang until I restart Jenkins, or time out in the case of the Publish Over SSH plugin. So my build either requires a restart of Jenkins, or is marked unstable.
Has anyone had any success doing something similar? I've tried
nohup process-one &
But the same thing has happened. It's not that the services are messing up either, because they actually start properly; it's just that Jenkins doesn't seem to understand that.
Any help would be greatly appreciated. Thank you.
What probably happens is that the spawned process (even with the &) keeps the same input and output streams as your SSH connection. Jenkins waits for these pipes to be closed before the job finishes, and thus waits for the processes to exit. You can verify this by killing your processes: you will see that the Jenkins job then terminates.
Dissociating outputs and starting the process remotely
There are multiple solutions to your problem:
(preferred) use proper daemon control tools. Your target platform probably has a standard way to manage those services, e.g. init.d scripts. Note: when writing init.d scripts, make sure you detach the process into the background AND ensure the input/output of the daemon are detached from the shell that starts them. There are several techniques, like start-stop-daemon (http://www.unix.com/man-page/Linux/8/start-stop-daemon/), daemonize, daemontools, or something like the shell script described under https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+as+a+Unix+daemon (take note of the su -s bin/sh jenkins -c "YOUR COMMAND; ...disown" etc.). I also list some Python-specific techniques below.
ssh server 'program < /dev/null > /dev/null 2>&1 &'
or
ssh server 'program < /dev/null >> logfile.log 2>&1 &' if you want crude output management (no log rotation, etc.); see the combined sketch after this list
potentially using setsid (I haven't tried): https://superuser.com/questions/172043/how-do-i-fork-a-process-that-doesnt-die-when-shell-exits . In my quick tests I wasn't able to get it to work, though...
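Putting the redirection technique together with the startup.sh from the question, a combined sketch might look like (key name, host, and log path are placeholders):
ssh -i ~/.ssh/id_rsa user@host 'nohup ./startup.sh < /dev/null >> /tmp/startup.log 2>&1 &'
The key point is that stdin, stdout, and stderr are all detached from the SSH session, so Jenkins has no open pipes left to wait on.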
Python daemons
The initial question was focused on SSH, so I didn't fully describe how to run the Python process as a daemon. This is mostly covered in these other techniques:
with start-stop-daemon: start-stop-daemon and python
with upstart on ubuntu: Run python script as daemon at boot time (Ubuntu)
some more python oriented approaches:
How to make a Python script run like a service or daemon in Linux
Can I run a Python script as a service?

Simplest way to inform a local erlang node from a shell command

I'm running a distributed erlang system with one node per machine.
Since DNS is not available, I start them all with the same -sname param, e.g.
erl -sname foo ...
An operating-system daemon can execute shell (/bin/sh) commands when a certain event occurs (e.g. when a USB stick is plugged into the system).
I'm looking for a simple way to call a function on the Erlang node local to this machine from that shell command (taking further action after the USB stick is detected and mounted).
I was thinking of calling erl -sname bar from the shell and running some code that looks like
[_, Host] = string:tokens(atom_to_list(node()), "@"),
The_node = list_to_atom("foo@" ++ Host),
spawn(The_node, My_fun),
Is this the way to go? Or is starting a whole new Erlang node overkill? (I won't be doing it often, though.)
Or is it better to talk to a socket opened with gen_tcp, or to read from a named pipe?
Or any other suggestions?
BTW this is running on a Unix system.
What you want to use is actually erl_call, an application that lets you contact running distributed nodes and run arbitrary code.
erl_call makes it possible to start and/or communicate with a distributed Erlang node. It is built upon the erl_interface library as an example application. Its purpose is to use an Unix shell script to interact with a distributed Erlang node.
You can either give it commands, an escript, or pretty much any code to evaluate, and it will do it. You can find more details and actual examples in its documentation.
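For the USB example above, the call might look like this (the module and function are hypothetical, and the calling machine must share the node's cookie; pass -c Cookie if it is not the default):
erl_call -sname foo -a 'usb_handler mounted ["/media/usb"]'
This applies usb_handler:mounted("/media/usb") on the node foo running on the local host.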
