Catching exit signals in Elixir escript - erlang

I'd like to write an escript that reloads its configuration when it receives a HUP signal. I'm on OS X, watching Activity Monitor for new processes when I start the escript. These show up: inet_gethost (twice), erl_child_setup, and beam.smp. When I send a SIGHUP to erl_child_setup, it crashes with the message "erl_child_setup closed". When I send it to beam.smp I get "Hangup: 1", but my trapping code is never called.
Here's some example code that illustrates what I am trying to do:
defmodule TrapHup do
  def main(args) do
    Process.flag(:trap_exit, true)
    main_loop()
  end

  def main_loop() do
    receive do
      { :EXIT, _from, reason } ->
        IO.puts "Caught exit!"
        IO.inspect reason
        main_loop()
    end
  end
end

I found out that this is not possible with just Elixir/Erlang. Apparently it is possible with a bash wrapper, as illustrated in this gist: https://gist.github.com/Djo/bfa9fa75928ce432ec51
Here's the code:
#!/usr/bin/env bash
set -x

term_handler() {
  echo "Stopping the server process with PID $PID"
  erl -noshell -name "term@127.0.0.1" -eval "rpc:call('app@127.0.0.1', init, stop, [])" -s init stop
  echo "Stopped"
}

trap 'term_handler' TERM INT

elixir --name app@127.0.0.1 -S mix run --no-halt &
PID=$!
echo "Started the server process with PID $PID"
wait $PID

# remove the trap once the first signal is received or 'mix run' stops for some reason
trap - TERM INT

# return the exit status of 'mix run'
wait $PID
EXIT_STATUS=$?
exit $EXIT_STATUS
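The gist only traps TERM and INT, but the same pattern covers the original HUP use case: add trap 'hup_handler' HUP next to the existing trap and have hup_handler rpc:call a reload function on the running node instead of init:stop. A minimal sketch of such a node-side function, assuming a module name, application name, and config path that are not part of the gist:

-module(my_app_config).
-export([reload/0]).

%% Re-read a config file and push its values into the application
%% environment. The wrapper's HUP handler would call this remotely with
%% rpc:call('app@127.0.0.1', my_app_config, reload, []).
reload() ->
    {ok, Terms} = file:consult("config/my_app.config"),
    [application:set_env(my_app, Key, Value) || {Key, Value} <- Terms],
    ok.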

Related

Pass args through rebar shell to erl?

I am using "rebar shell" to test my app. This is documented as:
Start a shell with project and deps preloaded similar to
'erl -pa ebin -pa deps/*/ebin'.
How do I add extra args to the underlying invocation of 'erl'? For
example, I want to add application specific environment variables and
run a Module/Function. I want to invoke something like:
erl -pa ebin -pa deps/*/ebin -browser_spy browser_exe "/my/dir" -run bs_example test
(and I want code:priv_dir to work as it does when using rebar shell,
which the above 'erl' command does not do).
You cannot.
rebar shell does not actually execute an erl ... command; it only tries to replicate its behaviour.
rebar turns the current node into the shell itself, mimicking -pa by adding paths with code:add_pathz/1.
See here for implementation details:
shell(_Config, _AppFile) ->
    true = code:add_pathz(rebar_utils:ebin_dir()),
    %% scan all processes for any with references to the old user and save them to
    %% update later
    NeedsUpdate = [Pid || Pid <- erlang:processes(),
        proplists:get_value(group_leader, erlang:process_info(Pid)) == whereis(user)
    ],
    %% terminate the current user
    ok = supervisor:terminate_child(kernel_sup, user),
    %% start a new shell (this also starts a new user under the correct group)
    _ = user_drv:start(),
    %% wait until user_drv and user have been registered (max 3 seconds)
    ok = wait_until_user_started(3000),
    %% set any process that had a reference to the old user's group leader to the
    %% new user process
    _ = [erlang:group_leader(whereis(user), Pid) || Pid <- NeedsUpdate],
    %% enable error_logger's tty output
    ok = error_logger:swap_handler(tty),
    %% disable the simple error_logger (which may have been added multiple
    %% times). removes at most the error_logger added by init and the
    %% error_logger added by the tty handler
    ok = remove_error_handler(3),
    %% this call never returns (until user quits shell)
    timer:sleep(infinity).
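Since rebar shell just drops you into an ordinary Erlang shell with the project's code paths added, one workaround (my suggestion, not part of the answer above) is to set the application environment and run the function by hand once the shell is up, reusing the names from the question:

1> application:set_env(browser_spy, browser_exe, "/my/dir").
ok
2> bs_example:test().

Because this still runs inside rebar shell, code:priv_dir behaves the way the question requires.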

Erlang how to start an external script in linux

I want to run an external script and get the PID of the process (once it starts) from my erlang program. Later, I will want to send TERM signal to that PID from erlang code. How do I do it?
I tried this:
P = os:cmd("myscript &"),
io:format("Pid = ~s ~n",[P]).
It starts the script in the background as expected, but I don't get the PID.
Update
I made the script below (loop.pl) for testing:
while (1) {
    sleep 1;
}
Then I tried to spawn the script using open_port. The script runs OK, but erlang:port_info/2 throws an exception:
2> Port = open_port({spawn, "perl loop.pl"}, []).
#Port<0.504>
3> {os_pid, OsPid} = erlang:port_info(Port, os_pid).
** exception error: bad argument
in function erlang:port_info/2
called as erlang:port_info(#Port<0.504>,os_pid)
I checked the script is running:
$ ps -ef | grep loop.pl
root 10357 10130 0 17:35 ? 00:00:00 perl loop.pl
You can open a port using spawn or spawn_executable, and then use erlang:port_info/2 to get its OS process ID:
1> Port = open_port({spawn, "myscript"}, PortOptions).
#Port<0.530>
2> {os_pid, OsPid} = erlang:port_info(Port, os_pid).
{os_pid,91270}
3> os:cmd("kill " ++ integer_to_list(OsPid)).
[]
Set PortOptions as appropriate for your use case.
As the last line above shows, you can use os:cmd/1 to kill the process if you wish.
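For example, with the loop.pl script from the question, a reasonable set of port options might look like the following (the perl path and the chosen options are assumptions; adjust them for your script):

PortOptions = [{args, ["loop.pl"]}, exit_status],
Port = open_port({spawn_executable, "/usr/bin/perl"}, PortOptions),
{os_pid, OsPid} = erlang:port_info(Port, os_pid),
%% later, terminate the external process:
os:cmd("kill -TERM " ++ integer_to_list(OsPid)).

With exit_status set, the port owner also receives a {Port, {exit_status, Status}} message when the script exits, so you can confirm the TERM was handled.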

Goal of zero downtime, how to use upstart with sockets & (g)unicorn:

My goal is zero-downtime deployments for an ecommerce app, and I'm trying to do this in the best way possible.
I'm doing this on a nginx/unicorn/django setup as well as a nginx/unicorn/rails setup for a separate server.
My strategy is to set preload_app=true in my gunicorn.py/unicorn.rb file, then reload by sending a USR2 signal to the PID running the server. This forks the process and its children, and a pre_fork/before_fork hook can pick up on this and send a subsequent QUIT signal.
Here's an example of what my pre_fork is doing in the gunicorn version:
# ...
import os
import signal

pidfile='/opt/run/my-website/my-website.pid'
# socket doesn't come back after QUIT
bind='unix:/opt/run/my-website/my-website.socket'
# works, but I'd prefer the socket for security
# bind='localhost:8333'
# ...

def pre_fork(server, worker):
    old_pid_file = '/opt/run/my-website/my-website.pid.oldbin'
    if os.path.isfile(old_pid_file):
        with open(old_pid_file, 'r') as pid_contents:
            try:
                old_pid = int(pid_contents.read())
                if old_pid != server.pid:
                    os.kill(old_pid, signal.SIGQUIT)
            except Exception as err:
                pass

pre_fork=pre_fork
And here's a selection from my sysv script which performs the reload:
DESC="my website"
SITE_PATH="/opt/python/my-website"
ENV_PATH="/opt/env/my-website"
RUN_AS="myuser"
SETTINGS="my.settings"
STDOUT_LOG="/var/log/my-website/my-website-access.log"
STDERR_LOG="/var/log/my-website/my-website-error.log"
GUNICORN="/opt/env/my-website/bin/gunicorn.py"
CMD="$ENV_PATH/bin/python $SITE_PATH/manage.py run_gunicorn -c $GUNICORN >> $STDOUT_LOG 2>>$STDERR_LOG"
sig () {
test -s "$PID" && kill -$1 `cat $PID`
}
run () {
if [ "$(id -un)" = "$RUN_AS" ]; then
eval $1
else
su -c "$1" - $RUN_AS
fi
}
reload () {
echo "Reloading $DESC"
sig USR2 && echo reloaded OK && exit 0
echo >&2 "Couldn't reload, starting '$DESC' instead"
run "$CMD"
}
action="$1"
case $action in
reload)
reload
;;
esac
I chose preload_app=true for the zero-downtime appeal: since the workers have the app preloaded into memory, as long as I switch processes correctly it should give a zero-downtime result. That's the thinking anyway.
This works when I'm listening on a port, but I haven't been able to get it to work over a socket.
My questions are the following:
Is this how the rest of you are doing this?
Is there a better way, for example with HUP somehow? My understanding is you can't use preload_app=true with HUP though.
Is it possible to do this using a socket? My socket keeps going away on the QUIT and never coming back. My thinking is that a socket is more secure because you have to have access to the filesystem.
Is anyone doing this with upstart rather than sysv? I'd ideally like to do that and I saw an interesting way of accomplishing that by flocking the PID. It's a challenge with upstart because once the exec-fork from gunicorn/unicorn takes over, upstart is no longer monitoring the process it was originally managing and needs to be re-established somehow.
You should look at unicornherder from my colleagues at GDS, which is specifically designed to manage this:
Unicorn Herder is a utility designed to assist in the use of Upstart and similar supervisors with Unicorn.

Ruby suppress system true output

I am running a background rake task (using '&'). The thing is that I want to stop it sometimes. So I wrote this:
pinger_pid = system "ps | grep rake | awk '{print $1}'"
puts pinger_pid
system "kill -9 #{pinger_pid}"
It seems I am getting 'true' as garbage output! How can I get rid of that?
output:
ERROR: garbage process ID "true".
Usage:
kill pid ... Send SIGTERM to every process listed.
kill signal pid ... Send a signal to every process listed.
kill -s signal pid ... Send a signal to every process listed.
kill -l List all signal names.
kill -L List all signal names in a nice table.
kill -l signal Convert between signal numbers and names.
system returns true or false, depending on the success of the command.
Use %x to capture output:
pinger_pid = %x(ps | grep rake | awk '{print $1}')
puts pinger_pid
# strip the trailing newline before handing the PID to kill
system 'kill', '-9', pinger_pid.strip

Why doesn't it work when I try to start an application using erl -s

Something like
erl -s crypto start -s application start public_key
works for crypto but not for application:start(..). Typically I have to call the application supervisor but not the application itself. What's the normal way of doing it?
The -s flag expects
Module Fun Arg1 Arg2 ..
and executes it as
module:fun([Arg1, Arg2, ..]).
So, it passes the arguments as a list.
When running -s application start public_key it will call application:start([public_key]), which isn't supported. This works: application:start(public_key).
I did not find a workaround other than creating a module that contains a function to start the public_key application, like:
-module(myapp).
-export([start/1]).

start([App]) -> application:start(App).
And call it like
erl -s crypto start -s myapp start public_key
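If you would rather not keep a wrapper module around, erl also accepts -eval, which takes a whole expression instead of a Module Fun Args triple; something along these lines should be equivalent (an addition to the answer above, so verify it on your release):

erl -s crypto start -eval 'application:start(public_key)'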
