Docker - Handling multiple services in a single container

I would like to start two different services in my Docker container and exit the container as soon as one of them exits. I looked at supervisor, but I can't find out how to get it to quit as soon as one of the managed applications exits. It tries to restart them up to three times (the default setting) and then just sits there doing nothing. Is supervisor able to do this, or is there another tool for it? A bonus would be if there also was a way to let both managed programs write to stdout, tagged with their application name, e.g.:
[Program 1] Some output
[Program 2] Some other output
[Program 1] Output again

Since you asked if there was another tool: we designed and wrote a powerful replacement for supervisord that is built specifically for Docker. It automatically terminates when all applications quit, has special service settings to control this behavior, and redirects stdout with tagged, syslog-compatible output lines. It's open source and is being used in production.
Here is a quick start for Docker: http://garywiz.github.io/chaperone/guide/chap-docker-simple.html
There is also a complete set of tested base images, which are a good example, at: https://github.com/garywiz/chaperone-docker, but these might be overkill; the earlier quick start may do the trick.

I found solutions to both of my requirements by reading through the docs some more.
Exit supervisord on application exit
This can be achieved by using a custom eventlistener. I had to add the following segment into my supervisord configuration file:
[eventlistener:shutdownevent]
command=/shutdownhandler.sh
events=PROCESS_STATE_EXITED
supervisord will start the referenced script and, upon the given event being triggered (PROCESS_STATE_EXITED fires after one of the managed programs exits without being restarted automatically), will send a line containing data about the event on the script's stdin.
The referenced shutdownhandler-script contains:
#!/bin/bash
while :
do
  echo -en "READY\n"
  read line
  kill $(cat /supervisord.pid)
  echo -en "RESULT 2\nOK"
done
The script has to indicate that it is ready by sending "READY\n" on its stdout, after which it may receive an event data line on its stdin. For my use case, upon receipt of a line (meaning one of the managed programs has exited), a SIGTERM is sent to the supervisord process, found via the pid it leaves in its pid file (located in the root directory by default). For technical completeness, I also included a positive acknowledgement from the eventlistener, though that one should never matter.
Tagged output on stdout
I did this by simply starting a tail process in the background before starting supervisord, tailing each program's output log and piping the lines through ts (from the moreutils package) to prepend a tag. This way the output shows up via docker logs with an easy way to see which program actually wrote each line.
tail -fn0 /var/log/supervisor/program1.log | ts '[Program 1]' &

Related

How does BusyBox evade my redirection of stdout, and can I work around it?

I have a BusyBox based system, and another one with vanilla Ubuntu LTS.
I made a C++ program which takes main()'s argv[1] as a command name, to fork() and execl() that command in the child process. Right before, I did dup2() to redirect the child's standard output, similar to this, so the parent process can read() its output.
Then, the text read from the child is written to the console, with markers, to see that it was the parent who output this.
Now if I run this program with, as its arg, "dd --help", then two different things happen:
on the Ubuntu system, the output clearly comes from the parent process (and only)
on the BusyBox system, the parent process reads back nothing, and the output of dd writes directly to the console, apparently bypassing my (attempt at) redirection.
Since all the little commands on the BusyBox system are symlinks to the single BusyBox executable, and I thought that could be a problem, I also tried making a child process out of "busybox dd --help".
That changed nothing, though.
But: if I do "busybox --help", all of the output is caught by the child process and nothing "spills" past it. (Note that I left out the sub-command dd here; only --help for BusyBox itself.)
What's the reason for this happening, and (how) can I get this to work as intended, on the BusyBox system, too?
BusyBox writes its own output, e.g. when called with options like --help, to stdout. But its implemented commands, like "dd", write their output to stderr, even when it is not an error, which I found rather unintuitive and hence didn't look down that alley at first.

Spawning a process with {create_group=True} / set_pgid hangs when starting Docker

Given a Linux system, in Haskell GHCi 8.8.3, I can run a Docker command with:
System.Process> withCreateProcess (shell "docker run -it alpine sh -c \"echo hello\""){create_group=False} $ \_ _ _ pid -> waitForProcess pid
hello
ExitSuccess
However, when I switch to create_group=True the process hangs. The effect of create_group is to call set_pgid with 0 in the child, and pid in the parent. Why does that change cause a hang? Is this a bug in Docker? A bug in System.Process? Or an unfortunate but necessary interaction?
This isn't a bug in Haskell or a bug in Docker, but rather just the way that process groups work. Consider this C program:
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (setpgid(0, 0)) {
        perror("setpgid");
        return 1;
    }
    execlp("docker", "docker", "run", "-it", "alpine", "echo", "hello", (char *)NULL);
    perror("execlp");
    return 1;
}
If you compile that and run ./a.out directly from your interactive shell, it will print "hello" as you'd expect. This is unsurprising, since the shell will have already put it in its own process group, so its setpgid is a no-op. If you run it with an intermediary program that forks a child to run it (sh -c ./a.out, \time ./a.out - note the backslash, strace ./a.out, etc.), then the setpgid will put it in a new process group, and it will hang like it does in Haskell.
The reason for the hang is explained in "Job Control Signals" in the glibc manual:
Macro: int SIGTTIN
A process cannot read from the user’s terminal while it is running as a background job. When any process in a background job tries to read from the terminal, all of the processes in the job are sent a SIGTTIN signal. The default action for this signal is to stop the process. For more information about how this interacts with the terminal driver, see Access to the Terminal.
Macro: int SIGTTOU
This is similar to SIGTTIN, but is generated when a process in a background job attempts to write to the terminal or set its modes. Again, the default action is to stop the process. SIGTTOU is only generated for an attempt to write to the terminal if the TOSTOP output mode is set; see Output Modes.
When you docker run -it something, Docker will attempt to read from stdin even if the command inside the container doesn't. Since you just created a new process group, and you didn't set it to be in the foreground, it counts as a background job. As such, Docker is getting stopped with SIGTTIN, which causes it to appear to hang.
Here's a list of options to fix this:
Redirect the process's standard input to somewhere other than the TTY
Use signal or sigaction to make the process ignore the SIGTTIN signal
Use sigprocmask to block the process from receiving the SIGTTIN signal
Call tcsetpgrp(0, getpid()) to make your new process group be the foreground process group (note: this is the most complicated, since it will itself cause SIGTTOU, so you'd have to ignore that signal at least temporarily anyway)
Options 2 and 3 will also only work if the program doesn't actually need stdin, which is the case with Docker. When SIGTTIN doesn't stop the process, reads from stdin will still fail with EIO, so if there's actually data you want to read, then you need to go with option 4 (and remember to set it back once the child exits).
If you have TOSTOP set (which is not the default), then you'd have to repeat the fix for SIGTTOU or for standard output and standard error (except for option 4, which wouldn't need to be repeated at all).

Reload environment variables PATH from chef recipes client

Is it possible to reload $PATH from a Chef recipe?
I'm interested in the answer about process signals given in the following thread:
How to have Chef reload global PATH
I don't understand the example that the user omribahumi gives very well.
I would like a clearer example with chef-client / a recipe to understand it;
from what he explains, it seems to be possible with that workaround.
Thanks.
Well, I see two reasons for this request:
Add something to the path for immediate execution => easy, just update the ENV['PATH'] variable within the Chef run.
Extend the PATH system-wide to include something just installed.
For case 2, you may update the /etc/environment file (on Ubuntu) or add a file to /etc/profile.d (a better idea, to keep control over it).
But obviously the new PATH variable won't be available to already-running processes (including your current shell); it will only work for processes launched after the file update.
To explain the link you gave a little more, what is done there is:
create a file with export commands to set env variables
echo 'export MYVAR="my value"' > ~/my_environment
create a bash function loading env vars from a file
function reload_environment { source ~/my_environment; }
set a trap in bash to do something on a signal, here run the function when bash receive SIGHUP
trap reload_environment SIGHUP
Launch the function for a first sourcing of the env file; there are two ways:
easy one: launch the function
reload_environment
complex one: get the pid of your current shell and send it a SIGHUP signal
kill -HUP `echo $$`
All of this applies only to the current shell, unless you set it up in your .bashrc.
Not exactly what you were asking for, indeed, but I hope you'll understand there's no way to update the context of an already running process.
The best you can do is: update the PATH with whatever method you wish (something in /etc/profile.d for example) and execute a wall (if Chef runs as root) to tell users to reload their environments:
echo 'reload your shell env by executing: source /etc/profile' | wall
Once again, this can work for humans, but not for other already-running processes; those will have to be restarted.

`ejabberdctl start` results in "kernel pid terminated" error -- what do I do?

I have googled for three hours but to no avail.
I have an ejabberd installation which is not installed using apt. It is installed from source, and there is no program called ejabberd in it; starting, stopping, and everything else is done through ejabberdctl.
It was running perfectly for a month, and then all of a sudden one day it stopped with the infamous
kernel pid terminated error
Any time I run
sudo ejabberdctl start --node ejabberd#MasterService
an erl_crash dump file gets generated, and when I then try
ejabberdctl
I get
Failed to connect to RPC at node ejabberd#MasterService
Here is what I have tried:
Killed all running processes of ejabberd, beam, and epmd and started fresh - DID NOT WORK
Checked /etc/hosts and hostname and all is well. The hostname is listed in the hosts file with its IP.
Checked the ejabberdctl.conf file to ensure the hostname is indeed right and the node name is right
Checked that the .erlang.cookie file is being created with content in it
One way or another, every search on the web led me to one of the above.
I have nowhere else to go and don't know where else to look. Any help would be much appreciated.
You'll have to analyze the crash dump to try to guess why it failed.
To carry out this task, Erlang has a special webtool (called, uh, webtool) from which a special application — Crash Dump Viewer — might be used to load a dump and inspect the stack traces of the Erlang processes at the time of the crash.
You have to
Install the necessary packages:
# apt-get install erlang-webtool erlang-observer
Start an Erlang interpreter:
$ erl
(Further actions are taken there.)
Run the webtool. In the simplest case, it will listen on the local host:
webtool:start().
(Notice the period.) It will print back a URL; navigate to it in your browser to reach the running tool.
If this happens on a server, and you'd rather have the webtool listen on some non-local-host interface, the call incantation is trickier:
webtool:start(standard_path, [{port, 8888}, {bind_address, {0, 0, 0, 0}}, {server_name, "server.example.com"}]).
The {0, 0, 0, 0} IP spec will make it listen everywhere, and you might as well specify some more sensible octets, like {192, 168, 0, 1}. The server_name clause may use an arbitrary name; this is what will be printed as the server's hostname in the generated URL.
Now connect to the tool with your browser, navigate to the "Start tools" menu entry, start the crash dump viewer, and make a link to it appear in the tool's top menu. Proceed there and find the link to load the crash dump.
After loading the crash dump, poke around the tool's interface and look at the stack traces of the active Erlang processes. At least one of them should contain something fishy, including an error message; that is what you are looking for to refine your question (or to ask another one on the ejabberd mailing list).
To quit the tool, run
webtool:stop().
in the running Erlang interpreter. And then quit it either by running
q().
and waiting a bit or pressing Ctrl-g and then entering the letter q followed by pressing the Return key.
The relevant links are: the crash dump viewer manual and the webtool manual.

launch a gui program from windows console and then make it 'detach' itself

I'm trying to modify a legacy Delphi 5 app so that it can be launched either from its icon / via Explorer, or from the console (command line). When it gets launched from the console, I want the program to detach itself from the console process, so that the console can continue to execute other instructions without waiting for my program to terminate.
I want to use it in a 'batch' file, such that I might have;
#echo off
rem step 1 - do some stuff here
rem
rem step 2 - launch my app
c:\myfolder\myapp
rem
rem step 3 - do some more stuff here
and that the console process moves on to step 3 straight after launching my app in step 2.
I'm sure I've done this before, many years ago, but I'm puzzled as to what exactly I did. I don't want to write a tiny console app 'launcher' for my main Windows app - I'm 95% sure that there was a way of doing this within a 'normal' Delphi GUI app.
I guess I could use vbscript or powershell or something to 'execute' my program with some kind of 'nowait' parameter but the client is familiar with batch files and I don't really want to upset the applecart by suggesting he change his scripts or install additional stuff - I'm making changes to the executable anyway and it would be great to tick this box for him too.
Anyone? :-)
I think the START command is the one you're looking for. It starts a process separately to the console and it's part of cmd.exe so no extra software required.
But I was of the opinion that GUI apps did this anyway. Maybe Delphi is different to MSVC.
Open up a console and type "start /?".
As itowlson states in the comments, GUI applications do generally detach themselves. It's actually the cmd.exe shell doing trickery, in that it waits for the program to finish if it's run from a cmd file.
So "notepad" from the prompt will start it in the background, but "notepad" within a cmd file will wait. Within the cmd file, you need to use:
start notepad.exe
or whatever your application is called (not notepad, presumably).
try: start "" c:\myfolder\myapp (with the empty quotes)
I think Microsoft has solved this problem in Windows PowerShell.
At the command prompt, even if you use "start", you can't really detach your process from cmd: if you close the cmd window, your process dies with it. But in Windows PowerShell, your program or command is detached from the shell by default.
So, if you prefer to use Windows PowerShell instead of the Command Prompt, just do this:
PS: X:\> <your command>
Here's one way that I've found. It works quite cleanly and doesn't leave any extra cmd windows around (the recommendation to use start c:\myfolder\myapp does not work):
cmd /c dir && c:\myfolder\myapp
To quote the CMD help:
/C Carries out the command specified by string and then terminates
Note that multiple commands separated by the command separator '&&'
are accepted for string if surrounded by quotes.
Apparently it notices that the dir command terminates and exits, even though your app was launched on the same command line. Chalk it up to one of Windows' vagaries.
You should use the cd command, for example:
cd/
cd myfolder
start myapp
exit
