Running a background process versus running a pipeline in the background - docker

I want to output a logfile from a Docker container and stumbled across something that I don't understand. These two lines don't fail, but only the first one works as I would like it to:
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' &
Checking inside the running container, I see that the processes are running but I only see the output of the first line in the container output.
Dockerfile
FROM debian:buster-slim
COPY boot.sh /
ENTRYPOINT [ "/boot.sh" ]
boot.sh
Must be made executable (chmod +x)!
#!/bin/sh
echo "starting"
echo "start" > "/var/log/my-log"
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' &
echo "sleeping"
sleep inf
Running
Put the two files above into a folder.
Build the image with docker build --tag pipeline .
Run the image in one terminal with docker run --init --rm --name pipeline pipeline. Here you can also watch the output of the container.
In a second terminal, open a shell with docker exec -it pipeline bash and there, run e.g. date >> /var/log/my-log. You can also run the two tail ... commands here to see how they should work.
To stop the container use docker kill pipeline.
I would expect to find the output of both tail ... commands in the output of the container, but it already fails on the initial "start" entry of the logfile. Further entries to the logfile are also ignored by the tail command that adds a prefix.
BTW: I would welcome a workaround using pipes/FIFOs that would avoid writing a persistent logfile to begin with. I'd still like to understand why this fails. ;)

Based on what I have tested, it seems that sed is causing the issue: the output of tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' & never appears in the container output because sed block-buffers its output when stdout is not a terminal. The issue can be solved by passing -u to sed, which disables the buffering.
The final working boot.sh is as follows:
#!/bin/sh
echo "starting"
echo "start" > "/var/log/my-log"
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -u -e 's/^/prefix:/' &
echo "sleeping"
sleep inf
And the output after running the container will be:
starting
sleeping
start
prefix:start
Data appended to the logfile afterwards is displayed as expected too.
starting
sleeping
start
prefix:start
newlog
prefix:newlog
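As for the FIFO workaround mentioned in the question, here is a minimal sketch of the same prefixing pipeline built on a named pipe instead of a persistent logfile (the FIFO path is an arbitrary choice, and the cleanup at the end is only for the demonstration):

```shell
#!/bin/sh
# Sketch of the FIFO approach: writers append to a named pipe
# instead of a regular logfile (path is an arbitrary choice).
mkfifo /tmp/my-log.fifo
# Hold a writer open so the reader never sees EOF when short-lived
# writers (echo, date, ...) close their end of the pipe.
sleep 60 > /tmp/my-log.fifo & writer=$!
# Read and prefix; -u again keeps sed from buffering its output.
sed -u -e 's/^/prefix:/' < /tmp/my-log.fifo & reader=$!
echo "start" > /tmp/my-log.fifo   # appears as "prefix:start"
sleep 1                           # give sed a moment to print
kill "$writer" "$reader"          # demo cleanup only
```

In a real boot.sh you would use sleep inf for the held-open writer and leave both background jobs running, as in the original script.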
Also see: why can't I redirect the output from sed to a file

Related

How to silence output from docker commands

I have written a script to run some docker commands for me and I would like to silence the output from these commands, for example docker load, docker run or docker stop.
docker load does have a --quiet flag that seems like it should do what I want; however, when I try to use it, it still prints out Loaded image: myimage. Even if this flag did work for me, not all docker commands have it available. I also tried redirection like docker run ... 2>&1 /dev/null, but the redirection arguments are interpreted as command arguments for the docker container, and this seems to be the same for other docker commands; for example, tar -Oxf myimage.img.tgz | docker load 2>&1 /dev/null treats the redirections as arguments and prints out the command usage.
This is mostly a shell question regarding standard descriptors (stdout, stderr) and redirections.
To achieve what you want, you should not write cmd 2>&1 /dev/null nor cmd 2>&1 >/dev/null but just write: cmd >/dev/null 2>&1
Mnemonics:
The intuition to easily think of this > syntax is:
>/dev/null can be read: STDOUT := /dev/null
2>&1 can be read: STDERR := STDOUT
This way, the fact that 2>&1 must be placed afterwards becomes clear.
(As an aside, redirecting both stderr to stdout to a pipe is a bit different, and would be written in the following order: cmd1 2>&1 | cmd2)
Minimal complete example to test this:
$ cmd() { echo "stdout"; echo >&2 "stderr"; }
$ cmd 2>&1 >/dev/null # does not work as intended
stderr
$ cmd >/dev/null 2>&1 # ok
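To make the aside about pipes concrete, the same toy cmd with both streams sent into a pipe:

```shell
cmd() { echo "stdout"; echo >&2 "stderr"; }
# With 2>&1 placed before the pipe, stderr joins stdout and both
# lines reach sed through the pipe, in write order.
cmd 2>&1 | sed 's/^/got: /'
# got: stdout
# got: stderr
```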

Write to stdin of running docker container

Say I run a docker container as a daemon:
docker run -d foo
is there a way to write to the stdin of that container? Something like:
docker exec -i foo echo 'hi'
last time I checked the -i and -d flags were mutually exclusive when used with the docker run command.
According to another answer on ServerFault, you can use socat to pipe input to a docker container like this:
echo 'hi' | socat EXEC:"docker attach container0",pty STDIN
Note that the echo command includes a newline at the end of the output, so the line above actually sends hi\n. Use echo -n if you don't want a newline.
Let's see how this looks with the example script from David's answer:
# Create a new empty directory
mkdir x
# Run a container, in the background, that copies its stdin
# to a file in that directory
docker run -itd --rm -v $PWD/x:/x --name cattainer busybox sh -c 'cat >/x/y'
# Send some strings in
echo 'hi' | socat EXEC:"docker attach cattainer",pty STDIN
echo 'still there?' | socat EXEC:"docker attach cattainer",pty STDIN
# Stop container (cleans up itself because of --rm)
docker stop cattainer
# See what we got out
cat x/y
# should output:
# hi
# still there?
You could also wrap it in a shell function:
docker_send() {
    container="$1"; shift
    echo "$@" | socat EXEC:"docker attach $container",pty STDIN
}
docker_send cattainer "Hello cat!"
docker_send cattainer -n "No newline here:" # flag -n is passed to echo
Trivia: I'm actually using this approach to control a Terraria server running in a docker container, because TerrariaServer.exe only accepts server commands (like save or exit) on stdin.
In principle you can docker attach to it. CTRL+C will stop the container (by sending SIGINT to the process); CTRL+P, CTRL+Q will detach from it and leave it running (if you started the container with docker run -it).
The one trick here is that docker attach expects to be running in a terminal of some sort; you can do something like run it under script to meet this requirement. Here's an example:
# Create a new empty directory
mkdir x
# Run a container, in the background, that copies its stdin
# to a file in that directory
docker run -itd -v $PWD/x:/x --name cat busybox sh -c 'cat >/x/y'
# Send a string in
echo foo | script -q /dev/null docker attach cat
# Note, EOF here stops the container, probably because /bin/cat
# exits normally
# Clean up
docker rm cat
# See what we got out
cat x/y
In practice, if the main way a program communicates is via text on its standard input and standard output, Docker isn't a great packaging mechanism for it. In higher-level environments like Docker Compose or Kubernetes, it becomes progressively harder to send content this way, and there's frequently an assumption that a container can run completely autonomously. Just invoking the program gets complicated quickly (as this question hints at). If you have something like, say, the create-react-app setup tool that asks a bunch of interactive questions then writes things to the host filesystem, it will be vastly easier to run it directly on the host and not in Docker.

cannot apply sed to the stdout within a docker image

I have a docker file that has an entry point which is an s2i/bin/run script:
#!/bin/bash
export_vars=$(cgroup-limits); export $export_vars
exec /opt/app-root/services.sh
The services.sh script runs php-fpm and nginx:
php-fpm 2>&1
nginx -c /opt/app-root/etc/conf.d/nginx/nginx.conf
# this echo to stdout is needed, otherwise nothing shows up on the docker run output
echo date 2>&1
The php scripts log to stderr, so the script does 2>&1 to redirect it to stdout, which is needed for the log aggregator.
I want to run sed or awk over the log output. Yet if I try:
php-fpm 2>&1 | sed 's/A/B/g'
or
exec /opt/app-root/services.sh | sed 's/A/B/g'
Then nothing shows up when I run the container. Without the pipe to sed the output of php-fpm shows up as the output of docker run okay.
Is there a way to sed the output of php-fpm ensuring that the output makes it to the output of docker?
Edit Note that I tried the obvious | sed 's/A/B/g' in both places and was also trying running the pipe in a subshell $(stuff|sed 's/A/B/g') in both places. Neither works so this seems to be a Docker or s2i issue.
Try keeping sed arguments in double quotes.
php-fpm 2>&1 | sed "s/A/B/g"
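If the quoting change alone doesn't help, another thing worth trying (a guess, echoing the sed -u fix from the first answer on this page): sed block-buffers its output when writing to a pipe or container log stream rather than a terminal, so lines from a long-running process like php-fpm may sit in the buffer indefinitely. A stand-in demonstration, with printf in place of php-fpm:

```shell
# printf stands in for php-fpm's log output here; -u makes sed
# flush each transformed line immediately instead of buffering.
printf 'A-error\n' | sed -u "s/A/B/g"
# B-error
```

In the real services.sh that would be php-fpm 2>&1 | sed -u "s/A/B/g".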

Tail stdout from multiple Docker containers

I have a script that starts 10 containers in background mode (fig up -d option). I want to aggregate the stdout or the log in /var/log from all of them. How can I do this?
The containers are started using different docker-compose files so I can not do docker-compose up target1 target2 target3
docker logs only accepts one container as a parameter.
I was considering creating a volume from /var/log on all containers, mapping them to a directory outside of Docker, making sure the logs do not have colliding names, and then using bash's tail -f *. But I would appreciate a more elegant solution.
This bash script will do what you want:
docker-logs
#!/bin/bash
if [ $# -eq 0 ]; then
    echo "Usage: $(basename "$0") containerid ..."
    exit 1
fi
pids=()
cleanup()
{
    kill "${pids[@]}"
}
trap cleanup EXIT
while [ $# -ne 0 ]
do
    (docker logs -f -t --tail=10 "$1" | sed -e "s/^/$1: /") &
    pids+=($!)
    shift
done
wait
Usage:
$ docker-logs containerid1 containerid2 ... containeridN
The output of this script has each line from the tracked logs prepended with the container id.
The script works in --follow mode and must be interrupted with Ctrl-C.
Note that the options of docker logs are hardcoded in the script. If you need to be able to control the options of docker logs from the command line then you will need to parse the command line arguments (for example with getopts).
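For instance, a sketch of such option parsing with getopts (the -n flag and the variable names are invented for illustration), letting callers override the hardcoded --tail value:

```shell
#!/bin/bash
# Hypothetical option parsing for docker-logs: an invented -n flag
# overrides the hardcoded --tail=10 value.
parse_args() {
    tail_lines=10
    local opt
    while getopts "n:" opt; do
        case "$opt" in
            n) tail_lines="$OPTARG" ;;
            *) echo "Usage: docker-logs [-n lines] containerid ..." >&2
               return 1 ;;
        esac
    done
    shift $((OPTIND - 1))
    containers=("$@")   # remaining arguments are the container ids
}
# Example: parse_args -n 25 id1 id2
#   -> tail_lines=25, containers=(id1 id2)
```

The main loop of the script would then use --tail="$tail_lines" and iterate over "${containers[@]}".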
Docker does not support this as of 1.12 yet. But I have a workaround via bash:
docker ps | grep -w <filter-text> | for i in `awk '{ print $1 }'`; do docker logs -f --tail=30 $i & done
I am using docker swarm mode, which comes with 1.12, and deploying many replicas. So all of my containers contain a common text, which is the same as the service name. To tail all of their logs on a docker node, I am using this on each docker node; filter-text will match only my containers.
If you want to stop tailing, this works for me;
pkill -f 'docker logs'

How to tell if a docker container run with -d has finished running its CMD

I want to make a simple bash script which runs one docker container with -d and then do something else if and only if the container has finished running its CMD. How can I do this while avoiding timing issues since the docker container can take a while to finish starting up?
My only thought was that the Dockerfile for the container will need to create some sort of state on the container itself when it's done and then the bash script can poll until the state file is there. Is there a better / standard way to do something like this?
Essentially I need a way for the host that ran a docker container with -d to be able to tell when it's ready.
Update
Made it work with the tailing logs method, but it seems a bit hacky:
docker run -d \
    --name sauceconnect \
    sauceconnect

# Tail logs until 'Sauce Connect is up'
docker logs -f sauceconnect | while read LINE
do
    echo "$LINE"
    if [[ "$LINE" == *"Sauce Connect is up"* ]]; then
        pkill -P $$ docker
    fi
done
You should be fine to check the logs via docker logs -f <container_name_or_ID>
-f : same as tail -f
For example, once the CMD is finished it emits a log line such as JOB ABC is successfully started.
Your script can detect that line and run the rest of the jobs after it appears.
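A sketch of that detection (the marker line is the example from above; the container name myjob is invented):

```shell
#!/bin/bash
# Block until the ready marker appears in the log stream.
# grep -q exits on the first match, which ends the pipeline,
# so the script continues with the dependent jobs.
wait_for_marker() {
    grep -q "JOB ABC is successfully started"
}
# Usage against a detached container:
#   docker logs -f myjob 2>&1 | wait_for_marker && echo "ready"
```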
