How to silence output from docker commands

I have written a script that runs some docker commands for me, and I would like to silence the output from these commands: for example docker load, docker run, or docker stop.
docker load does have a --quiet flag that seems like it should do what I want; however, when I try to use it, it still prints out Loaded image: myimage. And even if this flag did work for me, not all docker commands have it available. I also tried redirection like docker run ... 2>&1 /dev/null, but the redirection arguments are interpreted as arguments for the docker container, and this seems to be the same for other docker commands as well: for example, tar -Oxf myimage.img.tgz | docker load 2>&1 /dev/null assumes that the redirections are arguments and prints out the command usage.

This is mostly a shell question regarding standard descriptors (stdout, stderr) and redirections.
To achieve what you want, you should not write cmd 2>&1 /dev/null nor cmd 2>&1 >/dev/null but just write: cmd >/dev/null 2>&1
Mnemonics:
The intuition to easily think of this > syntax is:
>/dev/null can be read: STDOUT := /dev/null
2>&1 can be read: STDERR := STDOUT
This way, the fact that 2>&1 must be placed afterwards becomes clear.
(As an aside, redirecting both stdout and stderr to a pipe is a bit different, and would be written in the following order: cmd1 2>&1 | cmd2)
Minimal complete example to test this:
$ cmd() { echo "stdout"; echo >&2 "stderr"; }
$ cmd 2>&1 >/dev/null # does not work as intended
stderr
$ cmd >/dev/null 2>&1 # ok
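Applied to the docker commands from the question, a sketch (the image file name is the question's; the container name mycontainer is hypothetical):
tar -Oxf myimage.img.tgz | docker load >/dev/null 2>&1
docker stop mycontainer >/dev/null 2>&1
In the pipeline, the redirections attach to docker load, so the Loaded image: line is silenced without needing a --quiet flag at all.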

Related

Even without docker exec -t, the output is there

I understood docker's -t option as allocating a kind of virtual terminal that is accessed through /dev/pts. So, if I do echo "hello, tty" > /dev/pts/1, I see that it is output to the connected terminal. The -i option is for STDIN, so the container understands it as an option for receiving text as input. So who receives the input when only the -i option is applied?
Below is the result of the command given only the -i option.
~ $ docker exec -i mysql-container bash
tty
not a tty
ls
bin
boot
dev
docker-entrypoint-initdb.d
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
I didn't give the -t option, so I was expecting no results back. Instead, the commands ran and their output came back anyway. Why does it behave like this?
The difference is subtle, but when you only use -i, you communicate with it using stdin and stdout. But there's no terminal in the container.
When you use -it you attach a terminal and that lets you do 'terminal stuff'. For instance, with -it
you get a prompt
you can send programs to the background with stuff like tail -f /dev/null &
ctrl-c works as expected
etc
The difference is hard to spot, because with -i you usually take stdin from a terminal on the host and send stdout to a terminal on the host.
Usually, when you want to run commands interactively, you'll use -it. A scenario where you might use only -i is when you pipe the commands into the container. Something like this
echo -e 'tty\nls' | docker exec -i mysql-container bash
which will run the tty and ls commands and give you the output.
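To see the difference directly, you can ask whether stdin is attached to a terminal from inside the container (a sketch; the container name is taken from the question, and the exact pts device will vary):
$ docker exec -i mysql-container tty
not a tty
$ docker exec -it mysql-container tty
/dev/pts/0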

Symlink /dev/stderr in docker

For what purpose are such symlinks set up for logs in Docker? Why are they redirecting to the main process?
ln -sf /proc/1/fd/2 /app/storage/logs/apache.log
ln -sf /proc/1/fd/2 /dev/stderr
Many software packages are designed to write their logs out to files, and don't have an obvious option to send the logs to somewhere else. So the first thing that having this symlink does is let you configure the application to write logs to "a file", but actually have it show up on the container's stdout or stderr.
For a minimal example, you could try something like
docker run -d --name test busybox \
  sh -c 'ln -s /proc/1/fd/1 /tmp/log.txt; echo "hello" > /tmp/log.txt'
docker wait test
docker logs test
docker rm test
In the temporary BusyBox container, we set up a symlink, and then write some text to the "log file"; since it goes to the main process's stdout, it shows up in docker logs.
Another common reason to do this is to give the operator the opportunity to actually write to a file, if that's what they want. Let's consider this minimal image:
FROM busybox
RUN mkdir /logs \
 && ln -s /proc/1/fd/1 /logs/log.txt
CMD echo 'hello' > /logs/log.txt
This is the same as the previous command, recast into image form:
$ docker build -t log-test .
$ docker run --rm log-test
hello
However, we also have the option of bind-mounting a host directory to receive those logs:
$ mkdir logs
$ docker run --rm -v "$PWD/logs:/logs" log-test
$ cat logs/log.txt
hello
The docker run -v bind-mount hides the /logs directory in the image, and therefore the symlink, so the echo command writes to an actual file, which is then visible on the host system.
I know in particular the standard HTTP-server containers are set up this way, sending the HTTP access log to stdout unless something else is configured as log storage, but it's not specific to this class of image.
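For instance, the official nginx image does essentially this in its Dockerfile (paraphrased, so treat it as a sketch):
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log
Inside a container, /dev/stdout is itself a symlink to /proc/self/fd/1, so for the main process the effect is the same as pointing at /proc/1/fd/1.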

Running a background process versus running a pipeline in the background

I want to output a logfile from a Docker container and stumbled across something that I don't understand. These two lines don't fail, but only the first one works as I would like it to:
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' &
Checking inside the running container, I see that the processes are running but I only see the output of the first line in the container output.
Dockerfile
FROM debian:buster-slim
COPY boot.sh /
ENTRYPOINT [ "/boot.sh" ]
boot.sh
Must be made executable (chmod +x)!
#!/bin/sh
echo "starting"
echo "start" > "/var/log/my-log"
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' &
echo "sleeping"
sleep inf
Running
Put the two files above into a folder.
Build the image with docker build --tag pipeline .
Run the image in one terminal with docker run --init --rm --name pipeline pipeline. Here you can also watch the output of the container.
In a second terminal, open a shell with docker exec -it pipeline bash and there, run e.g. date >> /var/log/my-log. You can also run the two tail ... commands here to see how they should work.
To stop the container use docker kill pipeline.
I would expect to find the output of both tail ... commands in the output of the container, but it already fails on the initial "start" entry of the logfile. Further entries to the logfile are also ignored by the tail | sed pipeline that adds a prefix.
BTW: I would welcome a workaround using pipes/FIFOs that would avoid writing a persistent logfile to begin with. I'd still like to understand why this fails. ;)
Based on what I have tested, sed is causing the issue: the output of tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' & does not appear while running the container. When its stdout is not a terminal (and without -t, the container's output stream is a pipe to the Docker daemon), sed block-buffers its output instead of flushing it after each line. The issue can be solved by passing -u to sed, which disables the buffering.
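You can reproduce this outside Docker by forcing sed's stdout to be a pipe rather than a terminal (a sketch; /tmp/demo.log is a hypothetical file, and the trailing cat exists only to make sed's stdout a pipe):
# terminal 1: without -u, lines sit in sed's buffer; with -u they appear immediately
touch /tmp/demo.log
tail --follow /tmp/demo.log | sed -e 's/^/prefix:/' | cat
tail --follow /tmp/demo.log | sed -u -e 's/^/prefix:/' | cat
# terminal 2: append some data
echo "hello" >> /tmp/demo.log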
The final working boot.sh is as follows:
#!/bin/sh
echo "starting"
echo "start" > "/var/log/my-log"
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -u -e 's/^/prefix:/' &
echo "sleeping"
sleep inf
And the output after running the container will be:
starting
sleeping
start
prefix:start
Data appended to the log file is displayed as expected too:
starting
sleeping
start
prefix:start
newlog
prefix:newlog
Also see: why can't I redirect the output from sed to a file
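As for the pipe/FIFO workaround mentioned in the question, a minimal sketch of a boot.sh that avoids the persistent logfile by reading from a named pipe (the extra file descriptor is an assumption of this sketch: it is held open so sed does not see EOF whenever an individual writer closes the pipe):
#!/bin/sh
echo "starting"
mkfifo /var/log/my-log
sed -u -e 's/^/prefix:/' /var/log/my-log &
# keep a write end open so the reader survives between writers
exec 3> /var/log/my-log
echo "start" > /var/log/my-log
echo "sleeping"
sleep inf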

Write to stdin of running docker container

Say I run a docker container as a daemon:
docker run -d foo
is there a way to write to the stdin of that container? Something like:
docker exec -i foo echo 'hi'
Last time I checked, the -i and -d flags were mutually exclusive when used with the docker run command.
According to another answer on ServerFault, you can use socat to pipe input to a docker container like this:
echo 'hi' | socat EXEC:"docker attach container0",pty STDIN
Note that the echo command includes a newline at the end of the output, so the line above actually sends hi\n. Use echo -n if you don't want a newline.
Let's see what this looks like with the example script from David's answer:
# Create a new empty directory
mkdir x
# Run a container, in the background, that copies its stdin
# to a file in that directory
docker run -itd --rm -v "$PWD/x:/x" --name cattainer busybox sh -c 'cat >/x/y'
# Send some strings in
echo 'hi' | socat EXEC:"docker attach cattainer",pty STDIN
echo 'still there?' | socat EXEC:"docker attach cattainer",pty STDIN
# Stop container (cleans up itself because of --rm)
docker stop cattainer
# See what we got out
cat x/y
# should output:
# hi
# still there?
You could also wrap it in a shell function:
docker_send() {
  container="$1"; shift
  echo "$@" | socat EXEC:"docker attach $container",pty STDIN
}
docker_send cattainer "Hello cat!"
docker_send cattainer -n "No newline here:" # flag -n is passed to echo
Trivia: I'm actually using this approach to control a Terraria server running in a docker container, because TerrariaServer.exe only accepts server commands (like save or exit) on stdin.
In principle you can docker attach to it. CTRL+C will stop the container (by sending SIGINT to the process); CTRL+P, CTRL+Q will detach from it and leave it running (if you started the container with docker run -it).
The one trick here is that docker attach expects to be running in a terminal of some sort; you can do something like run it under script to meet this requirement. Here's an example:
# Create a new empty directory
mkdir x
# Run a container, in the background, that copies its stdin
# to a file in that directory
docker run -itd -v "$PWD/x:/x" --name cat busybox sh -c 'cat >/x/y'
# Send a string in
echo foo | script -q /dev/null docker attach cat
# Note, EOF here stops the container, probably because /bin/cat
# exits normally
# Clean up
docker rm cat
# See what we got out
cat x/y
In practice, if the main way a program communicates is via text on its standard input and standard output, Docker isn't a great packaging mechanism for it. In higher-level environments like Docker Compose or Kubernetes, it becomes progressively harder to send content this way, and there's frequently an assumption that a container can run completely autonomously. Just invoking the program gets complicated quickly (as this question hints at). If you have something like, say, the create-react-app setup tool that asks a bunch of interactive questions then writes things to the host filesystem, it will be vastly easier to run it directly on the host and not in Docker.

cannot apply sed to the stdout within a docker image

I have a Dockerfile that has an entrypoint which is an s2i/bin/run script:
#!/bin/bash
export_vars=$(cgroup-limits); export $export_vars
exec /opt/app-root/services.sh
The services.sh script runs php-fpm and nginx:
php-fpm 2>&1
nginx -c /opt/app-root/etc/conf.d/nginx/nginx.conf
# this echo to stdout is needed, otherwise stdout doesn't show up in the docker run output
echo date 2>&1
The PHP scripts log to stderr, so that script uses 2>&1 to redirect it to stdout, which is needed for the log aggregator.
I want to run sed or awk over the log output. Yet if I try:
php-fpm 2>&1 | sed 's/A/B/g'
or
exec /opt/app-root/services.sh | sed 's/A/B/g'
Then nothing shows up when I run the container. Without the pipe to sed, the output of php-fpm shows up fine as the output of docker run.
Is there a way to sed the output of php-fpm ensuring that the output makes it to the output of docker?
Edit: Note that I tried the obvious | sed 's/A/B/g' in both places, and I also tried running the pipe in a subshell, $(stuff | sed 's/A/B/g'), in both places. Neither works, so this seems to be a Docker or s2i issue.
Try keeping sed arguments in double quotes.
php-fpm 2>&1 | sed "s/A/B/g"
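If the quoting change alone doesn't help, this looks like the same sed output-buffering behavior discussed in the tail | sed question above: when sed's stdout is a pipe rather than a terminal, it buffers its output instead of flushing each line. A sketch of the unbuffered variant:
php-fpm 2>&1 | sed -u "s/A/B/g"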
