I have a Dockerfile whose entrypoint is an s2i/bin/run script:
#!/bin/bash
export_vars=$(cgroup-limits); export $export_vars
exec /opt/app-root/services.sh
The services.sh script runs php-fpm and nginx:
php-fpm 2>&1
nginx -c /opt/app-root/etc/conf.d/nginx/nginx.conf
# this echo to stdout is needed, otherwise stdout doesn't show up in the docker run output
echo date 2>&1
The PHP scripts log to stderr, so that script uses 2>&1 to redirect stderr to stdout, which the log aggregator needs.
I want to run sed or awk over the log output. Yet if I try:
php-fpm 2>&1 | sed 's/A/B/g'
or
exec /opt/app-root/services.sh | sed 's/A/B/g'
Then nothing shows up when I run the container. Without the pipe to sed, the output of php-fpm shows up fine as the output of docker run.
Is there a way to run sed over the output of php-fpm while ensuring that the result still reaches the docker run output?
Edit: Note that I tried the obvious | sed 's/A/B/g' in both places, and I also tried running the pipe in a subshell, $(stuff | sed 's/A/B/g'), in both places. Neither works, so this seems to be a Docker or s2i issue.
Try keeping the sed arguments in double quotes.
php-fpm 2>&1 | sed "s/A/B/g"
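If quoting alone doesn't help, another common culprit when piping inside a container is buffering: GNU sed block-buffers its output when stdout is not a terminal, so lines may never appear before the container exits. A sketch of two hedged alternatives (assuming GNU sed and coreutils' stdbuf are present in the image):

```shell
# -u makes sed flush after every line instead of block-buffering:
php-fpm 2>&1 | sed -u 's/A/B/g'

# Alternatively, force line-buffered stdout on sed via stdbuf:
php-fpm 2>&1 | stdbuf -oL sed 's/A/B/g'
```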
Folder structure:
#root
|- deployment
| |- start-dev.sh
| |- docker-compose.yml
| |- // other files including app.Dockerfile and anything else I need
|- // everything else
Initial start-dev.sh
#!/bin/sh
docker-compose -p my-container up -d
docker-compose -p my-container exec app bash
Working state
In VS Code (opened as WSL2 remote) integrated terminal I would type
cd deployment
./start-dev.sh
and deployment is successful.
If instead I try just deployment/start-dev.sh, it fails, since there's no docker-compose.yml in the current directory.
Desire
I want
deployment/start-dev.sh
to work.
Non-solution
The following fails since dirname is not available in my case.
#!/bin/sh
BASEDIR=$(dirname "$0")
docker-compose -f "${BASEDIR}/docker-compose.yml" -p my-container up -d
docker-compose -f "${BASEDIR}/docker-compose.yml" -p my-container exec app bash
Solution 1 for start-dev.sh
#!/bin/bash
BASEDIR=$(dirname "$0")
docker-compose -f "${BASEDIR}/docker-compose.yml" -p my-container up -d
docker-compose -f "${BASEDIR}/docker-compose.yml" -p my-container exec app bash
Question
How do I convert Solution 1 to be a sh script instead of bash, if dirname is not available in sh?
Solution 2
#!/bin/sh
a="/$0"; a=${a%/*}; a=${a#/}; a=${a:-.}; BASEDIR=$(cd "$a"; pwd)
docker-compose -f "${BASEDIR}/docker-compose.yml" -p my-container up -d
docker-compose -f "${BASEDIR}/docker-compose.yml" -p my-container exec app bash
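The chain of parameter expansions in Solution 2 can be unpacked step by step; this trace uses a sample value standing in for $0:

```shell
#!/bin/sh
script_path="deployment/start-dev.sh"   # sample value standing in for $0

a="/$script_path"   # "/deployment/start-dev.sh": leading / guarantees a slash to split on
a=${a%/*}           # "/deployment": drop the shortest trailing /component (the script name)
a=${a#/}            # "deployment":  drop the artificial leading /
a=${a:-.}           # "deployment":  if empty ($0 had no directory part), fall back to "."
echo "$a"           # prints: deployment
```

For a bare script name like start-dev.sh, the same chain collapses to ., so the final cd "$a"; pwd still resolves a usable directory.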
Change the very first line of the script to use #!/bin/sh as the interpreter. You don't need to change anything else.
In particular, the POSIX.1 specification includes dirname and $(command) substitution, so you're not using anything that's not a POSIX shell feature. I'd expect this script to work in minimal but standard-conforming shells like BusyBox's, and so in turn to work in any Docker image that includes a shell.
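A quick check that nothing bash-specific is involved (a sketch, assuming a dash- or BusyBox-style /bin/sh):

```shell
#!/bin/sh
# dirname and $(...) are both POSIX, so this runs in minimal shells too.
BASEDIR=$(dirname "deployment/start-dev.sh")
echo "$BASEDIR"   # prints: deployment
```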
There are other recipes too, though they do typically rely on dirname(1). See for example How can I set the current working directory to the directory of the script in Bash?, which includes non-bash-specific answers.
I have written a script to run some docker commands for me, and I would like to silence the output from these commands, for example docker load, docker run, or docker stop.
docker load does have a --quiet flag that seems like it should do what I want; however, when I try to use it, it still prints out Loaded image: myimage. Even if this flag did work for me, not all docker commands have it. I also tried to use redirection, like docker run ... 2>&1 /dev/null, but the redirection arguments are interpreted as command arguments for the docker container, and this seems to be the same for other docker commands as well; for example, tar -Oxf myimage.img.tgz | docker load 2>&1 /dev/null treats the redirections as arguments and prints out the command usage.
This is mostly a shell question regarding standard descriptors (stdout, stderr) and redirections.
To achieve what you want, you should write neither cmd 2>&1 /dev/null nor cmd 2>&1 >/dev/null, but simply: cmd >/dev/null 2>&1
Mnemonics:
The intuition to easily think of this > syntax is:
>/dev/null can be read: STDOUT := /dev/null
2>&1 can be read: STDERR := STDOUT
This way, the fact that 2>&1 must be placed afterwards becomes clear.
(As an aside, redirecting both stderr and stdout to a pipe is a bit different, and would be written in the following order: cmd1 2>&1 | cmd2)
Minimal complete example to test this:
$ cmd() { echo "stdout"; echo >&2 "stderr"; }
$ cmd 2>&1 >/dev/null # does not work as intended
stderr
$ cmd >/dev/null 2>&1 # ok
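The same ordering applies to the docker commands from the question. Since docker itself isn't needed to see the effect, this sketch uses a stand-in function (chatty is a hypothetical name) that writes to both streams:

```shell
# chatty plays the role of a docker command writing to both streams.
chatty() { echo "Loaded image: myimage"; echo "warning" >&2; }

chatty 2>&1 >/dev/null   # prints "warning": stderr was pointed at the OLD stdout first
chatty >/dev/null 2>&1   # prints nothing: stdout is silenced first, then stderr follows it
```

With the real commands this becomes, e.g., tar -Oxf myimage.img.tgz | docker load >/dev/null 2>&1.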
I want to output a logfile from a Docker container and stumbled across something that I don't understand. These two lines don't fail, but only the first one works as I would like it to:
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' &
Checking inside the running container, I see that the processes are running but I only see the output of the first line in the container output.
Dockerfile
FROM debian:buster-slim
COPY boot.sh /
ENTRYPOINT [ "/boot.sh" ]
boot.sh
Must be made executable (chmod +x)!
#!/bin/sh
echo "starting"
echo "start" > "/var/log/my-log"
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' &
echo "sleeping"
sleep inf
Running
Put the two files above into a folder.
Build the image with docker build --tag pipeline .
Run the image in one terminal with docker run --init --rm --name pipeline pipeline. Here you can also watch the output of the container.
In a second terminal, open a shell with docker exec -it pipeline bash and there, run e.g. date >> /var/log/my-log. You can also run the two tail ... commands here to see how they should work.
To stop the container use docker kill pipeline.
I would expect to find the output of both tail ... commands in the output of the container, but it already fails on the initial "start" entry of the logfile. Further entries to the logfile are also ignored by the tail command that adds a prefix.
BTW: I would welcome a workaround using pipes/FIFOs that would avoid writing a persistent logfile to begin with. I'd still like to understand why this fails. ;)
Based on what I have tested, it seems that sed is causing the issue: the output of tail --follow "/var/log/my-log" | sed -e 's/^/prefix:/' & does not appear while the container runs because sed buffers its output when it is not writing to a terminal. The issue can be solved by passing -u to sed, which disables the buffering.
The final working boot.sh is as follows:
#!/bin/sh
echo "starting"
echo "start" > "/var/log/my-log"
tail --follow "/var/log/my-log" &
tail --follow "/var/log/my-log" | sed -u -e 's/^/prefix:/' &
echo "sleeping"
sleep inf
And the output after running the container will be:
starting
sleeping
start
prefix:start
Data appended to the logfile afterwards is displayed as expected too:
starting
sleeping
start
prefix:start
newlog
prefix:newlog
Also see: Why can't I redirect the output from sed to a file?
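As for the pipe/FIFO workaround asked about in the question, a sketch along these lines avoids the persistent logfile entirely (assumptions: mkfifo is available, and the <> redirection opens the FIFO read-write so sed neither blocks at startup nor exits on EOF when a writer closes its end):

```shell
#!/bin/sh
# Demo of the FIFO approach (using /tmp here; in the container it
# would replace /var/log/my-log).
mkfifo /tmp/my-log-pipe

# Read the FIFO through an unbuffered sed; <> keeps it open read-write.
sed -u -e 's/^/prefix:/' <> /tmp/my-log-pipe &
SED_PID=$!

# Writers redirect into the FIFO instead of appending to a logfile:
echo "start" > /tmp/my-log-pipe        # appears as "prefix:start"

sleep 1                                # give sed a moment to print
kill "$SED_PID"; rm /tmp/my-log-pipe   # cleanup (a real boot.sh would keep sed running)
```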
I'm executing this command from the Docker host. It gives no error on stdout and completes successfully at the prompt, but it doesn't execute what it is supposed to do inside the container.
Can someone please help me identify what I am doing wrong?
docker exec -dt SPSSC /bin/bash -c "grep -ril 'LOCALIZATION_ENABLED="false"' /opt/tpa/confd/config/* | grep -v 'diameter' | xargs sed -i 's/LOCALIZATION_ENABLED="false"/LOCALIZATION_ENABLED="true"/g'"
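A likely culprit here is the nesting of double quotes: the inner "false" and "true" pairs are consumed by the host shell before the -c script ever reaches bash inside the container, so grep searches for LOCALIZATION_ENABLED=false without the quotes. A sketch of the effect, with the escaped form as an untested suggestion:

```shell
# The host shell strips unescaped inner quotes from a double-quoted string:
printf '%s\n' "grep 'X="false"'"     # prints: grep 'X=false'   (quotes lost)
printf '%s\n' "grep 'X=\"false\"'"   # prints: grep 'X="false"' (quotes kept)

# Escaping the inner quotes would keep the pattern intact (untested,
# requires the SPSSC container):
#   docker exec -dt SPSSC /bin/bash -c "grep -ril 'LOCALIZATION_ENABLED=\"false\"' /opt/tpa/confd/config/* | grep -v 'diameter' | xargs sed -i 's/LOCALIZATION_ENABLED=\"false\"/LOCALIZATION_ENABLED=\"true\"/g'"
```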
In lieu of a docker-compose save command, I've resorted to using sed to read the YAML file and piping the result into docker save.
I'm seeing some strange behavior with docker reading from stdin. For example, I have a command that can find all the images in a docker-compose YAML file and output it to stdout
sed -nr 's/image: "(.*)"/\1/p' docker-compose.yml | uniq | xargs -d '\n' | cat
Will output:
mysql redis python
However, if I try to pipe this into docker save, I get the following error:
sed -nr 's/image: "(.*)"/\1/p' docker-compose.yml | uniq | xargs -d '\n' |
docker save | gzip -c > images.tar.gz
"docker save" requires at least 1 argument(s).
See 'docker save --help'.
Usage: docker save [OPTIONS] IMAGE [IMAGE...]
Save one or more images to a tar archive (streamed to STDOUT by default)
How do I get docker to read from stdin for its arguments?
To complete this with a proper answer, from William Pursell in the comments:
The same way you get any command to take arguments from stdin: sed ... | xargs docker save
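Concretely, xargs turns stdin lines into command-line arguments. This demo substitutes echo for docker save so it runs without docker, and the real pipeline (untested here) follows as a comment:

```shell
# xargs converts stdin lines into arguments for the given command:
printf 'mysql\nredis\npython\n' | xargs echo docker save
# prints: docker save mysql redis python

# So the original pipeline becomes:
#   sed -nr 's/image: "(.*)"/\1/p' docker-compose.yml | uniq |
#     xargs docker save | gzip -c > images.tar.gz
```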