Docker check if file exists in healthcheck

How do I wait until a file is created in docker? I'm trying the code below, but it doesn't work. If I execute bash -c [ -f /tmp/asdasdasd ] separately, outside of docker, it gives me the correct result.
Dockerfiletest:
FROM alpine:3.6
RUN apk update && apk add bash
docker-compose.yml:
version: '2.1'
services:
  testserv:
    build:
      context: .
      dockerfile: ./Dockerfiletest
    command:
      bash -c "rm /tmp/a && sleep 5 && touch /tmp/a && sleep 100"
    healthcheck:
      # I tried adding '&& exit 1' and '|| exit 1'; it doesn't work.
      test: bash -c [ -f /tmp/a ]
      timeout: 1s
      retries: 20
docker-compose up + wait 10s + docker ps:
$ docker ps
STATUS
Up About a minute (health: starting)

I believe you are missing quotes on the command to run. bash -c only accepts one parameter, not a list, so you need to quote the rest of that line to pass it as a single parameter:
bash -c "[ -f /tmp/a ]"
To see the results of your healthcheck, you can run:
docker inspect $container_id -f '{{ json .State.Health.Log }}' | jq .

It turns out that besides the missing quotes, I was also checking for a socket with -f (regular file) when I should have used -S:
bash -c '[ -S /tmp/uwsgi.sock ]'
Furthermore, interval: 5s can be used to shorten the default 30s check interval.
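Putting the fixes together, the healthcheck section of the compose file would look like this (a sketch assembled from the snippets above; the socket path is the one from the follow-up):

healthcheck:
  test: bash -c '[ -S /tmp/uwsgi.sock ]'
  interval: 5s
  timeout: 1s
  retries: 20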

Related

How do I pass an env/build arg from my docker-compose script to a Dockerfile ENTRYPOINT command? [duplicate]

This question already has answers here:
How to pass ARG value to ENTRYPOINT?
(5 answers)
I want to pass an env/build var to my Dockerfile for use in its entrypoint, and thought I could do it from the docker-compose file like so:
web:
  restart: "no"
  build:
    context: ../my-project
    args:
      CURRENT_ENV: "development"
In my Dockerfile, I have this defined
ARG CURRENT_ENV
ENTRYPOINT /bin/sh -c "rm -f /app/tmp/pids/*.pid && [[ $CURRENT_ENV = 'dev' ]] && /usr/local/rbenv/versions/`cat .ruby-version`/gemsets/my-project/bin/foreman start -f Procfile || /usr/local/rbenv/versions/`cat .ruby-version`/gemsets/my-project/bin/foreman start -f Procfile.hot; tail -f /dev/null"
However, when I start my containers using docker-compose up, it doesn't appear the entrypoint is picking up the variable, as there is an empty string before the = 'dev' section ...
myco-deploy-web-1 | /bin/sh: -c: line 0: `rm -f /app/tmp/pids/*.pid && [[ = 'dev' ]] && /usr/local/rbenv/versions/2.4.5/gemsets/my-project/bin/foreman start -f Procfile || /usr/local/rbenv/versions/2.4.5/gemsets/my-project/bin/foreman start -f Procfile.hot; tail -f /dev/null'
What’s the proper way to pass the build arg/env var to my ENTRYPOINT command?
The ENTRYPOINT needs the variable to be an environment variable rather than a build arg.
You can do this
ARG CURRENT_ENV
ENV CURRENT_ENV $CURRENT_ENV
ENTRYPOINT /bin/sh -c "rm -f /app/tmp/pids/*.pid && [[ $CURRENT_ENV = 'dev' ]] && /usr/local/rbenv/versions/`cat .ruby-version`/gemsets/my-project/bin/foreman start -f Procfile || /usr/local/rbenv/versions/`cat .ruby-version`/gemsets/my-project/bin/foreman start -f Procfile.hot; tail -f /dev/null"
and it should work
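The reason this works: an ARG exists only while the image is being built, while an ENV value is baked into the image and is visible to the shell that runs the ENTRYPOINT. A quick way to verify the value made it into the image (a sketch; the argtest tag is just a placeholder):

# build with the arg, then confirm it is visible at runtime
docker build --build-arg CURRENT_ENV=dev -t argtest ../my-project
docker run --rm --entrypoint /bin/sh argtest -c 'echo "CURRENT_ENV=$CURRENT_ENV"'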

Run multiple commands after entrypoint with docker-compose

I have been looking at the interaction matrix of CMD and ENTRYPOINT, and I can't find a way to have a container run an entrypoint THEN a cmd made of multiple commands:
version: '3.8'
services:
  test:
    image: debian:buster-slim
    entrypoint: [ "/entrypoint.sh" ]
    volumes:
      - ./entrypoint.sh:/entrypoint.sh
    command: [ "echo", "toto", "&&", "echo", "tutu" ]
where entrypoint.sh is a file containing:
#!/bin/bash
set -e
set -x
echo tata
exec "$@"
"should" print
tata
toto
tutu
but it's printing
tata
toto && echo tutu
I found a solution by replacing [ "echo", "toto", "&&", "echo", "tutu" ] with "bash -c 'echo toto && echo tutu'", and then it works.
But I don't get why the first method does not work, since the documentation says it will do:
exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd
The problem is caused by the exec command, whose synopsis is:
exec [command [argument...]]
so it only accepts one command, with multiple arguments.
Solution:
The solution is the one that you pointed out, by using sh -c '':
services:
  test:
    image: debian:buster-slim
    entrypoint: [ "/entrypoint.sh" ]
    volumes:
      - ./entrypoint.sh:/entrypoint.sh
    command: ["sh", "-c", "echo toto && echo tutu"]
because the final result satisfies the exec command: one command (sh) with multiple arguments.
On the docker side, the official documentation explains the ENTRYPOINT vs CMD interaction very well with a table:
[table: CMD / ENTRYPOINT interaction matrix, from the Docker reference documentation]
If you combine CMD and ENTRYPOINT in the array form, the result is /entrypoint.sh "echo" "toto" "&&" "echo" "tutu", because each parameter of the CMD becomes a parameter for the ENTRYPOINT.
Here's the output of the example above executed directly in the terminal:
# ./entrypoint.sh "echo" "toto" "&&" "echo" "tutu"
+ echo tata
tata
+ exec echo toto '&&' echo tutu
toto && echo tutu
And this is the result of docker-compose up:
# docker-compose up
test_1 | + echo tata
test_1 | tata
test_1 | + exec echo toto '&&' echo tutu
test_1 | toto && echo tutu
root_test_1 exited with code 0
As you can see, each parameter is passed in array form, so '&&' is parsed as a plain string (note the single quotes).
Note:
The result you expected is this one:
# ./entrypoint.sh echo toto && echo tutu
+ echo tata
tata
+ exec echo toto
toto
tutu
In this scenario, as you can see, the only command passed to exec is the first one, echo toto.
echo tutu is executed by the surrounding bash shell after the ./entrypoint.sh script exits.
And if docker parsed this as a separate command, it would never be executed anyway, because the ENTRYPOINT exits before the echo tutu command could run.
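For comparison, the working sh -c form keeps everything inside the single exec. Run directly in a terminal, it behaves like this (a sketch mirroring the traces above):

# ./entrypoint.sh sh -c 'echo toto && echo tutu'
+ echo tata
tata
+ exec sh -c 'echo toto && echo tutu'
toto
tutu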

How to Import Streamsets pipeline in Dockerfile without container exiting

I am trying to import a pipeline into StreamSets during container startup by using the Docker CMD command in the Dockerfile. The image builds, but while creating the container there is no error, yet it exits with code 0. So it never comes up. Here is what I did:
Dockerfile:
FROM streamsets/datacollector:3.18.1
COPY myPipeline.json /pipelinejsonlocation/
EXPOSE 18630
ENTRYPOINT ["/bin/sh"]
CMD ["/opt/streamsets-datacollector-3.18.1/bin/streamsets","cli","-U", "http://localhost:18630", \
"-u", \
"admin", \
"-p", \
"admin", \
"store", \
"import", \
"-n", \
"myPipeline", \
"--stack", \
"-f", \
"/pipelinejsonlocation/myPipeline.json"]
Build image:
docker build -t cmp/sdc .
Run image:
docker run -p 18630:18630 -d --name sdc cmp/sdc
This outputs the container id. But the container is in the Exited status as shown below.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
537adb1b05ab cmp/sdc "/bin/sh /opt/stream…" 5 seconds ago Exited (0) 3 seconds ago sdc
When I do not specify the CMD command in the Dockerfile, the streamsets container spins up, and when I then run the streamsets import command in a shell inside the running container, it works. But how do I get it done during provisioning itself? Is there something I am missing in the Dockerfile?
In your Dockerfile you overwrite the default CMD and ENTRYPOINT from the StreamSets Data Collector Dockerfile. So the container only executes your command during startup and exits without errors afterwards. This is the reason why your container is in Exited (0) status.
In general this is good and expected behavior. If you want to keep your container alive, you need to execute another command in the foreground which never ends. But unfortunately, you cannot run multiple CMDs in your Dockerfile.
I dug a little deeper. The default entry point of the image is ENTRYPOINT ["/docker-entrypoint.sh"]. This script sets up a few things and starts the Data Collector.
It is required that the Data Collector is running before the pipeline is imported. So a solution could be to copy the default docker-entrypoint.sh and modify it to start the Data Collector and import the pipeline afterwards. You could do it like this:
Dockerfile:
FROM streamsets/datacollector:3.18.1
COPY myPipeline.json /pipelinejsonlocation/
# Replace docker-entrypoint.sh
COPY docker-entrypoint.sh /docker-entrypoint.sh
EXPOSE 18630
docker-entrypoint.sh (https://github.com/streamsets/datacollector-docker/blob/master/docker-entrypoint.sh):
#!/bin/bash
#
# Copyright 2017 StreamSets Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
set -e
# We translate environment variables to sdc.properties and rewrite them.
set_conf() {
  if [ $# -ne 2 ]; then
    echo "set_conf requires two arguments: <key> <value>"
    exit 1
  fi

  if [ -z "$SDC_CONF" ]; then
    echo "SDC_CONF is not set."
    exit 1
  fi

  grep -q "^$1" ${SDC_CONF}/sdc.properties && sed 's|^#\?\('"$1"'=\).*|\1'"$2"'|' -i ${SDC_CONF}/sdc.properties || echo -e "\n$1=$2" >> ${SDC_CONF}/sdc.properties
}

# support arbitrary user IDs
# ref: https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${SDC_USER:-sdc}:x:$(id -u):0:${SDC_USER:-sdc} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi

# In some environments such as Marathon $HOST and $PORT0 can be used to
# determine the correct external URL to reach SDC.
if [ ! -z "$HOST" ] && [ ! -z "$PORT0" ] && [ -z "$SDC_CONF_SDC_BASE_HTTP_URL" ]; then
  export SDC_CONF_SDC_BASE_HTTP_URL="http://${HOST}:${PORT0}"
fi

for e in $(env); do
  key=${e%=*}
  value=${e#*=}
  if [[ $key == SDC_CONF_* ]]; then
    lowercase=$(echo $key | tr '[:upper:]' '[:lower:]')
    key=$(echo ${lowercase#*sdc_conf_} | sed 's|_|.|g')
    set_conf $key $value
  fi
done

# MODIFICATIONS:
#exec "${SDC_DIST}/bin/streamsets" "$@"

check_data_collector_status () {
  watch -n 1 ${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 ping | grep -q 'version' && echo "Data Collector has started!" && import_pipeline
}

function import_pipeline () {
  sleep 1
  echo "Start to import pipeline"
  ${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 -u admin -p admin store import -n myPipeline --stack -f /pipelinejsonlocation/myPipeline.json
  echo "Finished importing pipeline"
}

# Start checking if Data Collector is up (in background) and start Data Collector
check_data_collector_status & ${SDC_DIST}/bin/streamsets "$@"
I commented out the last line exec "${SDC_DIST}/bin/streamsets" "$@" of the default docker-entrypoint.sh and added two functions. check_data_collector_status () pings the Data Collector service until it is available. import_pipeline () imports your pipeline.
check_data_collector_status () runs in the background, and ${SDC_DIST}/bin/streamsets "$@" is started in the foreground as before. So the pipeline is imported once the Data Collector service is up.
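If watch is not available in the image, a plain polling loop gives the same "wait until up, then import" behavior; this sketch reuses the ping call and the import_pipeline function from the script above and could replace the body of check_data_collector_status ():

# poll until the Data Collector answers the ping, then import
until ${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 ping | grep -q 'version'; do
  sleep 1
done
echo "Data Collector has started!"
import_pipeline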
Run this image with a sleep command:
docker run -p 18630:18630 -d --name sdc cmp/sdc sleep 300
300 is the time to sleep, in seconds.
Then exec into the container, run your script manually, and find out what's wrong.
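For example (assuming the container name sdc from the run command above):

# open a shell in the sleeping container
docker exec -it sdc /bin/sh
# ...then invoke your script by hand inside the container and watch its output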

How to redirect command output from docker container

Just another topic on this matter, but what's the best way of sending a docker container command's STDOUT/STDERR to a file, other than running the command such as
bash -c "node cluster.js >> /var/log/cluster/console.log 2>&1"
What I don't like about the above is that it results in one additional process: I end up with two processes instead of one, and my master cluster process is not the one with PID 1.
If I try
exec node cluster.js >> /var/log/cluster/console.log 2>&1
I get this error:
Error response from daemon: Cannot start container node:
exec: "node cluster.js >> /var/log/cluster/console.log 2>&1": executable file not found in $PATH
I am starting my container via docker-compose:
version: '3'
services:
  node:
    image: custom
    build:
      context: .
      args:
        ENVIRONMENT: production
    restart: always
    volumes:
      - ./logs:/var/log/cluster
    command: bash -c "node cluster.js >> /var/log/cluster/console.log 2>&1"
    ports:
      - "443:443"
      - "80:80"
When I run docker-compose exec node ps -fax | grep -v grep | grep node, I get one extra process:
1 ? Ss 0:00 bash -c node cluster.js >> /srv/app/cluster/cluster.js
5 ? Sl 0:00 node cluster.js
15 ? Sl 0:01 \_ /usr/local/bin/node /srv/app/cluster/cluster.js
20 ? Sl 0:01 \_ /usr/local/bin/node /srv/app/cluster/cluster.js
As you can see, bash -c starts one process, which in turn forks the main node process. In a docker container, the process started by the command gets PID 1, and that's what I want the node process to be. But it ends up as 5, 6, etc.
Thanks for the reply. I managed to solve the issue by creating a bash file that starts my node cluster with exec:
#!/bin/bash
# start-cluster.sh
exec node cluster.js >> /var/log/cluster/console.log 2>&1
And in docker-compose file:
# docker-compose.yml
command: bash -c "./start-cluster.sh"
Starting the cluster with exec replaces the shell with the node process, so it always has PID 1, and my logs are written to the file.
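To confirm it, the same ps check from the question can be rerun; node should now be the process with PID 1:

# node should now appear as PID 1 inside the container
docker-compose exec node ps -fax | grep -v grep | grep node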

docker compose Logstash - specify config file and install plugin?

I'm trying to copy my Logstash config and install a plugin at the same time. I've tried multiple methods so far to no avail; Logstash exits with errors every time.
this fails:
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  command: bash -c bin/logstash-plugin install logstash-filter-translate
this fails:
command: logstash -f /etc/logstash/conf.d/logstash.conf bash -c bin/logstash-plugin install logstash-filter-translate
this fails:
command: logstash -f /etc/logstash/conf.d/logstash.conf && bash -c bin/logstash-plugin install logstash-filter-translate
this also fails
command: bash -c logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate
I'm having no luck here, and I bet the answer is simple... can anyone point me in the right direction?
Thanks
I use the image that I have locally with the config below, and then it works fine. Hope it helps.
version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.6.3
    command: bash -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"
Sample output
logstash_1 | [2017-12-06T15:27:29,120][WARN ][logstash.agent ] stopping pipeline {:id=>".monitoring-logstash"}
logstash_1 | Validating logstash-filter-translate
logstash_1 | Installing logstash-filter-translate
Try this if it's an Ubuntu-based image:
command: bash -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"
If it's an Alpine-based image, use sh instead of bash:
command: sh -c "command to run"
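An alternative is to install the plugin at image build time, so the container command only has to start Logstash. A sketch, assuming the same image, plugin, and config path as in the question (and that the image's working directory contains bin/, as the commands above already assume):

FROM logstash:latest
RUN bin/logstash-plugin install logstash-filter-translate
COPY logstash.conf /etc/logstash/conf.d/logstash.conf
CMD ["logstash", "-f", "/etc/logstash/conf.d/logstash.conf"]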
