Infinite while-loop sh script does not work as expected on Docker startup

I have an sh script (DashBoardImport.sh) like the one below. It checks an API response in an infinite loop to import a Kibana dashboard; if it gets a success response, it breaks the loop:
#!/bin/sh
# use while loop to check if kibana is running
while true
do
response=$(curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson | grep -oE "^\{\"success")
#curl -X GET elk:9200/git-demo-topic | grep -oE "^\{\"git" > /dev/null
#match=$?
echo "$response"
if [ "$response" = '{"success' ]
then
echo "Running import dashboard.."
#curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson
break
else
echo "Kibana is not running yet"
sleep 5
fi
done
I run DashBoardImport.sh via my Dockerfile:
ADD ./CityCountDashBoard.ndjson /etc/elasticsearch/CityCountDashBoard.ndjson
ADD ./DashBoardImport.sh /etc/elasticsearch/DashBoardImport.sh
#ENTRYPOINT /etc/elasticsearch/DashBoardImport.sh &
USER root
RUN chmod +x /etc/elasticsearch/DashBoardImport.sh
#RUN /etc/elasticsearch/DashBoardImport.sh &
RUN nohup bash -c "/etc/elasticsearch/DashBoardImport.sh" >/dev/null 2>&1 &
I tried many options, as you can see in the commented-out lines. The script works perfectly when I run it manually in the Docker container: I kill the Kibana service, then run the script; after I start Kibana again, the script successfully works as expected and imports the dashboard. But it does not work when it starts automatically in the container.
Do you have any idea?
Thanks a lot in advance :)

A RUN step executes your command in a temporary container, waits for the command to return, and then Docker captures the changes to the filesystem as a new layer in your image. Nothing else survives: no environment variables, no running processes, only the filesystem changes.
So when you RUN nohup ... &, that command returns immediately because it runs in the background (nohup ... & explicitly does that), the temporary container exits, killing any processes that were running in it, and Docker captures the filesystem changes made, if any, to your image.
If you want something to run when you start the container, add it to your ENTRYPOINT or CMD.
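For example, a minimal sketch (the wrapper name and the final CMD are assumptions, not the asker's actual setup) that starts the import script in the background and keeps the main process in the foreground:
#!/bin/sh
# entrypoint-wrapper.sh (hypothetical name): kick off the import script in the
# background, then hand PID 1 over to the container's real main command
/etc/elasticsearch/DashBoardImport.sh &
exec "$@"
and in the Dockerfile:
ADD ./entrypoint-wrapper.sh /entrypoint-wrapper.sh
RUN chmod +x /entrypoint-wrapper.sh
ENTRYPOINT ["/entrypoint-wrapper.sh"]
CMD ["elasticsearch"]   # placeholder for the image's original main command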

Related

Multiple health check curls in docker health check

In order to ensure the health of my container, I need to perform test calls to multiple URLs.
curl -f http://example.com and curl -f http://example2.com
Is it possible to perform multiple curl calls for a docker health check?
You can set a script as the healthcheck command and put more complex logic inside it. That way you can do multiple requests via curl and have the script return success only if all requests succeeded.
# example dockerfile
COPY files/healthcheck.sh /healthcheck.sh
RUN chmod +x /healthcheck.sh
HEALTHCHECK --interval=60s --timeout=10s --start-period=10s \
CMD /healthcheck.sh
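The healthcheck.sh itself can be a plain shell script; a minimal sketch using the example URLs from the question:
#!/bin/sh
# healthcheck.sh -- exit non-zero if any endpoint is unreachable
curl -f http://example.com || exit 1
curl -f http://example2.com || exit 1
exit 0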
Although I cannot test it, I think you can use the following:
HEALTHCHECK CMD (curl --fail http://example.com && curl --fail http://example2.com) || exit 1
If you first want to check this command manually (without the exit part), you can inspect the last exit code with
echo $? -> on Linux
and
echo %errorlevel% -> on Windows

PUT Elasticsearch Ingest Pipeline by default

We currently use Elasticsearch for storage of Spring Boot App logs that are sent by Filebeat and use Kibana to visualise this.
Our entire architecture is dockerized inside a docker-compose file. Currently, when we start the stack, we have to wait for Elasticsearch to start, then PUT our ingest pipeline, then restart Filebeat, and only then do our logs show up properly ingested in Kibana.
I'm quite new to this, but I was wondering: is there no way to have Elasticsearch save ingest pipelines so that you do not have to load them every single time? I read about mounting volumes or running custom scripts that wait for ES and PUT when ready, but all of this seems very cumbersome for a use case that to me seems like the default.
We used a similar approach to ozlevka, by running a script during the build process of our custom Elasticsearch image.
This is our script:
#!/bin/bash
# This script sets up the Elasticsearch docker instance with the correct pipelines and templates
baseUrl='localhost:9200'
contentType='Content-Type:application/json'
# filebeat
ingestUrl=$baseUrl'/_ingest/pipeline/our-pipeline?pretty'
payload='/usr/share/elasticsearch/config/our-pipeline.json'
/usr/share/elasticsearch/bin/elasticsearch -p /tmp/pid > /dev/null &
# wait until Elasticsearch is up
# you can get logs if you change /dev/null to /dev/stderr
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -XPUT $ingestUrl -H$contentType -d#$payload)" != "200" ]]; do
echo "Waiting for Elasticsearch to start and posting pipeline..."
sleep 5
done
kill -SIGTERM $(cat /tmp/pid)
rm /tmp/pid
echo -e "\n\n\nCompleted Elasticsearch Setup, refer to logs for details"
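Assuming the script above is saved as setup-pipelines.sh (a hypothetical name) next to the pipeline JSON, it might be invoked during the image build roughly like this:
# sketch: run the setup script while building the custom Elasticsearch image
# (file names and paths here are assumptions, not the answerer's exact layout)
COPY config/our-pipeline.json /usr/share/elasticsearch/config/our-pipeline.json
COPY setup-pipelines.sh /tmp/setup-pipelines.sh
RUN chmod +x /tmp/setup-pipelines.sh && /tmp/setup-pipelines.sh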
I suggest using a start script in the Filebeat container.
The script pings Elasticsearch until it is ready, then creates the pipeline and starts Filebeat.
#!/usr/bin/env bash
set -e
START_FILE=/tmp/.es_start_file
http () {
    local path="${1}"
    curl -XGET -s -k --fail http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}${path}
}
pipeline() {
    curl -XPUT -s -k --fail http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$PIPELINE_NAME -d @pipeline.json
}
while true; do
    if [ -f "${START_FILE}" ]; then
        pipeline
        /usr/bin/filebeat -c filebeat.yaml &
        exit 0
    else
        echo 'Waiting for elasticsearch cluster to become green'
        if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
            touch ${START_FILE}
        fi
    fi
done
This method works well for docker-compose and Docker Swarm. For Kubernetes, it is preferable to create a readiness probe.
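For the Kubernetes case, a readiness probe along these lines might work (a sketch; the port, path, and timings are assumptions):
# readiness probe on the Elasticsearch container
readinessProbe:
  httpGet:
    path: /_cluster/health?wait_for_status=green&timeout=1s
    port: 9200
  initialDelaySeconds: 30
  periodSeconds: 10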

How to deal with state "Exit 0" in Docker

I have built a Docker image and afterwards run a container using Docker Compose. The following command does the job for me:
docker-compose up -d
I have restarted the PC and now I want to start the previous container that I've created before. So I have tried the following command:
$ docker-compose start
Starting php-apache ... done
Apparently it works, but it doesn't, as the output of the following command shows:
$ docker-compose ps
          Name                      Command               State    Ports
---------------------------------------------------------------------------
php55devwork_php-apache_1   /bin/sh -c bash -C '/usr/l ...   Exit 0
For sure something is wrong and I am trying to find out what.
How do I find why the command is failing?
Is there any place where I could see a log file or something that help me to identify and fix the error?
Here is the repository if you want to give it a try.
Update
If I remove the container: docker rm <container-id> and recreate it by running docker-compose up -d --build it works again.
Update #1
I am not able to see such weird characters (screenshot omitted).
This is what helped me resolve this issue:
Under one of your services in the docker-compose YAML file, add the following:
tty: true
so it'll look like
version: '3'
services:
web:
tty: true
Hopefully this helps someone; thumbs up if it helps you :)
I took a look into your Docker GitHub repo and setup_php_settings:
on line 27 there is source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND
and that runs apache2 in the foreground, so it shouldn't exit with status code 0.
But it seems to me like your setup_php_settings contains some weird character (when I run your image with compose; in the original screenshot, omitted here, it is the one on the right side).
I have changed it to new lines and it worked for me. Let us know if it helped.
If you want to debug your Docker container, you can run it without its entrypoint, like:
docker run -it --entrypoint bash yourImage
-- AFTER some investigation:
There were still some errors when I restarted the Docker container, like in your case: a stopped container started after a reboot. There were problems: symbolic links already existed, and apache2 was grumpy about a pre-existing PID file, so we need to do something like in the official PHP Docker image.
This is the full setup_php_settings that worked for me after a container restart.
#!/bin/bash -x
set -e
PHP_ERROR_REPORTING=${PHP_ERROR_REPORTING:-"E_ALL & ~E_DEPRECATED & ~E_NOTICE"}
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/apache2/php.ini
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/cli/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/apache2/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/cli/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/apache2/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/cli/php.ini
mkdir -p /data/tmp/php/uploads
mkdir -p /data/tmp/php/sessions
mkdir -p /data/tmp/php/xdebug
chown -R www-data:www-data /data/tmp/php*
ln -sf /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
ln -sf /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini
# Add symbolic link to get Zend out of the current install dir
ln -sf /usr/share/php/libzend-framework-php/Zend/ /usr/share/php/Zend
a2enmod rewrite
php5enmod mcrypt
# Apache gets grumpy about PID files pre-existing
: "${APACHE_PID_FILE:=${APACHE_RUN_DIR:=/var/run/apache2}/apache2.pid}"
rm -f "$APACHE_PID_FILE"
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND "$@"
You can check the logs with docker-compose logs.
Looking through your repo, you have
ENTRYPOINT bash -C '/usr/local/bin/setup_php_settings';'bash'
which, without an interactive session, causes bash to exit immediately (with exit code 0) after reading end-of-file on stdin.
Normally getting an exit 0 should be a reason to celebrate, as it indicates that your command has ended successfully (http://www.tldp.org/LDP/abs/html/exit-status.html).
Having had a look at your Dockerfile, it looks like you're just invoking bash in your entrypoint, which will then for sure exit (as it is non-blocking). In order to serve some data, you should rather be calling php (a blocking operation that keeps the container up), as done in the official Docker files for PHP (see the CMD ["php", "-a"] at https://github.com/docker-library/php/blob/1c56325a69718a3e3cf76179e75d070b7e23da62/5.6/Dockerfile).
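To illustrate the difference, here is a sketch based on the two entrypoints quoted above:
# exits immediately: the trailing bash reads EOF on stdin and returns 0
ENTRYPOINT bash -C '/usr/local/bin/setup_php_settings';'bash'
# blocks: the script ends with exec /usr/sbin/apache2 -DFOREGROUND,
# so apache2 becomes PID 1 and keeps the container running
ENTRYPOINT ["/usr/local/bin/setup_php_settings"]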

How can I auto-start a service in Docker using OpenRC (Alpine)?

I'm using an Alpine flavor from iron.io. I want to auto-run a trivial 'blink' script as a service when the Docker image starts. (I want derivative images that use this as a base to not know/care about this service--it'd just be "inherited" and run.) I was using S6 for this, and that works well, but wanted to see if something already built into Alpine would work out-of-the-box.
My Dockerfile:
FROM iron/scala
ADD blinkin /bin/
ADD blink /etc/init.d/
RUN rc-update add blink default
And my service script:
#!/sbin/openrc-run
command="/bin/blinkin"
depend()
{
    need localmount
}
The /bin/blinkin script:
#!/bin/bash
for I in $(seq 1 5);
do
echo "BLINK!"
sleep 1
done
So I build the Docker image and run it. I see no output (BLINK!...). My script is in /bin and I can run it, and that works. My blink script is in /etc/init.d and symlinked to /etc/runlevels/default. So everything looks OK, but it doesn't seem as if anything has run.
If I try 'rc-service blink start' I see no "BLINK!" output, but do get this:
* WARNING: blink is already starting
What am I doing wrong?
You may find my dockerfy utility useful for starting services and pre-running initialization commands before the primary command starts. See https://github.com/markriggins/dockerfy
For example:
RUN wget https://github.com/markriggins/dockerfy/releases/download/0.2.4/dockerfy-linux-amd64-0.2.4.tar.gz; \
    tar -C /usr/local/bin -xvzf dockerfy-linux-amd64-*tar.gz; \
    rm dockerfy-linux-amd64-*tar.gz;
ENTRYPOINT dockerfy
CMD --start bash -c "while true; do echo BLINK; sleep 1; done" -- \
    --reap -- \
    nginx
Would run a bash script as a service, echoing BLINK every second, while the primary command nginx runs. If nginx exits, then the BLINK service will automatically be stopped.
As an added benefit, any zombie processes left over by nginx will be automatically cleaned up.
You can also tail log files such as /var/log/nginx/error.log to stderr, edit nginx's configuration prior to startup, and much more.

Simple Shell Script: Dead Docker Containers

I have a very simple Docker container that runs a bash shell script that returns something. My Dockerfile:
# Docker image to get stats from a rest interface using CURL and JSON parsing
FROM ubuntu
RUN apt-get update
# Install curl and jq, a lightweight command-line JSON processor
RUN apt-get install -y curl jq
COPY ./stats.sh /
# Make sure script has execute permissions for root
RUN chmod 500 stats.sh
# Define a custom entrypoint to execute stats commands easily within the container,
# using environment substitution and the like...
ENTRYPOINT ["/stats.sh"]
CMD ["info"]
The stats.sh looks like this:
#!/bin/bash
# ElasticSearch
## Get the total size of the elasticsearch DB in bytes
## Requires the elasticsearch container to be linked with alias 'elasticsearch'
function es_size() {
local size=$(curl $ELASTICSEARCH_PORT_9200_TCP_ADDR:$ELASTICSEARCH_PORT_9200_TCP_PORT/_stats/_all 2>/dev/null|jq ._all.total.store.size_in_bytes)
echo $size
}
if [[ "$1" == "info" ]]; then
echo "Check stats.sh for available commands"
elif [[ "$1" == "es_size" ]]; then
es_size
else
echo "Unknown command: $#"
fi
So basically, I have a Docker container that I run with --rm so it exits immediately after running and returning the value I want. More precisely, I run it from another shell script (on the host) with:
local size=$(docker run --name stats-es-size --rm --link $esName:elasticsearch $ENV_DOCKER_REST_STATS_IMAGE:$ENV_DOCKER_REST_STATS_VERSION es_size)
Now I'm running this periodically to gather statistics, once a minute. While it works well in general, I end up getting containers with status Dead about once a day.
Can anybody tell me what I might be doing wrong? Is there some problem with my approach or why do my containers die with a certain frequency?
