How can I run an sh script in the background inside a docker-compose file? - docker

I have multiple scripts that I am trying to run inside a docker-compose file. These scripts start services that produce ongoing logs. Is there any way to run these commands explicitly in the background, so docker-compose can execute all of them instead of getting stuck on the first one?
command:
  sh -c './root/clone_repo.sh &&
         appium -p 4723 &&
         ./root/start_emu.sh'

You can run a command as a background process by putting a single ampersand & after it. See this comment discussing how to run multiple commands in the background using subshells.
You can use something like the following to run the first two commands in the background and continue on to the final command.
command:
  sh -c '(./root/clone_repo.sh &) &&
         (appium -p 4723 &) &&
         ./root/start_emu.sh'
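If you do not need the && chaining (in the sketch below, each command starts regardless of whether the previous one succeeded), a minimal alternative is to background the first two commands directly and exec the last one, so it stays in the foreground as the container's main process:
command: sh -c './root/clone_repo.sh & appium -p 4723 & exec ./root/start_emu.sh'
Keeping the final command in the foreground is what keeps the container alive; if start_emu.sh exits, the container stops.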

Related

Execute xvfb-run as docker custom command

Is there any reason why xvfb-run will not execute as an overridden docker command?
Given an image built from this Dockerfile:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y xvfb
Built with:
docker build -f Dockerfile.xvfb -t xvfb-test .
If I execute a custom docker command with xvfb-run:
docker run xvfb-test bash -x /usr/bin/xvfb-run echo
It gets stuck and never ends.
But if I enter the image with docker run --rm -it xvfb-test bash and execute the same command, xvfb-run echo, it finishes immediately (meaning that the Xvfb server started and was able to execute the command).
This is an excerpt of xvfb-run script:
...
trap : USR1
(trap '' USR1; exec Xvfb ":$SERVERNUM" $XVFBARGS $LISTENTCP -auth $AUTHFILE >>"$ERRORFILE" 2>&1) &
XVFBPID=$!
wait || :
...
Executing with bash -x, we can see which line was the last to be executed:
+ XAUTHORITY=/tmp/xvfb-run.YwmHlq/Xauthority
+ xauth source -
+ trap : USR1
+ XVFBPID=16
+ wait
+ trap '' USR1
+ exec Xvfb :99 -screen 0 1280x1024x24 -nolisten tcp -auth /tmp/xvfb-run.YwmHlq/Xauthority
From this link:
The docker --init option in the run command basically sets ENTRYPOINT to tini and passes the CMD to it or whatever you specify on the commandline. Without init, CMD becomes pid 1. In this case, /bin/bash
It looks like running the command without the --init parameter, i.e. as PID 1, does not handle the USR1 signal correctly.
Looking at the xvfb-run script and where it gets stuck, it seems that the USR1 signal (which the Xvfb process sends) never gets propagated to the wait statement.
A way to force signal propagation is to add the --init flag to docker run command.
From the documentation, that is exactly what it does:
Run an init inside the container that forwards signals and reaps processes
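For example, re-running the stuck invocation from above with the flag added:
docker run --init xvfb-test bash -x /usr/bin/xvfb-run echo
With --init, an init process (tini) runs as PID 1, so the shell running xvfb-run is no longer PID 1 and receives the USR1 signal normally, letting the wait statement return as expected.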

Infinite while loop in sh does not work as expected on docker startup

I have an sh script (DashBoardImport.sh) like the one below. It checks an API response in an infinite loop in order to import a Kibana dashboard; if it gets a success response, it breaks the loop:
#!/bin/sh
# use while loop to check if kibana is running
while true
do
  response=$(curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson | grep -oE "^\{\"success")
  #curl -X GET elk:9200/git-demo-topic | grep -oE "^\{\"git" > /dev/null
  #match=$?
  echo "$response"
  if [ "$response" = '{"success' ]
  then
    echo "Running import dashboard.."
    #curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson
    break
  else
    echo "Kibana is not running yet"
    sleep 5
  fi
done
I run DashBoardImport.sh via the Dockerfile:
ADD ./CityCountDashBoard.ndjson /etc/elasticsearch/CityCountDashBoard.ndjson
ADD ./DashBoardImport.sh /etc/elasticsearch/DashBoardImport.sh
#ENTRYPOINT /etc/elasticsearch/DashBoardImport.sh &
USER root
RUN chmod +x /etc/elasticsearch/DashBoardImport.sh
#RUN /etc/elasticsearch/DashBoardImport.sh &
RUN nohup bash -c "/etc/elasticsearch/DashBoardImport.sh" >/dev/null 2>&1 &
I tried many options, as you can see commented out. The script works perfectly when I run it manually in the Docker container: I kill the Kibana service, run the script, and after I start Kibana again, the script successfully works as expected and imports the dashboard. But it does not work when it is started automatically with the container.
Do you have any idea?
Thanks a lot in advance :)
A RUN step executes in a temporary container until the command returns, and then Docker captures the changes to the filesystem as a new layer in your image. Nothing else remains: no environment variables, no running processes, only the filesystem changes.
So when you RUN nohup ... &, that command returns immediately since it is in the background (nohup ... & explicitly does that), the container exits, killing any processes that were still running in it, and Docker captures the filesystem changes made, if any, to your image.
If you want something to run when you start the container, add it to your ENTRYPOINT or CMD.
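As a minimal sketch for the Dockerfile above (the main-process command /usr/local/bin/docker-entrypoint.sh is an assumption here; substitute whatever your base image normally starts):
# Start the import loop in the background at container startup, then
# exec the image's main process so it runs in the foreground as PID 1.
CMD /etc/elasticsearch/DashBoardImport.sh & exec /usr/local/bin/docker-entrypoint.sh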

docker-compose run multiple commands for a service

I am using docker on Windows - version 18.03 (client)/18.05 (server). I have created a docker-compose file for the ELK stack. Everything is working fine. What I would like to do is install logtrail before Kibana is started. I was thinking about copying logtrail*.zip first, then calling install:
container_name: kibana
(...)
command:
  - docker cp kibana:/ ./kibana/logtrail/logtrail-6.7.1-0.1.31.zip
  - /bin/bash
  - ./bin/kibana-plugin install/logtrail-6.7.1-0.1.31.zip
But that doesn't look like the right way: first of all, it doesn't work; second, I am not sure if I can call multiple commands like I did; and third, I'm not sure if docker cp in command is even allowed at that stage of service creation.
command:
  - /bin/bash
  - -c
  - |
    echo "This is a multiline command"
    echo "See how I escape $$ sign"
    echo $$PATH
You can run multiple commands as shown above; however, you cannot run docker cp inside command. docker cp is a host-side CLI command that talks to the Docker daemon, and it is not available inside the container.
You can run multiple commands for a service in docker compose by:
command: sh -c "command1 && command2 && command3"
That's my solution for this case:
# OPTION 01:
# command: >
#   bash -c "chmod +x /scripts/rs-init.sh
#   && sh /scripts/rs-init.sh"
# OPTION 02:
# entrypoint: [ "bash", "-c", "chmod +x /scripts/rs-init.sh && sh /scripts/rs-init.sh" ]
If you're looking to install software, David Maze's comment seems to be the standard path. If you want to actually run multiple commands, look at the answer to this SO question: Using Docker-Compose, how to execute multiple commands.
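Putting the pieces together for the logtrail case, a hedged sketch (it assumes the zip is mounted into the container via a volume and that /usr/local/bin/kibana-docker is the image's normal start command; both are assumptions, so adjust to your setup):
container_name: kibana
volumes:
  - ./kibana/logtrail/logtrail-6.7.1-0.1.31.zip:/tmp/logtrail-6.7.1-0.1.31.zip
command: >
  bash -c "./bin/kibana-plugin install file:///tmp/logtrail-6.7.1-0.1.31.zip;
  exec /usr/local/bin/kibana-docker"
Note that kibana-plugin install fails if the plugin is already installed, so on container restarts you may want to guard the install step.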

How can I run script automatically after Docker container startup

I'm using Search Guard plugin to secure an elasticsearch cluster composed of multiple nodes.
Here is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.3
USER root
# Install search guard
RUN bin/elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.6.3-16 \
&& chmod +x \
plugins/search-guard-5/tools/hash.sh \
plugins/search-guard-5/tools/sgadmin.sh \
bin/init_sg.sh \
&& chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
USER elasticsearch
To initialize Search Guard (create internal users and assign roles), I need to run the script init_sg.sh after the container starts.
Here is the problem: unless Elasticsearch is running, the script will not initialize the security index.
The script's content is:
sleep 10
plugins/search-guard-5/tools/sgadmin.sh -cd config/ -ts config/truststore.jks -ks config/kirk-keystore.jks -nhnv -icl
For now, I just run the script manually after the container starts, but I'm running this on Kubernetes, where pods may get killed or fail and get recreated automatically for some reason. In that case, the plugin has to be initialized automatically after the container starts!
So how can I accomplish this? Any help or hint would be really appreciated.
The image itself has an entrypoint, ENTRYPOINT ["/run/entrypoint.sh"], specified in its Dockerfile. You can replace it with your own script. For example, create a new script, mount it, and have it first call /run/entrypoint.sh and then wait for Elasticsearch to start before running your init_sg.sh.
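A minimal sketch of such a wrapper (the readiness check and the init_sg.sh path are assumptions; adjust them to your image):
#!/bin/sh
# Background task: poll until Elasticsearch answers on 9200, then init SG.
(
  until curl -s http://localhost:9200 >/dev/null 2>&1; do
    sleep 5
  done
  /usr/share/elasticsearch/bin/init_sg.sh
) &
# Hand off to the original entrypoint so ES runs as the main process.
exec /run/entrypoint.sh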
Not sure this will solve your problem, but it's worth checking my repo's Dockerfile.
I have created a simple run.sh file, copied it to the Docker image, and in the Dockerfile wrote CMD ["run.sh"]. In the same way, define whatever you want in run.sh and write CMD ["run.sh"]. You can find another example below.
Dockerfile
FROM java:8
RUN apt-get update && apt-get install stress-ng -y
ADD target/restapp.jar /restapp.jar
COPY dockerrun.sh /usr/local/bin/dockerrun.sh
RUN chmod +x /usr/local/bin/dockerrun.sh
CMD ["dockerrun.sh"]
dockerrun.sh
#!/bin/sh
java -Dserver.port=8095 -jar /restapp.jar &
hostname="hostname: `hostname`"
nohup stress-ng --vm 4 &
while true; do
  sleep 1000
done
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
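For the variant described above (helper first, main second, no fg %1 line), a minimal sketch might look like this; it assumes my_helper_process itself uses something like wait-for-it to block until the primary process is reachable:
#!/bin/bash
# Start the helper first, in the background; it waits for the primary.
./my_helper_process &
# Start the primary process second and keep it in the foreground,
# so it keeps the container alive.
exec ./my_main_process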
I was trying to solve the exact same problem. Here's the approach that worked for me.
Create a separate shell script that checks for ES status, and only start initialization of SG when ES is ready:
Shell Script
#!/bin/sh
echo ">>>> Right before SG initialization <<<<"
# use while loop to check if elasticsearch is running
while true
do
  netstat -uplnt | grep :9300 | grep LISTEN > /dev/null
  verifier=$?
  if [ 0 = $verifier ]
  then
    echo "Running search guard plugin initialization"
    /elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -h 0.0.0.0 -cd plugins/search-guard-6/sgconfig -icl -key config/client.key -cert config/client.pem -cacert config/root-ca.pem -nhnv
    break
  else
    echo "ES is not running yet"
    sleep 5
  fi
done
Install script in Dockerfile
You will need to install the script in the container so it's accessible after it starts.
COPY sginit.sh /
RUN chmod +x /sginit.sh
Update entrypoint script
You will need to edit the entrypoint script or run script of your ES image so that it starts sginit.sh in the background BEFORE starting the ES process.
# Run sginit in background waiting for ES to start
/sginit.sh &
This way sginit.sh will start in the background, and will only initialize SG after ES has started.
The reason to start sginit.sh in the background before ES is so that it does not block ES from starting. The same logic applies in reverse: if you put it after the command that starts ES, it will never run, unless you put the ES start itself in the background.
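For illustration, the relevant part of the edited run script might look like the sketch below (the exact ES start command varies by image and is an assumption here):
# Run sginit in the background; it polls for ES, then runs sgadmin
/sginit.sh &
# Start Elasticsearch in the foreground, as the original script did
exec bin/elasticsearch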
I would suggest putting a CMD in your Dockerfile to execute the script when the container starts:
FROM debian
RUN apt-get update && apt-get install -y nano && apt-get clean
EXPOSE 8484
CMD ["/bin/bash", "/opt/your_app/init.sh"]
There is another way, but before using it, check it against your requirements:
ENTRYPOINT "put your code here" && /bin/bash
# example: ENTRYPOINT service nginx start && service ssh start && /bin/bash (use && to separate your commands)
You can also use the wait-for-it script. It waits on the availability of a host and TCP port. It is useful for synchronizing the spin-up of interdependent services and works like a charm with containers. It does not have any external dependencies, so you can just run it in a RUN command without doing anything else.
A Dockerfile example based on this thread:
FROM elasticsearch
# Make elasticsearch write data to a folder that is not declared as a volume in elasticsearch's official Dockerfile.
RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml
# Download wait-for-it
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
# Copy the files you may need and your insert script
# Insert data into elasticsearch
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & \
    /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; \
    kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;

How can I auto-start a service in Docker using OpenRC (Alpine)?

I'm using an Alpine flavor from iron.io. I want to auto-run a trivial 'blink' script as a service when the Docker image starts. (I want derivative images that use this as a base to not know/care about this service--it'd just be "inherited" and run.) I was using S6 for this, and that works well, but wanted to see if something already built into Alpine would work out-of-the-box.
My Dockerfile:
FROM iron/scala
ADD blinkin /bin/
ADD blink /etc/init.d/
RUN rc-update add blink default
And my service script:
#!/sbin/openrc-run
command="/bin/blinkin"
depend()
{
  need localmount
}
The /bin/blinkin script:
#!/bin/bash
for I in $(seq 1 5); do
  echo "BLINK!"
  sleep 1
done
So I build the Docker image and run it. I see no output (BLINK!...). My script is in /bin and I can run it manually, and that works. My blink script is in /etc/init.d and symlinked to /etc/runlevels/default. So everything looks OK, but it doesn't seem as if anything has run.
If I try 'rc-service blink start' I see no "BLINK!" output, but I do get this:
* WARNING: blink is already starting
What am I doing wrong?
You may find my dockerfy utility useful for starting services and pre-running initialization commands before the primary command starts. See https://github.com/markriggins/dockerfy
For example:
RUN wget https://github.com/markriggins/dockerfy/releases/download/0.2.4/dockerfy-linux-amd64-0.2.4.tar.gz; \
tar -C /usr/local/bin -xvzf dockerfy-linux-amd64-*tar.gz; \
rm dockerfy-linux-amd64-*tar.gz;
ENTRYPOINT dockerfy \
    --start bash -c "while true; do echo BLINK; sleep 1; done" -- \
    --reap -- \
    nginx
This would run the bash script as a service, echoing BLINK every second, while the primary command nginx runs. If nginx exits, then the BLINK service will automatically be stopped.
As an added benefit, any zombie processes left over by nginx will be automatically cleaned up.
You can also tail log files such as /var/log/nginx/error.log to stderr, edit nginx's configuration prior to startup, and much more.
