Starting multiple services in docker entrypoint - docker

I am trying to start a few services in my container when it starts.
This is my entrypoint script:
#!/bin/bash
set -e
mkdir -p /app/log
tail -n 0 -f /var/log/*.log &
tail -n 0 -f ./log/current.log &
# Start Gunicorn processes
#echo Starting Nginx.
#exec /etc/init.d/nginx start
echo Starting Gunicorn.
exec gunicorn app.main:app \
--name price_service \
-c config/gunicorn.conf \
"$#"
What I would like to do is to uncomment this line:
#exec /etc/init.d/nginx start
But at startup the container just hangs here.
Any solutions?

You should read about exec (http://man7.org/linux/man-pages/man3/exec.3.html) to see what it does: it replaces the current shell process, so nothing after an exec'd command ever runs, and with that nginx line uncommented the Gunicorn lines would never be reached.
What you need is either to background the process using & (not recommended), or to use a process manager or init system (see Can I run multiple programs in a Docker container?, also not really recommended).
Or you can run multiple containers, one per service, and use docker-compose to manage them (recommended).
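If you go the multiple-container route, a rough sketch with plain docker commands shows the shape of it (docker-compose expresses the same thing declaratively; the network name, image name and nginx config path below are placeholders, not taken from your setup):
# user-defined network so the containers can reach each other by name
docker network create price_net
# the Gunicorn app, built from the Dockerfile that uses the entrypoint above
docker run -d --name price_service --network price_net my-price-image
# nginx in its own container, with a config that proxies to price_service
docker run -d --name price_nginx --network price_net -p 80:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx:stable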

Related

docker entrypoint using dos2unix

When I run docker-compose up -d --build, the Dockerfile does not trigger the CMD bash /webapp/runscript.sh.
Said differently... I don't see the redis-server or celery application running when I exec into the container.
If I manually run "bash /webapp/runscript.sh", the processes end up running.
I originally tried Entrypoint ['bash', '/webapp/runscript.sh'] as well
with no luck. Not sure what I am missing
Tail end of the Dockerfile:
RUN mkdir -p /logs
COPY runscript.sh /webapp/runscript.sh
RUN chmod -R u+x /webapp/runscript.sh
RUN dos2unix /webapp/runscript.sh
CMD bash /webapp/runscript.sh
CMD tail -f /dev/null
runscript.sh
#!/bin/bash
nohup redis-server > /logs/redis.out &
nohup celery -A app.celery worker --loglevel=info > /logs/celery_worker.out &
nohup celery flower -A app.celery > /logs/celery_flower.out &
The CMD step in a Dockerfile sets metadata on the image telling docker the default command to run when starting the container. You can only have a single value for CMD, so setting it a second time overwrites any previous setting.
Containers start a single process, and that process runs as pid 1 inside the container. Once that process exits, the entire container is stopped, including any background processes it spawned.
Therefore, you need to set CMD to whatever command or script you want to run, and that command needs to remain running for the duration of your container's lifetime. That could be as easy as running tail -f /dev/null as the last line of your script. However, the recommended practice is to run your app in the foreground rather than as background daemons that may crash unnoticed. If you really need multiple daemons inside a container, there are tools like supervisord. But in most cases you are better off running multiple containers, one per service, and communicating between those services over a shared docker network. docker-compose is very useful to set up this network and deploy your containers with the appropriate configurations.
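As a sketch of that advice applied to the runscript.sh above (which service you promote to the foreground is a judgment call; flower is just the example here, and the Dockerfile would then keep a single CMD bash /webapp/runscript.sh and drop the CMD tail -f /dev/null line):
#!/bin/bash
# background the helpers, logging as before
nohup redis-server > /logs/redis.out &
nohup celery -A app.celery worker --loglevel=info > /logs/celery_worker.out &
# keep one process in the foreground; exec makes it the container's main process,
# so the container stays up exactly as long as flower does
exec celery flower -A app.celery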

run multiple commands in docker container start

I have a simple docker image running on Ubuntu 16.04, based on a Dockerfile whose CMD is "/sbin/ejabberdctl foreground". To keep the docker container alive once it has started, I run the ejabberd server in the foreground. However, after starting the container and /sbin/ejabberdctl, I need to execute another command once ejabberdctl is already running (e.g. ejabberdctl list_cluster).
I tried to add both commands to a bash script, but it doesn't work. I also tried to run /sbin/ejabberdctl start &, and it didn't work either.
Which way to dig?
Option A:
Create a simple bash script that runs the container and then list_cluster, without modifying the entrypoint of the ejabberd docker image:
#!/bin/bash
if [ "${1}" = "remove_old" ]; then
echo "removing old ejabberd container"
docker rm -f ejabberd
fi
docker run --rm --name ejabberd -d -p 5222:5222 ejabberd/ecs
sleep 5
echo -e "*******list_cluster******"
docker exec -it ejabberd ash -c "/home/ejabberd/bin/ejabberdctl list_cluster"
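Usage of that wrapper might look like this (assuming it is saved as run_ejabberd.sh and made executable; the filename is just a placeholder):
./run_ejabberd.sh              # start a fresh ejabberd container, then print the cluster
./run_ejabberd.sh remove_old   # remove a previous ejabberd container first, then start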
Option B
In Option B you need to modify the official ejabberd image's entrypoint, as it does not allow you to run multiple scripts on boot, so you add your own startup script with a small modification.
https://github.com/processone/docker-ejabberd/blob/master/ecs/Dockerfile
I suggest using the official Alpine-based ejabberd image (only about 30 MB) instead of Ubuntu.
https://hub.docker.com/r/ejabberd/ecs/
The demo here can be adapted for Ubuntu too, but it is tested against the official Alpine ejabberd image.
Use the official ejabberd image as a base image; ENV MASTER_NODE=ejabberd@ec2-10.0.0.1 sets the master node, if you are interested in a cluster.
FROM ejabberd/ecs:latest
USER root
RUN whoami
COPY supervisord.conf /etc/supervisord.conf
RUN apk add supervisor
RUN mkdir -p /etc/supervisord.d
COPY pm2.conf /etc/supervisord.d/ejabberd.conf
COPY start.sh /opt/ejabberd/start.sh
RUN chmod +x /opt/ejabberd/start.sh
ENV MASTER_NODE=ejabberd@ec2-10.0.0.1
ENTRYPOINT ["supervisord", "--nodaemon", "--configuration", "/etc/supervisord.conf"]
Now create the supervisord config file:
[unix_http_server]
file = /tmp/supervisor.sock
chmod = 0777
chown= nobody:nogroup
[supervisord]
logfile = /tmp/supervisord.log
logfile_maxbytes = 50MB
logfile_backups=10
loglevel = info
pidfile = /tmp/supervisord.pid
nodaemon = true
umask = 022
identifier = supervisor
[supervisorctl]
serverurl = unix:///tmp/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[include]
files = /etc/supervisord.d/*.conf
Now create ejabberd.conf to start ejabberd using supervisord. Note that the join_cluster argument is only needed if you want to join a cluster; remove it if not needed.
[supervisord]
nodaemon=true
[program:list_cluster]
command=/opt/ejabberd/start.sh join_cluster
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
[program:ejabberd]
command=/home/ejabberd/bin/ejabberdctl foreground
autostart=true
priority=1
autorestart=true
user=ejabberd
exitcodes=0,4
A /opt/ejabberd/start.sh shell script that will run list_cluster once ejabberd is up, and that can also join_cluster if an argument is passed when calling the script:
#!/bin/sh
until nc -vzw 2 localhost 5222; do sleep 2 ;echo -e "Ejabberd is booting....."; done
if [ $? -eq 0 ]; then
########## Once ejabberd is up then list the cluster ##########
echo -e "***************List_Cluster start***********"
/home/ejabberd/bin/ejabberdctl list_cluster
echo -e "***************List_Cluster End***********"
########## If you want to join cluster once up as pass the master node as ENV then pass first param like join_cluster ##########
if [ "${1}" == "join_cluster" ]; then
echo -e "***************Joining_Cluster start***********"
/home/ejabberd/bin/ejabberdctl join_cluster ejabberd#$MASTER_NODE
echo -e "***************Joining_Cluster End***********"
fi
else
echo -e "**********Ejabberd is down************";
fi
Run docker container
docker build -t ejabberd .
docker run --name ejabberd --rm -it ejabberd
What you're searching for is called supervisord. In the official docker documentation you can find some examples of how to use it.
Be aware, though, that running multiple services in the same container is discouraged unless strictly necessary.

Why can't I always kill a docker process with Ctrl-C?

I have observed that if I run it via an intermediate script it can be killed with Ctrl-C; however, if I do not, then it can't.
Here is an example:
test1.sh:
#!/bin/bash
if [ "${1}" = true ]; then
while true; do echo "args: $@"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test2.sh $@
fi
test2.sh:
#!/bin/bash
/test1.sh true $@
basic-Dockerfile:
FROM alpine:3.7
RUN apk add --no-cache bash
COPY test1.sh test2.sh /
ENTRYPOINT ["bash"]
Running ./test1.sh true foo bar will happily print out true foo bar, and running ./test1.sh foo bar will do the same in a container. Sending Ctrl-C will kill the process and delete the container as expected.
However if I try to remove the need for an extra file by changing /test2.sh $@ to /test1.sh true $@:
test1.sh
#!/bin/bash
if [ "${1}" = true ]; then
while true; do echo "args: $@"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test1.sh true $@
fi
then the process can no longer be terminated with Ctrl-C, and instead must be stopped with docker kill.
Why is this happening?
Docker version 18.06.1-ce running on Windows 10 in WSL
That's a common misunderstanding with docker, but there's a good reason for it.
When a process runs as PID 1 in Linux it behaves a little differently. Specifically, it ignores signals such as SIGTERM and SIGINT (the latter is what Ctrl-C sends) unless the script installs handlers for them. This doesn't happen when the PID is > 1.
That's why your first scenario works (PID 1 is test2.sh, and the signal also reaches test1.sh, which stops because it is not PID 1) but the second one doesn't (test1.sh is PID 1, so it doesn't stop on SIGINT).
To solve that, you can trap the signal in test1.sh and exit:
exit_func() {
echo "SIGTERM detected"
exit 1
}
trap exit_func SIGTERM SIGINT
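Folded into the test1.sh from the question, that might look like the sketch below (the handler just exits; any cleanup could go there too):
#!/bin/bash
exit_func() {
echo "SIGTERM/SIGINT detected"
exit 1
}
trap exit_func SIGTERM SIGINT
if [ "${1}" = true ]; then
while true; do echo "args: $@"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test1.sh true $@
fi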
Or tell docker run to init the container with a different process as PID 1. Specifically, if you add --init to docker run with no more arguments, it uses a default program, tini, prepared to handle these situations:
docker run --rm -it --init $(docker build -f basic-Dockerfile -q .) /test1.sh true $@
You can also use exec to replace the current shell with the program you start, which can then be stopped with Ctrl-C.
For example, a start.sh script which starts the nginx server and runs uwsgi:
#!/usr/bin/env bash
service nginx start
uwsgi --ini uwsgi.ini
should be changed to:
#!/usr/bin/env bash
service nginx start
exec uwsgi --ini uwsgi.ini
After these changes, Ctrl-C will stop the container.

Enabling ssh at docker build time

Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible to build & deploy some Java projects to WildFly during docker build time, so that when I run the docker image I have everything set up. However, Ansible needs ssh access to localhost. So far I have been unable to make it work. I've tried different docker images and now I ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have at the moment:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That will run a process in the background using a shell. As soon as the shell that started the background command returns, it exits since it has no more input, and the temporary container used for that RUN step terminates. The only thing saved from a RUN is the change to the filesystem; you do not keep running processes, environment variables, or shell state.
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
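If you do go down this road, one way to avoid guessing the sleep duration is to retry the connection a few times inside the same RUN, along the lines of the shell sketch below (this assumes sshd is started without -d, since -d exits after a single connection, and it uses StrictHostKeyChecking=no instead of pre-populating known_hosts):
/usr/sbin/sshd && \
for i in 1 2 3 4 5; do
# keep retrying until sshd accepts a session, then stop
ssh -o StrictHostKeyChecking=no -tt root@127.0.0.1 'true' && break
sleep 2
done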
There are multiple problems in that Dockerfile.
First of all, you can't run a background process in a RUN statement and expect that process to still be around in another RUN. Each statement of a Dockerfile runs in a different container, so processes don't persist between them.
Another issue was that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.

creating new managed server containers in Weblogic 12c docker

I followed the Oracle docs and managed to set up a running WebLogic Fusion Middleware Infrastructure container with one managed server.
I deployed an ADF application and it works perfectly fine.
But now I am stuck because I can't add more managed servers to the cluster.
The following command was used to start managedserver1, and it works perfectly:
docker run -d -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraServer.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.x startManagedServer.sh
here is the startManagedServer.sh script :
#!/bin/bash
# Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
export adminhostname=$adminhostname
export adminport=$adminport
# First Update the server in the domain
export server="infra_server1"
export DOMAIN_ROOT="/u01/oracle/user_projects/domains"
export DOMAIN_HOME="/u01/oracle/user_projects/domains/InfraDomain"
echo $adminhostname
echo $adminport
echo "DOMAIN_HOME: $DOMAIN_HOME"
/u01/oracle/oracle_common/common/bin/wlst.sh -skipWLSModuleScanning /u01/oracle/container-scripts/update_listenaddress.py $server
retval=$?
echo "RetVal from Update listener call $retval"
if [ $retval -ne 0 ];
then
echo "Update listener Failed.. Please check the Logs"
exit
fi
# Start Infra server
mkdir -p /u01/oracle/logs
$DOMAIN_HOME/bin/startManagedWebLogic.sh $server "http://"$adminhostname:$adminport > /u01/oracle/logs/startManagedWebLogic$$.log 2>&1 &
statusfile=/tmp/notifyfifo.$$
mkfifo "${statusfile}" || exit 1
{
# run tail in the background so that the shell can kill tail when notified that grep has exited
tail -f /u01/oracle/logs/startManagedWebLogic$$.log &
# remember tail's PID
tailpid=$!
# wait for notification that grep has exited
read templine <${statusfile}
echo ${templine}
# grep has exited, time to go
kill "${tailpid}"
} | {
grep -m 1 "<Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.>"
# notify the first pipeline stage that grep is done
echo "RUNNING"> /u01/oracle/logs/startManagedWebLogic$$.status
echo "Infra server is running"
echo >${statusfile}
}
# clean up
rm "${statusfile}"
if [ -f /u01/oracle/logs/startManagedWebLogic$$.status ]; then
echo "Infra server has been started"
fi
#Display the logs
tail -f $DOMAIN_HOME/servers/infra_server1/logs/infra_server1.log
childPID=$!
wait $childPID
I did manage to add the managed servers in the WebLogic admin console by editing createorstartInfraDomain.sh and createInfraDomain.py.
However, editing the startManagedServer.sh file for infra_server2 is not working.
Even after editing, or even completely deleting, the file startManagedServer.sh from the admin container, the following command still works:
docker run -d -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraServer.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.x startManagedServer.sh
The following is what I get in the console:
root@Linux-Vostro-3250:/home/amalv/FMW-Infrastructure_Docker# docker run -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list oracle/fmw-infrastructure:12.2.1.0 startManagedServer.sh
InfraAdminContainer
7001
DOMAIN_HOME: /u01/oracle/user_projects/domains/InfraDomain
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
/u01/oracle/container-scripts/update_listenaddress.py called with the following sys.argv array:
sys.argv[0] = /u01/oracle/container-scripts/update_listenaddress.py
sys.argv[1] = infra_server1
c697c81b15c8
172.18.0.4
/u01/oracle/user_projects/domains/InfraDomain
INFO: SeedingConfigurationProcessor.start, finished.
INFO: SeedingConfigurationProcessor.end, finished.
Whatever I do with startManagedServer.sh, I get the above log with "sys.argv[1] = infra_server1".
Can someone help me with this?
Thanks a lot!
Here is what I did that helped me to set up multiple managed servers.
Initially I ran the following command, which is from the instructions in container-registry.oracle.com:
docker run -d -p 9001:7001 --network=InfraNET -v $HOST_VOLUME:/u01/oracle/user_projects --name InfraAdminContainer --env-file ./infraDomain.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.2
Then I copied my edited container-scripts into the container-scripts directory in the Admin container.
Deleted the already created domain directory from the container.
Committed the container to a new image (a rough sketch of these steps is shown below).
Alternatively, you can mount the edited files into the container in the docker run command.
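A rough sketch of those copy/delete/commit steps with docker commands (the local script path and the new image tag are placeholders; adjust them to your setup):
# copy the edited container-scripts into the running admin container
docker cp ./container-scripts/. InfraAdminContainer:/u01/oracle/container-scripts/
# remove the previously created domain directory inside the container
docker exec InfraAdminContainer rm -rf /u01/oracle/user_projects/domains/InfraDomain
# commit the modified admin container to a new image for the managed server containers
docker commit InfraAdminContainer fmw-infrastructure:custom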
I edited the createInfraDomain.py and createOrStartInfradomain.sh files in container-scripts to create 6 infra servers. This will create 6 infra-server instances and you will be able to see them in the WebLogic console.
Now use the following command to start the first Managed Server container:
docker run -d -p 9802:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list previously-committed-image startManagedServer.sh
For starting a new managed server container, I edited the startManagedServer.sh file, changed the server value to infra_server2, and ran the following command:
docker run -d -p 9802:8001 -v /path(or)location/of/edited/cotainer-scripts/in/your/hostSystem:/u01/oracle/container-scripts --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list previously-committed-image startManagedServer.sh
For every new container I changed the server name in startNodeManager.sh and mounted it into the container in the docker run command.
I am sure there is a much simpler way to add more servers, maybe by using WLST scripting to add server instances in WebLogic, and also to start new managed server containers.
If anyone knows, please let us know.
Thanks!
