How can I run a script after starting a process in Docker?

I want to run this command in Docker:
./kafka_2.12-2.2.1/bin/connect-distributed.sh ./kafka_2.12-2.2.1/config/connectdistributed.properties
This command starts Kafka Connect and connects it to my Azure Event Hub.
Once Kafka Connect is up, I want to run a script file inside the container:
./script.sh
This script checks whether Kafka Connect has started successfully; if not, it waits for the connection. Once connected, it checks whether a connector named XYZ exists; if not, it creates it, otherwise the script terminates. I am able to do this on a local machine, but not in Docker.

I needed to do something similar: wait for the db to initialize before the server starts, otherwise the server would fail/exit before the whole compose process was done. I added an entrypoint script to the server with something like the one below.
I imagine there is a way to call your service and send it a command to quit immediately. You might have to install the basic tools on that other image.
So, I have a db image. I need to install the basic db clients (psql, sqlcmd) on the server image to be able to contact my db container and ensure it is running.
host="$1"
shift
cmd="$#"
if [ "${PROVIDER}" == "postgres" ]; then
echo -e "Running under Postgres"
until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "$POSTGRES_USER" "$POSTGRES_DB" -c '\q'; do
>&2 echo -e "Postgres is unavailable - sleeping"
sleep 2
done
# https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-docker?view=sql-server-ver15
# The IP address in the connection string is the IP address of the host machine that is running the container.
else
echo -e "Running under Sql Server"
# sqlcmd -?
# sqlcmd -S db.mssql -U rust -P -d rust_test -q "Select * from rust.content_categories"
# sqlcmd -S 192.168.1.108 -U test -P -d shortpoetdb -q "Select * from vcc.admin_users"
until /opt/mssql-tools/bin/sqlcmd -S db.mssql -U "${MSSQL_USER}" -P "${MSSQL_PASSWORD}" -d "${MSSQL_DB}" -q ":exit"; do
>&2 echo -e "$Mssql is $unavailable - sleeping"
sleep 2
done
fi
>&2 echo -e "${PROVIDER} Database is up - executing command"
exec $cmd
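For reference, a hypothetical invocation (the script name wait-for-db.sh and the gunicorn command are assumptions): the host to wait for comes first, and everything after it becomes the command that exec $cmd runs.
# Hypothetical usage: wait for the host "db.postgres" to answer, then hand
# off to the real server process.
./wait-for-db.sh db.postgres gunicorn app.main:app -c config/gunicorn.conf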
References: Docker docs on wait; MS docs on SQL Server on Linux setup

"I want to run this command in docker: ./kafka_2.12-2.2.1/bin/connect-distributed.sh"
The confluentinc/cp-kafka-connect images run this already.
You should not run Connect and brokers on the same machine, or multiple processes in a single container.

Run your script as a background process which keeps checking whether Kafka Connect is up. Once connected, it runs its commands and exits, while your main command keeps running so the container doesn't exit.
Eg.
CMD nohup bash -c "script.sh &" && your-command
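As an illustration, script.sh could poll the Kafka Connect REST API with curl. This is only a sketch: the port 8083, the connector name XYZ and the config file /connector-xyz.json are assumptions, adjust them to your setup.
#!/bin/bash
# script.sh (sketch) - wait for Kafka Connect, then create the connector
# if it is missing. Assumes Connect's REST API listens on localhost:8083 and
# that /connector-xyz.json contains the connector's name and config.
CONNECT_URL="http://localhost:8083"
CONNECTOR_NAME="XYZ"
# Wait until the Connect REST API answers.
until curl -s -f "$CONNECT_URL/connectors" > /dev/null; do
  echo "Kafka Connect is not up yet - sleeping"
  sleep 5
done
# Create the connector only if it does not exist yet.
if curl -s "$CONNECT_URL/connectors" | grep -q "\"$CONNECTOR_NAME\""; then
  echo "Connector $CONNECTOR_NAME already exists - nothing to do"
else
  curl -s -X POST -H "Content-Type: application/json" \
       --data @/connector-xyz.json "$CONNECT_URL/connectors"
  echo "Connector $CONNECTOR_NAME created"
fi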

Related

How to kill/stop remote Docker container after disconnecting SSH

I have a remote docker container that I access over SSH. I start the container normally with the docker start command.
sudo docker start 1db52045d674
sudo docker exec -it 1db52045d674 bash
This starts an interactive tty in the container, which I access over ssh.
I'd like the container to kill itself if I close the SSH connection. Is there any way to do this?
.bash_logout is executed every time you use the exit command to end a terminal session.
So you can use this file to run the docker stop command when you exit the ssh connection on the remote server.
Create the ~/.bash_logout file if it does not exist.
Add the following command to this file to stop the docker container.
Example :
docker stop container_name
Note: If a user closes the terminal window instead of writing the exit command, this file is not executed.
I was hoping for a more elegant solution, but in the end I launched a bash script over ssh that traps SIGHUP.
something like:
trap 'docker stop CONTAINER_NAME' SIGHUP;
while sleep 5;
do echo "foo";
done;
so when the operator closes the SSH connection, the trap gets triggered and docker stops the container cleanly.
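A hypothetical way to launch that watchdog (host name and container name are placeholders): force a tty with -t, so that closing the ssh session delivers SIGHUP to the remote shell and fires the trap.
# Sketch: run the watchdog remotely over ssh; disconnecting the session
# hangs up the pty, SIGHUP reaches the remote shell, and the trap runs.
ssh -t user@remote-host \
  "trap 'docker stop CONTAINER_NAME' SIGHUP; while sleep 5; do echo foo; done"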
You can use the --init flag. This way, your container gets an init process (docker-init) as PID 1, and you can send a kill signal to it: https://docs.docker.com/engine/reference/run/#specify-an-init-process
Start the server:
docker run --init \
-p 2222:2222 \
-e USER_NAME=user \
-e USER_PASSWORD=pass \
-e PASSWORD_ACCESS=true \
-e SUDO_ACCESS=true \
linuxserver/openssh-server
Just note the --init and -e SUDO_ACCESS=true parameters here.
In another (client) shell,
ssh into container:
$ ssh user@127.0.0.1 -p 2222 -oStrictHostKeyChecking=accept-new
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
user@127.0.0.1's password:
Welcome to OpenSSH Server
2a. send kill signal to PID1 (docker-init):
$ sudo kill -s SIGINT 1
[sudo] password for user:
$ Connection to 127.0.0.1 closed by remote host.
Connection to 127.0.0.1 closed.
Container is gone.
I hope this helps.

Enabling ssh at docker build time

Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible at docker build time to build & deploy some Java projects to WildFly, so that when I run the docker image I have everything set up. However, Ansible needs ssh to localhost. So far I have been unable to make it work. I've tried different docker images and have now ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have atm:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That will run a process in the background using a shell. As soon as the shell that started the process returns from running the background command, it exits with no more input, and the container used for that RUN command terminates. The only thing saved from a RUN is the change to the filesystem. You do not save running processes, environment variables, or shell state.
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
    ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
There are multiple problems in that Dockerfile.
First of all, you can't start a background process in one RUN statement and expect it to still be running in another RUN. Each statement of a Dockerfile runs in a different container, so processes don't persist between them.
Another issue is that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.

creating new managed server containers in Weblogic 12c docker

I followed the Oracle docs and managed to set up a running WebLogic Fusion Middleware Infrastructure container with one managed server.
I deployed an ADF application and it works perfectly fine.
But now I am stuck because I can't add more managed servers to the cluster.
The following command was used to start managedserver1, which works perfectly:
docker run -d -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraServer.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.x startManagedServer.sh
Here is the startManagedServer.sh script:
#!/bin/bash
# Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
export adminhostname=$adminhostname
export adminport=$adminport
# First Update the server in the domain
export server="infra_server1"
export DOMAIN_ROOT="/u01/oracle/user_projects/domains"
export DOMAIN_HOME="/u01/oracle/user_projects/domains/InfraDomain"
echo $adminhostname
echo $adminport
echo "DOMAIN_HOME: $DOMAIN_HOME"
/u01/oracle/oracle_common/common/bin/wlst.sh -skipWLSModuleScanning /u01/oracle/container-scripts/update_listenaddress.py $server
retval=$?
echo "RetVal from Update listener call $retval"
if [ $retval -ne 0 ];
then
    echo "Update listener Failed.. Please check the Logs"
    exit
fi
# Start Infra server
mkdir -p /u01/oracle/logs
$DOMAIN_HOME/bin/startManagedWebLogic.sh $server "http://"$adminhostname:$adminport > /u01/oracle/logs/startManagedWebLogic$$.log 2>&1 &
statusfile=/tmp/notifyfifo.$$
mkfifo "${statusfile}" || exit 1
{
    # run tail in the background so that the shell can kill tail when notified that grep has exited
    tail -f /u01/oracle/logs/startManagedWebLogic$$.log &
    # remember tail's PID
    tailpid=$!
    # wait for notification that grep has exited
    read templine <${statusfile}
    echo ${templine}
    # grep has exited, time to go
    kill "${tailpid}"
} | {
    grep -m 1 "<Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.>"
    # notify the first pipeline stage that grep is done
    echo "RUNNING"> /u01/oracle/logs/startManagedWebLogic$$.status
    echo "Infra server is running"
    echo >${statusfile}
}
# clean up
rm "${statusfile}"
if [ -f /u01/oracle/logs/startManagedWebLogic$$.status ]; then
    echo "Infra server has been started"
fi
#Display the logs
tail -f $DOMAIN_HOME/servers/infra_server1/logs/infra_server1.log &
childPID=$!
wait $childPID
I did manage to add the managed servers in the WebLogic admin console by editing createorstartInfraDomain.sh and createInfraDomain.py.
However, editing the startManagedServer.sh file for infra_server2 is not working.
Even after editing, or even completely deleting, the startManagedServer.sh file in the admin container, the following command still works:
docker run -d -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraServer.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.x startManagedServer.sh
The following is what I get in the console:
root@Linux-Vostro-3250:/home/amalv/FMW-Infrastructure_Docker# docker run -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list oracle/fmw-infrastructure:12.2.1.0 startManagedServer.sh
InfraAdminContainer
7001
DOMAIN_HOME: /u01/oracle/user_projects/domains/InfraDomain
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
/u01/oracle/container-scripts/update_listenaddress.py called with the following sys.argv array:
sys.argv[0] = /u01/oracle/container-scripts/update_listenaddress.py
sys.argv[1] = infra_server1
c697c81b15c8
172.18.0.4
/u01/oracle/user_projects/domains/InfraDomain
INFO: SeedingConfigurationProcessor.start, finished.
INFO: SeedingConfigurationProcessor.end, finished.
Whatever I do with startManagedServer.sh, I get the above log with "sys.argv[1] = infra_server1".
Can someone help me with this?
Thanks a lot
Here is what I did to set up multiple managed servers.
Initially I ran the following command, which is from the instructions
in container-registry.oracle.com:
docker run -d -p 9001:7001 --network=InfraNET -v $HOST_VOLUME:/u01/oracle/user_projects --name InfraAdminContainer --env-file ./infraDomain.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.2
Then I copied my edited container-scripts into the container-scripts
directory in the admin container.
Deleted the already created domain directory from the container.
Committed the container to a new image.
Alternatively, you can mount the edited files into the container in the docker run command.
I edited the createInfraDomain.py and createOrStartInfradomain.sh files in the container-scripts to create 6 infra servers. This will create 6 infra_server instances and you will be able to see them in the WebLogic console.
Now use the following command to start the first Managed Server container:
docker run -d -p 9802:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list previously-committed-image startManagedServer.sh
To start a new managed server container, I edited the startManagedServer.sh file, changed the server value to infra_server2, and ran the following command:
docker run -d -p 9802:8001 -v /path(or)location/of/edited/cotainer-scripts/in/your/hostSystem:/u01/oracle/container-scripts --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list previously-committed-image startManagedServer.sh
For every new container I changed the server name in startNodeManager.sh and mounted it into the container in the docker run command.
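As a sketch of that per-server edit (the host-side container-scripts path and the copied file name are assumptions based on the steps above):
# Copy the script for the second managed server and change only the server
# name it exports; the copy is then mounted into the new container.
cp container-scripts/startManagedServer.sh container-scripts/startManagedServer2.sh
sed -i 's/export server="infra_server1"/export server="infra_server2"/' \
    container-scripts/startManagedServer2.sh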
I am sure there is a much simpler way to add more servers, maybe by using WLST scripting to add server instances in WebLogic,
and also to start new managed server containers.
If anyone knows, please let us know.
Thanks!!

Starting multiple services in docker entrypoint

I am trying to start a few services in my container when it starts.
This is my entry_point script:
#!/bin/bash
set -e
mkdir -p /app/log
tail -n 0 -f /var/log/*.log &
tail -n 0 -f ./log/current.log &
# Start Gunicorn processes
#echo Starting Nginx.
#exec /etc/init.d/nginx start
echo Starting Gunicorn.
exec gunicorn app.main:app \
--name price_service \
-c config/gunicorn.conf \
"$#"
What I would like to do is uncomment this line:
#exec /etc/init.d/nginx start
But at startup the container just hangs there.
Any solutions?
You should read about exec (http://man7.org/linux/man-pages/man3/exec.3.html) to see what it does: it replaces the current process, so nothing after an exec line ever runs.
What you need is either to background the process using & (not recommended), or to use a process manager or init system (see Can I run multiple programs in a Docker container?, also not really recommended).
Or you can run multiple containers and use docker-compose to manage them (recommended).
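If you do go that route anyway, a minimal sketch of the entrypoint (assuming the stock /etc/init.d/nginx script, which daemonizes nginx itself) could look like this:
#!/bin/bash
set -e
mkdir -p /app/log
# Start nginx via its init script without exec; the script daemonizes nginx
# and returns, so the rest of this entrypoint still runs.
/etc/init.d/nginx start
tail -n 0 -f /var/log/*.log &
echo Starting Gunicorn.
# exec only the final, long-running process so it becomes the container's
# main process and receives signals from docker stop.
exec gunicorn app.main:app \
    --name price_service \
    -c config/gunicorn.conf \
    "$@"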

How to tell if a docker container run with -d has finished running its CMD

I want to make a simple bash script which runs one docker container with -d and then does something else, but only once the container has finished running its CMD. How can I do this while avoiding timing issues, since the docker container can take a while to finish starting up?
My only thought was that the Dockerfile for the container will need to create some sort of state on the container itself when it's done and then the bash script can poll until the state file is there. Is there a better / standard way to do something like this?
Essentially I need a way for the host that ran a docker container with -d to be able to tell when it's ready.
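The state-file idea described above could look roughly like this sketch; the image name, container name and the /tmp/ready path are all hypothetical, and the container's CMD would have to touch that file when it finishes.
# Sketch: poll for a marker file created by the container's CMD.
docker run -d --name mycontainer myimage
until docker exec mycontainer test -f /tmp/ready; do
    sleep 2
done
echo "CMD has finished - continuing"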
Update
Made it work with the tailing logs method, but it seems a bit hacky:
docker run -d \
--name sauceconnect \
sauceconnect
# Tail logs until 'Sauce Connect is up'
docker logs -f sauceconnect | while read LINE
do
echo "$LINE"
if [[ "$LINE" == *"Sauce Connect is up"* ]]; then
pkill -P $$ docker
fi
done
You should be fine checking the logs via docker logs -f <container_name_or_ID>
-f : same as tail -f
For example, when the CMD finishes it can write a log line such as JOB ABC is successfully started.
Your script can detect that line and run the rest of the jobs once it appears.
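A slightly tidier variant of the log-tailing approach from the update above, reusing the "Sauce Connect is up" marker; the 120-second timeout is an assumption.
#!/bin/bash
CONTAINER=sauceconnect
MARKER="Sauce Connect is up"
docker run -d --name "$CONTAINER" sauceconnect
# Block until the marker line appears in the logs (or 120 s pass);
# grep -q exits on the first match, which ends the pipeline.
timeout 120 bash -c \
    "docker logs -f $CONTAINER 2>&1 | grep -q -m 1 '$MARKER'"
echo "Container is ready - running the rest of the script"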
