docker compose to delay container build and start

I have a couple of containers running in sequence.
I am using depends_on to make sure the next one only starts after the current one is running.
I realized one of the containers has a cron job that needs to finish first,
so that the next container has the proper data to import.
In this case, I cannot just rely on the depends_on parameter.
How do I delay the next container's start? Say, wait for 5 minutes.
Sample docker-compose file:
test1:
  networks:
    - test
  image: test1
  ports:
    - "8115:8115"
  container_name: test1
test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"

You can use an entrypoint script, something like this (you need netcat installed in the image):
until nc -w 1 -z test1 8115; do
  >&2 echo "Service is unavailable - sleeping"
  sleep 1
done
sleep 2
>&2 echo "Service is up - executing command"
And execute it via the command instruction of the service (in the docker-compose file) or in the Dockerfile (CMD directive).
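For illustration, here is a rough sketch of how that wait loop could be wired into the dependent service; the script name wait-for-test1.sh, the mount path, and the final node server.js command are assumptions for the example, and nc must be available in the test2 image:
test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
  volumes:
    - ./wait-for-test1.sh:/wait-for-test1.sh   # hypothetical file containing the loop above
  # wait for test1:8115 first, then start the real application (placeholder command)
  command: sh -c "sh /wait-for-test1.sh && exec node server.js"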

I added this in the Dockerfile (since it was just for a quick test):
CMD sleep 60 && node server.js
A 60-second sleep did the trick, since the Node.js part was starting before a database dump init script could finish executing.
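For completeness, a minimal Dockerfile sketch around that line; the base image and file layout are assumptions, not taken from the original answer:
FROM node:18-alpine            # assumed base image
WORKDIR /app
COPY . .
# crude fixed delay before starting the app; fine for a quick test,
# but a real readiness check (like the nc loop above) is more robust
CMD sleep 60 && node server.js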

Related

Submit Flink job detached to a session mode cluster, but the Flink CLI will not return after showing the job id

I'm using the Flink CLI to submit a detached job to a session-mode Flink cluster. It succeeds and shows the job id, but the CLI does not return and blocks the console.
My docker-compose file segment:
jobmanager:
  image: flink:1.15.3-scala_2.12-java8
  logging: *default-logging
  network_mode: host
  command: jobmanager
  volumes:
    - /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime
    - ./libs:/opt/flink/lib/libs
    - ./jobmanager/processWorkDir:/processWorkDir
    - ./jobmanager/checkpointsDir:/checkpointsDir
    - ./jobmanager/savepointsDir:/savepointsDir
    - ./jobmanager/jobs:/opt/flink/usrlib
  environment:
    - |
      FLINK_PROPERTIES=
      process.working-dir: /processWorkDir
      jobmanager.rpc.address: 10.0.9.75
      jobmanager.host: 10.0.9.75
      state.backend: rocksdb
      state.checkpoints.dir: file:///checkpointsDir/
      state.checkpoint-storage: filesystem
      state.savepoints.dir: file:///savepointsDir/
      execution.checkpointing.interval: 60000
My Java code looks like this:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(env, streamSettings);
tableEnvironment.executeSql("create TEMPORARY table source_table ....");
tableEnvironment.executeSql("create TEMPORARY table redis_sink ....");
final TableResult sinkResult = tableEnvironment.executeSql("insert into redis_sink select * from source_table");
sinkResult.print();
env.execute("test job");
My command looks like this:
docker-compose exec jobmanager /opt/flink/bin/flink run --detached -t remote -m 10.0.9.75:8081 -c com.test.TestJob /opt/flink/usrlib/dev2/test-0.0.1-RELEASE.jar
I used the --detached option, but the command never returns after showing the job id. Although the job graph is created successfully, and I can use Ctrl+C to break the command while the job keeps running fine for hours, what I want is to use this command in my CI pipeline, so I need it to show the job id and return normally.

Does docker-compose support init container?

Init containers are a great feature in Kubernetes, and I wonder whether docker-compose supports them. They allow me to run some commands before launching the main application.
I came across this PR https://github.com/docker/compose-cli/issues/1499 which mentions supporting init containers, but I can't find any related doc in their reference.
This was a discovery for me, but yes, it is now possible to use init containers with docker-compose since version 1.29, as can be seen in the PR you linked in your question.
Meanwhile, as I write these lines, it seems that this feature has not yet found its way into the documentation.
You can define a dependency on another container with a condition that is basically "when that other container has successfully finished its job". This leaves room to define containers that run any kind of script and exit when they are done, before another dependent container is launched.
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up, and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container, so this example is more an illustration than a rock-solid typical workflow.
The complete example is available in a public github repo so I will only show the key points in this answer.
Let's start with the compose file
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo

services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv

  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started

  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database, which is the first to start.
The init container, which starts only once db is started. This one only runs a script (see below) that will exit once everything is initialized.
The application container, which will only start once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a db/table and inserts some data:
#! /usr/bin/env bash
# Test we can access the db container allowing for start
for i in {1..50}; do mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "show databases" && s=0 && break || s=$? && sleep 2; done
if [ ! $s -eq 0 ]; then exit $s; fi
# Init some stuff in db before leaving the floor to the application
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create database my_app"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
The Dockerfile for the app container is trivial (adding a mysqli driver for PHP) and can be found in the example repo, as well as the PHP script to test that the init was successful by calling http://localhost:9999 in your browser.
The interesting part is to observe what's going on when launching the services with docker-compose up -d.
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discover this.
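As a quick usage sketch (plain commands, with no claims about the exact output):
docker-compose up -d --build
# init-db should run its script and exit before my_app starts
docker-compose ps
# once my_app is up, the test page should answer
curl http://localhost:9999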

Docker: wait until a service is completely ready

I'm dockerizing my existing Django application.
I have an entrypoint.sh script which is run as the entrypoint by the Dockerfile:
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
Its content runs the migration when an environment variable is set:
#!/bin/sh
#set -e
# Run the command and exit with a custom message when the command fails to run
safeRunCommand() {
  cmnd="$*"
  echo cmnd="$cmnd"
  eval "$cmnd"
  ret_code=$?
  if [ $ret_code != 0 ]; then
    printf "Error : [code: %d] when executing command: '$cmnd'\n" $ret_code
    exit $ret_code
  else
    echo "Command run successfully: $cmnd"
  fi
}
runDjangoMigrate() {
  echo "Migrating database"
  cmnd="python manage.py migrate --noinput"
  safeRunCommand "$cmnd"
  echo "Done: Migrating database"
}
# Run the Django migrate command.
# The command is run only when the environment variable `DJANGO_MANAGE_MIGRATE` is set to `on`.
if [ "x$DJANGO_MANAGE_MIGRATE" = 'xon' ] && [ ! "x$DEPLOYMENT_MODE" = 'xproduction' ]; then
  runDjangoMigrate
fi
# Accept other commands
exec "$@"
Now, in the docker-compose file, I have these services:
version: '3.7'
services:
  database:
    image: mysql:5.7
    container_name: 'qcg7_db_mysql'
    restart: always
  web:
    build: .
    command: ["./wait_for_it.sh", "database:3306", "--", "./docker_start.sh"]
    volumes:
      - ./src:/app
    depends_on:
      - database
    environment:
      DJANGO_MANAGE_MIGRATE: 'on'
But when I build the image using
docker-compose up --build
It fails to run the migration command from the entrypoint script with the error:
(2002, "Can't connect to MySQL server on 'database' (115)")
This is due to the fact that the database server has not started yet.
How can I make the web service wait until the database service is completely started and ready to accept connections?
Unfortunately, there is not a native way in Docker to wait for the database service to be ready before the Django web app attempts to connect. depends_on will only ensure that the web app is started after the database container is launched.
Because of this limitation, you will need to solve this problem in how your container runs. The easiest solution is to modify entrypoint.sh to sleep for 10-30 seconds so that your database has time to initialize before executing any additional commands. The official MySQL entrypoint.sh shows an example of how to block until the database is ready.
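As a rough sketch, such a wait could be added at the top of entrypoint.sh, before runDjangoMigrate is called; this assumes netcat (nc) is installed in the web image and that the host/port match the compose file:
# Block until the MySQL port accepts connections (assumes nc is available)
until nc -z database 3306; do
  echo "Waiting for database..."
  sleep 2
done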

How to check if a container is running and trigger another container as a result?

I have container 1 that takes a bit of time to spin up and get ready.
And I have container 2 that needs to run once container 1 is ready.
How can container 2 make sure container 1 is ready before it runs?
This needs to happen using a single Cron job.
Thanks!
You can do this:
PATH=:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * docker ...¹; until docker ps | grep '<SOME SPECIFIC STRING>'; do sleep 1; done; docker run ...²
Replace docker ...¹ and docker ...² with the real docker commands.
You can use docker swarm or docker-compose.
docker-stack.yml
version: "3.7"
services:
slow-to-start-service:
image: xxx
needy-service:
image: yyy
depends_on: # <-- this will not start until slow-to-start-service us up
- slow-to-start-service
crontab -e
0 * * * * docker stack deploy -c /path/to/docker-stack.yml --prune my-stack

How to run `args` as `command` in kubernetes

I have a Python script I want to run in a Kubernetes Job. I have used a ConfigMap to mount it into the container, located for example at dir/script.py.
The container normally runs with args: ["load"].
I have tried using a postStart lifecycle hook in the Job manifest, but it appears not to run.
lifecycle:
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        - /usr/bin/python /opt/config-init/db/tls_generator.py
Below is the snippet of the manifest
containers:
  - name: {{ template "gluu.name" . }}-load
    image: gluufederation/config-init:4.0.0_dev
    lifecycle:
      preStop:
        exec:
          command:
            - /bin/sh
            - -c
            - /usr/bin/python /opt/config-init/db/tls_generator.py
    volumeMounts:
      - mountPath: /opt/config-init/db/
        name: {{ template "gluu.name" . }}-config
      - mountPath: /opt/config-init/db/generate.json
        name: {{ template "gluu.fullname" . }}-mount-gen-file
        subPath: generate.json
      - mountPath: /opt/config-init/db/tls_generator.py
        name: {{ template "gluu.fullname" . }}-tls-script
    envFrom:
      - configMapRef:
          name: {{ template "gluu.fullname" . }}-config-cm
    args: [ "load" ]
How can I run the tls_generator.py script after the args ["load"]?
The Dockerfile part looks like:
ENTRYPOINT ["tini", "-g", "--", "/app/scripts/entrypoint.sh"]
CMD ["--help"]
You are using Container Lifecycle Hooks, to be more specific, PreStop.
This hook is called immediately before a container is terminated due to an API request or a management event such as a liveness probe failure, preemption, resource contention and others.
If you want to execute a command when the pod is starting, you should consider using PostStart instead.
This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
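For illustration, a postStart variant of the hook from the question could look roughly like this (a sketch only; as quoted above, there is no ordering guarantee relative to the ENTRYPOINT):
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - -c
        - /usr/bin/python /opt/config-init/db/tls_generator.py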
Another option would be to use Init Containers; here are a few ideas with examples:
Wait for a Service to be created, using a shell one-line command like:
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
Register this Pod with a remote server from the downward API with a command like:
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
Wait for some time before starting the app container with a command like
sleep 60
Please read the documentation on how to use Init containers for more details.
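And a minimal init-container sketch for the "wait before starting" idea; the container name, the busybox image, and the fixed sleep are assumptions for illustration only:
initContainers:
  - name: wait-before-start            # hypothetical name
    image: busybox:1.36                # assumed utility image
    command: ["sh", "-c", "sleep 60"]  # or any of the readiness checks listed above
containers:
  - name: main-app                     # stands in for the real application container
    image: gluufederation/config-init:4.0.0_dev
    args: ["load"]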
My end goal was to run tls_generator.py right after the load command has completed. This is what I came up with, and it is working fine:
command: ["/bin/sh", "-c"]
args: ["tini -g -- /app/scripts/entrypoint.sh load && /usr/bin/python
/scripts/tls_generator.py"]
In this case the default command when running "tini -g -- /app/scripts/entrypoint.sh" would be --help, but adding load passes it as the command instead.
