I've read a lot of similar topics, for example:
Docker Compose wait for container X before starting Y, etc.
So I'm trying to implement a wait-for using only Compose and health checks.
I have a single docker-compose file, with a section for the app and a section for the tests of this app.
The tests service should run only once the app (jar, db, etc.) is ready.
Also, after the tests service finishes, I want to get its exit status via $?.
So my compose file looks like this:
version: "2.1"
networks:
myNetwork:
services:
rabbitmq:
...
networks:
- myNetwork
healthcheck:
test: ["CMD", "rabbitmqctl", "status"]
interval: 100s
timeout: 10s
retries: 10
app:
...
depends_on:
rabbitmq:
condition: service_healthy
networks:
- myNetwork
healthcheck:
test: ["CMD", "wget", "http://localhost:8080/ping"]
interval: 100s
timeout: 10s
retries: 10
tests:
build:
context: .
dockerfile: TestDocker
depends_on:
app:
condition: service_healthy
networks:
- myNetwork
I'm trying to run:
docker-compose up -d app && docker-compose run tests;
or
docker-compose run tests
But docker-compose run tests does not wait for condition: service_healthy (the app and rabbitmq containers are started, but service_healthy is never checked).
And this is the problem.
It only works when I add a sleep, like this:
docker-compose up -d app && sleep 20 && docker-compose run tests;
So, is there any way to make docker-compose run tests sleep or wait for the service_healthy condition?
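For illustration, the kind of manual polling I'm hoping to avoid scripting by hand looks roughly like this (a sketch only; it reads the app container's health status with docker inspect and then reports the test exit code):
docker-compose up -d app
until [ "$(docker inspect -f '{{.State.Health.Status}}' "$(docker-compose ps -q app)")" = "healthy" ]; do
  echo "waiting for app to become healthy..."
  sleep 5
done
docker-compose run tests
echo "tests exited with $?"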
Related
I have two containers, one running mdillon/postgis and the other postgrest/postgrest, and a Python app that depends on the health checks of both. Please help.
In the terminal, after docker-compose up:
Creating compose_postgis_1 ... done
Creating compose_postgrest_1 ... done
Error for app: Container <postgrest_container_id> is unhealthy, and then the terminal exits.
Here is the docker-compose.yml file:
services:
postgis:
image: mdillon/postgis
volumes:
- ./data:/var/lib/postgresql/data:cached
ports:
- 5432:5432
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
postgrest:
image: postgrest/postgrest
volumes:
- ./data:/var/lib/postgresql/data:cached
environment:
PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:#postgis:5432/postgres
PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
PGRST_DB_POOL: ${PGRST_DB_POOL}
ports:
- 3000:3000
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
app:
image: newapp
command: python main.py
ports:
- 5000:5000
depends_on:
postgis:
condition: service_healthy
postgrest:
condition: service_healthy
If you are using the official Postgres Docker image, there is an option to run Postgres on a specific port: add the PGPORT environment variable so that the container runs Postgres on a different port. Try the following:
services:
postgis:
image: mdillon/postgis
volumes:
- ./data:/var/lib/postgresql/data:cached
ports:
- 5432:5432
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
postgrest:
image: postgrest/postgrest
volumes:
- ./data:/var/lib/postgresql/data:cached
environment:
PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:#postgis:5432/postgres
PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
PGRST_DB_POOL: ${PGRST_DB_POOL}
PGPORT: 3000
ports:
- 3000:3000
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
app:
image: newapp
command: python main.py
ports:
- 5000:5000
depends_on:
postgis:
condition: service_healthy
postgrest:
condition: service_healthy
By default, a Postgres container runs on port 5432 inside the Docker network. Since you are not changing that port, both containers try to use the same port inside the Docker network, and because of this one container will run and the other will not. You can check the container logs for a better understanding.
Hence, adding the PGPORT environment variable so that Postgres runs on a different port should resolve your issue.
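To see why a container is flagged as unhealthy in cases like this, the health log and the container logs are usually enough; a quick sketch (the container name is taken from the output in the question):
docker inspect --format '{{json .State.Health}}' compose_postgrest_1
docker logs compose_postgrest_1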
I have a single-host Docker Swarm application with services set to global mode (in order to have only one replica of each service). For some reason, after updating the swarm, some of the services show 2/2 replicas. It looks like the old container isn't stopped after the new one starts.
What I have found is that this happens when the mysql container is being replaced (it's the only service with order: stop-first in its update config). The services that end up with too many replicas depend on the DB, and on deploy they keep failing until the DB is ready (but for some reason at that point there are two replicas of them, the old one and the new one). To fix this I have to run the deploy again.
My environment is deployed by CI/CD, which runs, in order:
docker-compose -f build-images.yml build
docker-compose -f build-images.yml push to private docker registry (also on the same host and swarm)
docker image prune -a
docker stack deploy -c test-swarm.yml test
Now I actually have 2 problems:
Firstly, mysql is updated most of the time even though nothing in the code has changed. A new image is built (which is understandable, since I ran image prune -a), it is then pushed to the registry as a new layer for some reason, and it replaces the old mysql container with an identical one. Because of this, almost every time I change any other service, the replica problem described above appears.
Secondly, when the DB is being updated, the old replica of a dependent container stays around even after the new one is created and running, resulting in too many replicas (and the old version keeps getting all the traffic, such as API calls).
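For reference, commands along these lines show the duplicated tasks and the image each one is running (a sketch; test_core is the name the core service gets when the stack is deployed as test):
docker service ps test_core --no-trunc
docker service inspect test_core --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}'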
Here is the part of my test-swarm.yml with the DB and one of the services that gets duplicated:
services:
#BACKEND
db:
image: registry.address/db:latest
user: "${UID}:${GID}"
deploy:
mode: global
update_config:
failure_action: pause
order: stop-first
healthcheck:
test: [ "CMD-SHELL", "mysqladmin --defaults-file=/home/.my.cnf -u root status || exit 1" ]
interval: 60s
timeout: 5s
retries: 3
start_period: 30s
ports:
- 3319:3306
env_file:
- prod-env/db.env
volumes:
- db:/var/lib/mysql
networks:
- test-backend
core:
image: registry.address/core:latest
user: "${UID}:${GID}"
deploy:
mode: global
update_config:
failure_action: pause
order: start-first
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost/api/admin/status || exit 1"]
interval: 60s
timeout: 5s
retries: 5
start_period: 30s
depends_on:
- db
networks:
- test-backend
- test-api
environment:
- ASPNETCORE_ENVIRONMENT=Docker
volumes:
- app-data:/src/app/files
And here is the part of build-images.yml with these services:
services:
db:
image: registry.address/db:latest
build:
context: .
dockerfile: db-prod.Dockerfile
args:
UID: ${UID}
GID: ${GID}
core:
image: registry.address/core:latest
build:
context: .
dockerfile: Core/Dockerfile
args:
UID: ${UID}
GID: ${GID}
DB dockerfile:
FROM mysql:latest
ARG UID
ARG GID
COPY ./init/.my.cnf /home/
RUN chown $UID:$GID /home/.my.cnf
COPY ./init/01-databases.sql /docker-entrypoint-initdb.d/
USER $UID:$GID
RUN chmod 600 /home/.my.cnf
I'm using GitLab CI and I have two Docker containers, where one needs access to the other. But when calling the URL "http://server:3100" from inside one container, it doesn't work.
Everything runs inside another Docker container. This is the .gitlab-ci.yml:
# Current Version 1.0
image: tiangolo/docker-with-compose:2021-09-18
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
DOCKER_HOST: tcp://docker:2375
DOCKER_TLS_CERTDIR: ""
full:
script:
- docker-compose up --build --abort-on-container-exit --exit-code-from tests
only:
- schedules
- web
develop:
script:
- docker-compose up --build --abort-on-container-exit --exit-code-from tests
rules:
- if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop"'
when: always
This is the docker-compose.yml
version: '3.9'
services:
tests:
build:
context: ./code
dockerfile: tests.Dockerfile
depends_on:
server:
condition: service_healthy
server:
build:
context: ./code
dockerfile: test.server.Dockerfile
ports:
- "3100:3100"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3100"]
interval: 1m10s
timeout: 10s
retries: 300
The documentation says:
Within the web container, your connection string to db would look like postgres://db:5432
So I thought it would be http://server:3100.
Thank you for your answers!
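For reference, a retry loop like the following inside the tests container would at least show whether the server service name resolves and answers (a sketch, assuming curl is available in the tests image):
until curl -fsS http://server:3100/ > /dev/null; do
  echo "waiting for server:3100..."
  sleep 5
done
echo "server:3100 is reachable"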
I'm using Docker Compose to run a web application. I want to go inside my container, modify some config files, and restart the container without losing the modifications.
I create the container using
sudo docker-compose up
Then I run
sudo docker exec -it -u 0 <container-id> bash
After changing the config files, everything looks good. But if I restart the container by executing
docker container restart $(docker ps -a -q)
all changes are discarded. Can someone explain the best way to do this without losing the modifications after a restart?
A useful technique here is to store a copy of the configuration files on the host and then inject them using a Docker-Compose volumes: directive.
version: '3'
services:
myapp:
image: me/myapp
ports: ['8080:8080']
volumes:
- './myapp.ini:/app/myapp.ini'
It is fairly routine to destroy and recreate containers, and you want things to be set up so that everything is ready to go immediately once you docker run or docker-compose up.
Other good uses of bind-mounted directories like this are to give a container a place to publish log files back out, and if your container happens to need persistent data on a filesystem, giving a place to store that across container runs.
docker exec is a useful debugging tool, but it is not intended to be part of your core Docker workflow.
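A minimal usage sketch of the bind-mount approach above (the file names follow the earlier example): edit the copy on the host, then recreate the service so it picks up the new file.
vi myapp.ini                                   # edit the host copy next to docker-compose.yml
docker-compose up -d --force-recreate myapp    # recreate the container; the mount brings in the new file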
Thanks @David Maze for your reply. In my case I have a script that changes many parameters in my app and generates an SSL certificate; after running the script inside my container I have to restart the container.
My docker-compose.yml:
version: '2.3'
services:
wso2iot-mysql:
image: mysql:5.7.20
container_name: wso2iot-mysql
hostname: wso2iot-mysql
ports:
- 3306
environment:
MYSQL_ROOT_PASSWORD: root
volumes:
- ./mysql/scripts:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-uroot", "-proot"]
interval: 10s
timeout: 60s
retries: 5
wso2iot-broker:
image: docker.wso2.com/wso2iot-broker:3.3.0
container_name: wso2iot-broker
hostname: wso2iot-broker
ports:
- "9446:9446"
- "5675:5675"
healthcheck:
test: ["CMD", "nc", "-z", "localhost", "9446"]
interval: 10s
timeout: 120s
retries: 5
depends_on:
wso2iot-mysql:
condition: service_healthy
volumes:
- ./broker:/home/wso2carbon/volumes/wso2/broker
wso2iot-analytics:
image: docker.wso2.com/wso2iot-analytics:3.3.0
container_name: wso2iot-analytics
hostname: wso2iot-analytics
healthcheck:
test: ["CMD", "curl", "-k", "-f", "https://localhost:9445/carbon/admin/login.jsp"]
interval: 10s
timeout: 120s
retries: 5
depends_on:
wso2iot-mysql:
condition: service_healthy
volumes:
- ./analytics:/home/wso2carbon/volumes/wso2/analytics
ports:
- "9445:9445"
wso2iot-server:
image: docker.wso2.com/wso2iot-server:3.3.0
container_name: wso2iot-server
hostname: wso2iot-server
healthcheck:
test: ["CMD", "curl", "-k", "-f", "https://localhost:9443/carbon/admin/login.jsp"]
interval: 10s
timeout: 120s
retries: 5
depends_on:
wso2iot-mysql:
condition: service_healthy
volumes:
- ./iot-server:/home/wso2carbon/volumes
ports:
- "9443:9443"
links:
- wso2iot-mysql
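For the workflow described above, something along these lines would run the script and then restart just that service (a sketch; configure.sh is a placeholder for the parameter/SSL script, and for the changes to survive re-creation the files it touches should live under one of the bind-mounted volumes in the compose file):
docker exec -it -u 0 wso2iot-server bash /home/wso2carbon/volumes/configure.sh   # configure.sh is hypothetical
docker-compose restart wso2iot-server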
Compose file format 2.1 offers the nice feature of specifying a condition with depends_on. The current docker-compose documentation states:
Version 3 no longer supports the condition form of depends_on.
Unfortunately, the documentation does not explain why the condition form was removed, and it lacks any specific recommendation on how to implement that behaviour from v3 onwards.
There's been a move away from specifying container dependencies in Compose. They're only valid at startup time and don't help when dependent containers are restarted at run time. Instead, each container should include a mechanism to retry and reconnect to dependent services when the connection is dropped. Many libraries for connecting to databases or REST API services have configurable built-in retries. I'd look into that; it's needed for production code anyway.
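If a library-level retry isn't available, an entrypoint-level wait loop is one common way to implement that retry inside the container. A minimal sketch, assuming nc is present in the image and that the dependency is a service named db on port 5432:
#!/bin/sh
# keep retrying the dependency before handing over to the real command
until nc -z db 5432; do
  echo "waiting for db:5432..."
  sleep 2
done
exec "$@"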
From docker-compose 1.27.0, the 2.x and 3.x file formats are merged into the Compose Specification schema.
version is now optional, so you can just remove it and specify a condition as before:
services:
web:
build: .
depends_on:
redis:
condition: service_healthy
redis:
image: redis
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 1s
timeout: 3s
retries: 30
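A quick sketch of how to confirm that the version-less file is accepted by the Compose version in use, before relying on the health-gated dependency:
docker-compose config --quiet && docker-compose up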
There are some external tools that let you mimic this behaviour. For example, with the dockerize tool you can wrap your CMD or ENTRYPOINT with dockerize -wait, and that will prevent your application from running until the specified services are ready.
If your docker-compose file used to look like this:
version: '2.1'
services:
kafka:
image: spotify/kafka
healthcheck:
test: nc -z localhost 9092
webapp:
image: foo/bar # your image
healthcheck:
test: curl -f http://localhost:8080
tests:
image: bar/foo # your image
command: YOUR_TEST_COMMAND
depends_on:
kafka:
condition: service_healthy
webapp:
condition: service_healthy
then you can use dockerize in your v3 compose file like this:
version: '3.0'
services:
kafka:
image: spotify/kafka
webapp:
image: foo/bar # your image
tests:
image: bar/foo # your image
    command: dockerize -wait tcp://kafka:9092 -wait http://webapp:8080 YOUR_TEST_COMMAND
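Note that dockerize has to be present inside the tests image. A sketch of the usual install step (these commands would typically go in a RUN instruction of that image's Dockerfile; the version shown is just an example):
wget https://github.com/jwilder/dockerize/releases/download/v0.6.1/dockerize-linux-amd64-v0.6.1.tar.gz
tar -C /usr/local/bin -xzvf dockerize-linux-amd64-v0.6.1.tar.gz
rm dockerize-linux-amd64-v0.6.1.tar.gz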
Just thought I'd add my solution for the case of running Postgres and an application via docker-compose, where I need the application to wait for the init SQL script to complete before starting.
dockerize seems to wait only for the DB port (5432) to become available, which is the equivalent of the plain depends_on that can be used in Compose file version 3:
version: '3'
services:
app:
container_name: back-end
depends_on:
- postgres
postgres:
image: postgres:10-alpine
container_name: postgres
ports:
- "5432:5432"
volumes:
- ./docker-init:/docker-entrypoint-initdb.d/
The Problem:
If you have a large init script, the app will start before it completes, since depends_on (and a plain port wait) only tells you the database container is up, not that the init has finished.
Although I agree that the solution should ideally be implemented in the application logic, the problem we have only occurs when we want to run tests and prepopulate the database with test data, so it made more sense to implement a solution outside the code, as I tend not to like introducing code "to make the tests work".
The Solution:
Implement a healthcheck on the postgres container (a sketch of such a script appears after the compose file below).
For me that meant checking that the command running as PID 1 is postgres, since a different command runs as PID 1 while the init DB scripts are executing.
Write a script on the application side that waits for postgres to become healthy. The script looks like this:
#!/bin/bash
# Poll the Docker API over the unix socket for the health status of the
# container named "postgres".
function check {
  STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/postgres/json | python -c 'import sys, json; print(json.load(sys.stdin)["State"]["Health"]["Status"])')
  if [ "$STATUS" = "healthy" ]; then
    return 0
  fi
  return 1
}

until check; do
  echo "Waiting for postgres to be ready"
  sleep 5
done
echo "Postgres ready"
Then the docker-compose file should mount the directories containing the scripts, so that we don't have to edit the Dockerfile of the application, and if we're using a custom Postgres image we can keep using the Dockerfiles of our published images.
We also override the entrypoint defined in the app's Dockerfile so that we can run the wait script before the app starts:
version: '3'
services:
app:
container_name: back-end
entrypoint: ["/bin/sh","-c","/opt/app/wait/wait-for-postgres.sh && <YOUR_APP_START_SCRIPT>"]
depends_on:
- postgres
volumes:
- //var/run/docker.sock:/var/run/docker.sock
- ./docker-scripts/wait-for-postgres:/opt/app/wait
postgres:
image: postgres:10-alpine
container_name: postgres
ports:
- "5432:5432"
volumes:
- ./docker-init:/docker-entrypoint-initdb.d/
- ./docker-scripts/postgres-healthcheck:/var/lib
healthcheck:
test: /var/lib/healthcheck.sh
interval: 5s
timeout: 5s
retries: 10
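The postgres-healthcheck script mounted into /var/lib above isn't shown in the answer; a reconstruction based on the description (healthy only once PID 1 is the real postgres process), which would live at ./docker-scripts/postgres-healthcheck/healthcheck.sh given the mount above, might look like this:
#!/bin/sh
# healthy only once the init scripts have finished and postgres runs as PID 1
[ "$(cat /proc/1/comm)" = "postgres" ]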
I reached this page because one container would not wait for the one it depends on, and I had to run docker system prune to get it working. An orphaned-container error is what prompted me to run the prune.
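In that situation, removing just the orphaned containers is usually enough, without a full prune (sketch):
docker-compose up -d --remove-orphans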