I have a website running SSL using Let's Encrypt. I have written/used a script following this guide, but the certs are not renewed automatically: every 90 days I need to run the Let's Encrypt renewal command manually to get new certs for my website.
This is what my docker-compose looks like for nginx and certbot:
nginx:
  build: nginx-image
  image: km-nginx
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  ports:
    - 80:80
    - 443:443
  depends_on:
    - keycloak
    - km-app
  links:
    - keycloak
    - km-app
  environment:
    - PRODUCTION=true
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"

certbot:
  image: certbot/certbot
  restart: unless-stopped
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot; sleep 12h & wait $${!}; done;'"
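If you want to exercise the renewal path by hand before trusting the loop, a dry run is one way to check the webroot plumbing. This is a verification sketch using the service names above; --dry-run goes against the staging endpoint and does not touch the real certificates:

# the certbot service's entrypoint is the sleep loop above, so override it for a one-off run
docker-compose run --rm --entrypoint "certbot renew --webroot -w /var/www/certbot --dry-run" certbot
# then reload nginx so it would pick up any renewed certificates
docker-compose exec nginx nginx -s reload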
I would like to pass global variables to my nginx app.conf via an app.conf.template file using Docker and docker-compose.
When I use an app.conf.template file with no command in docker-compose.yaml, my variables are substituted successfully and my redirects via nginx work as expected. But when I add a command in docker-compose, nginx and my redirects fail.
My setup follows the documentation, under the section 'Using environment variables in nginx configuration (new in 1.19)':
Out-of-the-box, nginx doesn't support environment variables inside most configuration blocks. But this image has a function, which will extract environment variables before nginx starts.
Here is an example using docker-compose.yml:
web:
  image: nginx
  volumes:
    - ./templates:/etc/nginx/templates
  ports:
    - "8080:80"
  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80
By default, this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d
... more ...
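For illustration, a hypothetical template matching that quoted example (the file name and contents are mine, not from the docs) could look like this; on container start it would be rendered into /etc/nginx/conf.d/default.conf with NGINX_HOST and NGINX_PORT filled in from the environment:

# ./templates/default.conf.template (hypothetical)
server {
    listen       ${NGINX_PORT};
    server_name  ${NGINX_HOST};

    location / {
        # example redirect using the substituted variable; nginx's own $request_uri is left
        # alone because the image only substitutes environment variables that are actually set
        return 301 https://${NGINX_HOST}$request_uri;
    }
}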
My docker-compose.yaml works when it looks like this:
version: "3.5"

networks:
  collabora:

services:
  nginx:
    image: nginx
    depends_on:
      - certbot
      - collabora
    volumes:
      - ./data/nginx/templates:/etc/nginx/templates
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    env_file: .env
    networks:
      - collabora
On the host I have a template file ./data/nginx/templates/app.conf.template, which contains the nginx config with variables throughout in the form ${variable_name}.
With this setup I'm able to run the container and my redirects work as expected. When I exec into the container I can cat /etc/nginx/conf.d/app.conf and see the file with the correct variables swapped in from the .env file.
But I need to add a command to my docker-compose.yaml:
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
When I add that command the set up fails and the global variables are not swapped into the app.conf file within the container.
On another forum it was suggested I move the command into its own file in the container. I gave this a try and created a shell script test.sh:
#!/bin/sh
# reload nginx every 6 hours so renewed certificates are picked up
while :; do
  sleep 6h & wait $!
  nginx -s reload
done
My new docker-compose:
version: "3.5"

networks:
  collabora:

services:
  nginx:
    image: nginx
    depends_on:
      - certbot
      - collabora
    volumes:
      - ./data/nginx/templates:/etc/nginx/templates
      - ./test.sh:/docker-entrypoint.d/test.sh # new - added test.sh into the container here
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    env_file: .env
    networks:
      - collabora
This fails. When I exec into the container and cat /etc/nginx/conf.d/app.conf I DO see the correct config, but it does not seem to be working: my redirects fail, even though they work when I don't include this test.sh script in /docker-entrypoint.d/.
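One possible explanation, which is my own assumption rather than something stated in this thread: the stock nginx entrypoint runs every script in /docker-entrypoint.d/ to completion before it starts nginx itself, so an endless loop there can keep nginx from ever coming up. A sketch of test.sh with the loop pushed into the background so the entrypoint can carry on:

#!/bin/sh
# run the periodic reload loop in the background; the entrypoint then continues and starts nginx
while :; do
  sleep 6h & wait $!
  nginx -s reload
done &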
I asked nearly the same question yesterday and was given a working solution. However, it 'feels more correct' to add a shell script to the container at /docker-entrypoint.d/ and go that route instead, like I've attempted in this post.
For what you're trying to do, I think the best solution is to create a sidecar container, like this:
version: "3.5"

networks:
  collabora:

volumes:
  shared_run:

services:
  nginx:
    image: nginx:1.19
    volumes:
      - "shared_run:/run"
      - ./data/nginx/templates:/etc/nginx/templates
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    env_file: .env
    networks:
      - collabora

  nginx_reloader:
    image: nginx:1.19
    pid: service:nginx
    volumes:
      - "shared_run:/run"
    entrypoint:
      - /bin/bash
      - -c
    command:
      - |
        while :; do
          sleep 60
          echo reloading
          nginx -s reload
        done
This lets you use the upstream nginx image without needing to muck about with its mechanics. The key here is that (a) we run the nginx_reloader container in the same PID namespace as the nginx container itself, and (b) we arrange for the two containers to share a /run directory so that the nginx -s reload command can find the pid of the nginx process in the expected location.
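To convince yourself the two containers really do share a PID namespace and pid file, a quick check (my own verification sketch, not part of the original answer) could be:

# the nginx master's pid file, written to the shared /run volume, is visible from the reloader
docker-compose exec nginx_reloader cat /run/nginx.pid
# and the pid it names is a live process in the shared namespace (kill -0 only tests existence)
docker-compose exec nginx_reloader sh -c 'kill -0 "$(cat /run/nginx.pid)" && echo nginx master visible'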
The ideal is to have one process per container, but there is a strong affinity between Flask+uwsgi and Nginx.
Currently we run them together, but should we refactor?
Yes, it's a good idea to refactor. Try to make each service ephemeral and run only one main process in it. In the end, you need something like this:
version: '3.4'

services:
  web:
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 8000:8000
    volumes:
      - .:/app/
    env_file:
      - common.env

  nginx:
    restart: always
    image: nginx:1.18-alpine
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./deployment/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./deployment/config.conf:/etc/nginx/nginx.conf
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\";'"
    depends_on:
      - web
Docker is designed to have only one main process per container; that way, if your application fails, its container goes down.
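For completeness, the mounted deployment/nginx.conf would then just reverse-proxy to the web service over the Compose network. A minimal sketch, assuming uwsgi serves plain HTTP on port 8000 (use uwsgi_pass and the uwsgi protocol instead if it does not):

# deployment/nginx.conf (hypothetical contents)
server {
    listen 80;

    location / {
        # "web" resolves through Compose's internal DNS to the Flask/uwsgi container
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}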
I have a dockerized Laravel application and use docker-compose to run the application. When I run the application using docker and make a simple ping API call, it takes less than 200ms to respond. But when I run it using docker-compose, it takes more than 3 seconds to respond.
I used the docker run -it --rm -p 8080:8080 senik_laravel:latest command to run the container, and here is the response time:
curl 127.0.0.1:8080/ping -w %{time_total}
The response is:
PONG
0.180260
You see that it takes 0.180260 seconds to respond.
When I run the application using the docker-compose file, it takes more than 3 seconds to respond.
curl 127.0.0.1:8080/ping -w %{time_total}
The response is:
PONG
3.834007
You see that it takes 3.834007 seconds to respond.
Here is the full docker-compose file:
version: '3.7'

networks:
  app_net:
    driver: bridge

services:
  laravel:
    build:
      context: ./laravel
      dockerfile: Dockerfile
    container_name: senik_laravel
    volumes:
      - ./laravel:/var/www/html
    working_dir: /var/www/html
    ports:
      - '80:8080'
    networks:
      - app_net

  mysql-master:
    image: 'bitnami/mysql:8.0.19'
    container_name: senik_mysql_master
    restart: always
    ports:
      - '3306:3306'
    volumes:
      - ./mysql_master_data:/bitnami/mysql
      - ./docker-configs/mysql/init:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_DATABASE=appdb
      - MYSQL_ROOT_PASSWORD=pass
      - MYSQL_AUTHENTICATION_PLUGIN=mysql_native_password
    networks:
      - app_net

  phpmyadmin:
    image: 'bitnami/phpmyadmin:latest'
    container_name: senik_phpmyadmin
    ports:
      - '8080:80'
    environment:
      DATABASE_HOST: mysql-master
      PHPMYADMIN_PASSWORD: pass
    restart: always
    volumes:
      - 'phpmyadmin_data:/bitnami'
    depends_on:
      - mysql-master
    networks:
      - app_net

volumes:
  phpmyadmin_data:
    driver: local
This ping API does not make any database call. It just returns pong.
I've tested an API with a database call and it takes about 19 seconds to respond.
What's wrong? Is it due to the network configurations?
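One way to narrow this down is to break the timing apart with curl's other -w variables, and to double-check which container actually answers on the port you are hitting (in the compose file above, host port 8080 is published by the phpmyadmin service, while the laravel service is published on host port 80). A diagnostic sketch:

# where does the time go: DNS lookup, TCP connect, first byte, total
curl -s -o /dev/null 127.0.0.1:8080/ping \
  -w 'lookup=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n'
# confirm which service owns the published port
docker-compose ps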
I have the following docker-compose, where I need to wait for the service jhipster-registry to be up and accepting connections before starting myprogram-app.
I tried the healthcheck way, following the official doc https://docs.docker.com/compose/compose-file/compose-file-v2/
version: '2.1'

services:
  myprogram-app:
    image: myprogram
    mem_limit: 1024m
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/config
      - SPRING_DATASOURCE_URL=jdbc:postgresql://myprogram-postgresql:5432/myprogram
      - JHIPSTER_SLEEP=0
      - SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=myprogram-elasticsearch:9300
      - JHIPSTER_REGISTRY_PASSWORD=53bqDrurQAthqrXG
      - EMAIL_USERNAME
      - EMAIL_PASSWORD
    ports:
      - 8080:8080
    networks:
      - backend
    depends_on:
      - jhipster-registry:
          "condition": service_started
      - myprogram-postgresql
      - myprogram-elasticsearch

  myprogram-postgresql:
    image: postgres:9.6.5
    mem_limit: 256m
    environment:
      - POSTGRES_USER=myprogram
      - POSTGRES_PASSWORD=myprogram
    networks:
      - backend

  myprogram-elasticsearch:
    image: elasticsearch:2.4.6
    mem_limit: 512m
    networks:
      - backend

  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    mem_limit: 512m
    ports:
      - 8761:8761
    networks:
      - backend
    healthcheck:
      test: "exit 0"

networks:
  backend:
    driver: "bridge"
but I get the following error when running docker-compose up:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.myprogram-app.depends_on contains {"jhipster-registry": {"condition": "service_started"}}, which is an invalid type, it should be a string
Am I doing something wrong, or is this feature no longer supported? How can I achieve this sync between services?
Updated version
version: '2.1'

services:
  myprogram-app:
    image: myprogram
    mem_limit: 1024m
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/eureka
      - SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/config
      - SPRING_DATASOURCE_URL=jdbc:postgresql://myprogram-postgresql:5432/myprogram
      - JHIPSTER_SLEEP=0
      - SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=myprogram-elasticsearch:9300
      - JHIPSTER_REGISTRY_PASSWORD=53bqDrurQAthqrXG
      - EMAIL_USERNAME
      - EMAIL_PASSWORD
    ports:
      - 8080:8080
    networks:
      - backend
    depends_on:
      jhipster-registry:
        condition: service_healthy
      myprogram-postgresql:
        condition: service_started
      myprogram-elasticsearch:
        condition: service_started
    #restart: on-failure

  myprogram-postgresql:
    image: postgres:9.6.5
    mem_limit: 256m
    environment:
      - POSTGRES_USER=myprogram
      - POSTGRES_PASSWORD=tuenemreh
    networks:
      - backend

  myprogram-elasticsearch:
    image: elasticsearch:2.4.6
    mem_limit: 512m
    networks:
      - backend

  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    mem_limit: 512m
    ports:
      - 8761:8761
    networks:
      - backend
    healthcheck:
      test: ["CMD", "curl", "-f", "http://jhipster-registry:8761", "|| exit 1"]
      interval: 30s
      retries: 20
      #start_period: 30s

networks:
  backend:
    driver: "bridge"
The updated version gives me a different error,
ERROR: for myprogram-app Container "8ebca614590c" is unhealthy.
ERROR: Encountered errors while bringing up the project.
saying that the jhipster-registry container is unhealthy, even though it's reachable via the browser. How can I fix the healthcheck command to make it work?
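One detail worth noting (my observation, separate from the answers below): with the exec form ["CMD", ...] there is no shell, so "|| exit 1" is handed to curl as a literal extra argument instead of being interpreted. That syntax needs the shell form, for example (assuming curl exists inside the registry image):

healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost:8761 || exit 1"]
  interval: 30s
  retries: 20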
Best Approach - Resilient App Starts
While Docker does support startup dependencies, the official recommendation is to update your app's start logic to test for the availability of external dependencies and retry. Beyond circumventing the race condition in docker compose up, this makes applications more robust when they restart unexpectedly in the wild.
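As a generic illustration of that idea (placeholders of my own, not tied to any image in this thread), an entrypoint can keep retrying its dependency before handing off to the real process:

#!/bin/sh
# retry the dependency instead of relying on start order (assumes netcat is installed;
# "db" and 5432 are placeholder host/port values)
until nc -z db 5432; do
  echo "waiting for db..."
  sleep 2
done
exec "$@"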
depends_on & service_healthy - Compose 1.27.0+
The condition form of depends_on is back in Docker Compose v1.27.0+ (it was dropped in the v3 file format) as part of the Compose Specification.
Each service should also define a healthcheck so it can report, via service_healthy, that it is fully set up and ready for downstream dependencies.
version: '3.0'

services:
  php:
    build:
      context: .
      dockerfile: tests/Docker/Dockerfile-PHP
    depends_on:
      redis:
        condition: service_healthy

  redis:
    image: redis
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 1s
      timeout: 3s
      retries: 30
wait-for-it.sh
The approach recommended by Docker in their docs on Control startup and shutdown order in Compose is to download wait-for-it.sh, which polls the given host:port and then executes the next set of commands once the port is reachable.
version: "2"

services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]

  db:
    image: postgres
Note: this requires overriding the image's startup command, so make sure you know what the image would normally run and pass it along to keep parity with the default startup.
Further Reading
Docker Compose wait for container X before starting Y
Difference between links and depends_on in docker_compose.yml
How can I wait for a docker container to be up and running?
Docker Compose Wait til dependency container is fully up before launching
depends_on doesn't wait for another service in docker-compose 1.22.0
The documentation suggests that, in Docker Compose version 2 files specifically, depends_on: can be a list of strings, or a mapping where the keys are service names and the values are conditions. For the services where you don't have (or need) health checks, there is a service_started condition.
depends_on:
  # notice: these lines don't start with "-"
  jhipster-registry:
    condition: service_healthy
  myprogram-postgresql:
    condition: service_started
  myprogram-elasticsearch:
    condition: service_started
Depending on how much control you have over your program and its libraries, it's better still if you can arrange for the service to start even when its dependencies aren't available (equivalently, to keep functioning if its dependencies die while the service is running), and not use the depends_on: option at all. You might return an HTTP 503 Service Unavailable error if the database is down, for instance. Another strategy that often helps is to exit immediately if your dependencies aren't available and use a setting like restart: on-failure to ask the orchestrator to restart the service.
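In Compose terms, that last strategy is just (a minimal sketch):

services:
  myprogram-app:
    image: myprogram
    # exit quickly when a dependency is unreachable and let Compose start the service again
    restart: on-failure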
Update to version 3+.
Please follow the version 3 documentation:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be "ready" before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
Version 3 no longer supports the condition form of depends_on.
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
I would consider using the restart_policy option for configuring your myprogram-app to restart until the jhipster-registry is up and accepting connections:
restart_policy:
  condition: on-failure
  delay: 3s
  max_attempts: 5
  window: 60s
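For reference, in version 3 files restart_policy sits under a service's deploy: key and is honored when the stack is deployed in swarm mode, so the full shape is roughly:

services:
  myprogram-app:
    image: myprogram
    deploy:
      restart_policy:
        condition: on-failure
        delay: 3s
        max_attempts: 5
        window: 60s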
With the new docker compose CLI, we can now use the --wait option:
docker compose up --wait
If your service has a healthcheck, Docker waits until it has the "healthy" status; otherwise, it waits for the service to be started. That's why it is crucial to have relevant healthchecks for all your services.
Note that this option automatically activates the --detach option.
Check out the documentation here.
The best approach I found is to check for the desired port in the entrypoint. There are different ways to do that, e.g. wait-for-it, but I like this solution because it is cross-platform between alpine and bash images and doesn't download custom scripts from GitHub:
Install netcat-openbsd (works with apt and apk). Then use this in the entrypoint (works with both #!/bin/bash and #!/bin/sh):
#!/bin/bash
# Usage: wait_for <timeout_seconds> <host> <port>
wait_for()
{
  echo "Waiting $1 seconds for $2:$3"
  # $2/$3 are passed to the inner `sh -c` snippet as its $0/$1
  timeout $1 sh -c 'until nc -z $0 $1; do sleep 0.1; done' $2 $3 || return 1
  echo "$2:$3 available"
}

wait_for 10 db 5432
wait_for 10 redis 6379
You can also make this into a 1-liner if you don't want to print anything.
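The one-liner version of the same check would look something like this (same placeholder host/port):

timeout 10 sh -c 'until nc -z $0 $1; do sleep 0.1; done' db 5432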
Although you already got an answer, it should be mentioned that what you are trying to achieve has some nasty risks.
Ideally a service should be self-sufficient and smart enough to retry and wait for its dependencies to become available (before giving up and going down). Otherwise you will be more exposed to one failure propagating to other services. Also consider that a system reboot, unlike a manual start, might ignore the dependency order.
If one service crash can bring your whole system down, you might have a tool to restart everything again, but it would be better to have services that can withstand that case.
After trying several approaches, IMO the simplest and most elegant option is using the jwilder/dockerize (dockerize) utility image with its -wait flag. Here is a simple example where I need a PostgreSQL database to be ready before starting my app:
version: "3.8"

services:
  # Start Postgres.
  db:
    image: postgres

  # Wait for Postgres to be joinable.
  check-db-started:
    image: jwilder/dockerize:0.6.1
    depends_on:
      - db
    command: 'dockerize -wait=tcp://db:5432'

  # Only start myapp once Postgres is joinable.
  myapp:
    image: myapp:latest
    depends_on:
      - check-db-started
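dockerize can also wait on several dependencies at once and enforce a timeout, per the flags documented in the jwilder/dockerize README, for example:

command: 'dockerize -wait=tcp://db:5432 -wait=http://some-other-service:8080 -timeout=60s'   # second URL is a placeholder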