Should Nginx and Flask run in the same container - docker

The ideal is to have one process per container, but there is a strong affinity between Flask+uWSGI and Nginx.
Currently we run them together; should we refactor?

Yes, it's a good idea to refactor. Try to make each service ephemeral and run only one main process in it. In the end, you should have something like this:
version: '3.4'
services:
  web:
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 8000:8000
    volumes:
      - .:/app/
    env_file:
      - common.env
  nginx:
    restart: always
    image: nginx:1.18-alpine
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./deployment/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./deployment/config.conf:/etc/nginx/nginx.conf
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\";'"
    depends_on:
      - web
This is designed so that each container has only one main process; that way, if your application fails, only its container goes down.
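For the web service, the Dockerfile referenced above could be a plain Flask + uWSGI image. A minimal sketch (the module name, port, and requirements path are assumptions, not from the question):

# Hypothetical Dockerfile for the Flask + uWSGI service; nginx proxies to port 8000.
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt uwsgi
COPY . .
EXPOSE 8000
# Serve the Flask app (assumed to expose "app" in wsgi.py) over HTTP on port 8000
CMD ["uwsgi", "--http", "0.0.0.0:8000", "--module", "wsgi:app", "--processes", "4"]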

Related

When scaling, run some docker container startup commands only once

I have the following docker-compose.yml.
version: "3.1"
services:
db:
container_name: ${MYSQL_CONTAINER}
image: mysql:5.7.30
volumes:
- ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
- ./slow_log.cnf:/etc/mysql/my.cnf
- ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
ports:
- ${MYSQL_PORT}:3306
entrypoint: ""
command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
restart: on-failure
backend:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command A
&& command B
&& ... "
restart: unless-stopped
I am scaling the backend service, so my startup command is sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=10.
The problem I am facing is that command A and command B in the backend service run on every container startup (i.e. they run 10 times).
But I want command A to run only once across all the backend containers, while command B should run for every container.
Any suggestions on how to accomplish this?
I'm not entirely sure there is an out-of-the-box solution for your requirement.
However, I can suggest a workaround: duplicate the backend service in docker-compose, run one backend service with both command A and command B, and give the other backend only command B.
Then, when you want to scale, you scale the backend that has only command B.
version: "3.1"
services:
db:
container_name: ${MYSQL_CONTAINER}
image: mysql:5.7.30
volumes:
- ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
- ./slow_log.cnf:/etc/mysql/my.cnf
- ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
ports:
- ${MYSQL_PORT}:3306
entrypoint: ""
command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
restart: on-failure
backend_default:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command A
&& command B
&& ... "
restart: unless-stopped
backend:
container_name: ${BACKEND_CONTAINER}
image: ${BACKEND_IMAGE}
depends_on:
- db
ports:
- ${BACKEND_PORT}
command: >
bash -c "command B
&& ... "
restart: unless-stopped
Now you can use the scale option like below:
sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=9
If there happens to be a scenario where you need only one backend running, you can use profiles in docker-compose so that the scalable backend is started only when a specific profile is passed to the docker-compose command. That way only backend_default runs when the profile is not given, and hence the scale is 1 (see the sketch below).
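A minimal sketch of that profile guard (the profile name scaled is an assumption, and profiles require a reasonably recent docker-compose release):

  backend:
    image: ${BACKEND_IMAGE}
    # Hypothetical profile name; the service starts only when this profile is requested
    profiles:
      - scaled
    depends_on:
      - db
    ports:
      - ${BACKEND_PORT}
    command: >
      bash -c "command B
      && ... "
    restart: unless-stopped

With this in place, docker-compose up -d starts only backend_default, while docker-compose --profile scaled up -d --scale backend=9 also brings up the scaled copies.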
Hope this helps you. Cheers 🍻 !!!
If BACKEND_IMAGE is built by you, you should run command A with a RUN instruction in your Dockerfile. A RUN line is executed only once, at build time (make sure that meshes with your needs), while ENTRYPOINT and CMD run only when the container starts. The command in the docker-compose file overrides the CMD line.
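A minimal sketch of that split, assuming you control the Dockerfile (the base image and script names below are placeholders, not from the question):

# Hypothetical Dockerfile sketch: the one-time step runs at build time,
# the per-container step runs on every container start.
FROM python:3.9-slim
WORKDIR /app
COPY . .
# "command A" equivalent: executed exactly once, when the image is built
RUN sh ./one_time_setup.sh
# "command B" equivalent: executed each time a container starts;
# note that a command: entry in docker-compose overrides this CMD
CMD ["sh", "./per_container_start.sh"]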

How to add image name to docker-compose up

I have a docker-compose file in which I use the image key in each service to name the built images. For example:
version: '3'
services:
  redis:
    image: redis
    restart: unless-stopped
    ports:
      - "6379"
  worker:
    image: worker:production
    build: .
    user: root
    command: celery -A ExactEstate worker --loglevel=info
    env_file: ./.env.prod
    restart: unless-stopped
    links:
      - redis
    depends_on:
      - redis
  beats:
    image: beats:production
    build: .
    user: root
    command: celery --pidfile= -A ExactEstate beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    env_file: ./.env.prod
    restart: unless-stopped
    links:
      - redis
    depends_on:
      - redis
  web:
    image: web:production
    build: .
    user: root
    command: daphne -b 0.0.0.0 -p 8000 ExactEstate.asgi:application
    ports:
      - "8000:8000"
    env_file: ./.env.prod
    restart: unless-stopped
    links:
      - redis
      - worker
      - beats
    depends_on:
      - redis
      - worker
      - beats
This gives a docker ps of:
0ad78269a9ce beats:production "celery --pidfile= -…" 7 minutes ago Up 7 minutes exactestate_beats_1
1a44f7c98b50 worker:production "celery -A ExactEsta…" 7 minutes ago Up 7 minutes exactestate_worker_1
f3a09723ba66 redis "docker-entrypoint.s…" 7 minutes ago Up 7 minutes 0.0.0.0:32769->6379/tcp exactestate_redis_1
Let's suppose I have also built containers from a different compose file (e.g. staging). How can I use docker-compose to bring up exactly the service/image I want?
For example: docker-compose up web:production or docker-compose up web:staging
You can achieve this by using environment variables. Variables for docker-compose up can be passed via a .env file or set with the export command (set on Windows); see the Docker documentation.
web:
  image: web:production
Should be changed to
web:
  image: web:${ENV}
And then you can run your application with
$ export ENV=production && docker-compose up
Or you can create a .env file containing the line ENV=production. Then you can simply run the application with docker-compose up.
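As a usage sketch (assuming both the production and staging tags exist locally or in your registry), switching environments is then just a matter of changing the variable:

# Bring up the production-tagged images
export ENV=production && docker-compose up -d

# Bring up the staging-tagged images instead
export ENV=staging && docker-compose up -d

# Or keep the value in a .env file next to docker-compose.yml (e.g. ENV=staging)
docker-compose up -d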

Add a file to `/docker-entrypoint.d/` while using an app.conf.template file to substitute environment variables into the nginx setup

I would like to pass environment variables to my nginx app.conf via an app.conf.template file using docker and docker-compose.
When I use an app.conf.template file with no command in docker-compose.yaml, my variables are substituted successfully and my redirects via nginx work as expected. But when I add a command in docker-compose, nginx and the redirects fail.
My setup follows the documentation, under the section 'Using environment variables in nginx configuration (new in 1.19)':
Out-of-the-box, nginx doesn't support environment variables inside
most configuration blocks. But this image has a function, which will
extract environment variables before nginx starts.
Here is an example using docker-compose.yml:
web:
  image: nginx
  volumes:
    - ./templates:/etc/nginx/templates
  ports:
    - "8080:80"
  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80
By default, this function reads template files in
/etc/nginx/templates/*.template and outputs the result of executing
envsubst to /etc/nginx/conf.d
... more ...
My docker-compose.yaml works when it looks like this:
version: "3.5"
networks:
collabora:
services:
nginx:
image: nginx
depends_on:
- certbot
- collabora
volumes:
- ./data/nginx/templates:/etc/nginx/templates
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
ports:
- "80:80"
- "443:443"
env_file: .env
networks:
- collabora
On the host I have a file ./data/nginx/templates/app.conf.template, which contains an nginx config with environment variables throughout in the form ${variable_name}.
With this setup I'm able to run the container and my redirects work as expected. When I exec into the container I can cat /etc/nginx/conf.d/app.conf and see the file with the correct values substituted in from the .env file.
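For illustration, such a template might look roughly like this (the server name and port variables are assumptions, not taken from the question):

# Hypothetical app.conf.template sketch; ${NGINX_HOST} and ${NGINX_PORT} are
# placeholders that envsubst replaces when the container starts.
server {
    listen ${NGINX_PORT};
    server_name ${NGINX_HOST};

    location / {
        return 301 https://${NGINX_HOST}$request_uri;
    }
}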
But I need to add a command to my docker-compose.yaml:
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
When I add that command, the setup fails and the environment variables are not substituted into the app.conf file within the container.
On another forum it was suggested that I move the command into its own file in the container. I gave this a try and created a shell script test.sh:
#!/bin/sh
# Reload nginx every 6 hours.
while :; do
  sleep 6h & wait $!
  nginx -s reload
done
My new docker-compose:
version: "3.5"
networks:
collabora:
services:
nginx:
image: nginx
depends_on:
- certbot
- collabora
volumes:
- ./data/nginx/templates:/etc/nginx/templates
- ./test.sh:/docker-entrypoint.d/test.sh # new - added test.sh into the container here
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
ports:
- "80:80"
- "443:443"
env_file: .env
networks:
- collabora
This fails. When I exec into the container and cat /etc/nginx/conf.d/app.conf I DO see the correct config, but it does not seem to be working: my redirects, which work when I don't include this test.sh script in /docker-entrypoint.d/, now fail.
I asked nearly the same question yesterday and was given a working solution. However, it 'feels more correct' to add a shell script to the container at /docker-entrypoint.d/ and go that route instead, as I've attempted in this post.
For what you're trying to do, I think the best solution is to create a sidecar container, like this:
version: "3.5"
networks:
collabora:
volumes:
shared_run:
services:
nginx:
image: nginx:1.19
volumes:
- "shared_run:/run"
- ./data/nginx/templates:/etc/nginx/templates
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
ports:
- "80:80"
- "443:443"
env_file: .env
networks:
- collabora
nginx_reloader:
image: nginx:1.19
pid: service:nginx
volumes:
- "shared_run:/run"
entrypoint:
- /bin/bash
- -c
command:
- |
while :; do
sleep 60
echo reloading
nginx -s reload
done
This lets you use the upstream nginx image without needing to muck about with its mechanics. The key here is that (a) we run the nginx_reloader container in the same PID namespace as the nginx container itself, and (b) we arrange for the two containers to share a /run directory so that the nginx -s reload command can find the pid of the nginx process in the expected location.
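If you want to sanity-check the shared PID namespace and /run volume, something along these lines should work (service names as above; the pid file path assumes the image's default /var/run/nginx.pid, which resolves to /run/nginx.pid):

# The reloader can read the pid file written by the nginx master in the other container
docker-compose exec nginx_reloader cat /run/nginx.pid
# The same PID should show up in the reloader's /proc, thanks to pid: service:nginx
docker-compose exec nginx_reloader ls /proc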

Nginx does not start with the docker-compose run command

I am dockerizing my existing application, but there's a strange issue. When I start my application with
docker-compose up
each service in the docker-compose file runs successfully with no issues. But there are some services I don't always want to run (celery, celery-beat, etc.). For that I run
docker-compose run nginx
The above command should run the nginx, web, and db services as configured in docker-compose.yml, but it only runs web and db, not nginx.
Here's my yml file
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:12
    env_file: .env
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - "5431:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  nginx:
    image: nginx:1.14
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./config/nginx/:/etc/nginx/conf.d
      - ./MyAPP/static:/var/www/MyAPP.me/static/
    depends_on:
      - web
  web:
    restart: always
    build: ./MyAPP
    command: bash -c "
      python manage.py collectstatic --noinput
      && python manage.py makemigrations
      && python manage.py migrate
      && gunicorn --certfile=/etc/certs/localhost.crt --keyfile=/etc/certs/localhost.key MyAPP.wsgi:application --bind 0.0.0.0:443 --reload"
    expose:
      - "443"
    depends_on:
      - db
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
      - ./config/nginx/certs/:/etc/certs
      - ./MyAPP/static:/var/www/MyAPP.me/static/
  broker:
    image: redis:alpine
    expose:
      - "6379"
  celery:
    build: ./MyAPP
    command: celery -A MyAPP worker -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  celery-beat:
    build: ./MyAPP
    command: celery -A MyAPP beat -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  comment-classifier:
    image: codait/max-toxic-comment-classifier

volumes:
  dbdata:
TL;DR: docker-compose up nginx
There's a distinct difference between docker-compose up and docker-compose run. The first builds, (re)creates, starts, and attaches to containers for a service. The second runs a one-time command against a service. When you do docker-compose run nginx, it starts db and web because nginx depends on them, then runs a single command in a new nginx container and exits. So you have to use docker-compose up nginx to get what you want.
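As a quick usage sketch with the compose file above:

# Starts nginx plus its dependencies (web and db), but not celery or celery-beat
docker-compose up -d nginx

# By contrast, this runs a one-off command in a fresh nginx container;
# dependencies are started, but ports are not published unless --service-ports is given
docker-compose run nginx nginx -t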

Updating LetsEncrypt certificates in two different docker containers

I have a few containers that support HTTPS connections:
certbot:
  image: certbot/certbot
  restart: unless-stopped
  volumes:
    - ./certbot/conf:/etc/letsencrypt
    - ./certbot/www:/var/www/certbot
  # Renew certificates every 12 hours.
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
It mounts the LetsEncrypt certificates and renews them every 12 hours.
Also, I have two additional containers:
nginx:
  restart: always
  build: ./nginx
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - db-data:/var/lib/postgresql
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    - ./nginx/app.conf:/etc/nginx/conf.d/app.conf:ro
    - frontend-webroot:/var/www/app.com/public_html/:ro
    - ./certbot/conf:/etc/letsencrypt
    - ./certbot/www:/var/www/certbot
  depends_on:
    - api
    - frontend
  # Reload nginx every 6 hours to make sure everything is OK.
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
which mounts the certificates and reloads nginx once every 6 hours (as recommended),
and another container:
api:
  container_name: api
  restart: always
  build: ./web
  ports:
    - "9002:9002"
  volumes:
    - /usr/src/app/app/static
    - ./certbot/conf:/etc/letsencrypt
    - ./certbot/www:/var/www/certbot
  depends_on:
    - postgres
This container provides the REST API, and I'm not sure it's a great idea to reload it every X hours just to pick up renewed LetsEncrypt certificates.
What is wrong with this architecture? And how can it be improved so the REST API container doesn't need reloading but can still work with the certificates?
UPDATE
What I expect from my app:
Both the home pages and the API calls should be available, for example:
GET https://localhost:9000/api/endpoint # API call
GET https://localhost:443/home # Home page from another container with React app
etc.
