HTTPS not working when using an SSL certificate with nginx in Docker

I want to serve my app over HTTPS with the Docker nginx image, but it doesn't work. I created an SSL certificate with OpenSSL on my server like this:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt
and I entered my server's IP address when answering the questions.
On my server I have a docker-compose.yml which creates the nginx container. My docker-compose.yml is as follows:
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx
depends_on:
- postgres
- monapp
volumes:
- /home/deployer/util/config/nginx-conf/nginx.conf:/etc/nginx/conf.d/default.conf
- /etc/ssl/certs:/etc/ssl/certs
- /etc/ssl/private:/etc/ssl/private
network_mode: host
# ports:
# - 80:80
# - 443:443
........
The volumes with the certificates are correctly passed to my container (I verified by browsing inside the container with docker exec -it nginx bash), and my container's IP address is the same as my server's IP because of network_mode: host.
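For reference, one way to double-check that the mounted files are actually visible inside the container is (paths as in the question):
docker exec nginx ls -l /etc/ssl/certs/nginx-selfsigned.crt /etc/ssl/private/nginx-selfsigned.key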
My nginx configuration file nginx.conf is:
#-------- To redirect to ssl -------
#server {
#    listen 80;
#    listen [::]:80;
#    server_name 159.65.124.219;
#    location / {
#        proxy_pass http://159.65.124.219:3000;
#    }
#}
server {
    listen 443 ssl;
    server_name 159.65.124.219;

    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        proxy_pass http://159.65.124.219:3001/;
        error_log /var/log/front_end_errors.log;
    }
}
So when I go to https://159.65.124.219:443/ I just want it to take me to http://159.65.124.219:3001/, but unfortunately this is not working and I don't know why. I have been searching on a lot of forums for more than a week. Please help, and thanks in advance.
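One way to narrow a problem like this down is to test the TLS endpoint directly from the host, which separates certificate problems from proxying problems (a sketch; the IP is the one from the question):
# Does nginx answer on 443 and present the self-signed certificate?
openssl s_client -connect 159.65.124.219:443 </dev/null
# Does a full request work when certificate validation is skipped (-k)?
curl -vk https://159.65.124.219/
If openssl s_client cannot connect at all, the problem lies in the container or port setup rather than in the ssl directives.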

Related

Can I deploy a container registry and host an HTTPS webserver on the same system?

I am running an HTTPS webserver. On the same host, I would like to run a Docker container registry.
According to this tutorial, I need to run this command:
docker run -d \
  --restart=always \
  --name registry \
  -v "$(pwd)"/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -p 443:443 \
  registry:2
But my nginx server already has 443 bound, so I guess I can't run the container registry on this port. What are my options here? Can I just use something other than 443?
You could use Nginx as a proxy server and have (sub)domains pointing to the two different services (the webserver and the Docker container registry).
Step 1 : Set up domain names
DNS: registry.mycompany.com to the IP address of the host
DNS: www.mycompany.com to the IP address of the host
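In zone-file notation, those two records would look roughly like this (203.0.113.10 stands in for the host's real address):
registry.mycompany.com.  IN  A  203.0.113.10
www.mycompany.com.       IN  A  203.0.113.10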
Step 2 : Configure Nginx as a proxy server
Nginx sites.conf
# Main Server
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

# The Webserver
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.mycompany.com;

    ssl_certificate /etc/ssl/private/star_mycompany_com.chained.crt;
    ssl_certificate_key /etc/ssl/private/star_mycompany_com.key;

    access_log /var/log/nginx/webserver_access.log;
    error_log /var/log/nginx/webserver_error.log;

    location / {
        proxy_pass http://172.30.0.3:80/;
    }
}

# The Registry
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name registry.mycompany.com;

    ssl_certificate /etc/ssl/private/star_mycompany_com.chained.crt;
    ssl_certificate_key /etc/ssl/private/star_mycompany_com.key;

    access_log /var/log/nginx/registry_access.log;
    error_log /var/log/nginx/registry_error.log;

    location / {
        proxy_pass http://172.30.0.2:80/;
    }
}
In the docker-compose.yml of the registry, put it on IP 172.30.0.2, and in the docker-compose.yml of the webserver, put it on IP 172.30.0.3, as in the sketch below.
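A minimal sketch of what that could look like for the registry service (the registry image is taken from the question; the network block mirrors the proxy's compose file in Step 3):
version: "3.9"
services:
  registry:
    image: registry:2
    networks:
      my-net:
        ipv4_address: 172.30.0.2
networks:
  my-net:
    external: true
    name: cops-net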
Step 3 : Run Nginx itself in a Docker Container
docker-compose.yml
version: "3.9"
services:
proxyserver:
image: nginx:latest
container_name: Proxyserver
working_dir: /usr/share/nginx/html
ports:
- "80:80"
- "443:443"
volumes:
- ./etc/nginx/conf.d:/etc/nginx/conf.d:ro
- ./etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./etc/ssl/private:/etc/ssl/private
- ./var/log/nginx:/var/log/nginx
networks:
my-net:
ipv4_address: 172.30.0.254
networks:
my-net:
external: true
name: cops-net
Step 4 : Create your external Docker Network
create_network.sh
docker network create \
  --driver=bridge \
  --subnet=172.30.0.0/16 \
  --attachable \
  --gateway=172.30.0.1 \
  cops-net
(The network's real name has to match the name: field of the external network in the compose file; my-net is only the alias used inside the compose file.)
Step 5 : Start everything up
Start the container with the webserver
Start the container with the registry
Start the proxyserver
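To verify the result, the registry's standard /v2/ API root can be queried through the proxy (a sketch; an empty JSON body with HTTP 200 means the registry is reachable):
curl -i https://registry.mycompany.com/v2/
curl -I https://www.mycompany.com/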

How to enable HTTPS on AWS EC2 running an NGINX Docker container?

I have an EC2 instance on AWS that runs Amazon Linux 2.
On it, I installed Git, docker, and docker-compose. Once done, I cloned my repository and ran docker-compose up to get my production environment up. I go to the public DNS, and it works.
I now want to enable HTTPS onto the site.
My project has a React frontend served by an nginx-alpine container. The backend is a NodeJS server.
This is my nginx.conf file:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://${PROJECT_NAME}_backend:${NODE_PORT}/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Here's my docker-compose.yml file:
version: "3.7"
services:
##############################
# Back-End Container
##############################
backend: # Node-Express backend that acts as an API.
container_name: ${PROJECT_NAME}_backend
init: true
build:
context: ./backend/
target: production
restart: always
environment:
- NODE_PATH=${EXPRESS_NODE_PATH}
- AWS_REGION=${AWS_REGION}
- NODE_ENV=production
- DOCKER_BUILDKIT=1
- PORT=${NODE_PORT}
networks:
- client
##############################
# Front-End Container
##############################
nginx:
container_name: ${PROJECT_NAME}_frontend
build:
context: ./frontend/
target: production
args:
- NODE_PATH=${REACT_NODE_PATH}
- SASS_PATH=${SASS_PATH}
restart: always
environment:
- PROJECT_NAME=${PROJECT_NAME}
- NODE_PORT=${NODE_PORT}
- DOCKER_BUILDKIT=1
command: /bin/ash -c "envsubst '$$PROJECT_NAME $$NODE_PORT' < /etc/nginx/conf.d/nginx.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
expose:
- "80"
ports:
- "80:80"
depends_on:
- backend
networks:
- client
##############################
# General Config
##############################
networks:
client:
I know there's a Docker image for certbot, but I'm not sure how to use it. I'm also worried about the way I'm proxying requests to /api/ to the server over http. Will that also give me any problems?
Edit:
Attempt #1: Traefik
I created a Traefik container to route all traffic through HTTPS.
version: '2'
services:
  traefik:
    image: traefik
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/traefik/traefik.toml:/traefik.toml
      - /opt/traefik/acme.json:/acme.json
    container_name: traefik
networks:
  web:
    external: true
For the toml file, I added the following:
debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https", "http"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]

[retry]

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "ec2-00-000-000-00.eu-west-1.compute.amazonaws.com"
watch = true
exposedByDefault = false

[acme]
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
    entryPoint = "http"
I added this to my docker-compose production file:
labels:
  - "traefik.docker.network=web"
  - "traefik.enable=true"
  - "traefik.basic.frontend.rule=Host:ec2-00-000-000-00.eu-west-1.compute.amazonaws.com"
  - "traefik.basic.port=80"
  - "traefik.basic.protocol=https"
I ran docker-compose up for the Traefik container, and then ran docker-compose up on my production image. I got the following error:
unable to obtain acme certificate
I'm reading the Traefik docs and apparently there's a way to configure the toml file specifically for Amazon ECS: https://docs.traefik.io/configuration/backends/ecs/
Am I on the right track?
The easiest way would be to set up an ALB (Application Load Balancer) and use it for HTTPS:
Create the ALB
Add a 443 listener to the ALB
Generate a certificate using AWS Certificate Manager
Set the certificate as the default cert for the load balancer
Create a target group
Add your EC2 instance to the target group
Point the ALB to the target group
Requests will then be served over HTTPS by the ALB; a rough sketch of the equivalent AWS CLI calls follows.
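This is only an illustration of the steps above; every name, ID, and ARN below is a placeholder, and the certificate ARN would come from AWS Certificate Manager:
# Target group pointing at the EC2 instance's HTTP port
aws elbv2 create-target-group --name web-tg --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0
aws elbv2 register-targets --target-group-arn <web-tg-arn> --targets Id=i-0123456789abcdef0
# HTTPS listener on the ALB, terminating TLS with the ACM certificate
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<web-tg-arn>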
Enabling SSL is done by following the tutorial Nginx and Let's Encrypt with Docker in Less Than 5 Minutes. I ran into some issues while following it, so I will try to clarify some things here.
The steps include adding the following to the docker-compose.yml:
##############################
# Certbot Container
##############################
certbot:
  image: certbot/certbot:latest
  volumes:
    - ./frontend/data/certbot/conf:/etc/letsencrypt
    - ./frontend/data/certbot/www:/var/www/certbot
As for the Nginx Container section of the docker-compose.yml, it should be amended to include the same volumes added to the Certbot Container, as well as the ports and expose configurations:
service_name:
  container_name: container_name
  image: nginx:alpine
  command: /bin/ash -c "exec nginx -g 'daemon off;'"
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  expose:
    - "80"
    - "443"
  ports:
    - "80:80"
    - "443:443"
  networks:
    - default
The data folder may be saved anywhere else, but make sure you know where it is and reference it properly when it is reused later. In this example, it is simply saved in the same directory as the docker-compose.yml file.
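For orientation, with the paths above the layout ends up roughly as follows:
./data/certbot/
  conf/   mounted at /etc/letsencrypt in both containers
  www/    mounted at /var/www/certbot (the ACME webroot)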
Once the above configurations are in place, a couple of steps must be taken to initialize the issuance of the certificates.
First, your Nginx configuration (default.conf) must be changed to accommodate the domain verification request:
server {
    listen 80;
    server_name example.com www.example.com;
    server_tokens off;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri /index.html;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Once the Nginx configuration file is amended, a dummy certificate is created so that Let's Encrypt validation can take place. There is a script that does all of this automatically; it can be downloaded into the root of the project using curl and then amended to suit the environment. The script also needs to be made executable using chmod:
curl -L https://raw.githubusercontent.com/wmnnd/nginx-certbot/master/init-letsencrypt.sh > init-letsencrypt.sh && chmod +x init-letsencrypt.sh
Once the script is downloaded, it is to be amended as follows:
#!/bin/bash

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

-domains=(example.org www.example.org)
+domains=(example.com www.example.com)
rsa_key_size=4096
data_path="./data/certbot"
-email="" # Adding a valid address is strongly recommended
+email="admin@example.com" # Adding a valid address is strongly recommended
staging=0 # Set to 1 when testing setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi

if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
  echo
fi

echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  openssl req -x509 -nodes -newkey rsa:1024 -days 1\
    -keyout '$path/privkey.pem' \
    -out '$path/fullchain.pem' \
    -subj '/CN=localhost'" certbot
echo

echo "### Starting nginx ..."
-docker-compose up --force-recreate -d nginx
+docker-compose -f docker-compose.yml up --force-recreate -d service_name
echo

echo "### Deleting dummy certificate for $domains ..."
-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo

echo "### Requesting Let's Encrypt certificate for $domains ..."
# Join $domains to -d args
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Select appropriate email arg
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi

-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  certbot certonly --webroot -w /var/www/certbot \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot
echo

echo "### Reloading nginx ..."
-docker-compose exec nginx nginx -s reload
+docker-compose exec service_name nginx -s reload
I have made sure to always include the -f flag with the docker-compose command, just in case someone doesn't know what to change if they have a custom-named docker-compose.yml file. I have also made sure to use service_name for the service, to differentiate between the service name and the Nginx command, unlike the tutorial.
Note: If unsure whether the setup is working, set staging to 1 to avoid hitting request limits. It is important to set it back to 0 once testing is done and to redo all steps from amending the init-letsencrypt.sh file onward. Once testing is done and staging is set back to 0, stop the previously running containers and delete the data folder so that the proper initial certification can take place:
$ docker-compose -f docker-compose.yml down && yes | docker system prune -a --volumes && sudo rm -rf ./data
Once the certificates are ready to be initialized, the script is run using sudo; it is very important to use sudo, as permission issues will otherwise occur inside the containers.
$ sudo ./init-letsencrypt.sh
After the certificate is issued, there is the matter of automatically renewing it; two things need to be done:
In the Nginx container, Nginx reloads the newly obtained certificates through the following amendment:
service_name:
...
- command: /bin/ash -c "exec nginx -g 'daemon off;'"
+ command: /bin/ash -c "while :; do sleep 6h & wait $${!}; nginx -s reload; done & exec nginx -g 'daemon off;'"
...
In the Certbot Container section, the following is added so that the container checks whether the certificate is up for renewal every twelve hours, as recommended by Let's Encrypt:
certbot:
...
+ entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot; sleep 12h & wait $${!}; done;'"
Before running docker-compose -f docker-compose.yml up, the ownership of the data folder should be changed to ec2-user; this avoids running into permission errors when running docker-compose -f docker-compose.yml up, or running it in sudo mode:
sudo chown ec2-user:ec2-user -R /path/to/data/
Don't forget to add a CAA record in your DNS provider for Let's Encrypt. You may read here for more information on how to do so.
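In zone-file notation, a CAA record authorizing Let's Encrypt would look something like this (the domain is a placeholder):
example.com.  IN  CAA  0 issue "letsencrypt.org"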
If you run into any issues with the Nginx container because you are substituting variables and $server_name and $request_uri are not appearing properly, you may refer to this issue.

Docker: "Cannot assign requested address while connecting to upstream"

I have Docker running on a machine with the IP address fd42:1337::31. One container is an nginx reverse proxy with the port mapping 443:443; in its configuration file it uses proxy_pass, depending on the server name, to reach other ports on the same machine, e.g.
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name plex.mydomain.tld;
    location / {
        proxy_pass http://[fd42:1337::31]:32400;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name file.mydomain.tld;
    location / {
        proxy_pass http://[fd42:1337::31]:2020;
    }
}
These other ports belong to Bottle (Python) servers or other containers with mapped ports.
I've started this container with the command
docker run -d -p 443:443 (volume mappings) --name reverseproxy nginx
and it has served me well for a year.
I've now decided to work with docker-compose and have the following configuration file:
version: '3'
services:
  reverseproxy:
    image: "nginx"
    ports:
      - "443:443"
    volumes:
      (volume mappings)
When I shut down the original container and start my new one with docker-compose up, it starts, but every request gives me something like this:
2019/02/13 17:04:43 [crit] 6#6: *1 connect() to [fd42:1337::31]:32400 failed (99: Cannot assign requested address) while connecting to upstream, client: 192.168.178.126, server: plex.mydomain.tld, request: "GET / HTTP/1.1", upstream: "http://[fd42:1337::31]:32400/", host: "plex.mydomain.tld"
Why is the new container behaving differently? What do I have to change?
(I know I can have a virtual network mode to connect to other containers directly, but my proxy is supposed to connect to some services that are not inside containers (but on the same metal).)

Traefik and Nginx with HTTPS on Docker / 400 Bad Request

I'm trying to build a stack with Traefik and Nginx, based on Docker. Without HTTPS everything is fine, but I get an error as soon as I turn on the HTTPS configuration.
I'm getting this error from Nginx on example.com: 400 Bad Request / The plain HTTP request was sent to HTTPS port. In the address bar I can see the green lock saying the connection is secure.
Certbot works fine, so I have a real SSL certificate inside the proper folder.
I can get to the Traefik dashboard when I visit traefik.example.com, but I have to accept the browser's no-SSL warning, and the dashboard also works without HTTPS.
docker-compose.yml
version: '3.4'
services:
  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml
      - ../letsencrypt:/etc/letsencrypt
    labels:
      - traefik.backend=traefik
      - traefik.frontend.rule=Host:traefik.example.com
      - traefik.port=8080
    networks:
      - traefik

  nginx:
    image: nginx:latest
    volumes:
      - ../www:/var/www
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ../letsencrypt:/etc/letsencrypt
    labels:
      - traefik.backend=nginx
      - traefik.frontend.rule=Host:example.com
      - traefik.port=80
      - traefik.port=443
    networks:
      - traefik

networks:
  traefik:
    driver: overlay
    external: true
    attachable: true
traefik.toml
defaultEntryPoints = ["http", "https"]

[web]
address = ":8080"

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/etc/letsencrypt/live/example.com/fullchain.pem"
        keyFile = "/etc/letsencrypt/live/example.com/privkey.pem"

[docker]
domain = "example.com"
watch = true
exposedByDefault = true
swarmMode = false
nginx.conf
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name www.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    root /var/www/public;
    index index.html;
}
Thanks for your help.
First, there is no need to have the SSL redirection configured in both Traefik and Nginx. Also, the Traefik frontend matches only the non-www variant, while the backend app expects www. Finally, the Traefik web provider is deprecated, so the newer api provider should be used instead; a sketch of the simplified wiring follows.
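One way to apply that advice, sketched with the question's Traefik v1 label syntax (not a verified drop-in fix), is to terminate TLS in Traefik only and let Nginx serve plain HTTP on a single port:
nginx:
  image: nginx:latest
  labels:
    - traefik.backend=nginx
    # match both host variants so www traffic also reaches the backend
    - traefik.frontend.rule=Host:example.com,www.example.com
    # a single backend port; Traefik speaks plain HTTP to Nginx
    - traefik.port=80
  networks:
    - traefik
The nginx.conf then shrinks to a single server block listening on 80, with no ssl_certificate directives and no redirects.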
As I just stumbled upon a similar problem with Traefik v2 (400 Bad Request / The plain HTTP request was sent to HTTPS port, with an Nginx error log stating 400 client sent plain HTTP request to HTTPS port while reading client request headers) and scratched my head over it, I finally found the source of the error. It was not that the TLS certs were invalid or that something in the transport was broken, but that the wiring between routers, services, and port mappings was off.
I had not noticed that the Nginx container in the Docker Compose stack was only listening on 80/tcp. I assumed everything was OK because I had attached the ports to Traefik load balancers, with a separate service per http/https endpoint and separate routers. This somehow did not work:
- "traefik.http.services.proxy.loadbalancer.server.port=80"
- "traefik.http.services.proxy-secure.loadbalancer.server.port=443"
As an intermediate fix I published the ports directly (8008:80 and 8443:443) and got it working, and I am still investigating what is wrong with the Traefik ports, as those should be exposed by default. This is not a solution, since those ports are now open to the outside world, but I am leaving this explanation here because I could not find anything on this topic that pointed me in the right direction; hopefully it helps someone else later on.

docker-compose --scale X nginx.conf configuration

My nginx.conf file currently has the routes defined directly:
worker_processes auto;

events { worker_connections 1024; }

http {
    upstream wordSearcherApi {
        least_conn;
        server api1:61370 max_fails=3 fail_timeout=30s;
        server api2:61370 max_fails=3 fail_timeout=30s;
        server api3:61370 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        server_name 0.0.0.0;
        location / {
            proxy_pass http://wordSearcherApi;
        }
    }
}
Is there any way to create just one service in docker-compose.yml and, when running docker-compose up --scale api=3, have nginx balance the load automatically?
Nginx
Dynamic upstreams are possible in plain Nginx (without Nginx Plus), but with tricks and limitations:
You give up the upstream directive and use a plain proxy_pass. That gives round-robin load balancing and failover, but none of the directive's extra features such as weights, failure modes, and timeouts.
Your upstream hostname must be passed to proxy_pass via a variable, and you must provide a resolver. This forces Nginx to re-resolve the hostname against the Docker network's DNS.
You lose the location/proxy_pass behaviour related to the trailing slash. In the case of reverse-proxying to bare / as in the question, it does not matter; otherwise you have to rewrite the path manually (see the references below and the sketch that follows).
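For illustration, a hedged sketch of such a manual rewrite when proxying a sub-path (the /api/ prefix is hypothetical; with a variable in proxy_pass, Nginx forwards the original URI unchanged, so the prefix must be stripped by hand):
location /api/ {
    resolver 127.0.0.11 valid=5s;
    set $upstream app;
    # strip the /api/ prefix before passing the request upstream
    rewrite ^/api/(.*)$ /$1 break;
    proxy_pass http://$upstream:80;
}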
Let's see how it works.
docker-compose.yml
version: '2.2'
services:
  reverse-proxy:
    image: nginx:1.15-alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 8080:8080
  app:
    # A container that exposes an API to show its IP address
    image: containous/whoami
    scale: 4
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    access_log /dev/stdout;
    error_log /dev/stderr;

    server {
        listen 8080;
        server_name localhost;

        resolver 127.0.0.11 valid=5s;
        set $upstream app;

        location / {
            proxy_pass http://$upstream:80;
        }
    }
}
Then...
docker-compose up -d
seq 10 | xargs -I -- curl -s localhost:8080 | grep "IP: 172"
...produces something like the following which indicates the requests are distributed across 4 app containers:
IP: 172.30.0.2
IP: 172.30.0.2
IP: 172.30.0.3
IP: 172.30.0.3
IP: 172.30.0.6
IP: 172.30.0.5
IP: 172.30.0.3
IP: 172.30.0.6
IP: 172.30.0.5
IP: 172.30.0.5
References:
Nginx with dynamic upstreams
Using Containers to Learn Nginx Reverse Proxy
Dynamic Nginx configuration for Docker with Python
Traefik
Traefik relies on Docker API directly and may be a simpler and more configurable option. Let's see it in action.
docker-compose.yml
version: '2.2'
services:
  reverse-proxy:
    image: traefik
    # Enables the web UI and tells Traefik to listen to docker
    command: --api --docker
    ports:
      - 8080:80
      - 8081:8080 # Traefik's web UI, enabled by --api
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
  app:
    image: containous/whoami
    scale: 4
    labels:
      - "traefik.frontend.rule=Host:localhost"
Then...
docker-compose up -d
seq 10 | xargs -I -- curl -s localhost:8080 | grep "IP: 172"
...also produces output indicating that the requests are distributed across the 4 app containers:
IP: 172.31.0.2
IP: 172.31.0.5
IP: 172.31.0.6
IP: 172.31.0.4
IP: 172.31.0.2
IP: 172.31.0.5
IP: 172.31.0.6
IP: 172.31.0.4
IP: 172.31.0.2
IP: 172.31.0.5
In the Traefik UI (http://localhost:8081/dashboard/ in the example) you can see that it has recognised the 4 app containers.
References:
The Traefik Quickstart (Using Docker)
It's not possible with your current config, since it's static. You have two options -
1. Use Docker Engine swarm mode - you can define replicas, and the swarm's internal DNS will automatically balance the load across those replicas.
Ref - https://docs.docker.com/engine/swarm/
2. Use the well-known jwilder nginx proxy - this image listens on the Docker socket and uses Go templates to dynamically rewrite your nginx config when you scale your containers up or down; a sketch follows.
Ref - https://github.com/jwilder/nginx-proxy
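A minimal sketch of the second option, based on that project's documented usage (the whoami backend is only for illustration):
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
# Backends announce themselves through the VIRTUAL_HOST environment variable:
docker run -d -e VIRTUAL_HOST=whoami.local containous/whoami
Scaling such a backend up or down makes the proxy regenerate its upstream list automatically.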
