Balancing several gateways in Docker

Is there any way to run several gateway Docker containers with access balanced across them via the same URL?
I currently have the following compose file:
version: "3.9"
services:
  gateway:
    image: "foo/gateway"
    ports:
      - "8888"
    networks:
      - "my-net"
    deploy:
      replicas: 5
    x-scaling: "4-7"
  factorial:
    image: "foo/factorial"
    expose:
      - "8081"
    networks:
      - "my-net"
    deploy:
      replicas: 3
    x-scaling: "2-4"
  fibonacci:
    image: "foo/fibonacci"
    expose:
      - "8082"
    networks:
      - "my-net"
    deploy:
      replicas: 2
    x-scaling: "1-3"
networks:
  my-net:
    driver: "bridge"
Now I have 3 factorial containers, 2 fibonacci containers, and 5 gateway containers, each gateway on a different host port. Each gateway container has access to some of the factorial/fibonacci replicas, but to reach a gateway I have to specify the port of one particular gateway container.
Is there any way to provide one URL for the gateway and have requests balanced between the gateway replicas by Docker?
Actually, I'm planning to create an ECS cluster using docker-compose with the ecs context, but as far as I can see this config doesn't provide any balancing.
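For what it's worth, when a file like this is deployed in swarm mode (docker stack deploy) rather than through the ECS context, Docker's ingress routing mesh already balances a published port across all replicas of a service, so no extra proxy is needed. A sketch, assuming the gateway process listens on 8888:

```yaml
# deploy with: docker stack deploy -c docker-compose.yml mystack
services:
  gateway:
    image: "foo/gateway"
    ports:
      - "8888:8888"   # fixed public port; the routing mesh spreads
                      # connections across all 5 replicas
    networks:
      - "my-net"
    deploy:
      replicas: 5
networks:
  my-net:
    driver: overlay   # stacks need an overlay network, not bridge
```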

You can use nginx. Put your backends behind nginx and load balance them there. Something like below (note that the location block must sit inside a server block):
upstream backend {
    server factorial:8081;
    server fibonacci:8082;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}

Related

Nginx upstream doesn't work with docker deploy stack

I'm trying to deploy a stack with docker.
Here is how my stack works:
nginx-proxy (redirects user requests to the right container)
website (simple nginx serving a website)
api (django application, launched with gunicorn)
nginx-api (serves static and uploaded files, and forwards to the api container if the request is an API endpoint)
This is my docker-compose.yml:
version: '3.2'
services:
  website:
    container_name: nyl2pronos-website
    image: nyl2pronos-website
    restart: always
    build:
      context: nyl2pronos_webapp
      dockerfile: Dockerfile
    volumes:
      - ./logs/nginx-website:/var/log/nginx
    expose:
      - "80"
    deploy:
      replicas: 10
      update_config:
        parallelism: 5
        delay: 10s
  api:
    container_name: nyl2pronos-api
    build:
      context: nyl2pronos_api
      dockerfile: Dockerfile
    image: nyl2pronos-api
    restart: always
    ports:
      - 8001:80
    expose:
      - "80"
    depends_on:
      - db
      - memcached
    environment:
      - DJANGO_PRODUCTION=1
    volumes:
      - ./data/api/uploads:/code/uploads
      - ./data/api/static:/code/static
  nginx-api:
    image: nginx:latest
    container_name: nyl2pronos-nginx-api
    restart: always
    expose:
      - "80"
    volumes:
      - ./data/api/uploads:/uploads
      - ./data/api/static:/static
      - ./nyl2pronos_api/config:/etc/nginx/conf.d
      - ./logs/nginx-api:/var/log/nginx
    depends_on:
      - api
  nginx-proxy:
    image: nginx:latest
    container_name: nyl2pronos-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy:/etc/nginx/conf.d
      - /etc/letsencrypt:/etc/letsencrypt
      - ./logs/nginx-proxy:/var/log/nginx
    deploy:
      placement:
        constraints: [node.role == manager]
    depends_on:
      - nginx-api
      - website
When I use docker-compose up everything works fine.
But when I try to deploy with docker stack deploy --compose-file=docker-compose.yml prod, my nginx config files can't find the different upstreams.
This is the error provided by my service nginx-api:
2019/03/23 17:32:41 [emerg] 1#1: host not found in upstream "api" in /etc/nginx/conf.d/nginx.conf:2
See below my nginx.conf:
upstream docker-api {
    server api;
}
server {
    listen 80;
    server_name xxxxxxxxxxxxxx;

    location /static {
        autoindex on;
        alias /static/;
    }
    location /uploads {
        autoindex on;
        alias /uploads/;
    }
    location / {
        proxy_pass http://docker-api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
If you see something wrong in my configuration or something I can do better, let me know!
This is happening because the nginx-api service comes up before the api service.
But I added the depends_on option?
You are right, and that option does work for the docker-compose up case, but unfortunately not for docker stack deploy, or, as the docs put it:
The depends_on option is ignored when deploying a stack in swarm mode
with a version 3 Compose file.
OK, so what can I do now?
Nothing, it's actually not a bug.
Docker swarm nodes (your stack services) are supposed to recover automatically on error (that's why you define the restart: always option), so it should work for you anyway.
If you are using the compose file only for deploying the stack and not with docker-compose up, you may remove the depends_on option completely; it means nothing to docker stack.

Docker [emerg] 1#1: host not found in upstream after separate compose file

I have a compose file. When I run it, everything works normally:
services:
  [...]
  wordpress-1:
    depends_on:
      - database
    image: wordpress:latest
    expose:
      - 5000
    volumes:
      - ./site1/:/var/www/html/
  [...]
  nginx:
    container_name: nginx_
    build:
      context: ./services/nginx
      dockerfile: Dockerfile-prod
    ports:
      - 80:80
    depends_on:
      - wordpress-1
    networks:
      - my-network
  [...]
and nginx conf:
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://wordpress-1:80;
        proxy_redirect default;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
But after I separated it into two docker-compose files (one for the wordpress-1 service, and one for the nginx service), running the compose file containing nginx gives me this error: [emerg] 1#1: host not found in upstream "wordpress-1"
Can you help me?
Thanks!
Docker Compose by default creates a network per project (i.e. per docker-compose file).
If you really need to have separate docker-compose files, you can create a shared network between the services like this:
$ cat a/docker-compose.yml
version: '3.5'
services:
  a:
    image: alpine
    command: sleep 9999
    networks: ["mynet"]
networks:
  mynet:
    name: shared-net
$ cat b/docker-compose.yml
version: '3.5'
services:
  b:
    image: alpine
    command: sleep 9999
    networks: ["mynet"]
networks:
  mynet:
    name: shared-net
After starting each, you can ping from b to a:
$ docker exec -it b_b_1 ping -c 1 a_a_1
PING a_a_1 (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.081 ms
--- a_a_1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.081/0.081/0.081 ms
It keeps working across restarts, for example if you restart a container.
Please note that if nginx can't find a host, it's an emerg-level error and nginx may stop completely; this can be a problem around service restarts (as DNS resolution no longer works).
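A related variant, assuming the same layout: create the network once by hand and mark it external in both compose files, so that neither project tries to create or remove it on up/down:

```yaml
# beforehand: docker network create shared-net
version: '3.5'
services:
  a:
    image: alpine
    command: sleep 9999
    networks: ["mynet"]
networks:
  mynet:
    external: true
    name: shared-net
```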

Docker Nginx reverse proxy to separate container

I'm having trouble creating a reverse proxy and having it point at apps that are in other containers.
What I have now is a docker-compose file for Nginx, and then I want separate containers for several different apps and have Nginx direct traffic to those apps.
My Nginx docker-compose is:
version: "3"
services:
  nginx:
    image: nginx:alpine
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
My default.conf is:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
server {
    listen 80;
    server_name www.mydomain.com;

    location /confluence {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.1.50:8090/confluence;
    }
}
I can access confluence directly at: http://192.168.1.50:8090/confluence
My compose for confluence is:
version: "3"
services:
db:
image: postgres:9.6
container_name: pg_confluence
env_file:
- env.list
ports:
- "5434:5432"
volumes:
- ./pg_conf.sql:/docker-entrypoint-initdb.d/pg_conf.sql
- dbdata:/var/lib/postgresql/data
confluence:
image: my_custom_image/confluence:6.11.0
container_name: confluence
volumes:
- confluencedata:/var/atlassian/application-data/confluence
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
environment:
- JVM_MAXIMUM_MEMORY=2g
ports:
- "8090:8090"
depends_on:
- db
volumes:
confluencedata:
dbdata:
I am able to see the Nginx "Welcome" screen when I hit mydomain.com, but mydomain.com/confluence returns a not found.
So it looks like Nginx is running, just not sending the traffic to the other container properly.
=== Update With Solution ===
I ended up switching to Traefik instead of Nginx. When I take the next step and start learning k8s this will help as well.
Although these network settings are what you need even if you stick with Nginx, I just didn't test them against Nginx, so hopefully they are helpful no matter which one you end up using.
For the confluence docker-compose.yml I added:
networks:
  proxy:
    external: true
  internal:
    external: false
services:
  confluence:
    ...
    networks:
      - internal
      - proxy
  db:
    ...
    networks:
      - internal
And for the traefik docker-compose.yml I added:
networks:
  proxy:
    external: true
services:
  reverse-proxy:
    networks:
      - proxy
I had to create the network manually with:
docker network create proxy
This is not really the correct way to use Docker.
If you are in a production environment, use a real orchestration tool (nowadays Kubernetes is the way to go).
If you are on your own computer, you can reference the name of a container (or an alias) only if you use the same network AND that network is not the default one.
One way is to have only one docker-compose file.
Another way is to use the same network across your docker-compose files:
Create a network with docker network create --driver bridge my_network, then use it in each docker-compose file you have:
networks:
  default:
    external:
      name: my_network

Docker service redirection based on url path

I am using docker swarm and deploying 3 Tomcat services, each running on 8443 within its container and on 8444, 8445, and 8446 on the host.
I am looking to use a proxy server running on 8443 which will redirect each incoming request to the corresponding service based on the URL path:
https://hostname:8443/a -> https://hostname:8444/a
https://hostname:8443/b -> https://hostname:8445/b
https://hostname:8443/c -> https://hostname:8446/c
My sample docker-compose file:
version: "3"
services:
  tomcat1:
    image: tomcat:1
    ports:
      - "8446:8443"
  tomcat2:
    image: tomcat:1
    ports:
      - "8444:8443"
  tomcat3:
    image: tomcat:1
    ports:
      - "8445:8443"
I have explored Traefik and nginx but was not able to find how to reroute based on the URL path. Any suggestions?
You could use Traefik, with rules based on the Host and Path labels:
http://docs.traefik.io/basics/#frontends
Something like:
version: "3"
services:
  traefik:
    image: traefik
    command: --web --docker --docker.swarmmode --docker.watch --docker.domain=hostname
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
  tomcat1:
    image: tomcat:1
    labels:
      - traefik.backend=tomcat1
      - traefik.frontend.rule=Host:hostname;PathPrefixStrip:/a
      - traefik.port=8443
You can try the way I did it using nginx.
ON UBUNTU
Inside /etc/nginx/sites-available you will find the default file.
Inside the server block, add new location blocks:
server {
    listen 8443;

    location /a {
        proxy_pass http://[::]:8444/;
        # I have commented these out because I don't know if you need them:
        #proxy_http_version 1.1;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection keep-alive;
        #proxy_set_header Host $host;
        #proxy_cache_bypass $http_upgrade;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-Forwarded-Proto $scheme;
    }
    location /b {
        proxy_pass http://[::]:8445/;
    }
    location /c {
        proxy_pass http://[::]:8446/;
    }
}
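If the Tomcats and the proxy share a compose network, a hedged alternative (service names taken from the compose file in the question) is to proxy to the service names on the container port, which makes the host-port mappings 8444-8446 unnecessary:

```nginx
server {
    listen 8443;

    # Docker's embedded DNS resolves the compose service names;
    # the containers all listen on 8443 internally (over TLS)
    location /a { proxy_pass https://tomcat1:8443; }
    location /b { proxy_pass https://tomcat2:8443; }
    location /c { proxy_pass https://tomcat3:8443; }
}
```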

Multiple docker-compose sharing network with a not yet known host for nginx

I use multiple docker-compose files:
one for running on the same network: postgres and nginx
=> this set of containers is supposed to be always running
one for each asp core web site (each one on a specific port)
=> these containers are updated through a CI/CD pipeline (VSTS)
Because Nginx needs to know the hostname when defining the upstream, if the asp core container is not running then its hostname is not known, and nginx throws an error on docker-compose up:
nginx | 2018/01/04 15:59:17 [emerg] 1#1: host not found in upstream
"webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx | nginx: [emerg] host not found in upstream
"webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx exited with code 1
And obviously, if the asp core container is already running, then nginx knows the hostname webportalstage and everything works fine. But that starting sequence is not something I can rely on.
Is there any solution to start nginx with a not-yet-known hostname in the upstream?
Here is my nginx.conf file:
worker_processes 4;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream webportalstage {
        server webportalstage:5001;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://webportalstage;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
And both docker-compose files :
Nginx + Postgres:
version: "3"
services:
  proxy:
    image: myPrivateRepo:latest
    ports:
      - "80:80"
    container_name: nginx
    networks:
      aspcore:
        aliases:
          - nginx
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_PASSWORD=myPWD
      - POSTGRES_USER=postgres
    ports:
      - "5432:5432"
    container_name: postgres
    networks:
      aspcore:
        aliases:
          - postgres
networks:
  aspcore:
    driver: bridge
One of my asp core web sites:
version: "3"
services:
  webportal:
    image: myPrivateRepo:latest
    environment:
      - ASPNETCORE_ENVIRONMENT=Staging
    container_name: webportal
    networks:
      common_aspcore:
        aliases:
          - webportal
networks:
  common_aspcore:
    external: true
Well, I use the following hack in a similar situation:
location / {
    set $docker_host "webportalstage";
    proxy_pass http://$docker_host:5001;
    ...
}
I'm not sure if it works with upstream, but it probably should.
I know this is not the best solution, but I didn't find any better.
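One caveat with this hack: when proxy_pass uses a variable, nginx resolves the hostname at request time instead of at startup, and that requires a resolver directive; inside a Docker network the embedded DNS server listens on 127.0.0.11. A sketch:

```nginx
location / {
    # Docker's embedded DNS; re-resolve every 10s so a restarted
    # container with a new IP is picked up
    resolver 127.0.0.11 valid=10s;
    set $docker_host "webportalstage";
    proxy_pass http://$docker_host:5001;
}
```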
I finally used the extra_hosts feature to define a static IP within my nginx+postgres docker-compose.yml file:
extra_hosts:
  - "webportalstage:10.5.0.20"
and set the same static IP in my asp core docker-compose file.
It works, but it's not as generic as I would like.
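For the static-IP side, a sketch of what compose can do (the subnet is an assumption): a fixed address can be pinned per service, provided the shared network was created with a matching subnet:

```yaml
# beforehand: docker network create --subnet 10.5.0.0/24 common_aspcore
services:
  webportal:
    networks:
      common_aspcore:
        ipv4_address: 10.5.0.20
networks:
  common_aspcore:
    external: true
```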
