I am using Docker Swarm and deploying 3 Tomcat services, each running on 8443 inside its container and published on 8444, 8445 and 8446 on the host.
I am looking to use a proxy server running on 8443 which will route each incoming request to the corresponding service based on the URL path:
https://hostname:8443/a -> https://hostname:8444/a
https://hostname:8443/b -> https://hostname:8445/b
https://hostname:8443/c -> https://hostname:8446/c
My sample docker-compose file:
version: "3"
services:
  tomcat1:
    image: tomcat:1
    ports:
      - "8446:8443"
  tomcat2:
    image: tomcat:1
    ports:
      - "8444:8443"
  tomcat3:
    image: tomcat:1
    ports:
      - "8445:8443"
I have explored Traefik and nginx but was not able to find how to route based on the URL path. Any suggestions?
You could use Traefik with a frontend rule based on the Host and Path labels:
http://docs.traefik.io/basics/#frontends
Something like:
version: "3"
services:
  traefik:
    image: traefik
    command: --web --docker --docker.swarmmode --docker.watch --docker.domain=hostname
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
  tomcat1:
    image: tomcat:1
    labels:
      - traefik.backend=tomcat1
      - traefik.frontend.rule=Host:hostname;PathPrefixStrip:/a
      - traefik.port=8443
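The other two services would follow the same pattern; a minimal sketch that simply mirrors the tomcat1 labels for the /b and /c paths (not part of the original answer, adjust to your images):
  tomcat2:
    image: tomcat:1
    labels:
      - traefik.backend=tomcat2
      - traefik.frontend.rule=Host:hostname;PathPrefixStrip:/b
      - traefik.port=8443
  tomcat3:
    image: tomcat:1
    labels:
      - traefik.backend=tomcat3
      - traefik.frontend.rule=Host:hostname;PathPrefixStrip:/c
      - traefik.port=8443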
You can try the way I did it using nginx.
On Ubuntu:
Inside /etc/nginx/sites-available you will find the default file.
Inside the server block, add a new location block for each path.
server {
    listen 8443;
    # this is a comment

    location /a {
        proxy_pass http://[::]:8444/.;
        # I have commented these out because I don't know if you need them
        #proxy_http_version 1.1;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection keep-alive;
        #proxy_set_header Host $host;
        #proxy_cache_bypass $http_upgrade;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /b {
        proxy_pass http://[::]:8445/.;
    }

    location /c {
        proxy_pass http://[::]:8446/.;
    }
}
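After editing the file, you can check the syntax and apply the change; a typical sequence on Ubuntu, assuming nginx runs as the default systemd service:
sudo nginx -t
sudo systemctl reload nginx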
Related
I want to switch from Nginx as a reverse proxy to Traefik, since Traefik offers sticky sessions, which I need in a Docker Swarm environment. This is part of my Nginx setup, which worked fine:
location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 600s;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

location /auth/ {
    proxy_pass https://127.0.0.1:8443;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 600s;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
This is my traefik.toml:
debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https", "http"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      cipherSuites = [
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
        "TLS_RSA_WITH_AES_256_GCM_SHA384"
      ]
  [entryPoints.keycloak]
    address = ":8443"
  [entryPoints.shinyproxy]
    address = ":5000"

[retry]

[docker]
  exposedByDefault = false

[acme]
  email = "langmarkus#hotmail.com"
  storage = "acme/certs.json"
  entryPoint = "https"
  onHostRule = true
  [acme.httpChallenge]
    entryPoint = "http"
And this is my compose file:
version: "3.7"
services:
  shinyproxy:
    build: /home/shinyproxy
    deploy:
      #replicas: 3
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    labels:
      - "traefik.enable=true" # Enable reverse-proxy for this service
      - "traefik.frontend.rule=Host:analytics.data-mastery.com" # Domain name for the app
      - "traefik.port=443"
    ports:
      - 5000:5000
  keycloak:
    image: jboss/keycloak
    labels:
      - "traefik.enable=true" # Enable reverse-proxy for this service
      - "traefik.frontend.rule=Host:analytics.data-mastery.com" # Domain name for the app
      - "traefik.port=443"
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /home/certs/fullchain.pem
        target: /etc/x509/https/tls.crt
      - type: bind
        source: /home/certs/privkey.pem
        target: /etc/x509/https/tls.key
      - /home/theme/:/opt/jboss/keycloak/themes/custom/
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=myadmin
      - KEYCLOAK_PASSWORD=mypassword
    ports:
      - 8443:8443
  reverseproxy:
    image: traefik:v1.7.16
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
      - ./traefik/traefik.toml:/traefik.toml # Traefik configuration file
      - ./volumes/traefik-acme:/acme # Tell Traefik to save SSL certs here
    command: --api # Enables the web UI
    ports:
      - "80:80" # The HTTP port
      - "443:443" # The HTTPS port
      - "8080:8080" # The web UI
networks:
  sp-example-net:
    driver: overlay
    attachable: true
SSL is working and my keycloak service is running here: https://analytics.data-mastery.com:8443/auth/. However, I want to achieve the same behaviour as with proxy_pass, where I do not have to use ports in the URL. What do I have to change?
In case you want to keep using the old Traefik version, you can use the stack file below (you can also get rid of the traefik.toml and use only CLI commands).
With the stack file below you will be able to access shinyproxy at analytics.data-mastery.com and keycloak at analytics.data-mastery.com/auth. The important thing here is the frontend rule that is defined: https://docs.traefik.io/routing/routers/
You also don't need to expose the ports for these services; Traefik will use the internal ones.
version: "3.7"
services:
  shinyproxy:
    build: /home/shinyproxy
    deploy:
      replicas: 3
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    labels:
      - traefik.enable=true
      - traefik.backend.loadbalancer.swarm=true
      - traefik.backend=shinyproxy
      - traefik.frontend.rule=Host:analytics.data-mastery.com;
      - traefik.port=5000
      - traefik.docker.network=sp-example-net
  keycloak:
    image: jboss/keycloak
    labels:
      - traefik.enable=true
      - traefik.backend.loadbalancer.swarm=true
      - traefik.backend=keycloak
      - traefik.frontend.rule=Host:analytics.data-mastery.com;Path:/auth
      - traefik.port=8443
      - traefik.docker.network=sp-example-net
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /home/certs/fullchain.pem
        target: /etc/x509/https/tls.crt
      - type: bind
        source: /home/certs/privkey.pem
        target: /etc/x509/https/tls.key
      - /home/theme/:/opt/jboss/keycloak/themes/custom/
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=myadmin
      - KEYCLOAK_PASSWORD=mypassword
  reverseproxy:
    image: traefik:v1.7.16
    networks:
      - sp-example-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
      - ./traefik/traefik.toml:/traefik.toml # Traefik configuration file
      - ./volumes/traefik-acme:/acme # Tell Traefik to save SSL certs here
    command:
      - '--docker'
      - '--docker.swarmmode'
      - '--docker.domain=analytics.data-mastery.com'
      - '--docker.watch'
      - '--accessLog'
      - '--checkNewVersion=false'
      - '--api'
      - '--ping.entryPoint=http'
      # if you want to get rid of the toml file entirely
      # - '--entrypoints=Name:http Address::80 Redirect.EntryPoint:https'
      # - '--entrypoints=Name:https Address::443 TLS'
      # - '--defaultentrypoints=http,https'
      # - '--acme.entrypoint=https'
      # - '--acme.email=langmarkus#hotmail.com'
      # - '--acme.storage=/var/lib/traefik/acme.json'
      # - '--acme.acmelogging=true'
      # - '--acme.httpChallenge.entryPoint=http'
      # - '--acme.domains=*.analytics.data-mastery.com,analytics.data-mastery.com'
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
networks:
  sp-example-net:
    driver: overlay
    attachable: true
If you want to jump directly to Traefik 2.1, the routers documentation linked above includes good examples for using it.
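For orientation, a minimal sketch of what the keycloak labels might look like with Traefik 2.x routers; the router/service name keycloak and the entrypoint name websecure are illustrative assumptions, not taken from the question:
    labels:
      - traefik.enable=true
      # hypothetical router/service names; adjust the entrypoint to your static configuration
      - traefik.http.routers.keycloak.rule=Host(`analytics.data-mastery.com`) && PathPrefix(`/auth`)
      - traefik.http.routers.keycloak.entrypoints=websecure
      - traefik.http.routers.keycloak.tls=true
      - traefik.http.services.keycloak.loadbalancer.server.port=8443
      - traefik.http.services.keycloak.loadbalancer.server.scheme=https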
I'm trying to get docker-compose to run an NGINX reverse-proxy and I'm running into an issue. I know that what I am attempting appears possible as it is outlined here:
https://dev.to/domysee/setting-up-a-reverse-proxy-with-nginx-and-docker-compose-29jg
and here:
https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose#step-2-%E2%80%94-defining-the-web-server-configuration
My application is very simple - it has a front end and a back end (nextjs and nodejs), which I've put in docker-compose along with an nginx instance.
Here is the docker-compose file:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    container_name: nodejs
    restart: unless-stopped
  nextjs:
    build:
      context: ../.
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    container_name: nextjs
    restart: unless-stopped
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
    depends_on:
      - nodejs
      - nextjs
    networks:
      - app-network
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /
      o: bind
networks:
  app-network:
    driver: bridge
And here is the nginx file:
server {
    listen 80;
    listen [::]:80;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name patientplatypus.com www.patientplatypus.com localhost;

    location /back {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://nodejs:8000;
    }

    location / {
        proxy_pass http://nextjs:3000;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
Both of these are very similar to the DigitalOcean example and I can't think of how they would be different enough to cause errors. I run it with a simple docker-compose up -d --build.
When I go to localhost:80 I get "page could not be found", and here is the result of my docker ps and docker logs:
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:03:32$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c2e4e25e6d9 nginx:mainline-alpine "nginx -g 'daemon of…" 2 minutes ago Restarting (1) 14 seconds ago webserver
213e73495381 back_nodejs "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp nodejs
03b6ae8f0ad4 back_nextjs "npm start" 2 minutes ago Up 2 minutes 0.0.0.0:3000->3000/tcp nextjs
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:05:41$docker logs 9c2e4e25e6d9
2019/04/10 15:03:32 [emerg] 1#1: host not found in upstream "nodejs" in /etc/nginx/conf.d/nginx.conf:20
I'm pretty lost as to what could be going wrong. If anyone has any ideas please let me know. Thank you.
EDIT: SEE SOLUTION BELOW
The nginx webserver is on the network app-network, which is a different network from the other two services, which don't have a network defined. When no network is defined, docker-compose creates a default network for them to share.
Either add the network setting to both of the other services or remove the network setting from the webserver service.
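For example, a minimal sketch of the first option, attaching the other two services to the same network (only the relevant keys are shown):
services:
  nodejs:
    # ...existing settings...
    networks:
      - app-network
  nextjs:
    # ...existing settings...
    networks:
      - app-network
  webserver:
    # ...existing settings...
    networks:
      - app-network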
Hi, I am trying to set up an ELK stack plus Nginx (for load balancing and basic auth) using Docker. My problem is that I cannot get Nginx to work as a load balancer, and when I visit the Kibana web UI it never asks me for a password. How can I rearrange my docker-compose setup for security and load balancing? I am using Windows 10.
My file structure is below:
My docker-compose.yml:
version: '2'
services:
  elasticsearch:
    container_name: esc
    image: esi:1.0.0
    build: ./es
    volumes:
      - ./data/es:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    expose:
      - 9300
  kibana:
    container_name: kibanac
    image: kibanai:1.0.0
    build: ./kibana
    links:
      - elasticsearch
    ports:
      - 5601:5601
  nginx:
    image: nginx:latest
    restart: unless-stopped
    volumes:
      - ./nginx/config:/etc/nginx/conf.d:ro,Z
      - ./nginx/htpasswd.users:/etc/nginx/htpasswd.users:ro,Z
    ports:
      - "8890:8890"
    depends_on:
      - elasticsearch
      - kibana
Docker daemons:
Nginx.conf:
upstream elasticsearch {
    server localhost:9200;
    keepalive 15;
}

upstream kibana {
    server localhost:5601;
    keepalive 15;
}

server {
    listen 8888;

    location / {
        auth_basic "Protected Elasticsearch";
        auth_basic_user_file /etc/nginx/htpasswd.users;
        proxy_pass http://localhost:9200;
        proxy_redirect off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}

server {
    listen 8889;

    location / {
        auth_basic "Protected Kibana";
        auth_basic_user_file /etc/nginx/htpasswd.users;
        proxy_pass http://localhost:5601;
        proxy_redirect off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}
Kibana.yml (the Kibana UI is reached at localhost:5601):
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
elasticsearch.username: elastic
elasticsearch.password: changeme
xpack.monitoring.ui.container.elasticsearch.enabled: true
Elasticsearch.yml (Elasticsearch is reached at localhost:9200):
http.host: 0.0.0.0
### x-pack functions
xpack.security.enabled: false
xpack.monitoring.enabled: true
xpack.graph.enabled: false
xpack.watcher.enabled: false
Dockerfile for Kibana:
FROM docker.elastic.co/kibana/kibana:6.6.2
COPY ./config/kibana.yml /opt/kibana/config/kibana.yml
Dockerfile for Elasticsearch:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.6.2
COPY ./config/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
Please add the IP address or domain name through which you access the Kibana web UI to the nginx configuration:
listen ip_address_or_domain_name:8889;
and change the proxy_pass to use the kibana upstream:
proxy_pass http://kibana;
In the docker-compose.yml file:
ports:
  - "8890:8889"
Please note that it is port 8889 here.
Access: http://ip_address_or_domain_name:8890
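Putting those pieces together, a minimal sketch of the adjusted Kibana server block; it assumes the nginx container reaches Kibana through the compose service name kibana rather than localhost, which inside the nginx container refers to nginx itself:
upstream kibana {
    server kibana:5601;   # compose service name, resolvable on the shared network
    keepalive 15;
}

server {
    listen 8889;          # optionally prefix with the host IP or domain name as suggested above

    location / {
        auth_basic "Protected Kibana";
        auth_basic_user_file /etc/nginx/htpasswd.users;
        proxy_pass http://kibana;
        proxy_redirect off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}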
I'm trying to deploy a stack with docker.
Here is how my stack works:
nginx-proxy (redirect user requests to the good container)
website (simple nginx serving a website)
api (django application, launch with gunicorn)
nginx-api (serving static files and uploaded files and redirect to the API container if it is an endpoint)
This is my docker-compose.yml:
version: '3.2'
services:
  website:
    container_name: nyl2pronos-website
    image: nyl2pronos-website
    restart: always
    build:
      context: nyl2pronos_webapp
      dockerfile: Dockerfile
    volumes:
      - ./logs/nginx-website:/var/log/nginx
    expose:
      - "80"
    deploy:
      replicas: 10
      update_config:
        parallelism: 5
        delay: 10s
  api:
    container_name: nyl2pronos-api
    build:
      context: nyl2pronos_api
      dockerfile: Dockerfile
    image: nyl2pronos-api
    restart: always
    ports:
      - 8001:80
    expose:
      - "80"
    depends_on:
      - db
      - memcached
    environment:
      - DJANGO_PRODUCTION=1
    volumes:
      - ./data/api/uploads:/code/uploads
      - ./data/api/static:/code/static
  nginx-api:
    image: nginx:latest
    container_name: nyl2pronos-nginx-api
    restart: always
    expose:
      - "80"
    volumes:
      - ./data/api/uploads:/uploads
      - ./data/api/static:/static
      - ./nyl2pronos_api/config:/etc/nginx/conf.d
      - ./logs/nginx-api:/var/log/nginx
    depends_on:
      - api
  nginx-proxy:
    image: nginx:latest
    container_name: nyl2pronos-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy:/etc/nginx/conf.d
      - /etc/letsencrypt:/etc/letsencrypt
      - ./logs/nginx-proxy:/var/log/nginx
    deploy:
      placement:
        constraints: [node.role == manager]
    depends_on:
      - nginx-api
      - website
When I use docker-compose up, everything works fine.
But when I try to deploy with docker stack deploy --compose-file=docker-compose.yml prod, my nginx config files can't find the different upstreams.
This is the error provided by my service nginx-api:
2019/03/23 17:32:41 [emerg] 1#1: host not found in upstream "api" in /etc/nginx/conf.d/nginx.conf:2
See below my nginx.conf:
upstream docker-api {
    server api;
}

server {
    listen 80;
    server_name xxxxxxxxxxxxxx;

    location /static {
        autoindex on;
        alias /static/;
    }

    location /uploads {
        autoindex on;
        alias /uploads/;
    }

    location / {
        proxy_pass http://docker-api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
If you see something wrong in my configuration or something I can do better, let me know!
This is happening because the nginx-api service comes up before the api service.
But I added the depends_on option?
You are right, and this option works for the docker-compose up case.
But unfortunately not for docker stack deploy, or, as the docs put it:
The depends_on option is ignored when deploying a stack in swarm mode
with a version 3 Compose file.
OK, so what can I do now?
Nothing. It's actually not a bug:
Docker swarm services are supposed to recover automatically on error (that's why you define the restart: always option), so it should work for you anyway.
If you are using the compose file only for deploying the stack and not for docker-compose up, you may remove the depends_on option completely; it means nothing to docker stack deploy.
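As a side note, in swarm mode restart behaviour is controlled under deploy rather than by the top-level restart key; a minimal sketch for the nginx-api service, with values chosen as an illustration rather than taken from the question:
services:
  nginx-api:
    image: nginx:latest
    deploy:
      restart_policy:
        condition: on-failure   # retry until the api service is resolvable
        delay: 5s
        max_attempts: 10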
I use multiple docker-compose files:
one for running on the same network: postgres and nginx
=> this container collection is supposed to be always running
one for each ASP.NET Core web site (each one on a specific port)
=> these containers are updated through a CI/CD pipeline (VSTS)
Because Nginx needs to know the hostname when defining the upstream, if the ASP.NET Core container is not running then its hostname is not known and nginx throws an error on docker-compose up:
nginx | 2018/01/04 15:59:17 [emerg] 1#1: host not found in upstream "webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx | nginx: [emerg] host not found in upstream "webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx exited with code 1
And obviously, if the ASP.NET Core container is already running, then nginx knows the hostname webportalstage and everything works fine. But the starting sequence is not what I expect.
Is there any solution to start nginx with a not-yet-known hostname in the upstream?
Here is my nginx.conf file:
worker_processes 4;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream webportalstage {
        server webportalstage:5001;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://webportalstage;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
And both docker-compose files :
Nginx + Postgres :
version: "3"
services:
  proxy:
    image: myPrivateRepo:latest
    ports:
      - "80:80"
    container_name: nginx
    networks:
      aspcore:
        aliases:
          - nginx
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_PASSWORD=myPWD
      - POSTGRES_USER=postgres
    ports:
      - "5432:5432"
    container_name: postgres
    networks:
      aspcore:
        aliases:
          - postgres
networks:
  aspcore:
    driver: bridge
One of my ASP.NET Core web sites:
version: "3"
services:
  webportal:
    image: myPrivateRepo:latest
    environment:
      - ASPNETCORE_ENVIRONMENT=Staging
    container_name: webportal
    networks:
      common_aspcore:
        aliases:
          - webportal
networks:
  common_aspcore:
    external: true
Well, I use the following hack in a similar situation:
location / {
    set $docker_host "webportalstage";
    proxy_pass http://$docker_host:5001;
    ...
}
I'm not sure if it works with upstream; probably it should.
I know this is not the best solution, but I didn't find a better one.
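One detail worth adding: when proxy_pass uses a variable, nginx resolves the name at request time instead of at startup, and for that it needs a resolver directive; a minimal sketch, assuming Docker's embedded DNS at 127.0.0.11:
location / {
    resolver 127.0.0.11 valid=30s;        # Docker's embedded DNS server
    set $docker_host "webportalstage";
    proxy_pass http://$docker_host:5001;  # resolved lazily, so nginx can start even if the host is not up yet
}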
I finally used the extra_hosts feature to define a static IP within my nginx + postgres docker-compose.yml file:
extra_hosts:
  webportalstage: 10.5.0.20
And I set the same static IP in my ASP.NET Core docker-compose file.
It works, but it's not as generic as I would like.
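For completeness, a sketch of how the matching static IP might be pinned on the web site side; it assumes the external network common_aspcore was created with a subnet containing 10.5.0.20 (e.g. docker network create --subnet 10.5.0.0/16 common_aspcore):
services:
  webportal:
    image: myPrivateRepo:latest
    networks:
      common_aspcore:
        ipv4_address: 10.5.0.20   # must match the extra_hosts entry above
networks:
  common_aspcore:
    external: true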