docker nginx reverse proxy 503 Service Temporarily Unavailable - docker

I want to use nginx as a reverse proxy for remote access to my home automation.
My infrastructure YAML looks as follows:
# /infrastructure/docker-compose.yaml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: proxy
    networks:
      - raspberry_network
    ports:
      - 80:80
      - 443:443
    environment:
      - ENABLE_IPV6=true
      - DEFAULT_HOST=${RASPBERRY_IP}
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d
      - ./proxy/vhost.d:/etc/nginx/vhost.d
      - ./proxy/html:/usr/share/nginx/html
      - ./proxy/certs:/etc/nginx/certs
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
networks:
  raspberry_network:
My YAML containing the app configuration looks like this:
# /apps/docker-compose.yaml
version: '3'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/raspberrypi4-homeassistant:stable
    volumes:
      - ./homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    environment:
      - 'TZ=Europe/Berlin'
      - 'VIRTUAL_HOST=${HOMEASSISTANT_VIRTUAL_HOST}'
      - 'VIRTUAL_PORT=8123'
    deploy:
      resources:
        limits:
          memory: 250M
    restart: unless-stopped
    networks:
      - infrastructure_raspberry_network
    ports:
      - '8123:8123'
networks:
  infrastructure_raspberry_network:
    external: true
Via Portainer I validated that both containers are connected to the same network. However, when accessing the local IP of my Raspberry Pi (192.168.0.10), I receive "503 Service Temporarily Unavailable".
Accessing my app via the virtual host domain xxx.xxx.de doesn't work either.
Any idea what the issue might be? Or any ideas how to further debug this?

You need to specify the correct VIRTUAL_HOST in the backend's environment variables and make sure both containers are on the same network (or Docker bridge network).
Also make sure that any containers that specify VIRTUAL_HOST are running before the nginx-proxy container starts. With docker-compose, this can be achieved by adding them to the depends_on config of the nginx-proxy container; see the sketch below.
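A minimal sketch of that layout, assuming both services are declared in the same compose file (service, network and variable names are taken from the question; only the parts relevant to the proxying are shown):

version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets nginx-proxy discover backend containers
    networks:
      - raspberry_network
    depends_on:
      - homeassistant                              # start the backend before the proxy
  homeassistant:
    image: homeassistant/raspberrypi4-homeassistant:stable
    environment:
      - VIRTUAL_HOST=${HOMEASSISTANT_VIRTUAL_HOST} # hostname nginx-proxy routes on
      - VIRTUAL_PORT=8123                          # port the backend listens on inside the container
    networks:
      - raspberry_network
networks:
  raspberry_network:

For further debugging, docker network inspect infrastructure_raspberry_network shows whether both containers really share the network, docker logs proxy shows whether nginx-proxy detected the backend, and the generated vhost configuration can usually be inspected with docker exec proxy cat /etc/nginx/conf.d/default.conf.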

Related

Local Communication Between Services

I have two services inside my Docker cluster: frontend runs on port 8090 and backend runs on port 8000. How can I make frontend call backend via local DNS, like fetch('https://backend.local/')? If I use the Docker hostname, I need to specify the port to reach the backend. Do I need to run a local DNS server inside my Docker setup?
You have to create a software-defined network (SDN) in Docker; all containers running in that network can then communicate with each other using their container names, or you can define an alias for each and use that. A simple docker-compose file for a backend microservice and a MySQL database can be created with the configuration below.
version: '3.2'
networks:
  testNetwork:
services:
  mysql-dev:
    image: mysql:latest
    container_name: mysql-dev
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=root
    ports:
      - "3306:3306"
    networks:
      - testNetwork
  backend:
    image: backend:1.0
    container_name: backend
    environment:
      - DB_USER=root
      - DB_PASS=root
      - DB_NAME=root
      - DB_HOST=mysql-dev
      - DB_DIALECT=mysql
    ports:
      - "4000:4000"
    working_dir: /backend
    command: npm start
    networks:
      - testNetwork
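With this setup the backend reaches the database at mysql-dev:3306; on a user-defined network, Docker's embedded DNS resolves container and service names, so no separate DNS server is needed. Note that the port is still the port the target container listens on internally, so calling a sibling service by bare hostname without a port only works if that service (or a reverse proxy in front of it) listens on port 80 or 443.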

Traefik routing one application to port 80, others require explicit port

I have an environment running docker containers.
This environment hosts Traefik, Nextcloud, MotionEye and Heimdall.
I also have another environment running CoreDNS in a docker container.
For some reason, MotionEye is accessible at motioneye.docker.swarm (I changed the domain here for privacy).
However, for Nextcloud and Heimdall I have to access the ports explicitly, and I'm struggling to tell why.
E.g. Heimdall is at gateway.docker.swarm:8091 when it should be at gateway.docker.swarm.
When a user requests a webpage, the local DNS server X.X.X.117 routes it through to the Traefik instance on X.X.X.106.
My traefik compose file is as follows:
version: '3'
services:
  reverse-proxy:
    # The official v2 Traefik docker image
    image: traefik:v2.3
    restart: always
    # Enables the web UI and tells Traefik to listen to docker
    command: --api.insecure=true --providers.docker
    ports:
      # The HTTP port
      - "80:80"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "traefik.port=8080"
      - "traefik.backend=traefik"
      - "traefik.frontend.rule=Host:traefik.docker.swarm"
      - "traefik.docker.network=traefik_default"
My Heimdall compose is as follows:
version: "3"
services:
heimdall:
image: ghcr.io/linuxserver/heimdall
container_name: heimdall
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
volumes:
- /home/pi/heimdall/config:/config
ports:
- 8091:80
restart: unless-stopped
networks:
- heimdall
labels:
- "traefik.enable=true"
- "traefik.port=8091"
- "traefik.http.routers.heimdall.entrypoints=http"
- "traefik.http.routers.heimdall.rule=Host(`gateway.docker.swarm`)"
networks:
heimdall:
external:
name: heimdall
Can anyone see what I'm doing wrong here?
Accessing gateway.docker.swarm:8091 works because you are reaching the Heimdall container directly. This is possible because you defined
ports:
  - 8091:80
in your docker-compose.
In order to go through Traefik, both containers must be on the same network. Also, remove the port mapping if you want this container to be accessible only through Traefik. Finally, correct the Traefik port accordingly.
version: "3"
services:
heimdall:
image: ghcr.io/linuxserver/heimdall
container_name: heimdall
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
volumes:
- /home/pi/heimdall/config:/config
restart: unless-stopped
labels:
- "traefik.enable=true"
- "traefik.port=80"
- "traefik.http.routers.heimdall.entrypoints=http"
- "traefik.http.routers.heimdall.rule=Host(`gateway.docker.swarm`)"

Docker compose PHP container needs to hit nginx proxy with Host header. How?

I have 3 containers. A container that is an nginx reverse proxy:
nginx-proxy:
  image: jwilder/nginx-proxy:latest
  ports:
    - 80:80
    - 443:443
  networks:
    - network
Behind that, on the same network, are two PHP-FPM containers: container A and container B, with hostnames A and B respectively. I also added these to my local hosts file:
127.0.0.1 A
127.0.0.1 B
So I can reach them from my localhost; both respond to FastCGI requests.
Now I need a Guzzle request in A to go to B, and it should go via the nginx proxy. How do I add an entry to container A's hosts file so that a request to B goes to nginx-proxy with the header Host: B? I tried adding
extra_hosts:
  - "B:nginx-proxy"
This won't work, and I can't find any other way than hardcoding it, which I do not want to do for obvious reasons.
Docker compose file:
containerA:
  build:
    context: 'docker'
  volumes:
    - .:/var/www/html
    - ./docker/www.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
  networks:
    - network
  environment:
    VIRTUAL_HOST: containerA
    VIRTUAL_ROOT: /var/www/html/public/index.php
    VIRTUAL_PROTO: fastcgi
containerB:
  build:
    context: 'docker'
  volumes:
    - .:/var/www/html
    - ./docker/www.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
  networks:
    - network
  environment:
    VIRTUAL_HOST: containerB
    VIRTUAL_ROOT: /var/www/html/public/index.php
    VIRTUAL_PROTO: fastcgi
nginx-proxy:
  image: jwilder/nginx-proxy:latest
  ports:
    - 80:80
    - 443:443
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
  networks:
    - network
  environment:
    DEFAULT_HOST: containerA
networks:
  network:
    driver: bridge
In the docker-compose file, have you already placed the VIRTUAL_HOST environment variable on containers A and B?
services:
  proxy:
    ........
    ........
  containerA:
    ........
    environment:
      - VIRTUAL_HOST=A
  containerB:
    ........
    environment:
      - VIRTUAL_HOST=B
If you have already tried this, maybe the complete docker-compose file will help us.
cheers
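One approach that can work here (a sketch, not part of the answer above) is to give the nginx-proxy service network aliases equal to the backends' VIRTUAL_HOST values, so that inside container A the hostname of B resolves to the proxy, which then routes on the Host header. This assumes the VIRTUAL_HOST values differ from the compose service names, otherwise Docker DNS would return both the proxy and the backend for the same name; a.example.test and b.example.test below are hypothetical hostnames:

services:
  containerA:
    environment:
      VIRTUAL_HOST: a.example.test
    networks:
      - network
  containerB:
    environment:
      VIRTUAL_HOST: b.example.test
    networks:
      - network
  nginx-proxy:
    image: jwilder/nginx-proxy:latest
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      network:
        aliases:
          - a.example.test   # requests from other containers to these names
          - b.example.test   # resolve to the proxy, which routes on the Host header
networks:
  network:
    driver: bridge

A Guzzle request from container A to http://b.example.test/ would then hit nginx-proxy with Host: b.example.test and be forwarded to container B.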

Docker: nginx-proxy with ssl backend

I am currently in the process of containerizing WordPress apps for development, and that has been going reasonably well so far :)
At the moment I am using one docker-compose.yml file (and some configs) per app. Each app consists of an nginx webserver, a database and WordPress with FPM (example docker-compose.yml below). Each app handles its SSL on its own, and I have confirmed that it works.
The next step in my master plan is to use an nginx reverse proxy to have all app containers up at the same time without the need to use different ports on the host.
As I understand it, jwilder/nginx-proxy is the best tool for the job. So I was thinking - and please correct me if that is not best practice - that I could create a compose.yml file for the nginx-proxy that runs all the time, exposes ports 80 and 443 to the host, and automatically generates the nginx configs for every container I spin up afterwards.
version: '3.6'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx_proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  default:
    external:
      name: nginx-proxy
I tried that with an nginx-proxy which exposed port 80 to the host and a WordPress app set up in its own docker-compose.yml file using the mariadb:latest and wordpress:latest images. That did indeed work simply by adding an expose entry for port 80 and the VIRTUAL_HOST environment variable.
But I don't quite get how to use the reverse proxy in front of my aforementioned wordpress apps. The documentation states this:
SSL Backends
If you would like the reverse proxy to connect to your backend using HTTPS instead of HTTP, set VIRTUAL_PROTO=https on the backend container.
Note: If you use VIRTUAL_PROTO=https and your backend container exposes port 80 and 443, nginx-proxy will use HTTPS on port 80. This is almost certainly not what you want, so you should also include VIRTUAL_PORT=443.
So I tried adding these environment variables to the app's docker-compose.yml file, specifically on the nginx service inside it, and exposed ports 80 and 443.
version: '3.6'
services:
  wordpress:
    image: wordpress:4.7.2-php7.1-fpm
    volumes:
      - ../public:/var/www/html
    environment:
      - WORDPRESS_DB_NAME=${WORDPRESS_DB_NAME:-wordpress}
      - WORDPRESS_TABLE_PREFIX=${WORDPRESS_TABLE_PREFIX:-wp_}
      - WORDPRESS_DB_HOST=${WORDPRESS_DB_HOST:-mysql}
      - WORDPRESS_DB_USER=${WORDPRESS_DB_USER:-root}
      - WORDPRESS_DB_PASSWORD=${WORDPRESS_DB_PASSWORD:-password}
    depends_on:
      - db
    restart: always
  db:
    image: mariadb:${MARIADB_VERSION:-latest}
    volumes:
      - tss-data:/var/lib/mysql
      # - ./db:/docker-entrypoint-initdb.d/
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-password}
      - MYSQL_USER=${MYSQL_USER:-root}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
      - MYSQL_DATABASE=${MYSQL_DATABASE:-wordpress}
    restart: always
  nginx:
    image: nginx:${NGINX_VERSION:-latest}
    container_name: nginx
    volumes:
      - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
      - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
      - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
      - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
      - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
    environment:
      - VIRTUAL_HOST:local.my-app.com
      - VIRTUAL_PROTO:https
      - VIRTUAL_PORT:443
    expose:
      - 80
      - 443
    depends_on:
      - wordpress
    restart: always
volumes:
  tss-data:
networks:
  default:
    external:
      name: nginx-proxy
Alas, if I try to browse to local.my-app.com on port 80 I get
503 Service Temporarily Unavailable
If I try on port 443 the nginx reverse proxy does not respond at all. I feel like I am missing something fairly obvious but I can't seem to find it and I would really appreciate any thoughts on the matter.
In the end, I opted not to handle the SSL encryption in each individual app. Instead, I changed the reverse proxy to:
version: '3.6'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: nginx_proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
networks:
  default:
    external:
      name: nginx-proxy
So now I can reach each app on port 80 until I add a cert for it, at which point it becomes reachable on port 443.
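For completeness (this is how jwilder/nginx-proxy documents its certificate lookup, not something stated in the answer above): certificates mounted into /etc/nginx/certs are picked up when their file names match the VIRTUAL_HOST of a backend, e.g. ./certs/local.my-app.com.crt and ./certs/local.my-app.com.key for a container with VIRTUAL_HOST=local.my-app.com.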

How to run Docker container in its own network

Today I switched from "Docker Toolbox" to "Docker for Mac", because Docker now finally has write access to my user directory (which didn't work with "Docker Toolbox") - yay!
But this change also means that all containers now run under my localhost and not under Docker's IP as before (e.g. 192.168.99.100).
Since my localhost listens on various ports by default (80, 443, ...) and I don't want to keep adding newly created ports that don't conflict with the standard ones to my local dev domains (e.g. example.dev:8443), I wonder how to run my containers as before.
I read about network configs and tried a lot of things (creating a new host network, exposing ports with an IP in front of them, ...), but didn't get it working.
What kind of config do I need to run my app container with the IP 192.168.99.100? That's my docker-compose.yml so far.
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - mysql
      - redis
      - memcached
    ports:
      - 80:80
      - 443:443
      - 22:22
      - 3000:3000
      - 3001:3001
    volumes:
      - ./app/:/app/
      - /tmp/debug/:/tmp/debug/
      - ./:/docker/
    volumes_from:
      - storage
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10
    ports:
      - 3306:3306
    volumes_from:
      - storage
    volumes:
      - ./data/mysql:/var/lib/mysql
      - /tmp/debug/:/tmp/debug/
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  redis:
    build: docker/redis/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  memcached:
    build: docker/memcached/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  storage:
    build: docker/storage/
    volumes:
      - /storage
You need to declare "networks:" for each of your services, e.g.:
version: '2'
services:
  app:
    image: xxxx:xxx
    ports:
      - "80:80"
    networks:
      - my-network
  mysql:
    image: xxxx:xxx
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Then, from your app's configuration side, you can use "mysql" as the hostname of the database server.
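For example, the app service could point at the database with environment values along these lines (the variable names here are hypothetical; use whatever your app actually reads):

  app:
    environment:
      - DB_HOST=mysql   # service name, resolved by Docker's embedded DNS on my-network
      - DB_PORT=3306    # MySQL's port inside the container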
You can define a network in your compose file, then add any services to the network.
https://docs.docker.com/compose/networking/
But I would suggest you just use different host ports now that you are running natively, e.g. 8080:80.
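A minimal sketch of that remapping (the host-side port numbers are just examples):

services:
  app:
    ports:
      - 8080:80    # host 8080 -> container 80
      - 8443:443   # host 8443 -> container 443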
