docker-compose Nextcloud instance behind reverse proxy: 502 Bad Gateway

I want to switch from using the docker run command to a docker-compose file for my Nextcloud instance, which runs behind a reverse proxy (jwilder/nginx-proxy).
This is the run command I used so far:
sudo docker run -d -p 8080:80 --expose 80 --expose 443 -e VIRTUAL_HOST=nextcloud.example.com -v nextcloud:/var/www/html --restart=always --name=nextcloud nextcloud:24.0.8
I installed MariaDB inside the container later on so that I didn't have to struggle with networking. I also use port 8080 only on my internal network for fast uploads and downloads.
This worked quite well, but now I want to create a similar environment with docker-compose:
version: '3.8'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=my-super-strong-password
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:24.0.8
    restart: always
    ports:
      - 8080:80
    expose:
      - 80
      - 443
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - PHP_MEMORY_LIMIT=1G
      - PHP_UPLOAD_LIMIT=128M
      - VIRTUAL_HOST=nextcloud.example.com
The containers start successfully and I can use Nextcloud on my internal network, but I cannot reach it from my domain. Instead I get a 502 Bad Gateway. The VIRTUAL_HOST routing itself seems to work, since otherwise I would get a 503 Service Temporarily Unavailable instead.
I think exposing ports 80 and 443 doesn't work.
I've tried to add a proxy network:
networks:
  proxy:
    driver: bridge
    external: true
and added
networks:
  - default
to the db service and
networks:
  - default
  - proxy
to the app service.
That didn't fix the problem. Does anyone have an idea what I can try next?
I've tried different ways to expose the ports and to create different networks.
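For debugging, a quick sanity check (a sketch; nginx-proxy below is a placeholder for whatever name docker ps shows for the proxy container) is to confirm that the proxy and the app share a network and to look at the nginx config that nginx-proxy generated for the VIRTUAL_HOST:

# which containers are attached to the proxy network?
docker network inspect proxy --format '{{range .Containers}}{{.Name}} {{end}}'

# show the generated config; a missing or empty upstream for nextcloud.example.com
# would explain the 502/503 (replace nginx-proxy with your proxy container's name)
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf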

Never mind, I found the problem.
Instead of simply creating a network named proxy, I had to create a new jwilder reverse-proxy service via docker compose with a name, for example myreverseproxy. In each service I want to make public, I need to reference this network:
networks:
  - default
  - myreverseproxy
I also had to use that name in the top-level networks section:
networks:
  myreverseproxy:
    external: true
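Putting it together, the app's compose file ends up looking roughly like this (a sketch based on the description above; it assumes the proxy's own compose file creates a network that is actually named myreverseproxy, e.g. via docker network create or a name: key):

services:
  app:
    image: nextcloud:24.0.8
    volumes:
      - nextcloud:/var/www/html
    environment:
      - VIRTUAL_HOST=nextcloud.example.com
    networks:
      - default
      - myreverseproxy

networks:
  myreverseproxy:
    external: true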

Related

docker nginx reverse proxy 503 Service Temporarily Unavailable

I want to use nginx as a reverse proxy for remote access to my home automation.
My infrastructure YAML looks as follows:
# /infrastructure/docker-compose.yaml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: proxy
    networks:
      - raspberry_network
    ports:
      - 80:80
      - 443:443
    environment:
      - ENABLE_IPV6=true
      - DEFAULT_HOST=${RASPBERRY_IP}
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d
      - ./proxy/vhost.d:/etc/nginx/vhost.d
      - ./proxy/html:/usr/share/nginx/html
      - ./proxy/certs:/etc/nginx/certs
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
networks:
  raspberry_network:
My yaml containing the app configuration looks like this:
# /apps/docker-compose.yaml
version: '3'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/raspberrypi4-homeassistant:stable
    volumes:
      - ./homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    environment:
      - 'TZ=Europe/Berlin'
      - 'VIRTUAL_HOST=${HOMEASSISTANT_VIRTUAL_HOST}'
      - 'VIRTUAL_PORT=8123'
    deploy:
      resources:
        limits:
          memory: 250M
    restart: unless-stopped
    networks:
      - infrastructure_raspberry_network
    ports:
      - '8123:8123'
networks:
  infrastructure_raspberry_network:
    external: true
Via Portainer I validated that both containers are connected to the same network. However, when accessing the local IP of my Raspberry Pi, 192.168.0.10, I receive "503 Service Temporarily Unavailable".
When I try accessing my app via the virtual host domain xxx.xxx.de, it doesn't work either.
Any idea what the issue might be? Or any ideas how to debug this further?
You need to specify the correct VIRTUAL_HOST in the backend's environment variables and make sure that the backend and the proxy are on the same network (or Docker bridge network).
Also make sure that any containers that specify VIRTUAL_HOST are running before the nginx-proxy container starts. With docker-compose, this can be achieved by adding them to the depends_on config of the nginx-proxy container.
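A minimal sketch of that ordering, assuming both services are defined in the same compose file (the service names and domain are illustrative, not taken from the setup above):

services:
  homeassistant:
    image: homeassistant/raspberrypi4-homeassistant:stable
    environment:
      - VIRTUAL_HOST=home.example.com   # illustrative domain
      - VIRTUAL_PORT=8123
  proxy:
    image: jwilder/nginx-proxy:alpine
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - homeassistant   # start the backend before the proxy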

How to access a Docker container without specifying its HTTP port?

I set up a Docker network with a db container, a Nextcloud container, and an nginx container. I can access the Nextcloud website at ip-address:8080, but I want to access it without specifying port 8080. How can I do that?
This is my docker-compose.yml:
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_PASSWORD=
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:fpm
    restart: always
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
  web:
    image: nginx
    restart: always
    ports:
      - 8080:80
    links:
      - app
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    volumes_from:
      - app
What you want is to avoid having to specify the port when you request a URI. One way to do that is to use the default port for the protocol you are using (80 for HTTP, 443 for HTTPS, 21 for FTP, etc.); the client then falls back to that default port automatically when no port is given.
In a Docker Compose file, the syntax for publishing a port is <host_port>:<container_port> (see the documentation). That means 8080:80 publishes port 80 of the container on port 8080 of your Docker host.
In your case, the service exposes an HTTP server, so you have to publish it on the default port 80 in order to be able to omit it. Change the ports entry of the web service from 8080:80 to 80:80, and you will be able to access Nextcloud at ip-address without a port.
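The relevant part of the compose file then looks like this:

services:
  web:
    image: nginx
    restart: always
    ports:
      - 80:80   # was 8080:80; 80 is the HTTP default, so the port can be omitted in the URL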

Access docker ports from a container inside another container at localhost

I have a setup where I build two containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is ElasticSearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access ElasticSearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 is not reachable at localhost from inside the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the host name elasticsearch to access the elasticsearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch run in different containers, so the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers so that they can be reached via their service names. So from your serverapplication, you must use the name elasticsearch to connect to it.
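A quick way to verify the name resolution from the host, assuming curl happens to be available inside the serverapplication image (it may not be, depending on the base image):

docker exec serverapplication curl -s http://elasticsearch:9200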

Docker: nginx-proxy with ssl backend

I am currently in the process of containerizing WordPress apps for development, and that has been going reasonably well so far :)
At the moment I am using one docker-compose.yml file (and some configs) per app. Each app consists of an nginx webserver, a database, and WordPress with FPM (example docker-compose.yml below). Each app handles its SSL on its own, and I have confirmed that this works.
The next step in my master plan is to use an nginx reverse proxy to have all app containers up at the same time without the need to use different ports on the host.
As I understand it, jwilder/nginx-proxy is the best tool for the job. So I was thinking - and please correct me if that is not best practice - that I could create a compose.yml file for the nginx-proxy that could run all the time and that would expose ports 80 and 443 to the host, while automatically generating the nginx configs for every container I spin up afterwards.
version: '3.6'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx_proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  default:
    external:
      name: nginx-proxy
I tried that with an nginx-proxy which exposed port 80 to the host and a WordPress app set up in its own docker-compose.yml file using the mariadb:latest and wordpress:latest images. That did indeed work, simply by adding expose: - 80 and the VIRTUAL_HOST environment variable, roughly as sketched below.
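(A sketch of that working test; the domain is a placeholder and the mariadb service plus WordPress database settings are omitted:)

version: '3.6'
services:
  wordpress:
    image: wordpress:latest
    expose:
      - 80
    environment:
      - VIRTUAL_HOST=test.example.com   # placeholder domain
networks:
  default:
    external:
      name: nginx-proxy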
But I don't quite get how to use the reverse proxy in front of my aforementioned wordpress apps. The documentation states this:
SSL Backends
If you would like the reverse proxy to connect to your backend using HTTPS instead of HTTP, set VIRTUAL_PROTO=https on the backend container.
Note: If you use VIRTUAL_PROTO=https and your backend container exposes port 80 and 443, nginx-proxy will use HTTPS on port 80. This is almost certainly not what you want, so you should also include VIRTUAL_PORT=443.
So I tried adding these environment variables to the app's docker-compose.yml file, specifically on the nginx service, and exposed ports 80 and 443 there.
version: '3.6'
services:
  wordpress:
    image: wordpress:4.7.2-php7.1-fpm
    volumes:
      - ../public:/var/www/html
    environment:
      - WORDPRESS_DB_NAME=${WORDPRESS_DB_NAME:-wordpress}
      - WORDPRESS_TABLE_PREFIX=${WORDPRESS_TABLE_PREFIX:-wp_}
      - WORDPRESS_DB_HOST=${WORDPRESS_DB_HOST:-mysql}
      - WORDPRESS_DB_USER=${WORDPRESS_DB_USER:-root}
      - WORDPRESS_DB_PASSWORD=${WORDPRESS_DB_PASSWORD:-password}
    depends_on:
      - db
    restart: always
  db:
    image: mariadb:${MARIADB_VERSION:-latest}
    volumes:
      - tss-data:/var/lib/mysql
      # - ./db:/docker-entrypoint-initdb.d/
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-password}
      - MYSQL_USER=${MYSQL_USER:-root}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
      - MYSQL_DATABASE=${MYSQL_DATABASE:-wordpress}
    restart: always
  nginx:
    image: nginx:${NGINX_VERSION:-latest}
    container_name: nginx
    volumes:
      - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
      - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
      - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
      - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
      - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
    environment:
      - VIRTUAL_HOST:local.my-app.com
      - VIRTUAL_PROTO:https
      - VIRTUAL_PORT:443
    expose:
      - 80
      - 443
    depends_on:
      - wordpress
    restart: always
volumes:
  tss-data:
networks:
  default:
    external:
      name: nginx-proxy
Alas, if I try to browse to local.my-app.com on port 80 I get
503 Service Temporarily Unavailable
If I try on port 443, the nginx reverse proxy does not respond at all. I feel like I am missing something fairly obvious, but I can't seem to find it, and I would really appreciate any thoughts on the matter.
In the end, I opted not to handle the SSL encryption in each individual app. Instead, I changed the reverse proxy to:
version: '3.6'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: nginx_proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
networks:
  default:
    external:
      name: nginx-proxy
So now I can reach each app on port 80 until I add a cert for it, in which case it becomes reachable on port 443.
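On the app side this means the proxy now talks plain HTTP to the nginx service, so VIRTUAL_PROTO and VIRTUAL_PORT can be dropped. A minimal sketch, assuming the app's nginx config only serves HTTP on port 80:

services:
  nginx:
    image: nginx:${NGINX_VERSION:-latest}
    environment:
      - VIRTUAL_HOST=local.my-app.com
    expose:
      - 80
networks:
  default:
    external:
      name: nginx-proxy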

Multi-port containers with nginx-proxy

I'm currently running a container with my web application, which communicates through two ports, one for the frontend and one for the backend.
I'm using jwilder/nginx-proxy to serve the applications.
When I run the docker-compose file (which starts the app and the proxy) it gives me a 502 Bad Gateway;
when I run with only one port it serves that part of the app.
I'm passing the port with VIRTUAL_PORT=80. Is there a way to pass more than one port? Or, if I make a separate container for the frontend, how would I get the proxy to speak to both containers with one request?
In short, does jwilder/nginx-proxy support multi-port containers, and if not, what's the workaround?
Thanks in advance!
Docker-compose.yml
reverseproxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
    - "8080:8080"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
myapp:
  depends_on:
    - reverseproxy
  build: ./app-files
  environment:
    - "VIRTUAL_HOST=my-domain.com"
    - "VIRTUAL_PORT=80,8080"
  expose:
    - 80
    - 8080
