Docker: nginx-proxy with SSL backend

I am currently in the process of containerizing WordPress apps for development, and that has been going reasonably well so far :)
At the moment I am using one docker-compose.yml file (and some configs) per app. Each app consists of an nginx webserver, a database, and WordPress with FPM (example docker-compose.yml below). Each app handles its SSL on its own, and I have confirmed that this works.
The next step in my master plan is to use an nginx reverse proxy so that all app containers can be up at the same time without needing different ports on the host.
As I understand it, jwilder/nginx-proxy is the best tool for the job. So I was thinking - and please correct me if that is not best practice - that I could create a compose file for the nginx-proxy that runs all the time, exposes ports 80 and 443 to the host, and automatically generates the nginx configs for every container I spin up afterwards.
version: '3.6'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx_proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  default:
    external:
      name: nginx-proxy
I tried that with an nginx-proxy that exposed port 80 to the host and a WordPress app set up in its own docker-compose.yml file using the mariadb:latest and wordpress:latest images. That did indeed work, simply by adding an expose entry for port 80 and the VIRTUAL_HOST environment variable to the app.
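For reference, here is a minimal sketch of the kind of compose file that worked for that test. The image tags, credentials and hostname are placeholders, and the external nginx-proxy network matches the proxy file above:
version: '3.6'
services:
  db:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=password    # placeholder credential
      - MYSQL_DATABASE=wordpress
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    expose:
      - 80
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_PASSWORD=password  # placeholder credential
      # placeholder hostname; nginx-proxy generates a vhost for it
      - VIRTUAL_HOST=local.my-app.com
networks:
  default:
    external:
      name: nginx-proxy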
But I don't quite get how to use the reverse proxy in front of my aforementioned WordPress apps. The documentation states this:
SSL Backends
If you would like the reverse proxy to connect to your backend using HTTPS instead of HTTP, set VIRTUAL_PROTO=https on the backend container.
Note: If you use VIRTUAL_PROTO=https and your backend container exposes port 80 and 443, nginx-proxy will use HTTPS on port 80. This is almost certainly not what you want, so you should also include VIRTUAL_PORT=443.
So I tried adding these environment variables to the app's docker-compose.yml file, specifically to the nginx service, and exposed ports 80 and 443:
version: '3.6'
services:
  wordpress:
    image: wordpress:4.7.2-php7.1-fpm
    volumes:
      - ../public:/var/www/html
    environment:
      - WORDPRESS_DB_NAME=${WORDPRESS_DB_NAME:-wordpress}
      - WORDPRESS_TABLE_PREFIX=${WORDPRESS_TABLE_PREFIX:-wp_}
      - WORDPRESS_DB_HOST=${WORDPRESS_DB_HOST:-mysql}
      - WORDPRESS_DB_USER=${WORDPRESS_DB_USER:-root}
      - WORDPRESS_DB_PASSWORD=${WORDPRESS_DB_PASSWORD:-password}
    depends_on:
      - db
    restart: always
  db:
    image: mariadb:${MARIADB_VERSION:-latest}
    volumes:
      - tss-data:/var/lib/mysql
      # - ./db:/docker-entrypoint-initdb.d/
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-password}
      - MYSQL_USER=${MYSQL_USER:-root}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
      - MYSQL_DATABASE=${MYSQL_DATABASE:-wordpress}
    restart: always
  nginx:
    image: nginx:${NGINX_VERSION:-latest}
    container_name: nginx
    volumes:
      - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
      - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
      - ${WORDPRESS_DATA_DIR:-./wordpress}:/var/www/html
      - ${SSL_CERTS_DIR:-./certs}:/etc/letsencrypt
      - ${SSL_CERTS_DATA_DIR:-./certs-data}:/data/letsencrypt
    environment:
      - VIRTUAL_HOST:local.my-app.com
      - VIRTUAL_PROTO:https
      - VIRTUAL_PORT:443
    expose:
      - 80
      - 443
    depends_on:
      - wordpress
    restart: always
volumes:
  tss-data:
networks:
  default:
    external:
      name: nginx-proxy
Alas, if I try to browse to local.my-app.com on port 80, I get
503 Service Temporarily Unavailable
If I try on port 443, the nginx reverse proxy does not respond at all. I feel like I am missing something fairly obvious, but I can't seem to find it, and I would really appreciate any thoughts on the matter.

In the end, I opted not to handle SSL encryption in each individual app. Instead, I changed the reverse proxy to:
version: '3.6'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: nginx_proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
networks:
  default:
    external:
      name: nginx-proxy
So now I can reach each app on port 80 until I add a cert for it, at which point it becomes reachable on port 443.
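For context, here is a minimal sketch of what each app's compose file boils down to in this setup. The hostname is a placeholder; nginx-proxy picks up a certificate named after the VIRTUAL_HOST (e.g. local.my-app.com.crt and local.my-app.com.key) from the certs directory mounted into the proxy:
version: '3.6'
services:
  nginx:
    image: nginx:latest
    expose:
      - 80
    environment:
      # placeholder hostname; with a matching cert/key pair in ./certs on the
      # proxy side this vhost is served over HTTPS, otherwise over plain HTTP
      - VIRTUAL_HOST=local.my-app.com
networks:
  default:
    external:
      name: nginx-proxy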

Related

Different domain with different phpmyadmin service and the "same port" problem (nginx reverse proxy, docker)

I have a VPS with an nginx-proxy container, and I create WordPress websites that each come with a phpMyAdmin service. If I want to create another site with this definition, I get the "same port" problem.
OK, I can change the port to 2998 and it works fine, but then I need to open yet another port on my VPS. I don't want to add or change the port for each site.
Now:
example-a.com:2999 -> example-a phpMyAdmin login page
example-b.com:2998 -> example-b phpMyAdmin login page
Is there a way to be directed to the appropriate container by domain name?
example-a.com:2999 -> example-a phpMyAdmin login page
example-b.com:2999 -> example-b phpMyAdmin login page
My nginx-proxy definition:
networks:
  nginx-proxy:
    external: false
    name: nginx-reverse-proxy
  default:
    name: nginx-reverse-proxy-default
version: '2'
services:
  nginx-proxy:
    build:
      context: .nginx-proxy
      dockerfile: Dockerfile
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - .nginx-proxy/certs:/etc/nginx/certs:ro
      - .nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - .nginx-proxy/dhparam:/etc/nginx/dhparam
      - /usr/share/nginx/html
    networks:
      - nginx-proxy
  nginx-proxy-acme:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    restart: always
    volumes_from:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - .nginx-proxy/certs:/etc/nginx/certs:rw
      - .nginx-proxy-acme/acme:/etc/acme.sh
And this is my WordPress site definition:
version: "3.9"
volumes:
  database_volume: {}
x-logging:
  &default-logging
  driver: json-file
  options:
    max-size: '1m'
    max-file: '3'
services:
  web:
    build:
      context: ./.docker
      dockerfile: Dockerfile_web
    container_name: test_web
    ports:
      - '3000:80'
    volumes:
      - ./wp:/var/www
    depends_on:
      - database
      - php
    restart: always
    logging: *default-logging
  database:
    image: mariadb:latest
    container_name: test_database
    environment:
      MYSQL_USER: wp
      MYSQL_PASSWORD: wp
      MYSQL_DATABASE: wp
      MYSQL_ROOT_PASSWORD: wp
    volumes:
      - ./database_volume:/var/lib/mysql
    expose:
      - 3306
    restart: always
    logging: *default-logging
  php:
    build:
      context: ./.docker
      dockerfile: Dockerfile_php
    container_name: test_php
    working_dir: /var/www/
    volumes:
      - ./wordpress:/var/www
    restart: always
    logging: *default-logging
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: test_phpmyadmin
    links:
      - database:db
    ports:
      - '2999:80'
    restart: always
    logging: *default-logging
What you want is not possible, but you probably don't actually want it. It becomes clear once you think through what you want to configure, and what would happen if a user went to either URL:
you have configured example-a.com to point to your IP
you have configured example-b.com to point to your IP
you have configured your nginx-proxy container to listen on ports 80 and 443
you want to configure your WordPress containers to both listen on port 2999
you, or rather the acme-companion, have configured your nginx container to forward HTTP requests that ask for host example-a.com to the container for example A on port 2999, and requests that ask for example-b.com to container B on port 2999
Now you can see right away that you have two things attempting to listen on the same network interface on port 2999 - that doesn't work, and it can't, because something has to pick up an incoming request before it is parsed to find out which host it was meant for. Container A can't accept the request and, if it's meant for B, hand it over - A doesn't know about B.
So if you think about a user sending a request to example-a.com:2999, what really happens is that the request goes to <yourip>:2999, just like a request to example-b.com:2999 ends up going to <yourip>:2999.
How can that problem be solved? By having a third container C that accepts user requests, looks into each request, and, based on whether it was meant for container A or B, hands it over to A or B.
Here is the great thing: you already have that! Container C is really your nginx container, which is listening on ports 80/443. So if your users go to example-a.com without providing a port, the request goes to 80 or 443 (depending on whether they used http or https). Then nginx analyzes the request and sends it to the correct container. For this, it doesn't really matter what port A and B listen on, because to the outside world it looks like they are listening on 80/443.
So the real answer is that while you can't combine custom ports with virtual hosts and use the same port for multiple containers (other than 80/443), you don't actually NEED custom ports in the first place! If you just configure your containers with the default ports, users can use both https://example-a.com and https://example-b.com and it will 'just work'™.
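To make that concrete, here is a minimal sketch of what the phpMyAdmin services could look like once they rely on hostnames instead of published ports. The domains and database hostnames are placeholders, the external network name matches the one defined in the proxy file above, and nothing is published on the host besides the proxy's 80/443:
version: '3.9'
services:
  phpmyadmin-a:
    image: phpmyadmin/phpmyadmin
    environment:
      VIRTUAL_HOST: pma.example-a.com   # placeholder domain
      PMA_HOST: database-a              # placeholder database hostname
    networks:
      - nginx-proxy
  phpmyadmin-b:
    image: phpmyadmin/phpmyadmin
    environment:
      VIRTUAL_HOST: pma.example-b.com   # placeholder domain
      PMA_HOST: database-b              # placeholder database hostname
    networks:
      - nginx-proxy
networks:
  nginx-proxy:
    external: true
    name: nginx-reverse-proxy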

docker nginx reverse proxy 503 Service Temporarily Unavailable

I want to use nginx as reverse proxy for my remote home automation access.
My infrastructure YAML looks as follows:
# /infrastructure/docker-compose.yaml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: proxy
    networks:
      - raspberry_network
    ports:
      - 80:80
      - 443:443
    environment:
      - ENABLE_IPV6=true
      - DEFAULT_HOST=${RASPBERRY_IP}
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d
      - ./proxy/vhost.d:/etc/nginx/vhost.d
      - ./proxy/html:/usr/share/nginx/html
      - ./proxy/certs:/etc/nginx/certs
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
networks:
  raspberry_network:
My YAML containing the app configuration looks like this:
# /apps/docker-compose.yaml
version: '3'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/raspberrypi4-homeassistant:stable
    volumes:
      - ./homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    environment:
      - 'TZ=Europe/Berlin'
      - 'VIRTUAL_HOST=${HOMEASSISTANT_VIRTUAL_HOST}'
      - 'VIRTUAL_PORT=8123'
    deploy:
      resources:
        limits:
          memory: 250M
    restart: unless-stopped
    networks:
      - infrastructure_raspberry_network
    ports:
      - '8123:8123'
networks:
  infrastructure_raspberry_network:
    external: true
Via Portainer I validated that both containers are connected to the same network. However, when accessing the local IP of my Raspberry Pi, 192.168.0.10, I receive "503 Service Temporarily Unavailable".
Of course, when I try accessing my app via the virtual host domain xxx.xxx.de, it doesn't work either.
Any idea what the issue might be? Or any ideas on how to debug this further?
You need to specify the correct VIRTUAL_HOST in the backend's environment variables and make sure that the containers are on the same network (or Docker bridge network).
Make sure that any containers that specify VIRTUAL_HOST are running before the nginx-proxy container runs. With docker-compose, this can be achieved by adding them to the depends_on config of the nginx-proxy container.
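On the "same network" point, one way to rule out a mismatch is to give the shared network an explicit name so that both compose files refer to exactly the same thing, regardless of which directory they live in. This is only a sketch of the network sections and assumes a compose file format that supports the name key (3.5 or newer):
# /infrastructure/docker-compose.yaml (network section only)
version: '3.5'
networks:
  raspberry_network:
    name: raspberry_network   # explicit name, no project prefix
---
# /apps/docker-compose.yaml (network section only)
version: '3.5'
networks:
  infrastructure_raspberry_network:
    external: true
    name: raspberry_network   # must match the explicit name above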

How to access a Docker container without specifying its HTTP port?

I set up a Docker network with a db container, a Nextcloud container, and an nginx container. I can access the Nextcloud website at 'ip-address':8080, but I want to access it without specifying port 8080. How can I do that?
This is my docker-compose.yml:
version: '2'
volumes:
nextcloud:
db:
services:
db:
image: mariadb
restart: always
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
volumes:
- db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=
- MYSQL_PASSWORD=
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
app:
image: nextcloud:fpm
restart: always
links:
- db
volumes:
- nextcloud:/var/www/html
environment:
- MYSQL_PASSWORD=
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_HOST=db
web:
image: nginx
restart: always
ports:
- 8080:80
links:
- app
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
volumes_from:
- app
What you want is to avoid having to specify the port when you request a URI. One way to do that is to use the default port for the protocol you are using (80 for HTTP, 443 for HTTPS, 21 for FTP, etc.) and rely on your client to automatically fall back to the default port.
In a Docker Compose configuration file, the syntax for publishing a port is <host_port>:<container_port> (see the documentation). That means 8080:80 publishes port 80 from the container on port 8080 of your Docker host.
In your case, the service is exposing an HTTP server, which means you have to publish it on the default port 80 in order to omit it. Update services.web.ports[0] from 8080:80 to 80:80, and you will be able to access Nextcloud from 'ip-address'.
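That is the only change needed; the rest of the file stays as it is. A sketch of the updated web service:
services:
  web:
    image: nginx
    restart: always
    ports:
      - 80:80   # publish the container's port 80 on the host's default HTTP port
    links:
      - app
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    volumes_from:
      - app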

Docker - how to expose port thru jwilder nginx-proxy?

My problem is similar to this one, which is apparently unsolved to this day :/
I was following this tutorial to set up my Theia IDE. The IDE is working, but I want port 8080 to be open for testing the Node.js backend that I host in the Theia IDE using the terminal.
Here are the docker-compose files I used for setting up the open ports and so on:
version: '2.2'
services:
  eclipse-theia:
    restart: always
    image: theiaide/theia:latest
    init: true
    environment:
      - VIRTUAL_HOST=mydomainhere.com
      - LETSENCRYPT_HOST=mydomainhere.com

version: '2'
services:
  nginx-proxy:
    restart: always
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/etc/nginx/htpasswd:/etc/nginx/htpasswd"
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "/etc/nginx/certs"
  letsencrypt-nginx-proxy-companion:
    restart: always
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    volumes_from:
      - "nginx-proxy"
If I add expose: - "8080" to the eclipse-theia docker-compose file, I get a 502 error returned...
So that's not the way to go, I guess. I also ran netcat to check whether port 8080 was open, and it was.
UPDATE
I get the following error in the logs when I get the 502 error:
[error] 136#136: *21 no live upstreams while connecting to upstream
If I add ports: - "8080" instead, I get an HSTS error.
UPDATE 2
I tried the following config, following the advice from the answer below:
version: '2.2'
services:
  eclipse-theia:
    restart: always
    image: theiaide/theia:latest
    init: true
    environment:
      - VIRTUAL_HOST=mysubdomain1.domain.com,mysubdomain2.domain.com
      - VIRTUAL_PORT=80,8080
      - LETSENCRYPT_HOST=mysubdomain1.domain.com,mysubdomain2.domain.com
      - LETSENCRYPT_EMAIL=mymail#domain.com
But this does not appear to work either; port 8080 simply seems not to work. I also tried specifying port 8080 in the nginx-proxy config, and that does not work either :/
Do you use port 8080 on the proxy for something else?
I just use 80 and 443 on the proxy...
If you use 8080 only for your eclipse-theia, why not define
ports:
  - "8080:8080"
in the docker-compose file of Theia instead of the nginx-proxy? A sketch of that is shown after this comment.
The point of the proxy is to serve several domains/subdomains on ports 80 and 443 instead of a weird mess of ports.
I can't explain how to use it the way you described, because it makes no sense to me to use it that way, so I won't dive into that further.
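A sketch of that suggestion applied to the Theia file from the question (the domain values are the question's own placeholders; the nginx-proxy file stays untouched and keeps publishing only 80 and 443):
version: '2.2'
services:
  eclipse-theia:
    restart: always
    image: theiaide/theia:latest
    init: true
    ports:
      - "8080:8080"   # publish the backend/test port directly on the host
    environment:
      - VIRTUAL_HOST=mydomainhere.com       # the IDE itself stays behind the proxy
      - LETSENCRYPT_HOST=mydomainhere.com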
So what I had to do was set the config to:
version: '2.2'
services:
  eclipse-theia:
    restart: always
    image: theiaide/theia:latest
    init: true
    environment:
      - VIRTUAL_HOST=mysubdomain1.domain.com,mysubdomain2.domain.com
      - LETSENCRYPT_HOST=mysubdomain1.domain.com,mysubdomain2.domain.com
      - LETSENCRYPT_EMAIL=mymail#domain.com
And in the jwilder/nginx-proxy container apt is available, so just run apt install nano, open /etc/nginx/conf.d/default.conf with nano, and change the second upstream port from 3000 to 8080 - and voilà, it works!
P.S. Don't add port 8080 to the nginx-proxy config; that's completely unnecessary!

docker and jwilder/nginx-proxy http/https issue

I'm using Docker on OS X via boot2docker.
I have two hosts, site1.loc.test.com and site2.loc.test.com, pointed to the IP address of the Docker host.
Both should be available via ports 80 and 443.
So I'm using jwilder/nginx-proxy for reverse proxy purposes.
But in fact, when I run all of them via docker-compose, every time I try to open a site via port 80 I get a redirect to 443 (301 Moved Permanently).
Maybe I've missed something in the jwilder/nginx-proxy configuration?
docker-compose.yml
proxy:
image: jwilder/nginx-proxy
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- certs:/etc/nginx/certs
ports:
- "80:80"
- "443:443"
site1:
image: httpd:2.4
volumes:
- site1:/usr/local/apache2/htdocs
environment:
VIRTUAL_HOST: site1.loc.test.com
expose:
- "80"
site2:
image: httpd:2.4
volumes:
- site2:/usr/local/apache2/htdocs
environment:
VIRTUAL_HOST: site2.loc.test.com
expose:
- "80"
Just to keep this topic up to date: jwilder/nginx-proxy has meanwhile introduced a flag for exactly that, HTTPS_METHOD=noredirect, to be set as an environment variable.
Further reading on GitHub
I think your configuration should be correct, but it seems that this is the intended behaviour of jwilder/nginx-proxy. See these lines in the file nginx.tmpl: https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl#L89-L94
It seems that if a certificate is found, you will always be redirected to https.
EDIT: I found the confirmation in the documentation:
The behavior for the proxy when port 80 and 443 are exposed is as follows:
If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS is always preferred when available.
You can still use a custom configuration. You could also try to override the file nginx.tmpl in a new Dockerfile.
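If you go the custom-template route, a minimal sketch is to bind-mount your own copy of nginx.tmpl over the one shipped in the image instead of building a new one. The /app/nginx.tmpl path is an assumption based on the upstream image layout, so check it against the image version you run:
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs
      # custom template derived from the upstream nginx.tmpl, with the
      # HTTP-to-HTTPS redirect block adjusted; container path is an assumption
      - ./nginx.tmpl:/app/nginx.tmpl:ro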
To serve traffic in both SSL and non-SSL modes without redirecting to SSL, you can include the environment variable HTTPS_METHOD=noredirect (the default is HTTPS_METHOD=redirect).
HTTPS_METHOD must be specified on each container for which you want to override the default behavior.
Here is an example Docker Compose file:
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./config/certs:/etc/nginx/certs
    environment:
      DEFAULT_HOST: my.example.com
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      HTTPS_METHOD: noredirect
      VIRTUAL_HOST: my.example.com
Note: As in this example, environment variable HTTPS_METHOD must be set on the app container, not the nginx-proxy container.
Ref: How SSL Support Works section for the jwilder/nginx-proxy Docker image.
