jwilder/nginx-proxy: no access to virtual host - docker

I have a NAS behind a router. On this NAS I want to run Nextcloud and Seafile together for testing. Everything should be set up with Docker. The jwilder/nginx-proxy container does not work as expected and I cannot find helpful information. I feel I am missing something very basic.
What is working:
I have a noip.com DynDNS entry that points to my router's IP: blabla.ddns.net
The router forwards ports 22, 80 and 443 to my NAS at 192.168.1.11
A plain nginx server running on the NAS can be accessed via blabla.ddns.net; its docker-compose.yml is this:
version: '2'
services:
  nginxnextcloud:
    container_name: nginxnextcloud
    image: nginx
    restart: always
    ports:
      - "80:80"
    networks:
      - web
networks:
  web:
    external: true
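For reference, reachability can be checked with a plain request from outside the LAN (a quick sketch; blabla.ddns.net stands in for the real hostname):
# blabla.ddns.net is a placeholder for the real DynDNS name;
# expect the default nginx welcome page with HTTP 200
curl -I http://blabla.ddns.net/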
What is not working:
The same nginx server as above, but behind the nginx-proxy: I cannot access this server. Calling blabla.ddns.net gives a 503 error; calling nextcloud.blabla.ddns.net gives "page not found". Viewing the logs of the nginx-proxy via docker logs -f nginxproxy shows every test with blabla.ddns.net and its 503 answer, but when I try to access nextcloud.blabla.ddns.net, not even a log entry appears.
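Since the proxy logs nothing at all for the subdomain, one thing worth ruling out first (a hedged aside: some free DynDNS services do not resolve arbitrary subdomains of the registered name) is whether the subdomain resolves publicly at all:
# both should print the router's public IP
nslookup blabla.ddns.net
nslookup nextcloud.blabla.ddns.net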
This is the docker-compose.yml for one nginx behind a nginx-proxy:
version: '2'
services:
  nginxnextcloud:
    container_name: nginxnextcloud
    image: nginx
    restart: always
    expose:
      - 80
    networks:
      - web
    environment:
      - VIRTUAL_HOST=nextcloud.blabla.ddns.net
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginxproxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - web
networks:
  web:
    external: true
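To see what the proxy actually generates, the configuration can be dumped from the running container (container name as in the compose file above):
# dump the config that nginx-proxy generated from the docker.sock events
docker exec nginxproxy cat /etc/nginx/conf.d/default.conf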
The configuration file generated by nginx-proxy, /etc/nginx/conf.d/default.conf, contains entries for my test server:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  ''      close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
  server_name _; # This is just an invalid value which will never trigger on a real hostname.
  listen 80;
  access_log /var/log/nginx/access.log vhost;
  return 503;
}
# nextcloud.blabla.ddns.net
upstream nextcloud.blabla.ddns.net {
  ## Can be connected with "web" network
  # nginxnextcloud
  server 172.22.0.2:80;
}
server {
  server_name nextcloud.blabla.ddns.net;
  listen 80;
  access_log /var/log/nginx/access.log vhost;
  location / {
    proxy_pass http://nextcloud.blabla.ddns.net;
  }
}
Why is this minimal example not working?

Related

docker nginx dns is not recognised

I have a problem with my Docker configuration.
I would like to serve my applications under a DNS name on an Ubuntu 22.04 server, but it does not work. I run my applications from local images, and the proxy pass goes through an nginx container. I can reach the published ports without problems, but as soon as I use the DNS domain, the applications are not reachable.
docker-compose.yml
version: "3.9"
services:
test_proxy:
network_mode: bridge
image: nginx
tty: true
container_name: test_nginx
ports:
- '80:80'
- '443:443'
depends_on:
- test_client
- test_backend
networks:
- test-network
test-client:
image: test_client
container_name: test_client
ports:
- '1000:80'
restart: 'always'
networks:
- test-network
test_backend:
image: test_backend
restart: 'always'
ports:
- "2000:2000"
expose:
- "2000"
depends_on:
- test_db
networks:
- test-network
test_db:
image: mongo
ports:
- "27017:27017"
container_name: test_db
volumes:
- /test/test-db
- test_data:/test_db
networks:
- test-network
restart: always
volumes:
test_data:
networks:
test-network:
driver: bridge
nginx.conf in the nginx container:
upstream test_client {
  server test_client:80;
}
upstream test_backend {
  server test_backend:80;
}
server {
  listen 80;
  listen [::]:80;
  server_name test.eu www.test.eu;
  # ssl_certificate /run/secrets/ssl_cert;
  # ssl_certificate_key /run/secrets/ssl_key;
  # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  # ssl_ciphers HIGH:!aNULL:!MD5;
  index index.html index.htm index.nginx-debian.html;
  add_header Cache-Control 'no-store, no-cache';
  add_header X-Frame-Options "SAMEORIGIN";
  add_header X-XSS-Protection "1; mode=block";
  add_header X-Content-Type-Options "nosniff";
  expires 0;
  charset utf-8;
  root /usr/share/nginx/html;
  resolver 127.0.0.11 valid=10s;
  set $session_name nginx_session;
  location ~ /\.(?!well-known).* {
    deny all;
  }
  # frontend
  location / {
    proxy_pass http://test_client$request_uri;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
  location = /robots.txt { access_log off; log_not_found off; }
  access_log off;
  error_log /var/log/nginx/quasar.myapp.com-error.log error;
  # backend
  location /api {
    proxy_pass http://test_backend$request_uri;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
  listen 443;
  listen [::]:443;
  # redirect server error pages to the static page /50x.html
  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
I have already tried to set the resolver to resolver 127.0.0.11 ipv6=off valid=10s;
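A quick way to check whether the upstream names resolve from inside the proxy container at all (a hedged diagnostic; container and service names are taken from the compose file above):
# run from the Docker host; no output means the name does not
# resolve on the network the proxy container is attached to
docker exec test_nginx getent hosts test_client
docker exec test_nginx getent hosts test_backend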

Attempting to use docker / docker compose to set up nginx to proxy requests to different ports on localhost - what am I missing?

I am attempting to forward requests this way:
https://xxx.domain1.com -> http://localhost:3000
https://yyy.domain2.com -> http://localhost:3001
To make it easier to get nginx up and running, I'm using docker. Here is my docker-compose.yml:
version: '3.7'
services:
  proxy:
    image: nginx:alpine
    container_name: proxy
    ports:
      - '443:443'
      - '80:80'
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./.cert/cert.pem:/etc/nginx/.cert/cert.pem
      - ./.cert/key.pem:/etc/nginx/.cert/key.pem
    restart: 'unless-stopped'
    networks:
      - backend
networks:
  backend:
    driver: bridge
And here is my nginx.conf:
http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  server {
    listen 80;
    server_name yyy.domain2.com;
    chunked_transfer_encoding on;
    location / {
      proxy_pass http://localhost:3001/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
  server {
    listen 80;
    server_name xxx.domain1.com;
    chunked_transfer_encoding on;
    location / {
      proxy_pass http://localhost:3000/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}
stream {
  map $ssl_preread_server_name $name {
    xxx.domain1.com backend;
    yyy.domain2.com frontend;
  }
  upstream backend {
    server localhost:3000;
  }
  upstream frontend {
    server localhost:3001;
  }
  server {
    listen 443;
    listen [::]:443;
    proxy_pass $name;
    ssl_preread on;
    ssl_certificate ./.cert/cert.pem;
    ssl_certificate_key ./.cert/key.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
  }
}
I can access my services locally if I just open http://localhost:3000/test and http://localhost:3001/test, no problem.
But if I attempt to access with https://xxx.domain1.com/test, it spins for a while and then fails with ERR_CONNECTION_TIMED_OUT.
What am I missing?
UPDATE: I tried setting up the nginx service with a host network, but same result so far. I tried:
services:
  proxy:
    image: nginx:alpine
    # ports:
    #   - '443:443'
    #   - '80:80'
    ...
    extra_hosts:
      - "host.docker.internal:host-gateway"
and
services:
  proxy:
    image: nginx:alpine
    ports:
      - '443:443'
      - '80:80'
    ...
    network_mode: "host"
But no luck...
I think I'm missing the part about how to tell nginx to forward the request to the host instead of to localhost inside its own container.
But how do I fix that?
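If I understand it correctly, it would look something like this (an untested sketch; it assumes Docker 20.10+, where host-gateway is supported in extra_hosts): keep the extra_hosts mapping from the first attempt and point proxy_pass at host.docker.internal instead of localhost:
location / {
  # inside the container, "localhost" is the container itself;
  # host.docker.internal (mapped via extra_hosts) reaches the Docker host
  proxy_pass http://host.docker.internal:3000/;
  proxy_set_header Host $host;
}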
Thanks,
Eduardo

jwilder/nginx-proxy with Cloudflare SSL doesn't work

I'm having a problem using jwilder/nginx-proxy with Cloudflare SSL (origin key, FULL SSL mode).
Everything works fine (over HTTP) until I activate Cloudflare's DNS proxy; then the server returns 521 (Web Server Down).
Here's my docker-compose.yaml
version: "2"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
ports:
- 80:80
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./certs:/etc/nginx/certs
network_mode: bridge
saraswati-global:
image: asia.gcr.io/ordent-production/ordent/saraswati-global
ports:
- 3000:3000
environment:
- VIRTUAL_HOST=beta.saraswati.global
- VIRTUAL_PORT=3000
- VIRTUAL_PROTO=https
network_mode: bridge
api-healed-id:
image: asia.gcr.io/ordent-production/ordent/api.healed.id
ports:
- 4001:4001
environment:
- VIRTUAL_HOST=dev.healed.id
- VIRTUAL_PORT=4001
- VIRTUAL_PROTO=https
network_mode: bridge
Maybe you could help me with the configuration.
Here's the nginx configuration created by the above config:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  ''      close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 172.26.0.2;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
  server_name _; # This is just an invalid value which will never trigger on a real hostname.
  listen 80;
  access_log /var/log/nginx/access.log vhost;
  return 503;
}
# beta.saraswati.global
upstream beta.saraswati.global {
  ## Can be connected with "bridge" network
  # ordent-production-host_saraswati-global_1
  server 172.17.0.3:3000;
}
server {
  server_name beta.saraswati.global;
  listen 80;
  access_log /var/log/nginx/access.log vhost;
  location / {
    proxy_pass https://beta.saraswati.global;
  }
}
# dev.healed.id
upstream dev.healed.id {
  ## Can be connected with "bridge" network
  # ordent-production-host_api-healed-id_1
  server 172.17.0.4:4001;
}
server {
  server_name dev.healed.id;
  listen 80;
  access_log /var/log/nginx/access.log vhost;
  location / {
    proxy_pass https://dev.healed.id;
  }
}
The issue is caused by this part of the nginx-proxy service definition:
ports:
  - 80:80
Since you enabled SSL on Cloudflare, traffic will arrive on port 443, not 80. So nginx-proxy needs to listen on port 443, and the correct mapping is:
ports:
  - 443:443
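In practice you will likely want both mappings, plus the certificates in a place nginx-proxy can find them. A sketch (per the nginx-proxy docs, certificates in /etc/nginx/certs are matched to VIRTUAL_HOST by file name, e.g. beta.saraswati.global.crt and beta.saraswati.global.key for the Cloudflare origin certificate):
nginx-proxy:
  image: jwilder/nginx-proxy:alpine
  ports:
    - 80:80    # keep plain HTTP reachable
    - 443:443  # required once Cloudflare connects over TLS
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - ./certs:/etc/nginx/certs:ro  # <VIRTUAL_HOST>.crt / <VIRTUAL_HOST>.key per host
  network_mode: bridge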

nginx docker compose redirect delay

I'm seeing particularly odd behaviour with my docker-compose and nginx setup. I am trying to have nginx proxy_pass requests to a backend web service. The web service is Spring Boot, but I don't believe that's relevant. The web service issues a 302 redirect to unauthenticated users, sending them to a /login page. This all works as expected except for an indeterminate period of time after I bring the compose up: early requests that trigger a 302 response time out, while requests made directly to the /login page return immediately as expected. After an indeterminate period, usually minutes, something seems to stabilise and everything works as expected. I've verified the behaviour both with Chrome from a client machine and with curl directly on the host running the compose. I believe the 302 responses are somehow getting dropped, but I'm not sure.
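For example, this is roughly how I compare the two cases with curl once the stack is up (the hostname is a placeholder):
# myhost is a placeholder; report status code and total time.
# The 302 case hangs early on, the direct /login case returns at once.
curl -sk -o /dev/null -w '%{http_code} %{time_total}s\n' https://myhost/
curl -sk -o /dev/null -w '%{http_code} %{time_total}s\n' https://myhost/login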
Can anyone spot a problem?
version: '3.5'
services:
  proxy:
    image: nginx:latest
    container_name: "proxy"
    restart: always
    volumes:
      - blah:/etc/nginx
    ports:
      - 80:80
      - 443:443
  webservice:
    image: "webservice:latest"
    container_name: "webservice"
    restart: always
    ports:
      - 8080:8080
Nginx Config:
worker_processes auto;
events { }
http {
  server {
    listen 80 default_server;
    listen 443 default_server ssl;
    ssl_certificate /etc/nginx/cert;
    ssl_certificate_key /etc/nginx/key;
    if ($scheme = http) {
      return 301 https://$host:443$request_uri;
    }
    gzip on;
    gzip_types text/plain application/xml application/json application/javascript;
    location / {
      proxy_http_version 1.1;
      proxy_pass http://webservice:8080/;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header Host $host:$server_port;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Forwarded-Port $server_port;
      access_log /etc/nginx/log/access.log combined;
      error_log /etc/nginx/log/error.log warn;
    }
  }
}

Nginx as reverse proxy server for Nexus - can't connect in docker environment

I have an environment built on Docker containers (in boot2docker). I use the following docker-compose.yml file to quickly set up nginx and Nexus servers:
version: '3.2'
services:
  nexus:
    image: stefanprodan/nexus
    container_name: nexus
    ports:
      - 8081:8081
      - 5000:5000
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - 5043:443
    volumes:
      - /opt/dm/nginx2/nginx.conf:/etc/nginx/nginx.conf:ro
Nginx has the following configuration (nginx.conf):
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
  worker_connections 1024;
}
http {
  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log /var/log/nginx/access.log main;
  proxy_send_timeout 120;
  proxy_read_timeout 300;
  proxy_buffering off;
  keepalive_timeout 5 5;
  tcp_nodelay on;
  server {
    listen 80;
    server_name demo.com;
    return 301 https://$server_name$request_uri;
  }
  server {
    listen 443 ssl;
    server_name demo.com;
    # allow large uploads of files - refer to nginx documentation
    client_max_body_size 1024m;
    # optimize downloading files larger than 1G - refer to nginx doc before adjusting
    #proxy_max_temp_file_size 2048m
    #ssl on;
    #ssl_certificate /etc/nginx/ssl.crt;
    #ssl_certificate_key /etc/nginx/ssl.key;
    location / {
      proxy_pass http://nexus:8081/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto "https";
    }
  }
}
Nexus seems to work very well: running curl http://localhost:8081 on the Docker host machine successfully returns the HTML of the Nexus login page. Now I want to test the nginx server. It is configured to listen on port 443, but SSL is disabled for now (I wanted to test things before diving into the SSL configuration). As you can see, my nginx container maps container port 443 to host port 5043, so I try the following command: curl -v http://localhost:5043/. I expect my HTTP request to be sent to nginx and proxied to Nexus per proxy_pass http://nexus:8081/;. The nexus hostname is visible within the Docker container network and is accessible from the nginx container. Unfortunately, the response I receive is:
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5043 (#0)
> GET / HTTP/1.1
> Host: localhost:5043
> User-Agent: curl/7.49.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
I checked the nginx logs (error and access), but they are empty. Can somebody help me solve this problem? It should be a simple example of proxying requests, but maybe I've misunderstood some concept.
Do you have an upstream directive in your nginx conf (placed within the http directive)?
upstream nexus {
  server <Nexus_IP>:<Nexus_Port>;
}
Only then can nginx resolve it correctly. The docker-compose service name nexus is not injected into the nginx container at runtime.
You can try links in docker-compose:
https://docs.docker.com/compose/compose-file/#links
This gives you an alias for the linked container in your /etc/hosts. But you still need an upstream directive. Update: if the name is resolvable, you can also use it directly in nginx directives such as location.
https://serverfault.com/questions/577370/how-can-i-use-environment-variables-in-nginx-conf
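If the name is resolvable, a resolver plus a variable makes nginx look the name up at request time rather than at startup (a sketch of that pattern, using Docker's embedded DNS at 127.0.0.11):
# Docker's embedded DNS server
resolver 127.0.0.11 valid=10s;
location / {
  # using a variable defers DNS resolution to request time,
  # so nginx also starts even if the upstream container is down
  set $nexus_upstream http://nexus:8081;
  proxy_pass $nexus_upstream;
}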
As #arnold's answer says, you are missing the upstream configuration in your nginx. I see you are using the stefanprodan Nexus image; see his blog for the full configuration. Below you can find mine (remember to expose Nexus ports 8081 and 5000 even though the entry point is 443). You also need to include the certificate, because the Docker client requires working SSL:
worker_processes 2;
events {
  worker_connections 1024;
}
http {
  error_log /var/log/nginx/error.log warn;
  access_log /dev/null;
  proxy_intercept_errors off;
  proxy_send_timeout 120;
  proxy_read_timeout 300;
  upstream nexus {
    server nexus:8081;
  }
  upstream registry {
    server nexus:5000;
  }
  server {
    listen 80;
    listen 443 ssl default_server;
    server_name <yourdomain>;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    ssl_certificate /etc/letsencrypt/live/<yourdomain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<yourdomain>/privkey.pem;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    keepalive_timeout 5 5;
    proxy_buffering off;
    # allow large uploads
    client_max_body_size 1G;
    location / {
      # redirect to docker registry
      if ($http_user_agent ~ docker ) {
        proxy_pass http://registry;
      }
      proxy_pass http://nexus;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto "https";
    }
  }
}
The certificates are generated using letsencrypt or certbot. The rest of the configuration is there to get an A+ in the ssllabs analysis, as explained here.
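For example, a standalone certificate can be issued like this (a minimal sketch; certbot must be able to bind port 80 on the host, and <yourdomain> is a placeholder):
# <yourdomain> is a placeholder; certbot's standalone mode needs port 80 free
certbot certonly --standalone -d <yourdomain>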
Port 5000 in your docker-compose is not EXPOSEd by the image, so you cannot connect to it; the mapping
ports:
  - 8081:8081
  - 5000:5000
is not effective on its own.
You can fix it like this:
Build a new image from a Dockerfile that exposes port 5000 (mine is feibor/nexus:3.16.2-1, used below):
FROM sonatype/nexus3:3.16.2
EXPOSE 5000
Then use the new image to start the container and publish the ports:
version: "3.7"
services:
nexus:
image: 'feibor/nexus:3.16.2-1'
deploy:
placement:
constraints:
- node.hostname == node1
restart_policy:
condition: on-failure
ports:
- 8081:8081/tcp
- 5000:5000/tcp
volumes:
- /mnt/home/opt/nexus/nexus-data:/nexus-data:z
