I have a problem with my Docker configuration.
I would like to set up a DNS name on an Ubuntu 22.04 server, but I can't get it to work. I run my applications from local images, and the proxy pass goes through an NGINX container. I can reach the services via their ports without problems, but as soon as I use the DNS domain, the site is not reachable.
docker-compose.yml
version: "3.9"
services:
  test_proxy:
    network_mode: bridge
    image: nginx
    tty: true
    container_name: test_nginx
    ports:
      - '80:80'
      - '443:443'
    depends_on:
      - test_client
      - test_backend
    networks:
      - test-network
  test-client:
    image: test_client
    container_name: test_client
    ports:
      - '1000:80'
    restart: 'always'
    networks:
      - test-network
  test_backend:
    image: test_backend
    restart: 'always'
    ports:
      - "2000:2000"
    expose:
      - "2000"
    depends_on:
      - test_db
    networks:
      - test-network
  test_db:
    image: mongo
    ports:
      - "27017:27017"
    container_name: test_db
    volumes:
      - /test/test-db
      - test_data:/test_db
    networks:
      - test-network
    restart: always
volumes:
  test_data:
networks:
  test-network:
    driver: bridge
nginx.conf in nginx container
upstream test_client {
    server test_client:80;
}

upstream test_backend {
    server test_backend:80;
}

server {
    listen 80;
    listen [::]:80;

    server_name test.eu www.test.eu;

    # ssl_certificate /run/secrets/ssl_cert;
    # ssl_certificate_key /run/secrets/ssl_key;
    # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # ssl_ciphers HIGH:!aNULL:!MD5;

    index index.html index.htm index.nginx-debian.html;

    add_header Cache-Control 'no-store, no-cache';
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    expires 0;
    charset utf-8;

    root /usr/share/nginx/html;

    resolver 127.0.0.11 valid=10s;

    set $session_name nginx_session;

    location ~ /\.(?!well-known).* {
        deny all;
    }

    # frontend
    location / {
        proxy_pass http://test_client$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location = /robots.txt { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/quasar.myapp.com-error.log error;

    # backend
    location /api {
        proxy_pass http://test_backend$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443;
    listen [::]:443;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I have already tried setting the resolver to resolver 127.0.0.11 ipv6=off valid=10s;
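For what it's worth, a few things in the files above look inconsistent, and each could explain the symptom. As a sketch only (all names taken from the question, not a verified fix): `network_mode: bridge` cannot be combined with a `networks:` list on the same service, `depends_on` refers to `test_client` while the service is actually named `test-client` (Compose resolves `depends_on` by service name, not container name), and the `test_backend` upstream in nginx.conf targets port 80 although the backend listens on 2000:

```yaml
# Sketch, not a verified fix -- names are from the question above.
services:
  test_proxy:
    # network_mode: bridge   # removed: conflicts with the "networks:" list below
    image: nginx
    depends_on:
      - test-client          # must match the service name, not the container name
      - test_backend
    networks:
      - test-network
```

In nginx.conf the backend upstream would then presumably become `server test_backend:2000;`, matching the port the backend container actually exposes.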
I am attempting to forward requests this way:
https://xxx.domain1.com -> http://localhost:3000
https://yyy.domain2.com -> http://localhost:3001
To make it easier to get nginx up and running, I'm using Docker. Here is my docker-compose.yml:
version: '3.7'
services:
  proxy:
    image: nginx:alpine
    container_name: proxy
    ports:
      - '443:443'
      - '80:80'
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./.cert/cert.pem:/etc/nginx/.cert/cert.pem
      - ./.cert/key.pem:/etc/nginx/.cert/key.pem
    restart: 'unless-stopped'
    networks:
      - backend
networks:
  backend:
    driver: bridge
And here is my nginx.conf:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name yyy.domain2.com;
        chunked_transfer_encoding on;

        location / {
            proxy_pass http://localhost:3001/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

    server {
        listen 80;
        server_name xxx.domain1.com;
        chunked_transfer_encoding on;

        location / {
            proxy_pass http://localhost:3000/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
stream {
    map $ssl_preread_server_name $name {
        xxx.domain1.com backend;
        yyy.domain2.com frontend;
    }

    upstream backend {
        server localhost:3000;
    }

    upstream frontend {
        server localhost:3001;
    }

    server {
        listen 443;
        listen [::]:443;
        proxy_pass $name;
        ssl_preread on;
        ssl_certificate ./.cert/cert.pem;
        ssl_certificate_key ./.cert/key.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    }
}
I can access my services locally if I just open http://localhost:3000/test and http://localhost:3001/test, no problem.
But if I attempt to access with https://xxx.domain1.com/test, it spins for a while and then fails with ERR_CONNECTION_TIMED_OUT.
What am I missing?
UPDATE: I tried setting up the nginx service with a host network, but same result so far. I tried:
services:
  proxy:
    image: nginx:alpine
    # ports:
    #   - '443:443'
    #   - '80:80'
    ...
    extra_hosts:
      - "host.docker.internal:host-gateway"
and
services:
  proxy:
    image: nginx:alpine
    ports:
      - '443:443'
      - '80:80'
    ...
    network_mode: "host"
But no luck...
I think I'm missing the part about how to tell nginx to forward the request to the host, instead of to localhost inside its own container.
But how to fix that?
Thanks,
Eduardo
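One common pattern for this situation (a sketch only, not verified against the setup above: it assumes the apps really listen on the host's ports 3000/3001 and that the `extra_hosts: "host.docker.internal:host-gateway"` entry from the first attempt is kept) is to proxy to `host.docker.internal` instead of `localhost`, because inside the container `localhost` is the nginx container itself:

```nginx
# Sketch: inside the proxy container, "localhost" resolves to the container,
# not the Docker host. host.docker.internal (mapped to the host gateway via
# extra_hosts) reaches services bound on the host's ports.
location / {
    proxy_pass http://host.docker.internal:3001/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

The `localhost:3000`/`localhost:3001` entries in the `stream` upstreams would need the same substitution for the TLS path to work.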
I have 2 domains pointing to one Ubuntu 21 virtual private server. The first domain (running on port 3000) works as expected; the second domain (running on port 4000 in the container and 5000 on the host) does not, and returns nginx 502 Bad Gateway. I have added port 4000 pointing to 80 on the nginx container:
I have configured like below:
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "3000:80" # nginx listens on 80
      - "4000:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  pwm-node:
    build: .
    image: my_acc/pwm-node
    environment:
      - PORT=3000
    depends_on:
      - mongo
  mongo:
    image: mongo
    volumes:
      - mongo-db:/data/db
  redis:
    image: redis
volumes:
  mongo-db:
nginx conf:
server {
    listen 80;
    server_name first_domain.com www.first_domain.com;

    # Redirect http to https
    location / {
        return 301 https://first_domain.com$request_uri;
    }
}

server {
    listen 80;
    server_name second_domain.com www.second_domain.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:4000;
        proxy_redirect off;
    }
}

server {
    listen 443 ssl http2;
    server_name first_domain.com www.first_domain.com;

    ssl on;
    server_tokens off;
    ssl_certificate /etc/nginx/ssl/live/first_domain.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/first_domain.com/privkey.pem;
    ssl_dhparam /etc/nginx/dhparam/dhparam-2048.pem;
    ssl_buffer_size 8k;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://pwm-node:3000;
        proxy_redirect off;
    }
}
It looks like nginx does not accept http://localhost:4000;. I may have to add the second app to docker-compose.yml as a service (e.g. node-app-4000) and replace localhost with that service name.
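That suspicion points in the right direction: inside the nginx container, `localhost:4000` is the nginx container itself, not the host and not the second app, which is consistent with the 502. A sketch of the direction described above (the service name `node-app-4000` and its image are hypothetical placeholders; the working `pwm-node` service already follows this pattern):

```yaml
# Sketch: run the second app as a Compose service so nginx can reach it
# by service name on the shared network, mirroring how pwm-node works.
services:
  node-app-4000:              # hypothetical name for the second app
    image: my_acc/node-app-4000
    environment:
      - PORT=4000
```

The second server block would then use `proxy_pass http://node-app-4000:4000;` in place of `proxy_pass http://localhost:4000;`.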
I'm having a problem using jwilder/nginx-proxy with Cloudflare SSL (origin key, Full SSL mode).
Everything works fine (over HTTP) until I activate Cloudflare's DNS proxy, at which point the server returns 521 (Web Server Down).
Here's my docker-compose.yaml
version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs
    network_mode: bridge
  saraswati-global:
    image: asia.gcr.io/ordent-production/ordent/saraswati-global
    ports:
      - 3000:3000
    environment:
      - VIRTUAL_HOST=beta.saraswati.global
      - VIRTUAL_PORT=3000
      - VIRTUAL_PROTO=https
    network_mode: bridge
  api-healed-id:
    image: asia.gcr.io/ordent-production/ordent/api.healed.id
    ports:
      - 4001:4001
    environment:
      - VIRTUAL_HOST=dev.healed.id
      - VIRTUAL_PORT=4001
      - VIRTUAL_PROTO=https
    network_mode: bridge
Maybe you could help me with the configuration. Here's the nginx configuration generated by the above config:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;

# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;

# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

access_log off;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;

resolver 172.26.0.2;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;

# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}

# beta.saraswati.global
upstream beta.saraswati.global {
    ## Can be connected with "bridge" network
    # ordent-production-host_saraswati-global_1
    server 172.17.0.3:3000;
}

server {
    server_name beta.saraswati.global;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass https://beta.saraswati.global;
    }
}

# dev.healed.id
upstream dev.healed.id {
    ## Can be connected with "bridge" network
    # ordent-production-host_api-healed-id_1
    server 172.17.0.4:4001;
}

server {
    server_name dev.healed.id;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass https://dev.healed.id;
    }
}
The issue is caused by this part of the nginx-proxy service definition:

    ports:
      - 80:80

Since you enabled SSL on Cloudflare, Cloudflare connects to your origin on port 443, not 80. So nginx-proxy needs to listen on port 443, and the correct mapping is:

    ports:
      - 443:443
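Putting that fix together, a sketch of the nginx-proxy service (based on the compose file in the question; keeping port 80 alongside 443 is an assumption so plain-HTTP requests can still be handled, and the cert naming convention is nginx-proxy's own, which looks for files named after each `VIRTUAL_HOST` in `/etc/nginx/certs`):

```yaml
# Sketch -- keep 80 for plain HTTP, add 443 for Cloudflare Full SSL.
nginx-proxy:
  image: jwilder/nginx-proxy:alpine
  ports:
    - "80:80"
    - "443:443"     # Cloudflare's Full SSL mode connects to the origin on 443
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - ./certs:/etc/nginx/certs   # e.g. beta.saraswati.global.crt / .key
  network_mode: bridge
```

With certs present under those names, nginx-proxy generates `listen 443 ssl` server blocks automatically.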
I have a NAS behind a router. On this NAS I want to run Nextcloud and Seafile together for testing. Everything should be set up with Docker. The jwilder/nginx-proxy container does not work as expected and I cannot find helpful information. I feel I am missing something very basic.
What is working:
I have a noip.com DynDNS that points to my routers ip: blabla.ddns.net
The router forwards ports 22, 80 and 443 to my NAS at 192.168.1.11
A plain nginx server running on the NAS can be accessed via blabla.ddns.net, its docker-compose.yml is this:
version: '2'
services:
  nginxnextcloud:
    container_name: nginxnextcloud
    image: nginx
    restart: always
    ports:
      - "80:80"
    networks:
      - web
networks:
  web:
    external: true
What is not working:
The same nginx server as above, but behind the nginx-proxy. I cannot access this server. Calling blabla.ddns.net gives a 503 error, and calling nextcloud.blabla.ddns.net gives "page not found". Viewing the logs of the nginx-proxy via docker logs -f nginxproxy shows every test with blabla.ddns.net and its 503 answer, but when I try to access nextcloud.blabla.ddns.net, not even a log entry occurs.
This is the docker-compose.yml for one nginx behind a nginx-proxy:
version: '2'
services:
  nginxnextcloud:
    container_name: nginxnextcloud
    image: nginx
    restart: always
    expose:
      - 80
    networks:
      - web
    environment:
      - VIRTUAL_HOST=nextcloud.blabla.ddns.net
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginxproxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - web
networks:
  web:
    external: true
The generated configuration file for nginx-proxy /etc/nginx/conf.d/default.conf contains entries for my test server:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}

# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}

# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}

# Apply fix for very long server names
server_names_hash_bucket_size 128;

# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;

# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https on;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';

access_log off;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;

resolver 127.0.0.11;

# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;

# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";

server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}

# nextcloud.blabla.ddns.net
upstream nextcloud.blabla.ddns.net {
    ## Can be connected with "web" network
    # nginxnextcloud
    server 172.22.0.2:80;
}

server {
    server_name nextcloud.blabla.ddns.net;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://nextcloud.blabla.ddns.net;
    }
}
Why is this minimal example not working?
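One detail in the description above stands out: nginx-proxy logs every request to blabla.ddns.net but not a single entry appears for nextcloud.blabla.ddns.net, which suggests those requests never reach the proxy at all, pointing at DNS rather than the nginx configuration. Two quick checks (a sketch; the hostnames and NAS IP are the ones from the question, and the free-DynDNS observation is an assumption to verify):

```
# 1. Does the subdomain resolve at all? Free DynDNS plans often register only
#    the exact hostname, with no wildcard for subdomains like nextcloud.*.
nslookup nextcloud.blabla.ddns.net

# 2. Bypass DNS entirely: send the expected Host header straight to the NAS.
#    If this returns the nginx welcome page, the proxy setup itself is fine.
curl -H 'Host: nextcloud.blabla.ddns.net' http://192.168.1.11/
```

If check 1 fails while check 2 succeeds, the fix is on the DNS side (a second DynDNS hostname or a wildcard record), not in docker-compose.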
I've published my API, ID server (STS), and web UI in separate Docker containers, and I'm using an nginx container as the reverse proxy to serve these apps. I can browse to each one of them and even open the discovery endpoint for the STS. The problem comes when I try to log in to the web portal: it redirects me back to the STS for logging in, but I get ERR_CONNECTION_REFUSED. The URL looks okay, so I think it's the STS that is not reachable from the redirect issued by the web UI.
My docker-compose is as below:
version: '3.4'
services:
  reverseproxy:
    container_name: reverseproxy
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./proxy.conf:/etc/nginx/proxy.conf
      - ./cert:/etc/nginx
    ports:
      - 8080:8080
      - 8081:8081
      - 8082:8082
      - 443:443
    restart: always
    links:
      - sts
  sts:
    container_name: sts
    image: idsvrsts:latest
    links:
      - localdb
    expose:
      - "8080"
  kernel:
    container_name: kernel
    image: kernel_api:latest
    depends_on:
      - localdb
    links:
      - localdb
  portal:
    container_name: portal
    image: webportal:latest
    environment:
      - TZ=Europe/Moscow
    depends_on:
      - localdb
      - sts
      - kernel
      - reverseproxy
  localdb:
    image: mcr.microsoft.com/mssql/server
    container_name: localdb
    environment:
      - 'MSSQL_SA_PASSWORD=password'
      - 'ACCEPT_EULA=Y'
      - TZ=Europe/Moscow
    ports:
      - "1433:1433"
    volumes:
      - "sqldatabasevolume:/var/opt/mssql/data/"
volumes:
  sqldata:
And this is the nginx.conf:
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-sts {
        server sts:8080;
    }

    upstream docker-kernel {
        server kernel:8081;
    }

    upstream docker-portal {
        server portal:8081;
    }

    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_certificate cert.pem;
    ssl_certificate_key key.pem;
    ssl_password_file global.pass;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Forwarded-Proto $scheme;

    server {
        listen 8080;
        listen [::]:8080;
        server_name sts;

        location / {
            proxy_pass http://docker-sts;
            # proxy_redirect off;
        }
    }

    server {
        listen 8081;
        listen [::]:8081;
        server_name kernel;

        location / {
            proxy_pass http://docker-kernel;
        }
    }

    server {
        listen 8082;
        listen [::]:8082;
        server_name portal;

        location / {
            proxy_pass http://docker-portal;
        }
    }
}
The web UI redirects to the URL below, which works okay if I browse to it using the STS server directly, without nginx.
http://localhost/connect/authorize?client_id=myclient.id&redirect_uri=http%3A%2F%2Flocalhost%3A22983%2Fstatic%2Fcallback.html&response_type=id_token%20token&scope=openid%20profile%20kernel.api&state=f919149753884cb1b8f2b907265dfb8f&nonce=77806d692a874244bdbb12db5be40735
Found the issue. The containers could not see each other because nginx was not appending the port to the URL.

I changed this:

    proxy_set_header Host $host;

to this:

    proxy_set_header Host $host:$server_port;