Bind server running in docker to domain

Background - the Web App
I've got a containerized app running in docker on an Ubuntu host on port 8090. Here's the docker compose file that ties together the backend, the Postgres server and the Vue+Nginx frontend:
version: "3.8"
services:
  # DATABASE BACKEND
  use_db:
    container_name: use_db
    image: postgres:14.2
    expose:
      - "5433"
    ports:
      - "5433:5433"
    environment:
      # POSTGRES_HOST_AUTH_METHOD: "trust"
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "blabla"
      POSTGRES_DB: "use_db"
    command: "-p 5433"
    restart: always
    volumes:
      - db:/var/lib/postgresql/data

  # FRONT END (EXPOSED TO THE INTERNET)
  use_frontend:
    container_name: 'use_frontend'
    build:
      context: ./admin
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - use_backend
    ports:
      - 8090:80  # port forwarding = HOST:CONTAINER

  # BACKEND (FASTAPI)
  use_backend:
    container_name: 'use_backend'
    build:
      context: ./api
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - use_db
    environment:
      DATABASE_URL: "postgres://....."
      HOST_LOCATION: "http://<HOST IP>:8090"
    command: gunicorn --bind 0.0.0.0:8000 -k uvicorn.workers.UvicornWorker main:app

volumes:
  db:
    driver: local
So when the docker containers are started with docker compose up -d, I can access the web app at <HOST>:8090.
Inside the frontend container, the Nginx conf looks like this:
events {}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        listen 80;
        root /usr/share/nginx/html;
        include /etc/nginx/mime.types;
        client_max_body_size 20M;

        location / {
            try_files $uri /index.html;
        }

        location /api/ {
            proxy_pass http://use_backend:8000/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 1800;
            proxy_connect_timeout 1800;
        }

        location /ws/ {
            proxy_pass http://use_backend:8000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 1800;
            proxy_connect_timeout 1800;
        }
    }
}
Goal
Now my next goal is to access the web app via a normal URL. The host machine has a paid domain name tied to one of its user accounts, let's call it example.com. So there's a dummy index.html sitting in /home/example.com/ that can be replaced with a real web app to be accessed from the Internet as https://example.com.
There's also a Nginx server running directly on the host whose config is located in /etc/nginx/nginx.conf and is as follows:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 512M;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    # include /etc/nginx/sites-enabled/*;
}
When I check the open ports containing 80 (lsof -n -i -P | grep 80) I get:
nginx 167921 root 11u IPv4 1381601 0t0 TCP *:80 (LISTEN)
nginx 167922 www-data 11u IPv4 1381601 0t0 TCP *:80 (LISTEN)
nginx 167923 www-data 11u IPv4 1381601 0t0 TCP *:80 (LISTEN)
nginx 167924 www-data 11u IPv4 1381601 0t0 TCP *:80 (LISTEN)
nginx 167925 www-data 11u IPv4 1381601 0t0 TCP *:80 (LISTEN)
Which confirms that the Nginx service is running on the host listening on port 80.
The Big Question
How do I bind my docker app (running on port 8090) to the host domain example.com (so it is served on the default ports, HTTP 80 / HTTPS 443) and access the app from https://example.com?

You can do one of two things:

Stop nginx on your host and publish your Docker container on host ports 80 and 443:

ports:
  - 80:80
  - 443:443

This assumes that your Docker application already has a TLS listener on port 443 (whether it does or not is not clear from your question).

Alternatively, configure nginx to proxy requests to your container, e.g. by adding to your nginx configuration:

location / {
    proxy_pass http://localhost:8090/;
}

In this case, you would configure nginx to listen for TLS connections on port 443 and have a proxy stanza in both your HTTP and HTTPS listeners.
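For the proxying approach, a minimal sketch of a vhost file for example.com might look like the following. Since your nginx.conf already includes /etc/nginx/conf.d/*.conf, you could drop it in there; the certificate paths assume certbot-issued certificates and are hypothetical, so adjust them to your setup:

```nginx
server {
    listen 80;
    server_name example.com www.example.com;
    # redirect all plain-HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    # assumed certbot paths - adjust to wherever your certificates live
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # the frontend container publishes port 8090 on the host (8090:80)
        proxy_pass http://127.0.0.1:8090/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With this in place, TLS terminates at the host nginx and the container keeps serving plain HTTP on 8090, so nothing inside the compose stack needs to change.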

Related

Why is https not working for my site hosted in docker?

I have a site running in docker with 4 containers: a React front end, a .NET backend, a SQL Server database, and an nginx server. My docker compose file looks like this:
version: '3'
services:
  sssfe:
    image: mydockerhub:myimage-fe-1.3
    ports:
      - 9000:9000
    volumes:
      - sssfev:/usr/share/nginx/html
    depends_on:
      - sssapi
  sssapi:
    image: mydockerhub:myimage-api-1.3
    environment:
      - SQL_CONNECTION=myconnection
    ports:
      - 44384:44384
    depends_on:
      - jbdatabase
  jbdatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypass
    volumes:
      - dbdata:/var/opt/mssql
    ports:
      - 1433:1433
  reverseproxy:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - example_certbot-etc:/etc/letsencrypt
    links:
      - sssfe
  certbot:
    depends_on:
      - reverseproxy
    image: certbot/certbot
    container_name: certbot
    volumes:
      - example_certbot-etc:/etc/letsencrypt
      - sssfev:/usr/share/nginx/html
    command: certonly --webroot --webroot-path=/usr/share/nginx/html --email myemail --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com

volumes:
  example_certbot-etc:
    external: true
  dbdata:
  sssfev:
I was following this link and am using certbot and Let's Encrypt for the certificate. My nginx conf file is this:
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /usr/share/nginx/html;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    index index.html index.htm;
    root /usr/share/nginx/html;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/nginx/conf.d/options-ssl-nginx.conf;

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
    # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    # enable strict transport security only if you understand the implications

    location = /favicon.ico {
        log_not_found off; access_log off;
    }
    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
My issue is that https doesn't work for my site. When I hit https://example.com, I get ERR_CONNECTION_REFUSED. The non-https site resolves and works fine, however. I can't figure out what's going on. It looks like the ssl port is open and nginx is listening on it:
ss -tulpn | grep LISTEN
tcp LISTEN 0 128 *:9000 *:* users:(("docker-proxy",pid=18336,fd=4))
tcp LISTEN 0 128 *:80 *:* users:(("docker-proxy",pid=18464,fd=4))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=420,fd=4))
tcp LISTEN 0 128 *:1433 *:* users:(("docker-proxy",pid=18152,fd=4))
tcp LISTEN 0 128 *:443 *:* users:(("docker-proxy",pid=18452,fd=4))
tcp LISTEN 0 128 *:44384 *:* users:(("docker-proxy",pid=18243,fd=4))
And my containers:
reverseproxy 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
sssfe 80/tcp, 0.0.0.0:9000->9000/tcp
sssapi 0.0.0.0:44384->44384/tcp
database 0.0.0.0:1433->1433/tcp
I'm assuming it's an issue with my nginx config, but I'm new to this and not sure where to go from here.
If you need to support SSL, please do this:
mkdir /opt/docker/nginx/conf.d -p
touch /opt/docker/nginx/conf.d/nginx.conf
mkdir /opt/docker/nginx/cert -p
then
vim /opt/docker/nginx/conf.d/nginx.conf
If you need to force the redirection to https when accessing http:
server {
    listen 443 ssl;
    server_name example.com www.example.com;  # domain

    # Pay attention to the file location, starting from /etc/nginx/
    ssl_certificate 1_www.example.com_bundle.crt;
    ssl_certificate_key 2_www.example.com.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;
    client_max_body_size 1024m;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # intranet address
        proxy_pass http://172.17.0.8:9090;  # change it
    }
}

server {
    listen 80;
    server_name example.com www.example.com;  # the domain name the certificate is bound to
    # convert HTTP requests to https
    return 301 https://$host$request_uri;
}
docker run -itd --name nginx -p 80:80 -p 443:443 -v /opt/docker/nginx/conf.d/nginx.conf:/etc/nginx/conf.d/nginx.conf -v /opt/docker/nginx/cert:/etc/nginx -m 100m nginx
After startup, run docker ps to check whether the container started successfully, and docker logs nginx to view the logs.

Gitlab vs Registry in docker container behind proxy ERROR

I'm trying to enable the gitlab registry running in docker behind an nginx proxy on a centos lxd container :)
Nginx configuration on centos:
server {
    listen *:80;
    server_name registry.site.name;
    return 301 https://$server_name$request_uri;
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;
}

server {
    listen 443 ssl http2;
    server_name registry.site.name;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/site.name/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site.name/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security "max-age=63072000" always;

    location / {
        proxy_pass http://localhost:8085;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Url-Scheme $scheme;
    }
}
Gitlab.rb configuration
registry_external_url 'https://registry.site.name'
gitlab_rails['registry_enabled'] = true
registry['enable'] = true
registry['registry_http_addr'] = "git.site.name:8085" # (it is the same as gitlab ip - 172.17.0.3:8085)
registry_nginx['enable'] = false
Docker-compose
version: '2.3'
services:
web:
image: 'gitlab/gitlab-ce:latest'
restart: always
container_name: 'git'
hostname: 'git.site.name'
ports:
- '22:22'
- '8081:8081'
- '8085:8085'
volumes:
- '/data/Projects/git/config:/etc/gitlab'
- '/var/log/git:/var/log/gitlab'
- '/data/Projects/git/data:/var/opt/gitlab'
network_mode: bridge
Looks good. If I make a request to registry.site.name, I see it in the gitlab/registry/current log. The registry page also opens fine in the project.
But I can't use the CLI.
Every time I try to docker login registry.site.name it fails with
Error response from daemon: Get https://registry.site.name/v2/: remote error: tls: protocol version not supported
And this request is stopped before reaching the git docker container; my nginx proxy logs:
2020/08/05 10:42:21 [crit] 268168#0: *9 SSL_do_handshake() failed (SSL: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol) while SSL handshaking, client: 10.200.3.1, server: 0.0.0.0:443
The same error is triggered if I try to check a TLS 1.2 connection with
curl -I -v -L --tlsv1.2 --tls-max 1.2 registry.site.name
So maybe docker login uses TLS 1.2, but I don't understand why it is not working, because I set it up in the nginx config.
I also tried the nginx configuration from the question gitlab docker registry with external nginx and omnibus, but still no luck.
The mistake was that the nginx config for git.site.conf didn't contain TLSv1.2.
So be sure that both configs (git & registry) have TLS 1.2 support.
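Concretely, the directive that has to appear in the git server block as well as the registry one is ssl_protocols. A sketch (match it to whatever protocol set your registry block already allows):

```nginx
server {
    listen 443 ssl http2;
    server_name git.site.name;
    # ... certificates etc. ...

    # must include TLSv1.2, since docker login negotiates TLS 1.2
    ssl_protocols TLSv1.2 TLSv1.3;
}
```

Once both server blocks accept TLS 1.2, the "unsupported protocol" handshake error from docker login should disappear.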

Docker nginx reverseProxy Connection Refused

I have 2 projects, one called defaultWebsite and the other one nginxProxy.
I am trying to set up the following:
In /etc/hosts I have set up 127.0.0.1 default.local; docker containers are running for all of them. I did not add a php-fpm container for the reverseProxy (should I?).
nginxReverseProxy default.config
#sample setup
upstream default_local {
    server host.docker.internal:31443;
}

server {
    listen 0.0.0.0:80;
    return 301 https://$host$request_uri;
}

server {
    listen 0.0.0.0:443 ssl;
    server_name default.local;

    ssl_certificate /etc/ssl/private/localhost/default_dev.crt;
    ssl_certificate_key /etc/ssl/private/localhost/default_dev.key;
    #ssl_verify_client off;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    index index.php index.html index.htm index.nginx-debian.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://default_local;
    }
}
defaultWebsite config:
server {
    listen 0.0.0.0:80;
    server_name default.local;
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 0.0.0.0:443 ssl;
    server_name default.local;
    root /app/public;

    # this is for local; on production this will be different.
    ssl_certificate /etc/ssl/default.local/localhost.crt;
    ssl_certificate_key /etc/ssl/default.local/localhost.key;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }

    # return 404 for all other php files not matching the front controller
    # this prevents access to other php files you don't want to be accessible.
    location ~ \.php$ {
        return 404;
    }

    error_log /var/log/nginx/default_error.log;
    access_log /var/log/nginx/default_access.log;
}
docker-compose.yml for defaultWebsite:
services:
  nginx:
    build: DockerConfig/nginx
    working_dir: /app
    volumes:
      - .:/app
      - ./log:/log
      - ./data/nginx/htpasswd:/etc/nginx/.htpasswd
      - ./data/nginx/nginx_dev.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php-fpm
      - mysql
    links:
      - php-fpm
      - mysql
    ports:
      - "31080:80"
      - "31443:443"
    expose:
      - "31080"
      - "31443"
    environment:
      VIRUAL_HOST: "default.local"
      APP_FRONT_CONTROLLER: "public/index.php"
    networks:
      default:
        aliases:
          - default
  php-fpm:
    build: DockerConfig/php-fpm
    working_dir: /app
    volumes:
      - .:/app
      - ./log:/log
      - ./data/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
    ports:
      - "30902:9000"
    expose:
      - "30902"
    extra_hosts:
      - "default.local:127.0.0.1"
    networks:
      - default
    environment:
      XDEBUG_CONFIG: "remote_host=172.29.0.1 remote_enable=1 remote_autostart=1 idekey=\"PHPSTORM\" remote_log=\"/var/log/xdebug.log\""
      PHP_IDE_CONFIG: "serverName=default.local"
docker-compose.yml for nginxReverseProxy:
services:
  reverse_proxy:
    build: DockerConfig/nginx
    hostname: reverseProxy
    ports:
      - 80:80
      - 443:443
    extra_hosts:
      - "host.docker.internal:127.0.0.1"
    volumes:
      - ./data/nginx/dev/default_dev.conf:/etc/nginx/conf.d/default.conf
      - ./data/certs:/etc/ssl/private/
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e9a8479e6f8 default_nginx "nginx -g 'daemon of…" 12 hours ago Up 12 hours 31080/tcp, 31443/tcp, 0.0.0.0:31080->80/tcp, 0.0.0.0:31443->443/tcp default_nginx_1
5e1df4d6f1f5 default_php-fpm "/usr/sbin/php-fpm7.…" 12 hours ago Up 12 hours 30902/tcp, 0.0.0.0:30902->9000/tcp default_php-fpm_1
f3ec76cd7148 default_mysql "/entrypoint.sh mysq…" 12 hours ago Up 12 hours (healthy) 33060/tcp, 0.0.0.0:31336->3306/tcp default_mysql_1
d633511bc6a8 proxy_reverse_proxy "/bin/sh -c 'exec ng…" 12 hours ago Up 12 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp proxy_reverse_proxy_1
If I access default.local:31443 directly, I can see the page working.
When I try to access http://default.local it redirects me to https://default.local, but at the same time I get this error:
reverse_proxy_1 | 2020/04/14 15:22:43 [error] 6#6: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.80.1, server: default.local, request: "GET / HTTP/1.1", upstream: "https://127.0.0.1:31443/", host: "default.local"
Not sure this is the answer, but it is too long for a comment.
In your nginx conf, you have:

upstream default_local {
    server host.docker.internal:31443;
}

and, as I see it (could be wrong here ;)), you have a different container accessing it:

extra_hosts:
  - "host.docker.internal:127.0.0.1"

But you set the hostname to 127.0.0.1; shouldn't it be the docker host IP, since it is connecting to a different container?
In general, ensure the docker host IP is used on all containers when they need to connect to another container or to the outside.
OK, so it seems that the docker IP should be used on Linux machines, because the host.docker.internal name does not exist there yet (it is to be added in a future version).
To get the docker IP on Linux it should be enough to run ip addr | grep "docker".
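As an aside: on Docker Engine 20.10 and later, host.docker.internal can be made to resolve on Linux as well by mapping it to the special host-gateway value in the compose file, which avoids hard-coding the bridge IP:

```yaml
# compose sketch: makes host.docker.internal resolve to the host
# from inside the container on Linux (Docker Engine 20.10+)
services:
  reverse_proxy:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

With that in place, the original upstream pointing at host.docker.internal:31443 would work unchanged.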
so the final config for the reverse_proxy default.conf should look something like this:

upstream default_name {
    server 172.17.0.1:52443;
}

# redirect to https
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    server_name default.localhost;
    listen 443 ssl http2;
    large_client_header_buffers 4 16k;

    ssl_certificate /etc/ssl/private/localhost/whatever_dev.crt;
    ssl_certificate_key /etc/ssl/private/localhost/whatever_dev.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    index index.php index.html index.htm index.nginx-debian.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://default_name;
    }
}

jwilder/nginx-proxy: no access to virtual host

I have a NAS behind a router. On this NAS I want to run Nextcloud and Seafile together for testing. Everything should be set up with docker. The jwilder/nginx-proxy container does not work as expected and I cannot find helpful information. I feel I am missing something very basic.
What is working:
I have a noip.com DynDNS that points to my routers ip: blabla.ddns.net
The router forwards ports 22, 80 and 443 to my NAS at 192.168.1.11
A plain nginx server running on the NAS can be accessed via blabla.ddns.net, its docker-compose.yml is this:
version: '2'
services:
  nginxnextcloud:
    container_name: nginxnextcloud
    image: nginx
    restart: always
    ports:
      - "80:80"
    networks:
      - web
networks:
  web:
    external: true
What is not working:
The same nginx server as above, but behind the nginx-proxy. I cannot access this server. Calling blabla.ddns.net gives a 503 error; calling nextcloud.blabla.ddns.net gives "page not found". Viewing the logs of the nginx-proxy via docker logs -f nginxproxy shows every test with blabla.ddns.net and its 503 answer, but when I try to access nextcloud.blabla.ddns.net not even a log entry occurs.
This is the docker-compose.yml for one nginx behind a nginx-proxy:
version: '2'
services:
  nginxnextcloud:
    container_name: nginxnextcloud
    image: nginx
    restart: always
    expose:
      - 80
    networks:
      - web
    environment:
      - VIRTUAL_HOST=nextcloud.blabla.ddns.net
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginxproxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - web
networks:
  web:
    external: true
The generated configuration file for nginx-proxy /etc/nginx/conf.d/default.conf contains entries for my test server:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
# nextcloud.blabla.ddns.net
upstream nextcloud.blabla.ddns.net {
    ## Can be connected with "web" network
    # nginxnextcloud
    server 172.22.0.2:80;
}
server {
    server_name nextcloud.blabla.ddns.net;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://nextcloud.blabla.ddns.net;
    }
}
Why is this minimal example not working?

Nginx as reverse proxy server for Nexus - can't connect in docker environment

I have an environment built upon docker containers (in boot2docker). I have the following docker-compose.yml file to quickly set up nginx and nexus servers:
version: '3.2'
services:
  nexus:
    image: stefanprodan/nexus
    container_name: nexus
    ports:
      - 8081:8081
      - 5000:5000
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - 5043:443
    volumes:
      - /opt/dm/nginx2/nginx.conf:/etc/nginx/nginx.conf:ro
Nginx has the following configuration (nginx.conf):
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    proxy_send_timeout 120;
    proxy_read_timeout 300;
    proxy_buffering off;
    keepalive_timeout 5 5;
    tcp_nodelay on;

    server {
        listen 80;
        server_name demo.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name demo.com;

        # allow large uploads of files - refer to nginx documentation
        client_max_body_size 1024m;
        # optimize downloading files larger than 1G - refer to nginx doc before adjusting
        #proxy_max_temp_file_size 2048m
        #ssl on;
        #ssl_certificate /etc/nginx/ssl.crt;
        #ssl_certificate_key /etc/nginx/ssl.key;

        location / {
            proxy_pass http://nexus:8081/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto "https";
        }
    }
}
Nexus seems to work very well. I successfully run curl http://localhost:8081 on the docker host machine, which returns the html of the nexus login site. Now I want to try the nginx server. It is configured to listen on port 443, but SSL is disabled for now (I wanted to test it before diving into SSL configuration). As you can notice, my nginx container maps port 443 to port 5043. Thus, I try the following curl command: curl -v http://localhost:5043/. I expect my http request to be sent to nginx and proxied to nexus via proxy_pass http://nexus:8081/;. The nexus hostname is visible within the docker container network and is accessible from the nginx container. Unfortunately, in response I receive:
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5043 (#0)
> GET / HTTP/1.1
> Host: localhost:5043
> User-Agent: curl/7.49.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
I was checking the nginx logs (error and access), but they are empty. Can somebody help me solve this problem? It should be just a simple example of proxying requests, but maybe I misunderstand some concept?
Do you have an upstream directive in your nginx conf (placed within the http directive)?

upstream nexus {
    server <Nexus_IP>:<Nexus_Port>;
}

Only then can nginx resolve it correctly. The docker-compose service name nexus is not injected into the nginx container at runtime.
You can try links in docker-compose:
https://docs.docker.com/compose/compose-file/#links
This gives you an alias for the linked container in your /etc/hosts. But you still need an upstream directive. Update: if resolvable, you can as well use the names directly in nginx directives like location.
https://serverfault.com/questions/577370/how-can-i-use-environment-variables-in-nginx-conf
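Building on that update: a common sketch, if you'd rather not maintain an upstream block, is to lean on Docker's embedded DNS server (127.0.0.11) together with a variable, which forces nginx to resolve the service name at request time instead of at startup. This assumes nginx and nexus share a Docker network:

```nginx
location / {
    resolver 127.0.0.11 valid=30s;  # Docker's embedded DNS server
    set $nexus_host nexus;          # compose service name as a variable
    proxy_pass http://$nexus_host:8081;
}
```

Using a variable in proxy_pass is what defers the DNS lookup; with a literal hostname, nginx resolves it once at startup and fails to start if the name is not yet resolvable.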
As #arnold's answer says, you are missing the upstream configuration in your nginx. I saw you are using the stefanprodan nexus image; see his blog for the full configuration. Below you can find mine (remember to open ports 8081 and 5000 of nexus even though the entry point is 443). Besides, you need to include the certificate, because the docker client requires working ssl:
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    error_log /var/log/nginx/error.log warn;
    access_log /dev/null;
    proxy_intercept_errors off;
    proxy_send_timeout 120;
    proxy_read_timeout 300;

    upstream nexus {
        server nexus:8081;
    }

    upstream registry {
        server nexus:5000;
    }

    server {
        listen 80;
        listen 443 ssl default_server;
        server_name <yourdomain>;

        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        ssl_certificate /etc/letsencrypt/live/<yourdomain>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<yourdomain>/privkey.pem;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
        keepalive_timeout 5 5;
        proxy_buffering off;

        # allow large uploads
        client_max_body_size 1G;

        location / {
            # redirect to docker registry
            if ($http_user_agent ~ docker ) {
                proxy_pass http://registry;
            }
            proxy_pass http://nexus;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto "https";
        }
    }
}
The certificates are generated using letsencrypt or certbot. The rest of the configuration is there to get an A+ in the ssllabs analysis, as explained here.
The 5000 port in your docker-compose is a dynamic port (because it hadn't been exposed by the image), so you cannot connect to port 5000; the

ports:
  - 8081:8081
  - 5000:5000

mappings are not effective.
You can do it like this:
Build a new Dockerfile and expose port 5000 (mine is tagged feibor/nexus:3.16.2-1):

FROM sonatype/nexus3:3.16.2
EXPOSE 5000

Then use the new image to start the container and publish the port:
version: "3.7"
services:
  nexus:
    image: 'feibor/nexus:3.16.2-1'
    deploy:
      placement:
        constraints:
          - node.hostname == node1
      restart_policy:
        condition: on-failure
    ports:
      - 8081:8081/tcp
      - 5000:5000/tcp
    volumes:
      - /mnt/home/opt/nexus/nexus-data:/nexus-data:z
