Nginx Reverse Proxy with Port Forwarding Not Working - docker

I'm having trouble accessing a locally-hosted website. The idea is that a site hosted in a docker container and sitting behind an Nginx proxy should be accessible from the internet.
I have a hostname with NoIP, let's call it stuff.ddns.net.
I've set up IP updates to NoIP DNS servers (i.e., stuff.ddns.net always points to my router).
My router forwards ports 80 and 443 to a static IP on my local network (a Linux machine).
I'm hosting an Apache Airflow web server in a Docker container on the aforementioned Linux machine, and I've set AIRFLOW__WEBSERVER__BASE_URL: 'https://stuff.ddns.net/airflow'.
When I try accessing stuff.ddns.net/airflow in my web browser, I get the error: Safari can't open the page "stuff.ddns.net/airflow" because Safari can't connect to the server "stuff.ddns.net".
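To rule out the browser, the same path can be checked from the command line; a quick sketch of the test I would run from a machine outside my LAN (stuff.ddns.net again standing in for my real hostname):
# does anything answer on the forwarded ports at all?
curl -v http://stuff.ddns.net/
# and does the /airflow location respond over HTTPS? (-k skips certificate verification)
curl -vk https://stuff.ddns.net/airflow/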
Here is my nginx.conf:
# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
events {
    worker_connections 1024;
}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream airflow {
        server localhost:8080;
    }

    server {
        listen [::]:80;
        server_name stuff.ddns.net;
        return 302 https://$host$request_uri;
    }

    server {
        listen [::]:443 ssl;
        server_name stuff.ddns.net;
        ssl_certificate /run/secrets/stuff_ddns_net_pem_chain;
        ssl_certificate_key /run/secrets/stuff_ddns_net_key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /run/secrets/dhparam.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

        location /airflow/ {
            proxy_pass http://airflow;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Ideas?
EDIT: Here is a truncated docker-compose.yml (other Airflow components left out) for full clarity of the setup:
version: '3.7'

x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.4.0}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD: 'cat /run/secrets/sql_alchemy_conn'
    AIRFLOW__CELERY__RESULT_BACKEND_CMD: 'cat /run/secrets/result_backend'
    AIRFLOW__CELERY__BROKER_URL: redis://:#redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
    AIRFLOW__WEBSERVER__BASE_URL: 'https://stuff.ddns.net/airflow'
    AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX: 'True'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./storage/airflow/dags:/opt/airflow/dags
    - ./storage/airflow/logs:/opt/airflow/logs
    - ./storage/airflow/plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-1000}:0"
  secrets:
    - sql_alchemy_conn
    - result_backend
    - machine_pass
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy

x-stuff-common:
  &stuff-common
  restart: unless-stopped
  networks:
    - ${DOCKER_NETWORK:-stuff}

services:
  nginx:
    <<: *stuff-common
    container_name: stuff-nginx
    image: nginxproxy/nginx-proxy:alpine
    hostname: nginx
    ports:
      - ${PORT_NGINX:-80}:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
    secrets:
      - stuff_ddns_net_pem_chain
      - stuff_ddns_net_key
      - dhparam.pem

  airflow-webserver:
    <<: *stuff-common
    <<: *airflow-common
    container_name: stuff-airflow-webserver
    command: webserver
    ports:
      - ${PORT_UI_AIRFLOW:-8080}:8080
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:${PORT_UI_AIRFLOW:-8080}/airflow/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

networks:
  stuff:
    name: ${DOCKER_NETWORK:-stuff}

secrets:
  ... <truncated>

The solution here was threefold:
1. Make sure the Nginx container uses the same bridge network as all the other containers.
2. In the nginx.conf upstream declaration, replace localhost with the LAN IP address of the Docker host (this works for me since I'm using a statically-assigned address).
3. Add listen <PORT>; above the listen [::]:<PORT>; directives in nginx.conf (as far as I can tell, listen [::]:<PORT>; on its own only binds the IPv6 socket, so without the plain listen <PORT>; nothing is listening on IPv4 and connections fail).
Here is what the top part of the nginx.conf looks like now:

upstream airflow {
    server 192.168.50.165:8080;
}

server {
    listen 80;
    listen [::]:80;
    server_name stuff.ddns.net;
    return 302 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name stuff.ddns.net;
    .....
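Since the Nginx and Airflow containers now share the same Docker network, the upstream could presumably also point at the Compose service name instead of the host's LAN IP; a minimal sketch, assuming the service keeps the name airflow-webserver from the compose file above:

upstream airflow {
    # Docker's embedded DNS resolves Compose service names on the shared network
    server airflow-webserver:8080;
}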

Related

Redirect Odoo 8069 to HTTPS without VPC config (AWS/VPS)

I created a GitHub repo weeks ago with Docker Compose, Odoo, PostgreSQL, Certbot, Nginx as a proxy server, and a little bit of PHP stuff (Symfony) -> https://github.com/Inushin/dockerOdooSymfonySSL When I was trying the config I found that NGINX worked as it was supposed to and you get the correct HTTP -> HTTPS redirect, BUT if you put the port 8069 in the URL, the browser goes to plain HTTP. One of the solutions would be to configure another VPC, but I was thinking about using this repo for other "minimal VPS services" and would rather not need another VPC, so... how could I solve this? Maybe from the Odoo config? Is something missing in the NGINX conf?
NGINX
#FOR THE ODOO DOMAIN
server {
    listen 80;
    server_name DOMAIN_ODOO;
    server_tokens off;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name DOMAIN_ODOO;
    server_tokens off;

    location / {
        proxy_pass http://web:8069;
        proxy_set_header Host DOMAIN_ODOO;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    ssl_certificate /etc/letsencrypt/live/DOMAIN_ODOO/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/DOMAIN_ODOO/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
  nginx:
    image: nginx:1.15-alpine
    expose:
      - "80"
      - "443"
    ports:
      - "80:80"
      - "443:443"
    networks:
      - default
    volumes:
      - ./data/nginx:/etc/nginx/conf.d/:rw
      - ./data/certbot/conf:/etc/letsencrypt/:rw
      - ./data/certbotSymfony/conf:/etc/letsencrypt/symfony/:rw
      - ./data/certbotSymfony/www:/var/www/certbot/:rw
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"

  web:
    image: odoo:13.0
    depends_on:
      - db
    ports:
      - "8069:8069/tcp"
    volumes:
      - web-data:/var/lib/odoo
      - ./data/odoo/config:/etc/odoo
      - ./data/odoo/addons:/mnt/extra-addons
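One guess, looking at this compose file (untested): because the web service publishes 8069 directly on the host, requests to DOMAIN_ODOO:8069 never pass through Nginx and therefore stay on plain HTTP. Keeping the port internal to the Compose network would force everything through the HTTPS virtual host; roughly:

  web:
    image: odoo:13.0
    depends_on:
      - db
    expose:
      - "8069"   # still reachable by nginx via proxy_pass http://web:8069, but no longer published on the host
    volumes:
      - web-data:/var/lib/odoo
      - ./data/odoo/config:/etc/odoo
      - ./data/odoo/addons:/mnt/extra-addons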

502 Error on Production Deployment Django & Nginx using a docker compose file

I am using docker-compose to build containers and to serve the frontend of my website at https:// example.com and the backend at a subdomain, https:// api.example.com. The SSL certificates for both the root and subdomain are working properly, and I can access the live site (static files served by Nginx) at https:// example.com so at least half of the configuration is working properly. The problem occurs when the frontend tries to communicate with the backend. All calls are met with a "No 'Access-Control-Allow-Origin'" 502 Error in the console logs. In the logs of the docker container, this is the error response.
Docker Container Error
2022/03/09 19:01:21 [error] 30#30: *7 connect() failed (111: Connection refused) while connecting
to upstream, client: xxx.xx.xxx.xxx, server: api.example.com, request: "GET /api/services/images/
HTTP/1.1", upstream: "http://127.0.0.1:8000/api/services/images/",
host: "api.example.com", referrer: "https://example.com/"
I think it's likely that something is wrong with my Nginx or docker-compose configuration. When setting the SECURE_SSL_REDIRECT, SECURE_HSTS_INCLUDE_SUBDOMAINS, and the SECURE_HSTS_SECONDS to False or None (in the Django settings) I am able to hit http:// api.example.com:8000/api/services/images/ and get the data I am looking for. So it is running and hooked up, just not taking requests from where I want it to be. I've attached the Nginx configuration and the docker-compose.yml. Please let me know if you need more info, I would greatly appreciate any input, and thanks in advance for the help.
Nginx-custom.conf
# Config for the frontend application under example.com
server {
    listen 80;
    server_name example.com www.example.com;

    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    }

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    }

    return 404;
}

server {
    server_name example.com www.example.com;
    index index.html index.htm;

    add_header Access-Control-Allow-Origin $http_origin;
    add_header Access-Control-Allow-Credentials true;
    add_header Access-Control-Allow-Headers $http_access_control_request_headers;
    add_header Access-Control-Allow-Methods $http_access_control_request_method;

    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html =404;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

##### Config for the backend server at api.example.com
server {
    listen 80;
    server_name api.example.com;
    return 301 https://$host$request_uri;
}

server {
    server_name api.example.com;

    add_header Access-Control-Allow-Origin $http_origin;
    add_header Access-Control-Allow-Credentials true;
    add_header Access-Control-Allow-Headers $http_access_control_request_headers;
    add_header Access-Control-Allow-Methods $http_access_control_request_method;

    location / {
        proxy_pass http://127.0.0.1:8000/; # API Server
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
Docker-Compose File
version: '3.9'

# services that make up the development env
services:
  # DJANGO BACKEND
  backend:
    container_name: example-backend
    restart: unless-stopped
    image: example-backend:1.0.1
    build:
      context: ./backend/src
      dockerfile: Dockerfile
    command: gunicorn example.wsgi:application --bind 0.0.0.0:8000
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY=xxx
      - DEBUG=0
      - ALLOWED_HOSTS=example.com,api.example.com,xxx.xxx.xxx.x
      - DB_HOST=postgres-db
      - DB_NAME=xxx
      - DB_USER=xxx
      - DB_PASS=xxx
      - EMAIL_HOST_PASS=xxx
    # sets a dependency on the db container and there should be a network connection between the two
    networks:
      - db-net
      - shared-network
    links:
      - postgres-db:postgres-db
    depends_on:
      - postgres-db

  # POSTGRES DATABASE
  postgres-db:
    container_name: postgres-db
    image: postgres
    restart: always
    volumes:
      - example-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_DB=exampledb
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    networks:
      - db-net

  # ANGULAR & NGINX FRONTEND
  frontend:
    container_name: example-frontend
    build:
      context: ./frontend
    ports:
      - "80:80"
      - "443:443"
    networks:
      - shared-network
    links:
      - backend
    depends_on:
      - backend

networks:
  shared-network:
    driver: bridge
  db-net:

volumes:
  example-data:
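One detail that stands out in the error above: the upstream is http://127.0.0.1:8000, which from inside the frontend container is the container itself, not the Django backend. Since both services sit on shared-network, a possible variant (an untested sketch; the service name backend is taken from the compose file) is to proxy to the backend by name:

    location / {
        proxy_pass http://backend:8000/; # Compose service name instead of 127.0.0.1
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
    }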

nginx problem / causes redirection to port 80

I want to redirect all traffic from http://localhost:8080 to http://my-service:8080
But when I access http://localhost:8080 the nginx redirects me to http://localhost
This is my nginx.conf
events {
    worker_connections 1024; ## Default: 1024
}

http {
    server {
        listen 8080 default_server;
        listen [::]:8080 default_server;
        server_name localhost;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://my-service:8080/;
        }
    }
}
And this is my docker-compose
version: '2.1'

services:
  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    ports:
      - "8080:8080"
    volumes:
      - ./data/nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - no-internet
      - internet

  my-service:
    ....
    expose:
      - "8080"
    networks:
      - no-internet

networks:
  internet:
    driver: bridge
  no-internet:
    internal: true
    driver: bridge
When I run the docker compose without nginx, I can access http://localhost:8080 without any redirection.
I solved the problem:
server {
    listen 8080;
    listen [::]:8080;
    server_name localhost;

    location / {
        proxy_pass http://ap-service:8080;
        proxy_redirect http://ap-service:8080/ $scheme://$host:8080/;
    }
}

HTTP redirected to HTTPS in nginx.conf

I have an nginx.conf with which I am running an application on localhost. I need to redirect the application from HTTP to HTTPS. In nginx.conf, I have the following configuration:
http {
    error_log /etc/nginx/error/error.log warn; #./nginx/error.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        listen 80;
        server_name localhost;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name localhost;

        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;
        ssl_protocols TLSv1.2;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:!MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL;
        ssl_prefer_server_ciphers on;
        keepalive_timeout 70;

        location / {
            proxy_pass http://localhost:80;
            proxy_ssl_certificate /etc/nginx/ssl.crt;
            proxy_ssl_certificate_key /etc/nginx/ssl.key;
            proxy_ssl_verify off;
            allow all;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto https;
            #access_log /var/log/nginx/access.log;
            #error_log /var/log/nginx/error.log;
            client_max_body_size 0;
            client_body_buffer_size 128k;
            proxy_connect_timeout 1200s;
            proxy_send_timeout 1200s;
            proxy_read_timeout 1200s;
            proxy_buffers 32 4k;
        }
    }
}
And docker-compose.yml as below:-
version: '2'

services:
  mysql:
    image: mysql:5.7.21
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=admin
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - bookstack-bridge

  bookstack:
    image: solidnerd/bookstack:latest
    container_name: bookstack
    restart: always
    depends_on:
      - mysql
    environment:
      - APP_URL=http://localhost:8080
    volumes:
      - ./uploads:/var/www/bookstack/public/uploads
      - ./storage-uploads:/var/www/bookstack/public/storage
    ports:
      - 8080:8080
    networks:
      - bookstack-bridge

  nginx:
    image: nginx:latest
    container_name: bookstack-nginx
    restart: always
And in the docker-compose.yml, I do have the APP_URL=http://localhost:8080 env variable.
Does anybody have an idea what needs to be changed to redirect from HTTP to HTTPS?
Thanks in advance.
I customized your docker-compose.yml.
Your docker-compose.yml would not work for HTTPS because some parts are wrong or missing.
To use HTTPS you have to create the certificates with OpenSSL. These must be in the folder /etc/nginx/certs in the container.
Once you have put the certificates in that folder, you have to change VIRTUAL_PORT from 8080 to 443 and change the APP_URL from http to https.
When you start a service and assign it to the network "web", nginx automatically sees that a new service has been registered and maps it to the port specified in the image. This happens via the volume mapping "/var/run/docker.sock:/tmp/docker.sock:ro" (":ro" stands for read-only).
If you assign a service to the network "internal", it is not accessible from the outside and nginx ignores it. See the "mysql" service.
With "depends_on:" I say that all services have to start before bookstack starts. This is important! First nginx, then MySQL and finally bookstack.
I prefer to use VIRTUAL_HOST with its own local domain. You can also use localhost there; the only important thing is that the "hosts" file in your operating system points to your external Docker IP. Example: "192.168.5.121 bookstack.local"
My tip! I would keep the "nginx--proxy" service in a separate docker-compose file. Then you can easily register further services with the nginx proxy.
Good luck with that, and if you only want to use BookStack locally, HTTPS might not be that urgent right now. Otherwise, search for "Create Certs for Nginx local".
Before you start, create the network "web":
docker network create web
version: '2.4'

services:
  mysql:
    image: mysql:5.7.21
    container_name: bookstack-mysql
    restart: unless-stopped
    networks:
      - "internal"
    healthcheck:
      test: "exit 0"
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=admin
    volumes:
      - ./docker/data/mysql:/var/lib/mysql

  bookstack:
    image: solidnerd/bookstack:0.29.3
    container_name: bookstack
    restart: unless-stopped
    networks:
      - "web"
      - "internal"
    depends_on:
      nginx--proxy:
        condition: service_started
      mysql:
        condition: service_healthy
    environment:
      - VIRTUAL_HOST=bookstack.local
      - VIRTUAL_PORT=8080
      - DB_HOST=mysql:3306
      - DB_DATABASE=bookstack
      - DB_USERNAME=bookstack
      - DB_PASSWORD=admin
      - APP_URL=http://bookstack.local
    volumes:
      - ./docker/data/uploads:/var/www/bookstack/public/uploads
      - ./docker/data/storage-uploads:/var/www/bookstack/storage/uploads

  nginx--proxy:
    image: jwilder/nginx-proxy:latest
    container_name: nginx--proxy
    restart: always
    environment:
      DEFAULT_HOST: default.vhost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/data/certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - "web"
      - "internal"

networks:
  web:
    external: true
  internal:
    external: false
The solution that worked for me:
In the docker-compose.yml, in the nginx service section, I added a networks tag:
  networks:
    - bookstack-bridge
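In context, the nginx service section then looks roughly like this (the published ports and the config-file mount are assumptions on my part, since that part of the compose file was cut off above):

  nginx:
    image: nginx:latest
    container_name: bookstack-nginx
    restart: always
    ports:
      - 80:80
      - 443:443          # assumed: needed for the HTTPS server block
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # assumed path to the nginx.conf shown above
    networks:
      - bookstack-bridge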
And in the nginx.conf I set the proxy_pass to:
  proxy_pass http://bookstack:8080;
Thank you guys for your help.

nginx container: unknown directive "events", .conf error

I'm using an nginx container with Docker as a proxy server. Its real job is to redirect requests from http://127.0.0.1:7003 to my ASP.NET REST app, which exposes port 5000.
So I have been investigating where the syntax error is and what is happening:
nginx-container | 2020/01/28 08:34:06 [emerg] 1#1: unknown directive "events" in /etc/nginx/conf.d/Local.Project.Core.conf:1
nginx-container | nginx: [emerg] unknown directive "server" in /etc/nginx/conf.d/Local.Project.Core.conf:1
Here is my nginx Dockerfile:
FROM nginx:latest
# Copy virtual hosts config
RUN rm /etc/nginx/conf.d/default.conf
COPY ./wwwroot/config/Local.Project.Core.conf /etc/nginx/conf.d/
My docker-compose, where I set up the connections:
  local-project:
    image: project-mysql-image
    container_name: project-mysql-container
    ports:
      - 127.0.0.1:7000:80
      - 127.0.0.1:7001:433
      - 127.0.0.1:7002:5000
    expose:
      - "5000"
    environment:
      ASPNETCORE_ENVIRONMENT: Production
      ASPNETCORE_URLS: http://+:80;http//+:433;http:+:5000 # Is going to use Kestrel standard 5000 port, only http connection
      ASPNETCORE_Kestrel__Certificates__Default__Path: /etc/ssl/certs/Local.Proyect.Core.pfx
      ASPNETCORE_Kestrel__Certificates__Default__Password: local
    volumes:
      - .\wwwroot\cer\Local.Proyect.Core.cer:/etc/ssl/certs/Local.Proyect.Core.pfx

  nginx:
    image: nginx-image
    container_name: nginx-container
    ports:
      - 127.0.0.1:7003:80
And most importantly, the .conf file:
events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/sites-enabled/*;

    upstrem project-mysql-container { server project-mysql-container:5000; }

    server {
        listen 80;
        root /;
        index index.html index.htm index.nginx-debian.html;
        server_name *.Local.Project.Core;

        location / {
            proxy_pass http://project-mysql-container:5000;
        }

        location /ws/ {
            proxy_pass http://project-mysql-container:5000/ws/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }
    }
}
In your nginx configuration, you have upstrem project-mysql-container .... That upstrem should be upstream.
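That is, the corrected declaration would read:

upstream project-mysql-container { server project-mysql-container:5000; }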
