I have a Docker nginx reverse proxy configured with SSL using a configuration like this:
server {
    listen 80;
    server_name example.com;

    location / {
        return 301 https://$host$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # some locations: location ... { ... }
}
The certificates are configured with certbot and working fine.
All the containers are up and running. However, I have multiple websites running on this server, managed by a local Nginx, so I published the Docker Nginx on alternative ports like this:
nginx:
  image: example/example_nginx:latest
  container_name: example_nginx
  ports:
    - "8123:80"
    - "8122:443"
  volumes:
    - ${PROJECT_ROOT}/data/certbot/conf:/etc/letsencrypt
    - ${PROJECT_ROOT}/data/certbot/www:/var/www/certbot
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
Container port 80 is published on host port 8123 (HTTP) and container port 443 on host port 8122 (HTTPS). To pass requests from the local Nginx to the Nginx in the Docker container I use the following config:
server {
    listen 80;
    server_name example.com;

    location / {
        access_log off;
        proxy_pass http://localhost:8123;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        access_log off;
        proxy_pass https://localhost:8122;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
When I open the website it loads, but the certificate appears to be broken and my WebSocket connections crash.
My question is: how can I hand SSL processing off from the local Nginx to the Docker Nginx so that everything works as expected?
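One approach that should give exactly that behavior (a sketch, not a verified configuration): don't terminate or re-encrypt TLS at the host at all, and instead pass the raw TLS stream through to the container using nginx's stream module with ssl_preread, routing by SNI so the other local sites keep working. Port 8122 matches the compose file above; the 8443 fallback is a hypothetical port the host's own HTTPS sites would move to, since the stream listener takes over port 443:

    # /etc/nginx/nginx.conf on the host, outside the http {} block (sketch)
    stream {
        # route by the SNI name without decrypting anything
        map $ssl_preread_server_name $backend {
            example.com 127.0.0.1:8122;  # Docker nginx keeps its own certificates
            default     127.0.0.1:8443;  # hypothetical port for the local http{} sites
        }

        server {
            listen 443;
            ssl_preread on;   # peek at the ClientHello to fill $ssl_preread_server_name
            proxy_pass $backend;
        }
    }

With passthrough, the Docker nginx sees the original TLS session, so the certificate and the WebSocket upgrade are handled in one place. The WebSocket crash in the proxy config above is otherwise expected: the locations set the Upgrade header but never set Connection "upgrade" or proxy_http_version 1.1.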
Related
I have an nginx & docker-compose setup with the following nginx config file. Here api and kibana are Docker containers running on ports 8080 and 5601 respectively:
user nobody;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name my-domain.com www.my-domain.com;
        server_tokens off;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name my-domain.com www.my-domain.com;
        server_tokens off;

        ssl_certificate /etc/letsencrypt/live/all/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/all/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        location ^~ / {
            proxy_pass http://api:8080/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
        }

        location ^~ /monitoring {
            proxy_pass http://kibana:5601/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
            rewrite /monitoring/(.*)$ /$1 break;
        }
    }
}
All of my containers are up and running and everything seems fine, but when I visit https://my-domain.com I get back "This site can't be reached", and if I go to the non-secure http://my-domain.com/ I get an nginx 404 error with the following log in the container:
[error] 17#17: *13 open() "/etc/nginx/html/index.html" failed (2: No such file or directory), client: 123.456.789.101, server: my-domain.com, request: "GET / HTTP/1.1", host: "my-domain.com
Why is it looking for a file? Is there something wrong with my nginx config? Please help.
Found it! 🤦🏽‍♂️ It should have been:
proxy_set_header Host $host;
instead of
proxy_set_header Host $http_host;
Edit:
Apparently I also had to stop the Docker containers after running them for the first time and start them again to get it to work.
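For reference, this is the corrected api location as I understand the fix (a sketch of the block above with only the Host header changed; $host falls back to the server_name when the request's Host header is missing or unusable, whereas $http_host is passed through verbatim):

    location ^~ / {
        proxy_pass http://api:8080/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;  # was $http_host
    }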
I have a backend container on an internal Docker network, which is not accessible from the internet.
Through an nginx proxy I want to send requests (webhooks to Slack) from the backend server to the outside world. Is this possible at all?
I have this config for nginx:
server {
    listen 80 default_server;
    server_name localhost;
    client_max_body_size 100M;
    charset utf-8;
    ... # setup for server containers
}

server {
    listen 443;
    server_name hooks.slack.com;

    location / {
        proxy_pass https://hooks.slack.com/;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        #proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # gets CSS working
        #proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
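It should be possible in principle: nginx can act as an egress proxy for this one fixed host. A hedged sketch of the relevant server block, assuming the backend container is pointed at this proxy over plain HTTP (for example via an extra_hosts alias for hooks.slack.com); the Host header and the SNI name must carry Slack's hostname, not whatever the backend sent, or Slack's TLS handshake and virtual hosting will fail:

    server {
        listen 443;                  # plain listener; the backend speaks HTTP to the proxy
        server_name hooks.slack.com;

        location / {
            proxy_pass https://hooks.slack.com/;
            proxy_ssl_server_name on;               # send SNI so the upstream TLS handshake succeeds
            proxy_set_header Host hooks.slack.com;  # fixed value instead of $http_host
        }
    }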
Is there a "proper" structure for the directives of an NGINX reverse proxy? I have seen two main variants when looking for examples:
1. The http directive houses all the server directives, and the backend servers are listed as a pool within an upstream directive.
2. The server directives are listed directly within the main (top-level) context.
Is there any reason for this, or is it just syntactic sugar?
Example of #1 within ./nginx.conf file:
upstream docker-registry {
    server registry:5000;
}

http {
    server {
        listen 80;
        listen [::]:80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 default_server;
        ssl on;
        ssl_certificate external/cert.pem;
        ssl_certificate_key external/key.pem;

        # set HSTS header because we only allow https traffic
        add_header Strict-Transport-Security "max-age=31536000;";

        proxy_set_header Host $http_host;        # required for the Docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client IP

        location / {
            auth_basic "Restricted";
            auth_basic_user_file external/docker-registry.htpasswd;
            proxy_pass http://docker-registry; # the docker container is the domain name
        }

        location /v1/_ping {
            auth_basic off;
            proxy_pass http://docker-registry;
        }
    }
}
Example of #2 within ./nginx.conf file:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    error_log /var/log/nginx/error.log info;
    access_log /var/log/nginx/access.log main;
    ssl_certificate /etc/ssl/private/{SSL_CERT_FILENAME};
    ssl_certificate_key /etc/ssl/private/{SSL_CERT_KEY_FILENAME};

    location / {
        proxy_pass http://app1;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr; # could also be $proxy_add_x_forwarded_for
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
I don't quite understand your question, but it seems to me that the second example is missing the http {} block; I don't think nginx will start without it, unless your example #2 file is somehow included into an nginx.conf that already has the http {} block.
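To illustrate that last point: snippets shaped like example #2 usually work because the distribution's main nginx.conf already provides the http {} wrapper and pulls them in with an include, along these lines (a trimmed sketch of the common Debian/Ubuntu layout):

    # /etc/nginx/nginx.conf (sketch)
    events {
        worker_connections 1024;
    }

    http {
        # server {} snippets like example #2 live in these files
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }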
I have set up a Docker container using the pydio/cells:2.1.1 image from Docker Hub.
My docker-compose.yaml contains the following section:
cells:
  image: pydio/cells:2.1.1
  environment:
    - CELLS_NO_TLS=1
    - CELLS_BIND=files.redacted.dev:8080
    - CELLS_EXTERNAL=https://files.redacted.dev
  volumes:
    - /srv/cells:/var/cells
  ports:
    - "8081:8080"
  depends_on:
    - cells_mysql
  restart: unless-stopped
To expose Cells to the network I'm using NGINX with the following configuration:
server {
    client_max_body_size 200M;
    server_name files.redacted.dev;

    location / {
        proxy_buffering off;
        proxy_pass http://localhost:8081$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /ws {
        proxy_buffering off;
        proxy_pass http://localhost:8081;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/files.redacted.dev/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/files.redacted.dev/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = files.redacted.dev) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name files.redacted.dev;
    listen 80;
    listen [::]:80;
    return 404; # managed by Certbot
}
Pretty much everything works OK, but I noticed that when I create a new file or folder I have to reload the page before it appears in the UI.
Looking at Firefox's dev console I see 404 errors on the GET wss://files.redacted.dev/ws/chat and wss://files.redacted.dev/ws/event requests.
I tested on the host with the following command (thereby bypassing NGINX):
curl --include --no-buffer --header "Connection: Upgrade" --header "Upgrade: websocket" --header "Host: files.redacted.dev:80" --header "Origin: https://files.redacted.dev" --header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" --header "Sec-WebSocket-Version: 13" http://localhost:8081/ws/chat
The command didn't terminate (I'm assuming that means it was successful...).
It looks like the NGINX configuration is the problem. Does anybody know how to fix this?
In the end it was a missing header for the /ws location:
location /ws {
    proxy_buffering off;
    proxy_pass http://localhost:8081;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400;
    proxy_set_header Host $host; # this is what was missing!
    proxy_http_version 1.1;      # this might also be needed...
}
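As a side note, the nginx WebSocket documentation suggests deriving the Connection header from the client's Upgrade header with a map instead of hard-coding it, so ordinary requests on the same location aren't forced to upgrade (the map goes in the http {} context):

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

and then the location would use proxy_set_header Connection $connection_upgrade; instead of the literal "upgrade".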
I am stuck on this problem and need help.
I am trying to configure an nginx server with django-channels, and I have the following configurations. Nginx:
server {
    server_name {{ my_domain }};

    location = /favicon.ico { access_log off; log_not_found off; }

    client_max_body_size 32m;
    root /var/www/religion-python;

    location /static {
        alias /var/www/religion-python/static/;
    }

    location /media {
        alias /var/www/religion-python/media;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    location /ws/ {
        proxy_pass http://0.0.0.0:9000;
        proxy_http_version 1.1;
        proxy_read_timeout 86400;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/api.agro.dots.md/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api.agro.dots.md/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
gunicorn:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/religion-python
ExecStart=/var/www/religion-python/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          Agronomi.wsgi:application

[Install]
WantedBy=multi-user.target
I used this tutorial to configure gunicorn, but for WebSockets I read on the django-channels site that I have to set up daphne with supervisor, which I don't know how to do and can't find instructions for. Can someone help me with some tutorials or tips on how to do this, or maybe explain what supervisor is needed for?
I also read that uvicorn is easy to install and configure with gunicorn and django-channels, but again I found nothing about how to do this.
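Since the /ws/ location above proxies to port 9000, one way to run daphne without supervisor is a systemd unit that mirrors the gunicorn unit (a sketch; the daphne binary path and the Agronomi.asgi:application module name are assumptions based on the standard channels project layout):

    [Unit]
    Description=daphne ASGI daemon for django-channels
    After=network.target

    [Service]
    User=root
    Group=www-data
    WorkingDirectory=/var/www/religion-python
    # assumes daphne is installed alongside gunicorn in the same virtualenv
    ExecStart=/var/www/religion-python/bin/daphne \
              --bind 0.0.0.0 \
              --port 9000 \
              Agronomi.asgi:application

    [Install]
    WantedBy=multi-user.target

Supervisor plays the same role systemd already plays here (keeping the process alive and restarting it on failure), so with a unit like this it isn't needed; saving the sketch as /etc/systemd/system/daphne.service and running systemctl enable --now daphne would start it.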