Native Nginx reverse proxy to Docker container with Let's Encrypt

I have an Ubuntu 18.04 LTS box with nginx installed and configured as a reverse proxy:
/etc/nginx/sites-enabled/default:
server {
server_name example.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://0.0.0.0:3000;
}
}
I have a website running in a Docker container listening on port 3000. With this configuration, if I browse to http://example.com I see the site.
I then installed Let's Encrypt using the standard install from their website, ran sudo certbot --nginx, and followed the instructions to enable HTTPS for example.com.
Now my /etc/nginx/sites-enabled/default looks like this, and I'm unable to load the site at either https://example.com or http://example.com:
server {
server_name example.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://0.0.0.0:3000;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Any ideas?

I figured it out. The problem wasn't with my nginx/Let's Encrypt config; it was a networking issue at the provider level (Azure).
I noticed the Network Security Group only allowed traffic on port 80. The solution was to add a rule for 443.
After adding this rule everything now works as expected.
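For anyone hitting the same issue, the rule can also be added from the command line; a rough sketch with the Azure CLI, where the resource group, NSG name and priority are placeholders for your own values:
# Hypothetical names: adjust resource group, NSG name and priority to your deployment
az network nsg rule create \
--resource-group my-resource-group \
--nsg-name my-nsg \
--name Allow-HTTPS \
--priority 1001 \
--direction Inbound \
--access Allow \
--protocol Tcp \
--destination-port-ranges 443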

Related

How to pass SSL processing from local NginX to Docker NginX?

I have a Docker nginx reverse proxy configured with SSL using a configuration like this:
server {
listen 80;
server_name example.com;
location / {
return 301 https://$host$request_uri;
}
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
}
server {
listen 443 ssl;
server_name example.com;
root /var/www/example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# some locations location ... { ... }
}
The certificates are configured with certbot and working fine.
All the containers are up and running. But I have multiple websites running on my server, managed by a local (host) nginx. So I set the ports for the Docker nginx like this:
nginx:
image: example/example_nginx:latest
container_name: example_nginx
ports:
- "8123:80"
- "8122:443"
volumes:
- ${PROJECT_ROOT}/data/certbot/conf:/etc/letsencrypt
- ${PROJECT_ROOT}/data/certbot/www:/var/www/certbot
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
Docker port 80 maps to local port 8123 (HTTP) and Docker port 443 maps to local port 8122 (HTTPS). To pass requests from the local nginx to the Docker container's nginx I use the following config:
server {
listen 80;
server_name example.com;
location / {
access_log off;
proxy_pass http://localhost:8123;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_redirect off;
proxy_set_header X-Forwarded-Host $server_name;
}
}
server {
listen 443 ssl;
server_name example.com;
location / {
access_log off;
proxy_pass https://localhost:8122;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_redirect off;
proxy_set_header X-Forwarded-Host $server_name;
}
}
When I open the website it works, but the certificate seems to be broken and my WebSockets crash.
My question is: how can I pass SSL processing from the local nginx to the Docker nginx so that it works as expected?
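For reference, one way to check which certificate the inner (Docker) nginx actually presents on the mapped port is to query it from the host; the port and server name below are taken from the setup above:
# Show the subject, issuer and validity of the certificate served on the mapped HTTPS port
openssl s_client -connect localhost:8122 -servername example.com </dev/null 2>/dev/null \
| openssl x509 -noout -subject -issuer -dates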

dynamic ssl certificate with nginx

I have a Shopify-like application, so my customers get a sub-domain when they create a store (e.g. customer1.myShopify.com).
To handle this case of dynamic sub-domains with nginx I use:
server {
listen 443 ssl;
server_name admin.myapp.com;
ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem;
location / {
proxy_pass http://admin-front-end:80/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
}
server {
listen 443 ssl;
server_name *.myapp.com;
ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem;
location / {
proxy_pass http://app-front-end:80/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
}
This works great: if you visit admin.myapp.com you'll see the admin application, and if you visit any xxx.myapp.com you'll see the shop front-end application.
The Problem
I want to allow my customers to connect their own domains, so I told them to set up a CNAME and an A record:
A Record => # => 12.12.12.3 (my root nginx IP)
CNAME => www => their.myapp.com
Now each request to customer.com will be resolved by my nginx.
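As a quick sanity check, the records a customer adds can be verified from the proxy host with dig before expecting any traffic to arrive (customer.com is a placeholder):
# The apex should resolve to the proxy IP and www should be a CNAME to the app
dig +short customer.com A
dig +short www.customer.com CNAME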
So I added this configuration to my nginx to catch all other server_name requests:
server {
listen 80;
server_name ~^.*$;
location / {
proxy_pass http://app-front-end:80/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
}
and it works fine.
But how can I handle SSL for this case? It could be any domain name; I don't know in advance what the customer's domain will be.
How can I give them the ability to add an SSL certificate automatically, without creating it manually?
This server block should work, since variables are supported in the ssl_certificate and ssl_certificate_key directives.
http {
map "$ssl_server_name" $domain_name { ~(.*)\.(.*)\.(.*)$ $2.$3; }
server {
listen 443 ssl;
server_name ~^.*$;
ssl_certificate /path/to/cert/files/$domain_name.crt;
ssl_certificate_key /path/to/cert/keys/$domain_name.key;
location / {
proxy_pass http://app-front-end:80/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
}
}
P.S. Using variables compromises performance, because nginx has to load the certificate files on each SSL handshake.
Ref: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate
In my view, instead of compromising nginx performance, you could use a cron job (say, a Let's Encrypt bot) that fetches a certificate for each user-requested domain, adds it to the nginx config, and reloads the server.
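A rough sketch of what such a job could run for one newly connected domain, assuming an HTTP-01 webroot at /var/www/certbot (domain, webroot and e-mail are placeholders to adjust):
# Issue (or renew) a certificate for a single customer domain, then reload nginx
certbot certonly --webroot -w /var/www/certbot \
-d customer-domain.com \
--non-interactive --agree-tos -m admin@myapp.com
nginx -s reload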
Bonus:
I have used Traefik, which is a Kubernetes-based solution; it loads configs on the fly without a restart.
On a VM you can use certbot to manage the SSL/TLS certificates.
If you are using the HTTP-01 method to verify your domain, you won't be able to get a wildcard certificate.
I would suggest using the DNS-01 method in certbot for domain verification; that way you can get a wildcard certificate and use it, for example:
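A wildcard issuance with the DNS-01 challenge might look like this with the manual plugin (you will be prompted to create a TXT record; DNS-provider plugins can automate that step):
# Request a wildcard certificate for *.myapp.com via the DNS-01 challenge
certbot certonly --manual --preferred-challenges dns \
-d "*.myapp.com" -d myapp.com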
Add the certificate to the nginx config using:
ssl_certificate /path/to/cert/files/tls.crt;
ssl_certificate_key /path/to/cert/keys/tls.key;
If you are using certbot with the nginx plugin, it will also inject the SSL config lines above into the configuration file automatically.
For individual customer domains you can also run the job or certbot with the HTTP-01 method to get a certificate.
If you are on Kubernetes, you can use cert-manager, which will manage the SSL/TLS certificates.
I suggest using separate server blocks with different SSL certificates; that's the only solution that worked for me:
server {
listen 443 ssl;
root /var/www/html/example1.com;
index index.html;
server_name example1.com;
ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;
location / {
try_files $uri $uri/ =404;
}
}
server {
listen 443 ssl;
root /var/www/html/example2.com;
index index.html;
server_name example2.com;
ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem;
location / {
try_files $uri $uri/ =404;
}
}
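After adding or changing server blocks like these, it is worth validating the configuration and reloading nginx (standard commands, not specific to this answer):
# Check the configuration syntax, then reload without dropping connections
sudo nginx -t && sudo systemctl reload nginx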

How to set up gunicorn and uvicorn with nginx and Django Channels?

I am stuck on this problem and I need help.
I am trying to configure an nginx server with django-channels, and I have the following configurations. Nginx:
server {
server_name {{ my_domain }};
location = /favicon.ico { access_log off; log_not_found off; }
client_max_body_size 32m;
root /var/www/religion-python;
location /static {
alias /var/www/religion-python/static/;
}
location /media {
alias /var/www/religion-python/media;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
location /ws/ {
proxy_pass http://0.0.0.0:9000;
proxy_http_version 1.1;
proxy_read_timeout 86400;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/api.agro.dots.md/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/api.agro.dots.md/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
gunicorn:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/religion-python
ExecStart=/var/www/religion-python/bin/gunicorn\
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
Agronomi.wsgi:application
[Install]
WantedBy=multi-user.target
I used this tutorial to configure gunicorn, but for WebSockets I read on the django-channels site that I have to set up Daphne with supervisor, which I don't know how to do and can't find instructions for. Can someone help me with some tutorials or tips on how to do this, or maybe explain what supervisor is needed for?
I read that uvicorn is easy to install and configure with gunicorn and django-channels, but again I found nothing about how to do this.
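For what it's worth, running a Channels ASGI application under gunicorn is usually done through uvicorn's worker class; a minimal sketch, assuming the project exposes Agronomi.asgi:application (the ASGI counterpart of the wsgi module above) and that uvicorn is installed in the same environment:
# Same socket as the existing unit, but serving the ASGI app through uvicorn workers
/var/www/religion-python/bin/gunicorn \
--workers 3 \
--bind unix:/run/gunicorn.sock \
-k uvicorn.workers.UvicornWorker \
Agronomi.asgi:application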

Broken UI and multiple console errors when deploying Nexus3 on Docker and an nginx reverse-proxy

I am running Nexus3 in Docker as well as an nginx reverse proxy in Docker. I have my own SSL certificate. I followed the instructions from Sonatype on how to properly handle using a reverse proxy and use the Nexus3 certificates. The problem is that the UI renders broken when I go to my repository (screenshot omitted).
The browser console also shows multiple errors (screenshot omitted).
What could cause this? This is my nginx.conf:
server {
listen 443 ssl http2;
ssl_certificate /etc/ssl/confidential.com/fullchain.cer;
ssl_certificate_key /etc/ssl/confidential.com/*.confidential.com.key;
server_name internal.confidential.com;
location /test {
proxy_pass http://nexus:8081/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https";
}
}

How to implement (Certbot) ssl using Docker with Nginx image

I'm trying to implement SSL in my application using Docker with the nginx image. I have two apps, one for the back end (api) and the other for the front end (admin). It's working with HTTP on port 80, but I need to use HTTPS. This is my nginx config file...
upstream ulib-api {
server 10.0.2.229:8001;
}
server {
listen 80;
server_name api.ulib.com.br;
location / {
proxy_pass http://ulib-api;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
client_max_body_size 100M;
}
upstream ulib-admin {
server 10.0.2.229:8002;
}
server {
listen 80;
server_name admin.ulib.com.br;
location / {
proxy_pass http://ulib-admin;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
client_max_body_size 100M;
}
I've found some tutorials, but they all use docker-compose. I need to set it up with a Dockerfile. Can anyone shed some light?
... I'm using an ECS instance on AWS and the project is built with CI/CD.
This is just one of the possible ways:
First, issue a certificate using certbot. You will end up with a couple of *.pem files.
There are good tutorials on installing and running certbot on different systems; I used Ubuntu with the command certbot --nginx certonly. You need to run this command on the host your domain points to, because certbot will verify that you own the domain through a number of challenges.
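The full invocation for the two hostnames above might look something like this; certbot typically places the resulting *.pem files under /etc/letsencrypt/live/<first-domain>/:
# Issue a certificate covering both hostnames; run on the machine the DNS records point to
sudo certbot --nginx certonly -d api.ulib.com.br -d admin.ulib.com.br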
Second, create the nginx containers. You will need a proper nginx.conf and to link the certificates into the containers; I use Docker volumes, but that is not the only way.
My nginx.conf looks like the following:
http {
server {
listen 443 ssl;
ssl_certificate /cert/<yourdomain.com>/fullchain.pem;
ssl_certificate_key /cert/<yourdomain.com>/privkey.pem;
ssl_trusted_certificate /cert/<yourdomain.com>/chain.pem;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
...
}
}
Last, run nginx with the proper volumes mounted:
docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/cert:/cert:ro -p 443:443 nginx:1.15-alpine
Notice:
I mapped $PWD/cert into the container as /cert. This is the folder where the *.pem files are stored; they live under ./cert/example.com/*.pem.
Inside nginx.conf you refer to these certificates with the ssl_... directives.
You need to publish port 443 (-p 443:443) to be able to connect.
