How to set up gunicorn and uvicorn with nginx and django channels? - django-channels

I am stuck on this problem and need help.
I am trying to configure an nginx server with django-channels, and I have the following configurations. Nginx:
server {
    server_name {{ my_domain }};

    location = /favicon.ico { access_log off; log_not_found off; }
    client_max_body_size 32m;
    root /var/www/religion-python;

    location /static {
        alias /var/www/religion-python/static/;
    }

    location /media {
        alias /var/www/religion-python/media;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    location /ws/ {
        proxy_pass http://0.0.0.0:9000;
        proxy_http_version 1.1;
        proxy_read_timeout 86400;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/api.agro.dots.md/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api.agro.dots.md/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
gunicorn (systemd unit):
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/religion-python
ExecStart=/var/www/religion-python/bin/gunicorn \
    --access-logfile - \
    --workers 3 \
    --bind unix:/run/gunicorn.sock \
    Agronomi.wsgi:application

[Install]
WantedBy=multi-user.target
I used this tutorial to configure gunicorn, but for WebSockets I read on the django-channels site that I have to set up Daphne with supervisor, which I don't know how to do and can't find instructions for. Can someone help me with some tutorials or tips on how to do this, or maybe explain what supervisor is needed for?
I also read that uvicorn is easy to install and configure alongside gunicorn and django-channels, but again I found nothing on how to do this.
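For reference, a common pattern is to keep gunicorn serving plain HTTP over the unix socket and run a second ASGI process (Daphne, or uvicorn) just for the WebSocket traffic that the /ws/ location proxies to port 9000. Below is a minimal sketch of such a systemd unit; it assumes the ASGI module is Agronomi.asgi (mirroring Agronomi.wsgi above) and that daphne is installed next to gunicorn in /var/www/religion-python/bin, so adjust the paths to your environment:
[Unit]
Description=daphne daemon (ASGI / WebSockets)
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/religion-python
# Serve the ASGI app on the port the /ws/ location proxies to; with uvicorn
# instead of daphne, an equivalent ExecStart would be:
#   /var/www/religion-python/bin/gunicorn Agronomi.asgi:application -k uvicorn.workers.UvicornWorker -b 127.0.0.1:9000
ExecStart=/var/www/religion-python/bin/daphne -b 127.0.0.1 -p 9000 Agronomi.asgi:application

[Install]
WantedBy=multi-user.target
Here systemd plays the same role supervisor does in the channels documentation: it keeps the Daphne/uvicorn process running and restarts it on boot, so either tool works.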

Related

How to pass SSL processing from local NginX to Docker NginX?

I have a Docker nginx reverse proxy configured with SSL by a configuration like this:
server {
    listen 80;
    server_name example.com;
    location / {
        return 301 https://$host$request_uri;
    }
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    # some locations location ... { ... }
}
The certificates are configured with certbot and working fine.
All the containers are up and running, but I have multiple websites on this server that are managed by the local NginX, so I set the ports for the Docker NginX like this:
nginx:
  image: example/example_nginx:latest
  container_name: example_nginx
  ports:
    - "8123:80"
    - "8122:443"
  volumes:
    - ${PROJECT_ROOT}/data/certbot/conf:/etc/letsencrypt
    - ${PROJECT_ROOT}/data/certbot/www:/var/www/certbot
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
The Docker port 80 maps to local port 8123 (HTTP), and the Docker port 443 maps to local port 8122 (HTTPS). To pass requests from the local NginX to the Docker container's NginX I use the following config:
server {
    listen 80;
    server_name example.com;
    location / {
        access_log off;
        proxy_pass http://localhost:8123;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
server {
    listen 443 ssl;
    server_name example.com;
    location / {
        access_log off;
        proxy_pass https://localhost:8122;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
When I open the website it works, but the certificate seems to be broken and my WebSockets crash.
My question is: how can I pass SSL processing from the local NginX to the Docker NginX so that it works as expected?
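One way to do this (only a sketch, not tested against this exact setup) is to stop terminating TLS for this domain in the local NginX and instead pass the raw TLS stream through to the Docker NginX, routing on the SNI hostname with the stream module and ssl_preread. The ports below match the compose file above; the 8443 fallback for the other locally hosted sites is hypothetical:
# In the top-level nginx.conf of the local NginX, outside the http {} block.
stream {
    map $ssl_preread_server_name $tls_backend {
        example.com 127.0.0.1:8122;   # Docker NginX terminates TLS itself
        default     127.0.0.1:8443;   # hypothetical port the other local HTTPS sites would move to
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $tls_backend;
    }
}
Because the stream block now owns port 443, the existing listen 443 ssl servers for the other websites have to move to the fallback port. The simpler alternative is to keep terminating TLS on the host and proxy plain HTTP to port 8123, but then proxy_http_version 1.1 and the Upgrade/Connection "upgrade" headers are needed so the WebSockets survive the hop.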

Redirecting to an invalid URI: Jenkins in a Docker container behind Nginx

I successfully pulled an image from the official Jenkins Docker Hub repository and ran a container with the following parameters:
docker run -d --name=jenkins -p 8080:8080 -p 50000:50000 -e JENKINS_OPTS="--prefix=/build" -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
Also, I have Nginx installed on my host (not in a container!).
My Nginx configuration:
upstream jenkins {
    server localhost:8080;
    keepalive 16;
}
server {
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    server_name example.com www.example.com;
    ignore_invalid_headers off;
    location /build/ {
        proxy_pass http://jenkins;
        proxy_http_version 1.1;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "";
        proxy_set_header X-Forwarded-Proto: $scheme;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffering off;
        proxy_request_buffering off;
    }
    access_log /var/log/nginx/jenkins.access.log;
    error_log /var/log/nginx/jenkins.error.log;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    listen 80;
    return 301 https://example.com$request_uri;
}
I am trying to access Jenkins via https://example.com/build. It asks me to input an initial admin password. After successful submission it gives me this page:
Page URL is https://example.com/build/:%20https://example:80/build/
I tried to add the prefix... and tried to restart both of them, but nothing changes.
Simply put the proxy_set_header lines before the proxy_pass. Such as:
location /build/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Connection "";
    proxy_set_header X-Forwarded-Proto: $scheme;
    proxy_pass http://jenkins;
    proxy_http_version 1.1;
    proxy_redirect default;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffering off;
    proxy_request_buffering off;
}
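If the redirect is still wrong after reloading nginx, a quick way to see exactly what is being sent back (the domain is a placeholder) is to inspect the Location header directly:
# Expect something like Location: https://example.com/build/ rather than the mangled URI.
curl -sI https://example.com/build | grep -i '^location'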

Pydio Cells docker + NGINX = 404 error on /ws/chat and /ws/event

I have set up a Docker container using the pydio/cells:2.1.1 image from Docker Hub.
My docker-compose.yaml contains the following section:
cells:
  image: pydio/cells:2.1.1
  environment:
    - CELLS_NO_TLS=1
    - CELLS_BIND=files.redacted.dev:8080
    - CELLS_EXTERNAL=https://files.redacted.dev
  volumes:
    - /srv/cells:/var/cells
  ports:
    - "8081:8080"
  depends_on:
    - cells_mysql
  restart: unless-stopped
To expose Cells to the network I'm using NGINX with the following configuration:
server {
    client_max_body_size 200M;
    server_name files.redacted.dev;
    location / {
        proxy_buffering off;
        proxy_pass http://localhost:8081$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
    location /ws {
        proxy_buffering off;
        proxy_pass http://localhost:8081;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/files.redacted.dev/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/files.redacted.dev/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = files.redacted.dev) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name files.redacted.dev;
    listen 80;
    listen [::]:80;
    return 404; # managed by Certbot
}
Pretty much everything works OK; however, I noticed that when I create a new file or folder I have to reload the page before it appears in the UI.
Looking at Firefox's dev console I see 404 errors on GET wss://files.redacted.dev/ws/chat and wss://files.redacted.dev/ws/event requests.
I tested on the host with the following command (thereby bypassing NGINX):
curl --include --no-buffer --header "Connection: Upgrade" --header "Upgrade: websocket" --header "Host: files.redacted.dev:80" --header "Origin: https://files.redacted.dev" --header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" --header "Sec-WebSocket-Version: 13" http://localhost:8081/ws/chat
And the command didn't terminate (I'm assuming that means it was successful...).
Looks like the NGINX configuration is the problem. Does anybody know how to fix this?
In the end it was a missing header for the /ws location:
location /ws {
    proxy_buffering off;
    proxy_pass http://localhost:8081;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400;
    proxy_set_header Host $host; # This is what was missing!
    proxy_http_version 1.1;      # This might also be needed...
}
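After editing the location block, the change can be checked and applied with the usual commands:
sudo nginx -t && sudo systemctl reload nginx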

How to implement (Certbot) ssl using Docker with Nginx image

I'm trying to implement SSL in my application using Docker with the nginx image. I have two apps, one for the back-end (api) and the other for the front-end (admin). It's working over HTTP on port 80, but I need to use HTTPS. This is my nginx config file...
upstream ulib-api {
    server 10.0.2.229:8001;
}
server {
    listen 80;
    server_name api.ulib.com.br;
    location / {
        proxy_pass http://ulib-api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    client_max_body_size 100M;
}
upstream ulib-admin {
    server 10.0.2.229:8002;
}
server {
    listen 80;
    server_name admin.ulib.com.br;
    location / {
        proxy_pass http://ulib-admin;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    client_max_body_size 100M;
}
I found some tutorials, but all of them use docker-compose. I need to set it up with a Dockerfile. Can anyone shed some light on this?
... I'm using an ECS instance on AWS and the project is built with CI/CD.
This is just one of the possible ways:
First, issue a certificate using certbot. You will end up with a couple of *.pem files.
There are good tutorials on installing and running certbot on different systems; I used Ubuntu with the command certbot --nginx certonly. You need to run this command on your domain because certbot will verify that you own the domain through a number of challenges.
Second, you create the nginx containers. You will need a proper nginx.conf and you need to link the certificates into these containers. I use Docker volumes, but that is not the only way.
My nginx.conf looks like the following:
http {
    server {
        listen 443 ssl;
        ssl_certificate /cert/<yourdomain.com>/fullchain.pem;
        ssl_certificate_key /cert/<yourdomain.com>/privkey.pem;
        ssl_trusted_certificate /cert/<yourdomain.com>/chain.pem;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ...
    }
}
Last, you run nginx with the proper volumes mounted:
docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/cert:/cert:ro -p 443:443 nginx:1.15-alpine
Notice:
I mapped $PWD/cert into the container as /cert. This is the folder where the *.pem files are stored; they live under ./cert/example.com/*.pem.
Inside nginx.conf you refer to these certificates with the ssl_... directives.
You should expose port 443 to be able to connect.
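Since the question specifically asks for a Dockerfile rather than docker run or docker-compose, a minimal sketch that bakes the same nginx.conf into the image (file names as used above, adjust to your layout) could look like this:
# Hypothetical Dockerfile equivalent of the docker run command above.
FROM nginx:1.15-alpine
# Bake the reverse-proxy configuration into the image.
COPY nginx.conf /etc/nginx/nginx.conf
# Certificates stay outside the image and are mounted read-only at runtime
# (e.g. -v $PWD/cert:/cert:ro) so they can be renewed without a rebuild.
EXPOSE 443
It would then be built and started with docker build -t my-nginx . and docker run -d -v $PWD/cert:/cert:ro -p 443:443 my-nginx.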

Native Nginx reverse proxy to Docker container with Letsencrypt

I have an Ubuntu 18.04 LTS box with nginx installed and configured as a reverse proxy:
/etc/nginx/sites-enabled/default:
server {
    server_name example.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://0.0.0.0:3000;
    }
}
I have a website running in a Docker container listening on port 3000. With this configuration, if I browse to http://example.com I see the site.
I then installed Let's Encrypt using the standard install from their website, ran sudo certbot --nginx, and followed the instructions to enable HTTPS for my domain.
Now my /etc/nginx/sites-enabled/default looks like this, and I'm unable to load the site on either https://example.com or http://example.com:
server {
    server_name example.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://0.0.0.0:3000;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Any ideas?
I figured it out. The problem wasn't with my nginx/letsencrypt config; it was a networking issue at the provider level (Azure).
I noticed the Network Security Group only allowed traffic on port 80. The solution was to add a rule for 443.
After adding this rule everything now works as expected.
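For anyone hitting the same thing who prefers the CLI over the portal, the rule can be added with the Azure CLI roughly like this (resource group and NSG names are placeholders):
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name Allow-HTTPS \
  --priority 1001 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443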
