how to run coturn in ubuntu running on localhost - coturn

I want to run a coturn server on Ubuntu. I do not have a domain and want to test it on localhost. For that I followed this tutorial: https://www.allerstorfer.at/install-coturn-on-ubuntu/
Here are the steps I followed:
sudo apt-get install coturn
nano /etc/default/coturn
TURNSERVER_ENABLED=1
listening-port=3478
cli-port=5766
listening-ip=172.17.19.101
Created a secret:
openssl rand -hex 32
and added it to the turnserver.conf file:
use-auth-secret
static-auth-secret=583bAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF
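With use-auth-secret, clients do not present this secret directly; by the TURN REST API convention the username is an expiry timestamp and the password is base64(HMAC-SHA1(secret, username)). A sketch of deriving a credential pair in the shell (turn_credentials is a hypothetical helper; the 24-hour lifetime is an assumption):

```shell
# Derive ephemeral TURN credentials from the static-auth-secret.
turn_credentials() {
  secret=$1
  # username is a Unix timestamp marking when the credential expires
  username=$(( $(date +%s) + 86400 ))   # valid for 24 hours
  # password is base64(HMAC-SHA1(secret, username))
  password=$(printf '%s' "$username" | openssl dgst -sha1 -hmac "$secret" -binary | base64)
  printf 'username=%s\npassword=%s\n' "$username" "$password"
}

# usage:
# turn_credentials 583bAAAA...
```

coturn's turnutils_uclient tool can then be pointed at such credentials to exercise the server.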
vi /etc/services
stun-turn 3478/tcp # Coturn
stun-turn 3478/udp # Coturn
stun-turn-tls 5349/tcp # Coturn
stun-turn-tls 5349/udp # Coturn
turnserver-cli 5766/tcp # Coturn
Started the coturn server:
turnserver -o -v
0: log file opened: /var/tmp/turn_18181_2020-09-15.log
0:
RFC 3489/5389/5766/5780/6062/6156 STUN/TURN Server
Version Coturn-4.5.0.7 'dan Eider'
0:
Max number of open files/sockets allowed for this process: 65535
0:
Due to the open files/sockets limitation,
max supported number of TURN Sessions possible is: 32500 (approximately)
0:
==== Show him the instruments, Practical Frost: ====
0: TLS supported
0: DTLS supported
0: DTLS 1.2 supported
0: TURN/STUN ALPN supported
0: Third-party authorization (oAuth) supported
0: GCM (AEAD) supported
0: OpenSSL compile-time version: OpenSSL 1.1.1 11 Sep 2018 (0x1010100f)
0:
0: SQLite supported, default database location is /var/lib/turn/turndb
0: Redis supported
0: PostgreSQL supported
0: MySQL supported
0: MongoDB is not supported
0:
0: Default Net Engine version: 3 (UDP thread per CPU core)
=====================================================
0: Config file found: /etc/turnserver.conf
0: Bad configuration format: TURNSERVER_ENABLED
0: Listener address to use: 172.17.19.101
0: Config file found: /etc/turnserver.conf
0: Bad configuration format: TURNSERVER_ENABLED
0: Domain name:
0: Default realm:
0: ERROR:
CONFIG ERROR: Empty cli-password, and so telnet cli interface is disabled! Please set a non empty cli-password!
0:
CONFIGURATION ALERT: you did specify the long-term credentials usage
but you did not specify the default realm option (-r option).
Check your configuration.
0: WARNING: cannot find certificate file: turn_server_cert.pem (1)
0: WARNING: cannot start TLS and DTLS listeners because certificate file is not set properly
0: WARNING: cannot find private key file: turn_server_pkey.pem (1)
0: WARNING: cannot start TLS and DTLS listeners because private key file is not set properly
0: Relay address to use: 0.0.0.0
netstat -npta | grep turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
then
service coturn stop
service coturn start
service coturn restart
service coturn status
returned:
● coturn.service - LSB: coturn TURN Server
Loaded: loaded (/etc/init.d/coturn; generated)
Active: active (running) since Tue 2020-09-15 17:02:05 PKT; 3s ago
Docs: man:systemd-sysv-generator(8)
Process: 18860 ExecStop=/etc/init.d/coturn stop (code=exited, status=0/SUCCESS)
Process: 18867 ExecStart=/etc/init.d/coturn start (code=exited, status=0/SUCCESS)
Tasks: 15 (limit: 4915)
CGroup: /system.slice/coturn.service
└─18889 /usr/bin/turnserver -c /etc/turnserver.conf -o -v
There is a step given in the tutorial:
Add to DNS
turn.domain.xx → domain.xx
stun.domain.xx → domain.xx
I am confused by this part, so I
edited the /etc/hosts file and added
127.0.0.1 turn.domain.xx
127.0.0.1 stun.domain.xx
and
telnet localhost 5766
returns
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
but the tutorial shows a successful connection at this point. The only thing I changed is the listening-ip; they used
listening-ip=172.17.19.101
and i used
listening-ip=0.0.0.0
If I instead use
listening-ip=172.17.19.101
then the command
netstat -npta | grep turnserver
returns nothing.
Please guide me: how can I test the coturn server on localhost?

Even though it is a bit late: your turnserver.conf file has an invalid line, TURNSERVER_ENABLED=1. This line corrupts your conf file and turnserver does not start, as you can see in the log:
0: Bad configuration format: TURNSERVER_ENABLED
You cannot use this parameter in the turnserver.conf file; it belongs in /etc/default/coturn.
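A minimal split for a localhost test might look like this (a sketch; the IP, realm, and placeholder values are assumptions — note the log above also complains about an empty cli-password and a missing default realm):

```
# /etc/default/coturn -- init-script switch only
TURNSERVER_ENABLED=1

# /etc/turnserver.conf -- the actual turnserver options
listening-port=3478
listening-ip=127.0.0.1
use-auth-secret
static-auth-secret=<your-hex-secret>
realm=localhost
cli-port=5766
cli-password=<a-non-empty-password>
```

With a non-empty cli-password set, telnet localhost 5766 should connect to the CLI.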


Xdebug 3.0 WSL2 and VSCode - address is already in use by docker-proxy

My VSCode in WSL:Ubuntu is unable to listen on the xdebug port, because the port is blocked by a docker-proxy.
I was following this solution, but having VSCode listen on the xdebug port results in the following error:
Error: listen EADDRINUSE: address already in use :::9003
Can anyone help with connecting VSCode to xdebug?
Windows 11 says the port is already allocated by wslhost:
PS C:\WINDOWS\system32> Get-Process -Id (Get-NetTCPConnection -LocalPort 9003).OwningProcess
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
285 47 2288 4748 0,05 19480 1 wslhost
Ubuntu reports it is allocated by a docker-proxy:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9003 0.0.0.0:* LISTEN 17210/docker-proxy
tcp6 0 0 :::9003 :::* LISTEN 17217/docker-proxy
docker-compose-version: docker-compose version 1.25.0
The xdebug.log says:
[Step Debug] INFO: Connecting to configured address/port: host.docker.internal:9003.
[Step Debug] ERR: Time-out connecting to debugging client, waited: 200 ms. Tried: host.docker.internal:9003 (through xdebug.client_host/xdebug.client_port) :-(
That is expected, as long as nothing is listening.
For xdebug.client_host I've tried:
host.docker.internal
xdebug://gateway and xdebug://nameserver, referring to this: https://docs.google.com/document/d/1W-NzNtExf5C4eOu3rRQm1WlWnbW44u3ANDDA49d3FD4/edit?pli=1
setting the env-variable with docker-compose.yml: XDEBUG_CONFIG="client_host=..."
Removing the Expose directive from Dockerfile/docker-compose as in this comment doesn't remove the error either.
Solved it. For others with this challenge:
Inside the WSL Ubuntu docker container, host.docker.internal points to the wrong IP.
In the WSL distribution, the nameserver in /etc/resolv.conf is the IP of the Windows host.
To get the correct IP use this answer: How to get the primary IP address of the local machine on Linux and OS X?
My solution is to define an env-variable with this IP:
alias docker_compose_local_ip="ifconfig eth0 | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'"
export DOCKER_COMPOSE_LOCAL_IP=$(docker_compose_local_ip)
and configure the container with it:
services:
  service-name:
    environment:
      - XDEBUG_CONFIG=client_host=${DOCKER_COMPOSE_LOCAL_IP} ...
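The ifconfig/sed pipeline above can be brittle; the extraction step can also be written as a small reusable function (a sketch; first_ipv4 is a hypothetical helper, and eth0 is assumed to be the WSL interface):

```shell
# Print the first IPv4 address found on stdin
# (e.g. from `ip -4 addr show eth0`).
first_ipv4() {
  grep -oE 'inet ([0-9]+\.){3}[0-9]+' | head -n 1 | awk '{print $2}'
}

# usage:
# ip -4 addr show eth0 | first_ipv4
```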

How to fix "Connection refused" error on ACME certificate challenge with cookiecutter-django

I have created a simple website using cookiecutter-django (using the latest master cloned today). Running the docker-compose setup locally works. Now I would like to deploy the site on digital ocean. To do this, I run the following commands:
$ docker-machine create -d digitalocean --digitalocean-access-token=secret instancename
$ eval "$(docker-machine env instancename)"
$ sudo docker-compose -f production.yml build
$ sudo docker-compose -f production.yml up
In the cookiecutter-django documentation I read
If you are not using a subdomain of the domain name set in the project, then remember to put your staging/production IP address in the DJANGO_ALLOWED_HOSTS environment variable (see Settings) before you deploy your website. Failure to do this will mean you will not have access to your website through the HTTP protocol.
Therefore, in the file .envs/.production/.django I changed the line with DJANGO_ALLOWED_HOSTS from
DJANGO_ALLOWED_HOSTS=.example.com (instead of example.com I use my actual domain)
to
DJANGO_ALLOWED_HOSTS=XXX.XXX.XXX.XX
(with XXX.XXX.XXX.XX being the IP of my digital ocean droplet; I also tried DJANGO_ALLOWED_HOSTS=.example.com and DJANGO_ALLOWED_HOSTS=.example.com,XXX.XXX.XXX.XX with the same outcome)
In addition, I logged in to where I registered the domain and made sure to point the A-Record to the IP of my digital ocean droplet.
With this setup the deployment does not work. I get the following error message:
traefik_1 | time="2019-03-29T21:32:20Z" level=error msg="Unable to obtain ACME certificate for domains \"example.com\" detected thanks to rule \"Host:example.com\" : unable to generate a certificate for the domains [example.com]: acme: Error -> One or more domains had a problem:\n[example.com] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Fetching http://example.com/.well-known/acme-challenge/example-key-here: Connection refused, url: \n"
Unfortunately, I was not able to find a solution for this problem. Any help is greatly appreciated!
Update
When I run netstat -antp on the server as suggested in the comments I get the following output (IPs replaced with placeholders):
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1590/sshd
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:48923 SYN_RECV -
tcp 0 332 XXX.XXX.XXX.XX:22 ZZ.ZZZ.ZZ.ZZZ:49726 ESTABLISHED 16959/0
tcp 0 1 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:17195 FIN_WAIT1 -
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:57909 ESTABLISHED 16958/sshd: [accept
tcp6 0 0 :::2376 :::* LISTEN 5120/dockerd
tcp6 0 0 :::22 :::* LISTEN 1590/sshd
When I run $ sudo docker-compose -f production.yml up before, netstat -antp returns this:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1590/sshd
tcp 0 332 XXX.XXX.XXX.XX:22 ZZ.ZZZ.ZZ.ZZZ:49726 ESTABLISHED 16959/0
tcp 0 0 XXX.XXX.XXX.XX:22 AA.AAA.AAA.A:50098 ESTABLISHED 17046/sshd: [accept
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:55652 SYN_RECV -
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:16750 SYN_RECV -
tcp 0 0 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:31541 SYN_RECV -
tcp 0 1 XXX.XXX.XXX.XX:22 YYY.YY.Y.YYY:57909 FIN_WAIT1 -
tcp6 0 0 :::2376 :::* LISTEN 5120/dockerd
tcp6 0 0 :::22 :::* LISTEN 1590/sshd
In my experience, the Droplets are configured as cookiecutter-django needs them and the ports are opened properly, so unless you closed them you shouldn't have to do anything.
Usually when this error happens, it's due to a DNS configuration issue. Basically, Let's Encrypt was not able to reach your server using the domain example.com. Unfortunately, you're not giving us the actual domain you've used, so I'll try to guess.
You said you've configured an A record to point to your droplet, which is what you should do. However, this change needs to propagate to most of the name servers, which may take time. It might be propagated for you, but if the name server used by Let's Encrypt isn't up to date yet, issuing your TLS certificate will fail.
You can check how well it's propagated using an online tool which checks multiple name servers at once, like https://dnschecker.org/.
From your machine, you can do so using dig (for people interested, I recommend this video):
# Using your default name server
dig example.com
# Using 1.1.1.1 as the name server
dig @1.1.1.1 example.com
Hope that helps.
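To check several public resolvers at once from the shell, something like this works (a sketch; check_propagation is a hypothetical helper, the resolver list is an assumption, and dig must be installed):

```shell
# Query a handful of public resolvers for the domain's A record.
check_propagation() {
  domain=$1
  for ns in 1.1.1.1 8.8.8.8 9.9.9.9; do
    # print "<resolver>: <first A record>" for each resolver
    printf '%s: %s\n' "$ns" "$(dig +short "@$ns" "$domain" A | head -n 1)"
  done
}

# usage:
# check_propagation example.com
```

If the resolvers disagree, the record simply has not propagated everywhere yet.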

Securing Redis with Stunnel on Docker Swarm

I have added stunnel to a Redis container and PHP-FPM container to securely transfer application data between services on a docker swarm cluster. I haven't been able to find any other similar questions, so I'm wondering if I'm taking the wrong approach here.
I have this working in my local environment, it's when I deploy it to the swarm that it fails.
Problem
When I try to ping from the client container by executing redis-cli -p 8001 ping
Then I get the following error: Error: Connection reset by peer
When I take a look at the stunnel logs I can see that it accepts the connection on the client and then fails when attempting to forward it to the redis server container, as seen below:
2018.05.19 16:42:39 LOG5[ui]: Configuration successful
2018.05.19 16:45:19 LOG7[0]: Service [redis-client] started
2018.05.19 16:45:19 LOG5[0]: Service [redis-client] accepted connection from 127.0.0.1:41710
2018.05.19 16:45:19 LOG6[0]: s_connect: connecting 10.0.0.5:6379
2018.05.19 16:45:19 LOG7[0]: s_connect: s_poll_wait 10.0.0.5:6379: waiting 10 seconds
2018.05.19 16:45:19 LOG3[0]: s_connect: connect 10.0.0.5:6379: Connection refused (111)
2018.05.19 16:45:19 LOG5[0]: Connection reset: 0 byte(s) sent to SSL, 0 byte(s) sent to socket
2018.05.19 16:45:19 LOG7[0]: Local descriptor (FD=3) closed
2018.05.19 16:45:19 LOG7[0]: Service [redis-client] finished (0 left)
Configuration Details
Here's the stunnel configuration on the Redis server
pid = /run/stunnel-redis.pid
output = /tmp/stunnel.log
[redis-server]
cert = /etc/stunnel/redis-server.crt
key = /etc/stunnel/redis-server.key
accept = redis_master:6379
connect = 127.0.0.1:6378
And here's the stunnel configuration for the client
pid = /run/stunnel-redis.pid
output = /tmp/stunnel.log
[redis-client]
client = yes
accept = 127.0.0.1:8001
connect = redis_master:6379
CAfile = /etc/stunnel/redis-server.crt
verify = 4
debug = 7
This is what my docker-stack.yml file looks like for these two services
php_fpm:
  build:
    context: .
    dockerfile: fpm.Dockerfile
  image: registry.github.com/hidden
  ports:
    - "8001"
redis_master:
  build:
    context: .
    dockerfile: redis.Dockerfile
  image: registry.github.com/hidden
  ports:
    - "6378"
    - "6379"
  sysctls:
    - net.core.somaxconn=511
  volumes:
    - redis-data:/data
Output of netstat -plunt in the fpm client container
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 208/stunnel4
tcp 0 0 127.0.0.11:45281 0.0.0.0:* LISTEN -
tcp6 0 0 :::9000 :::* LISTEN 52/php-fpm.conf)
udp 0 0 127.0.0.11:43781 0.0.0.0:* -
Output of netstat -plunt in the redis server container
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.11:39294 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6378 0.0.0.0:* LISTEN 8/redis-server *:63
tcp 0 0 10.0.0.14:6379 0.0.0.0:* LISTEN 37/stunnel4
tcp6 0 0 :::6378 :::* LISTEN 8/redis-server *:63
udp 0 0 127.0.0.11:44855 0.0.0.0:* -
I've confirmed there is no firewall active on the host machine. These services are currently on the same host, but they will soon be on separate hosts, hence the need for stunnel.
These services are deployed with the docker stack command so an overlay network is automatically created and attached to both of these services.
Anyone have any thoughts on why the request from the client to the server is being refused?
FINALLY got this working! I hope this helps someone else. The problem was the stunnel configuration on the redis-server; the correct configuration is as follows:
[redis-server]
cert = /etc/stunnel/redis-server.crt
key = /etc/stunnel/redis-server.key
accept = 6379
connect = 6378
The problem appears to be that I had used the hostname redis_master in the accept option; switching it to only the port fixed the problem.
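A plausible explanation (an assumption, not verified from the logs): on a swarm overlay network, redis_master resolves to the service's virtual IP rather than an address the container itself owns, so stunnel cannot bind to it; a bare port makes stunnel bind on 0.0.0.0 instead. Annotated:

```
[redis-server]
cert = /etc/stunnel/redis-server.crt
key = /etc/stunnel/redis-server.key
; a bare port binds 0.0.0.0, which works no matter which
; address the overlay network assigns to the container
accept = 6379
; forward decrypted traffic to redis on the local side
connect = 6378
```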

Docker Couchbase: Cannot connect to port 8091 using curl from within entrypoint script

Running docker-machine version 0.5.0, Docker version 1.9.0 on OS X 10.11.1.
I've a Couchbase image of my own (not the official one). From inside the entrypoint script, I'm running some curl commands to configure the Couchbase server and to load sample data. Problem is, curl fails with error message Failed to connect to localhost port 8091: Connection refused.
I've tried 127.0.0.1, 0.0.0.0, localhost, all without any success. netstat shows that port 8091 on localhost is listening. If I later log on to the server using docker exec and run the same curl commands, those work! What am I missing?
Error:
couchbase4 | % Total % Received % Xferd Average Speed Time Time Time Current
couchbase4 | Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8091: Connection refused
netstat output:
root@cd4d3eb00666:/opt/couchbase/var/lib# netstat -lntu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:21100 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:21101 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9998 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8091 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8092 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:41125 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11209 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11210 0.0.0.0:* LISTEN
tcp6 0 0 :::11209 :::* LISTEN
tcp6 0 0 :::11210 :::* LISTEN
Here is my Dockerfile:
FROM couchbase
COPY configure-cluster.sh /opt/couchbase
CMD ["/opt/couchbase/configure-cluster.sh"]
and configure-cluster.sh
/entrypoint.sh couchbase-server &
sleep 10
curl -v -X POST http://127.0.0.1:8091/pools/default -d memoryQuota=300 -d indexMemoryQuota=300
curl -v http://127.0.0.1:8091/node/controller/setupServices -d services=kv%2Cn1ql%2Cindex
curl -v http://127.0.0.1:8091/settings/web -d port=8091 -d username=Administrator -d password=password
curl -v -u Administrator:password -X POST http://127.0.0.1:8091/sampleBuckets/install -d '["travel-sample"]'
This configures the Couchbase server but still debugging how to bring Couchbase back in foreground.
Complete details at: https://github.com/arun-gupta/docker-images/tree/master/couchbase
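The fixed sleep 10 in the script above is racy; polling the REST port until it answers may be more robust (a sketch; wait_until is a hypothetical helper and the 30-try limit is an assumption):

```shell
# Retry a command up to N times, one second apart, instead of a fixed sleep.
wait_until() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    # give up after the requested number of attempts
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# usage (in configure-cluster.sh, replacing `sleep 10`):
# wait_until 30 curl -sf -o /dev/null http://127.0.0.1:8091/pools
```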
It turns out that if I do the curls after restarting the server, those work. Go figure! That said, note that the REST API for installing sample buckets is undocumented as far as I know. arun-gupta's blog and his answer here are the only places where I saw any mention of a REST call for installing sample buckets. There's a python script available but that requires installing python-httplib2.
That said, arun-gupta's last curl statement may be improved upon as follows:
if [ -n "$SAMPLE_BUCKETS" ]; then
  IFS=',' read -ra BUCKETS <<< "$SAMPLE_BUCKETS"
  for bucket in "${BUCKETS[@]}"; do
    printf "\n[INFO] Installing %s.\n" "$bucket"
    curl -sSL -w "%{http_code} %{url_effective}\\n" -u "$CB_USERNAME:$CB_PASSWORD" --data-ascii '["'"$bucket"'"]' "$ENDPOINT/sampleBuckets/install"
  done
fi
where SAMPLE_BUCKETS can be a comma-separated environment variable, possible values being combinations of gamesim-sample, beer-sample and travel-sample. The --data-ascii option keeps curl from choking on the dynamically created JSON.
Now if only there was an easy way to start the server in the foreground. :)

How can I redirect HTTP to HTTPS with uWSGI internal routing?

I have deployed a WSGI application with uWSGI, but I am not using NGINX. How can I use uWSGI's internal routing to redirect http requests to https?
I have tried uwsgi --route-uri="^http:\/\/(.+)$ redirect-permanent:https://\$1" but get an error from uWSGI: unrecognized option '--route-uri=^https:\/\/(.+)$ redirect-permanent:https://\$1'
To redirect HTTP to HTTPS, use the following config:
[uwsgi]
; privileged port can only be opened as shared socket
shared-socket = 0.0.0.0:80
shared-socket = 0.0.0.0:443
;enable redirect to https
http-to-https = =0
; enable https, spdy is optional
https2 = addr==1,cert=server.crt,key=server.key,spdy=1
; alternative
; https = =1,server.crt,server.key
; force change of user after binding to ports as root
uid = user
gid = usergroup
; where original app will be running on IP or UNIX socket
socket = 127.0.0.1:8001
module = smthg.wsgi
If your reverse proxy or load balancer passes X-Forwarded-Proto header along with the request, the following config will work:
[uwsgi]
http-socket = :3031
<... your uwsgi config options here ... >
route-if=equal:${HTTP_X_FORWARDED_PROTO};http redirect-permanent:https://<your_host_name_here>${REQUEST_URI}
Some load balancers, such as AWS ELB pass this header automatically.
To build on Oleg's answer: for this to work, you'll need to manually add some headers to stop uWSGI causing 502 errors at the ELB.
route-if=equal:${HTTP_X_FORWARDED_PROTO};http addheader:Content-Type: */*; charset="UTF-8"
route-if=equal:${HTTP_X_FORWARDED_PROTO};http addheader:Content-Length: 0
route-if=equal:${HTTP_X_FORWARDED_PROTO};http redirect-permanent:https://<your_host_name_here>${REQUEST_URI}
For the ELB to recognise the 302, you need to manually add the Content-Length and Content-Type header. This was not obvious, even if you add ELB logging.
To debug you need to remember to actually send the X-Forwarded-Proto header with curl:
curl -v -H "X-Forwarded-Proto: http" http://localhost:port
Here is how you can redirect & force HTTPS directly in uWSGI, for anyone who does not want to run nginx.
[uwsgi]
master = True
enable-threads = True
thunder-lock = True
shared-socket = :443
https2 = addr==0,cert=yourdomain.crt,key=yourdomain.key,HIGH,spdy=1
http-to-https = 0.0.0.0:80
route-if-not = equal:${HTTPS};on redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}
route-if = equal:${HTTPS};on addheader:Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Tested and works with docker too (python:3.6.7-alpine3.8)
Also, if you will debug an HTTP request you will see the 1st response header is 301 to HTTPS.
Then if you will try again (from the same browser) you will see 307 since HSTS is enabled.
[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.17.1 (64bit) on [Fri Dec 21 20:06:47 2018] ***
compiled with version: 6.4.0 on 21 December 2018 20:05:49
os: Linux-3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
nodename: web1
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /usr/src/app
detected binary path: /usr/local/bin/uwsgi
*** dumping internal routing table ***
[rule: 0] subject: ${HTTPS};on func: !equal action: redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}
[rule: 1] subject: ${HTTPS};on func: equal action: addheader:Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
*** end of the internal routing table ***
uwsgi shared socket 0 bound to TCP address :443 fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
chdir() to /usr/src/app
your memory page size is 4096 bytes
detected max file descriptor number: 65536
lock engine: pthread robust mutexes
thunder lock: enabled
uWSGI http bound on :443 fd 3
uWSGI http bound on 0.0.0.0:80 fd 5
uwsgi socket 0 bound to TCP address 127.0.0.1:45870 (port auto-assigned) fd 4
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.6.7 (default, Dec 21 2018, 03:29:53) [GCC 6.4.0]
Python main interpreter initialized at 0x7fdf16663b40
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 364600 bytes (356 KB) for 4 cores
*** Operational MODE: preforking ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x7fdf16663b40 pid: 1 (default app)
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 18, cores: 1)
spawned uWSGI worker 2 (pid: 19, cores: 1)
spawned uWSGI worker 3 (pid: 20, cores: 1)
spawned uWSGI worker 4 (pid: 21, cores: 1)
spawned uWSGI http 1 (pid: 22)
Note it runs as root
Hope this helps.
Just another answer. Doing
[uwsgi]
<... other uwsgi configs ... >
plugins = router_redirect
route-if-not = equal:${HTTPS};on redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}
will force HTTPS for the whole site.
Tested.
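A quick way to confirm the redirect from the command line (a sketch; check_redirect is a hypothetical helper and the URL is an assumption):

```shell
# Print the status code and the Location target for a plain-HTTP request.
check_redirect() {
  curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' "$1"
}

# usage:
# check_redirect http://localhost/
```

A permanent redirect should report 301 with the https:// URL as the target.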
For those who have tried both answers above and unfortunately failed, leave uWSGI as it is and add this Nginx conf:
server {
    listen 80;
    server_name <your_domain>;
    rewrite ^/(.*) https://<your_domain>/$1 permanent;
}
I feel like uWSGI is not that friendly when it comes to internal routing.
