How can I redirect HTTP to HTTPS with uWSGI internal routing?

I have deployed a WSGI application with uWSGI, but I am not using NGINX. How can I use uWSGI's internal routing to redirect HTTP requests to HTTPS?
I have tried uwsgi --route-uri="^http:\/\/(.+)$ redirect-permanent:https://\$1" but I get an error from uWSGI: unrecognized option '--route-uri=^https:\/\/(.+)$ redirect-permanent:https://\$1'

To redirect HTTP to HTTPS, use the following config:
[uwsgi]
; privileged ports can only be opened as shared sockets
shared-socket = 0.0.0.0:80
shared-socket = 0.0.0.0:443
; enable redirect to HTTPS
http-to-https = =0
; enable https, spdy is optional
https2 = addr==1,cert=server.crt,key=server.key,spdy=1
; alternative
; https = =1,server.crt,server.key
; force change of user after binding to ports as root
uid = user
gid = usergroup
; where original app will be running on IP or UNIX socket
socket = 127.0.0.1:8001
module = smthg.wsgi
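Once uWSGI is up with this config, it answers on port 80 itself and issues the redirect, which can be checked from any client (the hostname below is a placeholder):

```shell
# Inspect only the response headers; a Location: https://... header
# confirms the redirect is in place (hostname is a placeholder)
curl -I http://yourserver.example/
```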

If your reverse proxy or load balancer passes the X-Forwarded-Proto header along with the request, the following config will work:
[uwsgi]
http-socket = :3031
<... your uwsgi config options here ... >
route-if=equal:${HTTP_X_FORWARDED_PROTO};http redirect-permanent:https://<your_host_name_here>${REQUEST_URI}
Some load balancers, such as AWS ELB, pass this header automatically.

To build on Oleg's answer: for this to work, you'll need to manually add some headers to stop uWSGI from causing 502 errors at the ELB.
route-if=equal:${HTTP_X_FORWARDED_PROTO};http addheader:Content-Type: */*; charset="UTF-8"
route-if=equal:${HTTP_X_FORWARDED_PROTO};http addheader:Content-Length: 0
route-if=equal:${HTTP_X_FORWARDED_PROTO};http redirect-permanent:https://<your_host_name_here>${REQUEST_URI}
For the ELB to recognise the 301 redirect, you need to manually add the Content-Length and Content-Type headers. This was not obvious, even with ELB logging enabled.
To debug, remember to actually send the X-Forwarded-Proto header with curl:
curl -v -H "X-Forwarded-Proto: http" http://localhost:port

Here is how you can redirect and force HTTPS directly in uWSGI, for anyone who does not want to run NGINX.
[uwsgi]
master = True
enable-threads = True
thunder-lock = True
shared-socket = :443
https2 = addr==0,cert=yourdomain.crt,key=yourdomain.key,HIGH,spdy=1
http-to-https = 0.0.0.0:80
route-if-not = equal:${HTTPS};on redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}
route-if = equal:${HTTPS};on addheader:Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Tested; this works with Docker too (python:3.6.7-alpine3.8).
Also, if you debug a plain-HTTP request, you will see that the first response is a 301 redirect to HTTPS.
If you try again from the same browser, you will see a 307 instead, since HSTS is enabled.
[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.17.1 (64bit) on [Fri Dec 21 20:06:47 2018] ***
compiled with version: 6.4.0 on 21 December 2018 20:05:49
os: Linux-3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
nodename: web1
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /usr/src/app
detected binary path: /usr/local/bin/uwsgi
*** dumping internal routing table ***
[rule: 0] subject: ${HTTPS};on func: !equal action: redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}
[rule: 1] subject: ${HTTPS};on func: equal action: addheader:Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
*** end of the internal routing table ***
uwsgi shared socket 0 bound to TCP address :443 fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
chdir() to /usr/src/app
your memory page size is 4096 bytes
detected max file descriptor number: 65536
lock engine: pthread robust mutexes
thunder lock: enabled
uWSGI http bound on :443 fd 3
uWSGI http bound on 0.0.0.0:80 fd 5
uwsgi socket 0 bound to TCP address 127.0.0.1:45870 (port auto-assigned) fd 4
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.6.7 (default, Dec 21 2018, 03:29:53) [GCC 6.4.0]
Python main interpreter initialized at 0x7fdf16663b40
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 364600 bytes (356 KB) for 4 cores
*** Operational MODE: preforking ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x7fdf16663b40 pid: 1 (default app)
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 18, cores: 1)
spawned uWSGI worker 2 (pid: 19, cores: 1)
spawned uWSGI worker 3 (pid: 20, cores: 1)
spawned uWSGI worker 4 (pid: 21, cores: 1)
spawned uWSGI http 1 (pid: 22)
Note that it runs as root.
Hope this helps.

Just another answer. Doing
[uwsgi]
<... other uwsgi configs ... >
plugins = router_redirect
route-if-not = equal:${HTTPS};on redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}
will force HTTPS for the whole site.
Tested.
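If your uWSGI was built with plugins as shared objects, a missing router_redirect plugin will fail at startup; assuming a build that supports the --plugins-list option, a quick way to check what the binary ships with is:

```shell
# List the plugins available to this uWSGI binary
# and check for the redirect router
uwsgi --plugins-list | grep -i redirect
```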

For those who have tried both answers above and unfortunately failed: leave uWSGI as it is and add this NGINX conf:
server {
    listen 80;
    server_name <your_domain>;
    rewrite ^/(.*) https://<your_domain>/$1 permanent;
}
I feel like uWSGI is not that friendly when it comes to internal routing.

Related

how to run coturn in ubuntu running on localhost

I want to run a coturn server on Ubuntu. I do not have any domain, and I want to test it on localhost. For that I followed this tutorial: https://www.allerstorfer.at/install-coturn-on-ubuntu/
Here are the steps I followed:
sudo apt-get install coturn
nano /etc/default/coturn
TURNSERVER_ENABLED=1
listening-port=3478
cli-port=5766
listening-ip=172.17.19.101
created a secret
openssl rand -hex 32
and added to the turnserver.conf file
use-auth-secret
static-auth-secret=583bAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF
vi /etc/services
stun-turn 3478/tcp # Coturn
stun-turn 3478/udp # Coturn
stun-turn-tls 5349/tcp # Coturn
stun-turn-tls 5349/udp # Coturn
turnserver-cli 5766/tcp # Coturn
started the coturn server
turnserver -o -v
0: log file opened: /var/tmp/turn_18181_2020-09-15.log
0:
RFC 3489/5389/5766/5780/6062/6156 STUN/TURN Server
Version Coturn-4.5.0.7 'dan Eider'
0:
Max number of open files/sockets allowed for this process: 65535
0:
Due to the open files/sockets limitation,
max supported number of TURN Sessions possible is: 32500 (approximately)
0:
==== Show him the instruments, Practical Frost: ====
0: TLS supported
0: DTLS supported
0: DTLS 1.2 supported
0: TURN/STUN ALPN supported
0: Third-party authorization (oAuth) supported
0: GCM (AEAD) supported
0: OpenSSL compile-time version: OpenSSL 1.1.1 11 Sep 2018 (0x1010100f)
0:
0: SQLite supported, default database location is /var/lib/turn/turndb
0: Redis supported
0: PostgreSQL supported
0: MySQL supported
0: MongoDB is not supported
0:
0: Default Net Engine version: 3 (UDP thread per CPU core)
=====================================================
0: Config file found: /etc/turnserver.conf
0: Bad configuration format: TURNSERVER_ENABLED
0: Listener address to use: 172.17.19.101
0: Config file found: /etc/turnserver.conf
0: Bad configuration format: TURNSERVER_ENABLED
0: Domain name:
0: Default realm:
0: ERROR:
CONFIG ERROR: Empty cli-password, and so telnet cli interface is disabled! Please set a non empty cli-password!
0:
CONFIGURATION ALERT: you did specify the long-term credentials usage
but you did not specify the default realm option (-r option).
Check your configuration.
0: WARNING: cannot find certificate file: turn_server_cert.pem (1)
0: WARNING: cannot start TLS and DTLS listeners because certificate file is not set properly
0: WARNING: cannot find private key file: turn_server_pkey.pem (1)
0: WARNING: cannot start TLS and DTLS listeners because private key file is not set properly
0: Relay address to use: 0.0.0.0
netstat -npta | grep turnserver
8 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
tcp 0 0 0.0.0.0:3478 0.0.0.0:* LISTEN 18889/turnserver
then
service coturn stop
service coturn start
service coturn restart
service coturn status
returned
service coturn status
● coturn.service - LSB: coturn TURN Server
Loaded: loaded (/etc/init.d/coturn; generated)
Active: active (running) since Tue 2020-09-15 17:02:05 PKT; 3s ago
Docs: man:systemd-sysv-generator(8)
Process: 18860 ExecStop=/etc/init.d/coturn stop (code=exited, status=0/SUCCESS)
Process: 18867 ExecStart=/etc/init.d/coturn start (code=exited, status=0/SUCCESS)
Tasks: 15 (limit: 4915)
CGroup: /system.slice/coturn.service
└─18889 /usr/bin/turnserver -c /etc/turnserver.conf -o -v
there is a step given in the tutorial as
Add to DNS
turn.domain.xx → domain.xx
stun.domain.xx → domain.xx
I am confused about this part, so I
edited the /etc/hosts file and added
127.0.0.1 turn.domain.xx
127.0.0.1 sturn.domain.xx
and
telnet localhost 5766
returns
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
but the tutorial shows output at this point. The only thing I have changed is the listening-ip, where they used
listening-ip=172.17.19.101
and I used
listening-ip=0.0.0.0
If I use
listening-ip=172.17.19.101
then the command
netstat -npta | grep turnserver
returns nothing.
Please guide me: how can I test the coturn server on localhost?
Even if it seems a bit late: your turnserver.conf file has an invalid line, TURNSERVER_ENABLED=1. This line corrupts your conf file, so turnserver does not start, as you can see in the log file:
0: Bad configuration format: TURNSERVER_ENABLED
You cannot use this parameter in the turnserver.conf file; it belongs in the /etc/default/coturn file.
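As a sketch of the fix, the offending line can be stripped out with sed; this is demonstrated on a temporary copy rather than the real /etc/turnserver.conf:

```shell
# Demonstration on a temp file; on a real system you would edit
# /etc/turnserver.conf and keep TURNSERVER_ENABLED=1 in /etc/default/coturn
conf=$(mktemp)
printf 'TURNSERVER_ENABLED=1\nlistening-port=3478\ncli-port=5766\n' > "$conf"
sed -i '/^TURNSERVER_ENABLED/d' "$conf"   # remove the invalid line
cat "$conf"                               # remaining lines are valid coturn options
```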

Haproxy always giving 503 Service Unavailable

I've installed HAProxy 1.8 in a Kubernetes container.
Whenever I make a request to /test, I always get a 503 Service Unavailable response. I want the stats page to be returned when I get a request to /test.
Following is my configuration file:
/etc/haproxy/haproxy.cfg:
global
    daemon
    maxconn 256
defaults
    mode http
    timeout connect 15000ms
    timeout client 150000ms
    timeout server 150000ms
frontend stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
frontend http-in
    bind *:8083
    default_backend servers
    acl ar1 path -i -m sub /test
    use_backend servers if ar1
backend servers
    mode http
    server server1 10.1.0.46:8404/stats maxconn 32
    # 10.1.0.46 is my container IP
I can access the /stats page using:
curl -ik http://10.1.0.46:8404/stats
But when I do:
curl -ik http://10.1.0.46:8083/test
I always get following response:
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Content-Type: text/html
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
I started haproxy using:
/etc/init.d/haproxy restart
and then subsequently restarted it using:
haproxy -f haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
Following is the output of netstat -anlp:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 54/python3.5
tcp 0 0 0.0.0.0:8083 0.0.0.0:* LISTEN 802/haproxy
tcp 0 0 0.0.0.0:8404 0.0.0.0:* LISTEN 802/haproxy
tcp 0 0 10.1.0.46:8404 10.0.15.225:20647 TIME_WAIT -
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node PID/Program name Path
Following is the output of ps -eaf:
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Jul22 ? 00:00:00 /bin/sh -c /bin/bash -x startup_script.sh
root 6 1 0 Jul22 ? 00:00:00 /bin/bash -x startup_script.sh
root 54 6 0 Jul22 ? 00:00:09 /usr/local/bin/python3.5 /usr/local/bin/gunicorn --bind 0.0.0.0:5000 runner:app?
root 57 54 0 Jul22 ? 00:02:50 /usr/local/bin/python3.5 /usr/local/bin/gunicorn --bind 0.0.0.0:5000 runner:app?
root 61 0 0 Jul22 pts/0 00:00:00 bash
root 739 0 0 07:02 pts/1 00:00:00 bash
root 802 1 0 08:09 ? 00:00:00 haproxy -f haproxy.cfg -p /var/run/haproxy.pid -sf 793
root 804 739 0 08:10 pts/1 00:00:00 ps -eaf
Why could I be getting 503 unavailable always?
Why do you use HAProxy 1.8 when a 2.2.x already exists?
You will need to adapt the path in the backend, since it can't be set at the server level.
backend servers
    mode http
    http-request set-path /stats
    server server1 10.1.0.46:8404 maxconn 32
    # 10.1.0.46 is my container IP
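Assuming the container IP and ports from the question, the fix can be verified with the same curl call that failed before:

```shell
# The frontend on :8083 should now return the stats page for /test,
# because the backend rewrites the path to /stats before forwarding
curl -ik http://10.1.0.46:8083/test
```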

Configuring haproxy load balancer in front of ha artifactory cluster

I'm trying to configure an HAProxy load balancer in front of our 2-node HA Artifactory cluster. I'm using this page as a guide:
https://jfrog.com/knowledge-base/how-to-configure-haproxy-with-artifactory/
but it was written years ago for a much older version of HAProxy (I'm running 2.0.8) and a lot of the directives are deprecated. The recommended configuration produces errors at startup. Here it is:
# version 1.0
# History
# https://jfrog.com/knowledge-base/how-to-configure-haproxy-with-artifactory/
# -------------------------------------------
# Features enabled by this configuration
# HA configuration
# port 80, 443 Artifactory GUI/API
#
# This uses ports to distinguish artifactory docker repositories
# port 443 docker-virtual (v2) docker v1 is redirected to docker-dev-local.
# port 5001 docker-prod-local (v1); docker-prod-local2 (v2)
# port 5002 docker-dev-local (v1); docker-dev-local2 (v2)
#
# Edit this file with required information enclosed in <…>
# 1. certificate and key
# 2. artifactory-host
# 3 replace the port numbers if needed
# -------------------------------------------
global
    log 127.0.0.1 local0
    chroot /var/lib/haproxy
    maxconn 4096
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048
    stats socket /run/haproxy/admin.sock mode 660 level admin
defaults
    log global
    mode http
    option httplog
    option dontlognull
    option redispatch
    option forwardfor
    option http-server-close
    maxconn 4000
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
listen stats
    bind *:2016
    mode http
    stats enable
    stats uri /haproxy
    stats hide-version
    stats refresh 5s
    stats realm Haproxy\ Statistics
frontend normal
    bind *:80
    bind *:443 ssl crt /etc/ssl/artifactory/cert.pem
    mode http
    option forwardfor
    reqirep ^([^ :]*) /v2(.*$) 1 /artifactory/api/docker/docker-virtual/v22
    reqadd X-Forwarded-Proto: https if { ssl_fc }
    option forwardfor header X-Real-IP
    default_backend normal
# Artifactory HA Configuration
# Using default failover interval - rise = 2; fall = 3; interval - 2 seconds
backend normal
    mode http
    balance roundrobin
    option httpchk OPTIONS /
    option httpchk GET /api/system/ping HTTP/1.1\r\nHost:haproxy\r\n
    option forwardfor
    option http-server-close
    appsession JSESSIONID len 52 timeout 3h
    server platform-artifactory-ha-01 172.17.1.71:80 check fall 3 inter 3s rise 2
    server platform-artifactory-ha-02 172.17.1.122:80 check fall 3 inter 3s rise 2
If I run haproxy -f haproxy.cfg -c I get:
[WARNING] 121/054551 (11113) : parsing [haproxy.cfg:55] : The 'reqirep' directive is deprecated in favor of 'http-request replace-header' and will be removed in next version.
[ALERT] 121/054551 (11113) : parsing [haproxy.cfg:55] : 'reqirep' : Expecting nothing, 'if', or 'unless', got '/v2(.*$)'.
[WARNING] 121/054551 (11113) : parsing [haproxy.cfg:56] : The 'reqadd' directive is deprecated in favor of 'http-request add-header' and will be removed in next version.
[ALERT] 121/054551 (11113) : parsing [haproxy.cfg:56] : 'reqadd' : Expecting nothing, 'if', or 'unless', got 'https'.
[ALERT] 121/054551 (11113) : parsing [haproxy.cfg:68] : 'appsession' is not supported anymore since HAProxy 1.6.
[ALERT] 121/054551 (11113) : Error(s) found in configuration file : haproxy.cfg
[ALERT] 121/054551 (11113) : Fatal errors found in configuration.
I've been able to get artifactory to start up by commenting the following lines 64 and 65:
# reqirep ^([^ :]*) /v2(.*$) 1 /artifactory/api/docker/docker-virtual/v22
# reqadd X-Forwarded-Proto: https if { ssl_fc }
and adding:
http-request set-header X-Forwarded-Proto https if { ssl_fc }
to replace line 65
I also had to comment line 79 to get the haproxy service to start without errors:
# appsession JSESSIONID len 52 timeout 3h
But now it doesn't work properly when folks try to push Docker images into the registry.
I've got to figure out the new way to write line 79 and line 64, but I'm having trouble finding the correct configuration directives in the documentation.
The reqirep keyword was split into several http-request directives.
You will need to use http-request replace-path.
My suggestion, untested:
# reqirep ^([^ :]*) /v2(.*$) 1 /artifactory/api/docker/docker-virtual/v22
http-request replace-path /v2(.*$) /artifactory/api/docker/docker-virtual/v2\1
The appsession directive is no longer part of HAProxy, as the ALERT message shows.
My suggestion for cookie stickiness, also untested:
backend normal
    mode http
    balance roundrobin
    # this makes no sense: option httpchk OPTIONS /
    option httpchk GET /api/system/ping HTTP/1.1\r\nHost:haproxy\r\n
    option forwardfor
    option http-server-close
    stick-table type string len 52 size 2m expire 3h
    #appsession JSESSIONID len 52 timeout 3h
    stick on cookie(JSESSIONID)
    server platform-artifactory-ha-01 172.17.1.71:80 check fall 3 inter 3s rise 2
    server platform-artifactory-ha-02 172.17.1.122:80 check fall 3 inter 3s rise 2
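Whichever variant you end up with, it is worth validating the file before reloading; this is the same check the question used (the path is an assumption):

```shell
# -c only checks the configuration and reports warnings/alerts
# without starting the proxy
haproxy -c -f /etc/haproxy/haproxy.cfg
```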

Rails app on Apache-Passenger - runs fine on localhost but not via remote access

I have a Rails application deployed on Apache-Passenger which runs fine when accessed from localhost, but doesn't run via remote access.
Let's say the server name is server.name.com. The server info is -
[kbc@server KBC]$ uname -a
Linux server.name.com 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[kbc@server KBC]$ cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
When I do
[kbc@server ]$ curl http://localhost:3000/, it returns the home page for the application.
But when I try to access the Rails app from my laptop, I get the following error -
→ curl http://server.name.com:3000/
curl: (7) Failed to connect to server.name.com port 3000: Connection refused
To check if I can access the server, I tried -
→ ping server.name.com:3000
ping: cannot resolve server.name.com:3000: Unknown host
But, I can ping the server by -
→ ping server.name.com
PING server.name.com (#.#.#.#): 56 data bytes
64 bytes from #.#.#.#: icmp_seq=0 ttl=61 time=1.526 ms
64 bytes from #.#.#.#: icmp_seq=1 ttl=61 time=6.624 ms
Here is the Passenger configuration -
<VirtualHost *:3000>
    ServerName server.name.com
    ServerAlias server.name.com
    DocumentRoot /home/kbc/KBC/public
    <Directory /home/kbc/KBC/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
    ErrorLog /var/log/httpd/kbc_error.log
    CustomLog /var/log/httpd/kbc_access.log common
</VirtualHost>
NameVirtualHost *:3000
PassengerPreStart https://server.name.com:3000/
and
LoadModule passenger_module /home/kbc/.rvm/gems/ruby-2.3.0@kbc/gems/passenger-5.0.30/buildout/apache2/mod_passenger.so
<IfModule mod_passenger.c>
    PassengerRoot /home/kbc/.rvm/gems/ruby-2.3.0@kbc/gems/passenger-5.0.30
    PassengerDefaultRuby /home/kbc/.rvm/wrappers/ruby-2.3.0/ruby
    PassengerRuby /home/kbc/.rvm/wrappers/ruby-2.3.0/ruby
    PassengerMaxPoolSize 5
    PassengerPoolIdleTime 90
    PassengerMaxRequests 10000
</IfModule>
Passenger-status info -
[kbc@server ]$ passenger-status
Version : 5.0.30
Date : 2016-10-17 11:30:08 -0400
Instance: bKUJ0ptp (Apache/2.2.15 (Unix) DAV/2 Phusion_Passenger/5.0.30)
----------- General information -----------
Max pool size : 5
App groups : 1
Processes : 1
Requests in top-level queue : 0
----------- Application groups -----------
/home/kbc/KBC:
App root: /home/kbc/KBC
Requests in queue: 0
* PID: 5696 Sessions: 0 Processed: 1 Uptime: 1m 45s
CPU: 0% Memory : 38M Last used: 1m 45s ago
What am I doing wrong? Please let me know if you need more information.
This sounds like a connectivity problem, not a Passenger/Apache problem. The host you're running the server on may not accept inbound connections on port 3000 (due to iptables, firewall, or security group access control rules).
Take a look at apache not accepting incoming connections from outside of localhost and Apache VirtualHost and localhost, for instance.
@Jatin, could you please post the main Apache configuration? (/etc/apache2/apache2.conf or similar)
Also, please provide the output of the following:
sudo netstat -nl
sudo iptables -L
Just for the record: the ping utility can only test connectivity at the IP layer, meaning it can tell you whether the host at a given IP is responding. It cannot, however, tell you whether a specific TCP port is open on the remote system.
Testing TCP connectivity can be achieved easily with telnet or netcat:
telnet server.name.com 3000
If you get something like:
Trying #.#.#.#...
Connected to server.name.com.
Escape character is '^]'.
then this means you can correctly access the TCP endpoint, eliminating any possibility of network-related issues. In other words, if this works, you probably have a configuration problem with Apache.
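If neither telnet nor netcat is installed, bash itself can probe a TCP port through its /dev/tcp pseudo-device; a small sketch (host and port are the ones from the question):

```shell
#!/bin/bash
# Probe a host:port using bash's built-in /dev/tcp redirection;
# the connect attempt happens when fd 3 is opened in the subshell
host=server.name.com   # placeholder hostname
port=3000
if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "port $port on $host is open"
else
    echo "connection to $host:$port refused or filtered"
fi
```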

Putting a uWSGI fast router in front of uWSGI servers running in docker containers

I have multiple Flask-based webapps running in Docker containers (their processes need to be isolated from the host OS). To run these apps I use uWSGI servers inside the containers. Incoming requests should hit a uWSGI FastRouter with a subscription server (as described here: http://uwsgi-docs.readthedocs.org/en/latest/SubscriptionServer.html). When a container starts, its uWSGI server should announce itself, based on some internal configuration, as a subdomain.
So the setup looks like this:
Request ---> FastRouter ----> container | myapp1 |
                        |
                        ----> container | myapp2 |
I'm trying to test this on a single host running both the fast router as well as some docker containers.
The FastRouter is started using
uwsgi --fastrouter :1717 --fastrouter-subscription-server localhost:2626 --fastrouter-subscription-slot 1000
Question 1 Do I need to do anything else to get the subscription server running? Is this started together with the fastrouter process?
The containers have two ports mapped from the host to the container: 5000 (the webapp) and 2626 (to subscribe to the fast router).
So they're started like this:
docker run -d -p 5000:5000 -p 2626:2626 myImage $PATH_TO_START/start.sh
Where in start.sh the uWSGI is started as
uwsgi --http :5000 -M --subscribe-to 127.0.0.1:2626:/test --module server --callable env --enable-threads
The output looks good, it prints at the end:
spawned uWSGI master process (pid: 58)
spawned uWSGI worker 1 (pid: 73, cores: 1)
spawned uWSGI http 1 (pid: 74)
subscribing to 127.0.0.1:2626:/test
On the host I can do
curl localhost:5001
And I see the Webserver greeting me from inside the container. However, doing
curl localhost:1717/test
gets no response.
Question 2
Am I getting anything fundamentally wrong here? Should I test differently?
Question 3
How can I debug the FastRouter?
Edit:
Still struggling with this setup. I'm using a separate VPS now for the fastrouter. It is started using
uwsgi --uid fastrouter --master --fastrouter :80 --fastrouter-subscription-server :2626 --daemonize uwsgi.log --pidfile ./uwsgi.pid --chmod-socket --chown-socket fastrouter
WARNING: think before copying the above invocation for your own project, since it opens up the subscription service publicly. My plan is to secure it afterwards using the key-signing facilities provided by uWSGI, since the VPS doesn't have a private network available.
netstat -anp shows
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 843/uwsgi
udp 0 0 0.0.0.0:2626 0.0.0.0:* 843/uwsgi
unix 3 [ ] STREAM CONNECTED 9089 843/uwsgi
unix 3 [ ] STREAM CONNECTED 9090 843/uwsgi
unix 3 [ ] SEQPACKET CONNECTED 8764 843/uwsgi
unix 3 [ ] SEQPACKET CONNECTED 8763 843/uwsgi
Anyway, using uwsgi nodes with --http :5000 --module server --callable env --enable-threads --subscribe-to [Subscription-Server-IP-Address]:2626:/test --socket-timeout 120 --listen 128 --buffer-size 32768 --honour-stdin still leads to the same result: uwsgi logs 'subscribing to', but http://[Subscription-Server-IP-Address]/test is not reachable. Is this kind of routing even possible? Every example I can find only assigns subdomains like [mysub].example.com, root domains, or root domains with some port number. This page includes a hint that the subscription key should be part of a routable address: http://projects.unbit.it/uwsgi/wiki/Example.
So I have a follow-up question:
Is the FastRouter even meant to let nodes announce new routes that haven't yet been set statically in a DNS zone file? I don't really care whether it's http://[key].example.com or http://example.com/[key], what's important is that these keys can be generated from inside a Docker container at setup time of the uwsgi server.
Generally the "dockered" apps run in a different network namespace, so loopback of a docker app is not the same of the fastrouter.
Use unix sockets for subscriptions, they are a great way for inter-namespace communication.
Your commands are good, the fastrouter is pretty verbose so if you do not see messages in its logs it is not working (and eventually you can strace it)
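A minimal sketch of the unix-socket approach (the paths and the subscription key are assumptions; the socket directory must be bind-mounted into each container so both namespaces see the same socket):

```shell
# On the host: fastrouter serves HTTP on :1717 and accepts
# subscriptions on a unix socket instead of a TCP port
uwsgi --fastrouter :1717 \
      --fastrouter-subscription-server /var/run/uwsgi/sub.sock

# In each container (started with e.g. -v /var/run/uwsgi:/var/run/uwsgi):
# the app subscribes over the shared socket with its own key
uwsgi --http :5000 --module server --callable env \
      --subscribe-to /var/run/uwsgi/sub.sock:myapp1.example.com
```

The key (myapp1.example.com here) is what the fastrouter matches against the incoming Host header, which is why path-style keys like /test never match.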
