Running unicorn and nginx on different servers - ruby-on-rails

Most tutorials show how to configure the nginx web server as a proxy to a Unicorn Ruby application server when both run on the same machine, in which case they communicate via a Unix socket. How can I configure them when they are on different servers?

Unicorn is designed to serve fast clients only:
unicorn is an HTTP server for Rack applications designed to only serve
fast clients on low-latency, high-bandwidth connections and take
advantage of features in Unix/Unix-like kernels. Slow clients should
only be served by placing a reverse proxy capable of fully buffering
both the request and response in between unicorn and slow clients.
How does this work in a load-balanced, multi-node environment? The answer is to run Nginx+Unicorn on each application node (connected via a Unix domain socket) and a top-level Nginx load balancer on a separate node.

The basic setup is as follows:
In your unicorn config you'll want to listen to a TCP port rather than a unix socket:
listen 80, :tcp_nopush => true
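A minimal unicorn.rb along those lines might look like this (the port, worker count, and timeout here are illustrative assumptions, not values from the question):

```ruby
# unicorn.rb -- sketch only; port 8080 and worker_processes 4 are assumptions
worker_processes 4
# Bind on all interfaces so the remote nginx can reach this node over TCP
listen "0.0.0.0:8080", :tcp_nopush => true
timeout 30
```

Binding to 0.0.0.0 (rather than 127.0.0.1) is what makes the app server reachable from the separate nginx host; you would normally restrict access with a firewall instead.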
Likewise, in your Nginx configuration, simply proxy requests to the remote servers:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
    server backend4.example.com;
}
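To complete the picture, a server block on the load-balancer node would then proxy to that upstream group (the hostname and forwarded headers here are illustrative, not from the question):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;            # forward to the upstream group above
        proxy_set_header Host $host;          # preserve the original Host header
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```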
You should also check out http://unicorn.bogomips.org/examples/nginx.conf for Unicorn-tailored nginx configuration.

Related

HAProxy config for TCP load balancing in docker container

I'm trying to put an HAProxy load balancer in front of my RabbitMQ cluster (which is set up with nodes in separate Docker containers). I cannot find many examples of an HAProxy config for this setup:
global
    debug
defaults
    log global
    mode tcp
    timeout connect 5000
    timeout client 50000
    timeout server 50000
frontend main
    bind *:8089
    default_backend app
backend app
    balance roundrobin
    mode http
    server rabbit-1 172.18.0.2:8084
    server rabbit-2 172.18.0.3:8085
    server rabbit-3 172.18.0.4:8086
In this example, what should I put in place of the IP addresses of the Docker containers?
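One common approach, assuming the containers are attached to a user-defined Docker network, is to reference them by container name instead of IP, since Docker's embedded DNS resolves names on such networks. A sketch of what the backend might then look like (the names and the AMQP port 5672 are assumptions; the question used custom ports):

```haproxy
backend app
    balance roundrobin
    mode tcp
    server rabbit-1 rabbit-1:5672 check
    server rabbit-2 rabbit-2:5672 check
    server rabbit-3 rabbit-3:5672 check
```

This avoids hard-coding container IPs, which can change whenever a container is recreated.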

accessing mosquitto via port 443 and apache

I am running a MQTT Mosquitto server listening on port 8883 using TLS in a docker container with name 'mosquitto'.
In another docker container in the same network I am running an Apache webserver with a webpage at my_domain (at port 443).
The Apache should forward all requests to my_domain/mosquitto on to the Mosquitto broker. Thus I add
ProxyPreserveHost On
ProxyPass /mosquitto ws://mosquitto:8883
ProxyPassReverse /mosquitto ws://mosquitto:8883
to my httpd.conf, which redirects HTTPS browser calls to my_domain/mosquitto on to Mosquitto.
This of course results in an OpenSSL error at Mosquitto.
But using the MQTT client (Python) results in "Name or service not known".
What am I doing wrong?
P.S.:
The SSL keys / certificates for the Apache and the Mosquitto are different.
When I disable the webserver and expose Mosquitto on port 443 directly via Docker, the connection works.
To use an HTTP reverse proxy (Apache) in front of an MQTT broker you must use MQTT over WebSockets (because WebSocket connections are bootstrapped over HTTP).
A native MQTT connection will just not work as Apache has no way of understanding the native protocol format.
You will need to enable a WebSocket listener in Mosquitto and tell the client to make a WebSocket connection.
You should also probably use /mqtt rather than /mosquitto as the proxied path, as this is the default for WebSocket connections.
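A minimal sketch of the two pieces involved, assuming mod_proxy_wstunnel is loaded in Apache and port 9001 is chosen for the WebSocket listener (both are assumptions, not from the question):

```
# mosquitto.conf -- keep the native MQTT/TLS listener and add a WebSocket one
listener 8883
listener 9001
protocol websockets
```

```apache
# httpd.conf -- proxy the WebSocket path to the new listener
ProxyPass /mqtt ws://mosquitto:9001
ProxyPassReverse /mqtt ws://mosquitto:9001
```

The client would then connect with its WebSocket transport enabled and the path set to /mqtt, terminating TLS at Apache.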

Multiple Sub-Domains on a single server. Docker + NGINX # EC2

I have multiple NGINX-uWSGI based Django applications deployed using Docker and hosted on EC2 (currently at different ports like 81, 82, ...). Now I wish to add sub-domains, such that sub1.domain.com and sub2.domain.com both work from the same EC2 instance.
I am fine with multiple ports, BUT they don't work via DNS settings:
sub1.domain.com -> 1.2.3.4:81
sub2.domain.com -> 1.2.3.4:82
What I cannot do
Multiple IPs ref: allocating a new IP for each deployed sub-domain is not possible.
NGINX Proxy ref: this looks like the ideal solution, BUT it is not maintained by an org like Docker or NGINX, so I am unsure of its security and reliability.
What I am considering:
I am considering writing my own NGINX reverse proxy, similar to Apache Multiple Sub Domains With One IP Address, BUT then requests would flow through multiple proxies, since the tech stack already includes an NGINX-uWSGI proxy.
You can use an nginx upstream:
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    server_name sub.test.com www.sub.test.com;
    location / {
        proxy_pass http://backend;
    }
}
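For the sub-domain case in the question specifically, a sketch with one server block per sub-domain, each proxying to the corresponding Docker-published port (ports 81 and 82 are taken from the question; the Host header handling is an assumption):

```nginx
server {
    server_name sub1.domain.com;
    location / {
        proxy_pass http://127.0.0.1:81;   # container serving sub1
        proxy_set_header Host $host;
    }
}

server {
    server_name sub2.domain.com;
    location / {
        proxy_pass http://127.0.0.1:82;   # container serving sub2
        proxy_set_header Host $host;
    }
}
```

With both sub-domains pointed at the same IP in DNS, nginx selects the right block by the requested hostname, so no extra IPs or client-visible ports are needed.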

Nginx retry same end point on http_502 in Docker service Discovery

We use Docker swarm with service discovery for a backend REST application. The services in the swarm are configured with endpoint_mode: vip and run in global mode. Nginx proxies to them via their service-discovery aliases. When we update backend services, nginx sometimes throws a 502, as service discovery may point at the service being updated.
In such cases, we want to retry the same endpoint. How can we achieve this?
According to this, we added an upstream with the host's private IP and used proxy_next_upstream error timeout http_502; but the problem still persists.
nginx.conf
upstream servers {
    server 192.168.1.2:443;        # private IP of the host machine
    server 192.168.1.2:443 backup;
}
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    proxy_next_upstream http_502;
    location /endpoint1 {
        proxy_pass http://docker.service1:8080/endpoint1;
    }
    location /endpoint2 {
        proxy_pass http://docker.service2:8080/endpoint2;
    }
    location /endpoint3 {
        proxy_pass http://docker.service3:8080/endpoint3;
    }
}
Here, if http://docker.service1:8080/endpoint1 throws a 502, we want to hit http://docker.service1:8080/endpoint1 again.
Additional queries:
Is there any way in Docker swarm to stop it pointing at an updating service in service discovery until that service is fully up?
Is upstream necessary here since we directly use docker service discovery?
I suggest you add a health check directly at container level (here)
By doing so, Docker periodically pings an endpoint you specify; if the container is found unhealthy, Docker will 1) stop routing traffic to it and 2) kill the container and start a new one. Your upstream will therefore resolve to one of the healthy containers. No need to retry.
As for your additional questions: first, Docker won't start routing until the service is healthy. Second, nginx is still useful to distribute traffic according to the endpoint URL. But personally I think nginx + swarm's vip mode is not a great choice, because the swarm load balancer is poorly documented, doesn't support sticky sessions, and you can't have proxy-level health checks; I would use Traefik instead, as it has its own load balancer.
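A container-level health check of the kind described might look like this in a compose/stack file (the service name, image, /health endpoint, and intervals are all assumptions for illustration):

```yaml
services:
  service1:
    image: my-backend:latest        # hypothetical image name
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s                 # how often Docker probes the endpoint
      timeout: 3s
      retries: 3                    # consecutive failures before "unhealthy"
      start_period: 30s             # grace period before failures count
```

During a rolling update, swarm waits for the new task to report healthy before routing traffic to it, which is what prevents the 502s in the first place.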

Pass Requests From Nginx to Local Thin Server

I have nginx serving up my rails app, but I also have a separate 'thin' server running on another port to use with Faye (publish / subscribe gem).
So I believe that, since all requests go through nginx (right?), I can't just call myapp.com:9292 even if the thin server is set up on that port and I use the myapp.com host rather than localhost, because it's not routed through nginx.
If I have the thin server running at 0.0.0.0:9292, what would I need to add to my nginx conf to route pings to myapp.com:9292 to 0.0.0.0:9292?
Actually you can just call example.com:9292, because Nginx is listening only on port 80 (and sometimes 443).
Unless you add another server block that listens on 9292 explicitly, example.com:9292 will pass directly to your thin server.
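If you did want to route the Faye traffic through nginx on port 80 instead of exposing 9292 directly, a location block like this would work (the /faye path is an assumption; Faye uses WebSockets, hence the Upgrade headers):

```nginx
location /faye {
    proxy_pass http://127.0.0.1:9292;        # the local thin server
    proxy_http_version 1.1;                  # required for WebSocket upgrades
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```

Clients would then connect to myapp.com/faye with no extra port, and nginx would hand the upgraded connection through to thin.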