What is the best practice for Nginx/ELB/Unicorn architecture on AWS?

We have an RoR application in AWS Beijing. AWS Beijing does not have Route 53 (we can't use an Alias record to point the apex domain at the ELB), so we must use a Front-end Server running Nginx in front of the ELB.
Our current architecture looks like this:
Front-end (Nginx) -- ELB --- App-(1~n) (Nginx--Unicorn)
We have noticed the words from Unicorn description below:
"Unicorn must never be exposed to slow clients, as it will never ever use new-fangled things like non-blocking socket I/O, threads, epoll or kqueue. Unicorn must be used with a fully-buffering reverse proxy such as nginx for slow clients."
So my questions are:
1. Before Unicorn, do we need nginx on the App Server?
2. If we remove Nginx from the App Servers, can the Nginx on the Front-end Server provide the buffering that the Unicorn documentation describes?

I would recommend replacing the ELB with HAProxy in this scenario, where you don't have the Alias feature from Route 53 to point to your apex domain. Putting an Nginx instance in front of the ELB doesn't seem like a good idea, because you are adding a new layer just because you can't reference the ELB in Route 53. You also lose the ELB's high availability by putting a single Nginx instance in front of it.
My suggestion is to keep one Nginx instance on each of your app servers in front of Unicorn and use HAProxy as the load balancer: HAProxy > [Nginx > Unicorn]. A simple HAProxy setup doesn't give you the same availability as the ELB, but you can set up a highly available configuration if needed.
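A minimal haproxy.cfg sketch of that layout (the backend names, addresses, and health-check path are hypothetical placeholders, not from the original setup):

    # Hypothetical example: HAProxy terminating port 80 and balancing across
    # the per-instance Nginx servers that sit in front of Unicorn.
    frontend www
        bind *:80
        default_backend rails_app

    backend rails_app
        balance roundrobin
        option httpchk GET /health      # assumed health-check path
        server app1 10.0.1.10:80 check
        server app2 10.0.1.11:80 check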

1) Nginx must always be in front of Unicorn, because Unicorn can't deal with slow clients efficiently; it just gets blocked by them.
2) Never talk to Unicorn over the network; that means each app server needs its own Nginx, proxying to Unicorn over a local Unix socket. Nginx as a load balancer is also way better than the ELB black box.
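For reference, a minimal sketch of the per-app-server Nginx config implied here (the socket path, server name, and document root are assumptions; adjust them to your deployment):

    # Hypothetical per-app-server Nginx config: buffer slow clients and
    # talk to Unicorn over a local Unix socket instead of the network.
    upstream unicorn {
        server unix:/var/www/myapp/shared/unicorn.sock fail_timeout=0;
    }

    server {
        listen 80;
        server_name myapp.example.com;
        root /var/www/myapp/current/public;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_buffering on;          # buffer responses so Unicorn never waits on slow clients
            proxy_pass http://unicorn;
        }
    }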

Related

How to automatically start a RubyOnRails app on server startup (hopefully with more details about environment and nginx logs)?

I am having trouble with an auto-scaling group in AWS. I am running a Ruby on Rails app on an EC2 instance behind an ELB. I added an auto-scaling group so it scales up automatically when heavy traffic comes. However, the app server behind nginx does not start automatically, so the instance becomes "OutOfService" in the ELB. Any solution?
You can either use Monit to monitor your services, in this case Nginx and/or the app server (Puma, Unicorn, Passenger, etc.), or use the system's own init facility for things like this, such as Upstart.
https://www.digitalocean.com/community/tutorials/how-to-configure-a-linux-service-to-start-automatically-after-a-crash-or-reboot-part-1-practical-examples#auto-starting-services-with-upstart
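As an illustration, a minimal Upstart job along the lines of that tutorial (the paths, user, and app name are hypothetical), placed at /etc/init/unicorn.conf so the app server comes back on boot and after crashes:

    # /etc/init/unicorn.conf -- hypothetical Upstart job for the Rails app server
    description "Unicorn application server"
    start on runlevel [2345]
    stop on runlevel [016]
    respawn                          # restart the process if it crashes
    setuid deploy
    chdir /var/www/myapp/current
    exec bundle exec unicorn -c config/unicorn.rb -E production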

Separate port for some URLs in Rails app

My Rails app listens on a single port for both API calls and browser requests. To increase security I would like to open another port for the API and make the web page URLs unavailable on that port.
How can I do this in Rails (ideally without breaking the current app)?
I use WEBrick or Puma during development and Apache+Passenger in production.
P.S.
Currently I'm thinking about adding an HTTP proxy that will forward the API calls.
Unicorn will bind to all interfaces on TCP port 8080 by default. You may use the -l switch to bind to a different address:port. Each worker process can also bind to a private port via the after_fork hook. But I don't think this is useful if you have nginx as the layer on top.
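For illustration, a config/unicorn.rb sketch of those options (the ports and worker count are placeholders); the per-worker listen inside after_fork follows the pattern from Unicorn's example configuration:

    # Hypothetical config/unicorn.rb
    worker_processes 4

    # Bind the master to a specific address:port instead of the default 0.0.0.0:8080
    # (equivalent to the -l / --listen switch).
    listen "127.0.0.1:8080", :tcp_nopush => true

    after_fork do |server, worker|
      # Optionally give each worker its own private port (8081, 8082, ...)
      # so it can be addressed directly, e.g. for debugging.
      server.listen("127.0.0.1:#{8081 + worker.nr}", :tries => -1, :delay => 5)
    end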

Using unicorn or passenger without nginx? [duplicate]

This question already has an answer here:
Is it necessary to put Unicorn behind Nginx (or Apache)?
(1 answer)
Closed 8 years ago.
I've been reading up on rails deployment and it seems for the two options I'm considering, unicorn and passenger, the tutorials always put them behind a server like nginx. I was under the assumption that both unicorn and passenger were fully functioning web servers themselves. So
Why are they always placed behind something like nginx?
If I use a load balancer nginx or HAProxy, can I have the load balancer directly distribute requests to unicorn or passenger, or do I still have to place them behind nginx?
Unicorn must be placed behind Nginx, by its author's design. The Phusion Passenger Design & Architecture document explains why some app servers are designed to be placed behind Nginx. Basically, it has got to do with I/O concurrency handling and I/O security.
Phusion Passenger however does not need to be placed behind Nginx. Phusion Passenger integrates into Nginx, as an Nginx module. Even the Standalone mode of Phusion Passenger does not need to be placed behind Nginx, because its Standalone mode utilizes a lightweight Nginx core and thus already properly implements I/O security.
If you use HAProxy, you can have it directly connect to Unicorn as long as you configure HAProxy to perform both request and response buffering. For Unicorn, buffering is key. Phusion Passenger on the other hand doesn't care, it works fine regardless of whether you configure buffering or not.

Add a reverse proxy to heroku

I have a rails app running on heroku at, e.g myapp.herokuapp.com.
Now I want to reverse proxy from myapp.herokuapp.com/proxy/ to somewhereelse.com/ (i.e. myapp.herokuapp.com/proxy/stuff is reverse proxied to somewhereelse.com/stuff).
Is that possible on Heroku? How to achieve this?
For anyone coming to this question through a search, this can be done.
Check out https://github.com/ryandotsmith/nginx-buildpack to vendor nginx into your Heroku dyno. This will place nginx in front of your Rails app and allow you to reverse proxy requests on this domain, keeping your Heroku app as the main site while requests like myapp.herokuapp.com/proxy/stuff get forwarded to somewhereelse.com/stuff.
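A rough sketch of the relevant location block for the buildpack's nginx config (the file location config/nginx.conf.erb and the upstream host are assumptions for illustration):

    # Hypothetical addition to the buildpack's nginx config (e.g. config/nginx.conf.erb):
    # requests to /proxy/... are forwarded to somewhereelse.com/... because the
    # trailing slash on proxy_pass strips the matched /proxy/ prefix.
    location /proxy/ {
        proxy_pass http://somewhereelse.com/;
        proxy_set_header Host somewhereelse.com;
    }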
You don't have access to Heroku's front-end routing infrastructure, so it's not possible to add something like nginx location-based reverse proxying or Apache's mod_proxy there. From my understanding you can also only bind to one port (the $PORT) within the dyno, so it's not possible to shadow your Rails app with your own vendored nginx (unless nginx and your Rack/Rails app can communicate over a non-TCP/IP socket; if so, perhaps you can have Rack listen on /tmp/mysocket.sock and nginx reverse proxy to that. This could be a non-starter, though; I'm just throwing out ideas).
Which means the only probable option is to handle this yourself in your Rails app. I have only a tiny bit of Rails/Ruby experience, but if no proxy functionality exists in Rails, then perhaps you can explicitly accept the route and use an HTTP client to call the other site.
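A very rough sketch of that idea, assuming a hypothetical ProxyController and a catch-all /proxy/*path route (GET requests only, with no header or body forwarding):

    # config/routes.rb (hypothetical route)
    #   get "/proxy/*path", to: "proxy#show"

    require "net/http"

    class ProxyController < ApplicationController
      UPSTREAM = URI("http://somewhereelse.com")

      def show
        # Forward /proxy/stuff as somewhereelse.com/stuff using a plain HTTP client.
        uri = UPSTREAM.dup
        uri.path = "/#{params[:path]}"
        upstream = Net::HTTP.get_response(uri)
        render plain: upstream.body,
               status: upstream.code.to_i,
               content_type: upstream["Content-Type"]
      end
    end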

Autoscale Rails app on Amazon EC2

Currently we are using a classic configuration: Nginx in front of the instances with 3 mongrels on each.
We want to autoscale our app.
So we need either Elastic Load Balancer + Auto Scaling, or some way to update our Nginx config whenever Auto Scaling launches new instances, so that Nginx can route traffic to them.
The problem with ELB is that it can't pass requests to a number of ports on an EC2 instance, only to one. So we can't run a bunch of Mongrels on our instances to gain more performance from a single instance. The only way I see is to use HAProxy on each instance to proxy requests to a bunch of Mongrels.
What should we do? Manually update the Nginx config, or use ELB and HAProxy on each working instance? Is there a better way to autoscale a Rails app on Amazon?
We use ELB + AutoScaler + LaunchConfig on Amazon Web Services to scale Rails applications.
Our configuration is nginx + Passenger + Rails, and that only needs one port on the instances running Rails. So you can have any size instance you want, running as many Rails processes as it can handle: when the instances get busy, the Auto Scaler kicks in and the ELB distributes load across all of them. Again, it only needs one port on each instance.
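For context, the single-port nginx + Passenger setup that answer describes boils down to a server block like this sketch (the domain and paths are placeholders; Passenger manages the Rails processes behind it):

    # Hypothetical nginx vhost with the Passenger module: one port (80) per instance.
    server {
        listen 80;
        server_name myapp.example.com;
        root /var/www/myapp/current/public;
        passenger_enabled on;
    }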
