Is it necessary to use Nginx when deploying a Rails API-only app? - ruby-on-rails

I've already read a lot of tutorials about Rails app deployment, and almost every one of them uses Nginx or an alternative like Apache to serve static pages. Also, in this Q&A, Why do we need nginx with thin on production setup?, they say Nginx is used for load balancing.
I can understand the reasons mentioned above, but I'm writing a Rails app as a pure API backend service whose only purpose is to serve JSON-formatted data to other client-side apps, with no page rendering at all. So my questions are:
In my situation, do I really need Nginx just to deploy a pure-API Rails app?
If not, how should I deploy my app? Is just running it (with Unicorn in the production env) in the background good enough, like rails server -e production -d?
I'm curious about these two questions, and I hope someone can explain the details or point me to some good references. Thanks in advance.

Basically, Unicorn and Thin are single-threaded servers, which in a way means they handle one request at a time, using deferring and other techniques.
For a production setup you would generally run many instances of Unicorn or Thin (depending on your load, etc.), and you would need to load balance between those Rails application server instances. That's why you need Nginx or something similar on top.
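As a minimal sketch of what that looks like in practice - assuming three app-server instances listening on local ports 3000-3002 (the ports, upstream name and host name are all illustrative) - the nginx side might be:

    # Hypothetical upstream pool of three app-server instances.
    upstream rails_app {
      server 127.0.0.1:3000;
      server 127.0.0.1:3001;
      server 127.0.0.1:3002;
    }

    server {
      listen 80;
      server_name api.example.com;        # illustrative host name

      location / {
        proxy_pass http://rails_app;      # round-robins across the pool
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }
    }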

to serve JSON formatted data for other client-side apps, no pages rendering at all
You see, it makes no difference. The tasks are similar: format certain data into a specific text-based structure, much like rendering a view in HAML, ERB or whatever.
The difference is, you won't be serving static assets. At least, it's not practical for pure JSON APIs.
If you aim for compact JSON responses, your best bet is Unicorn (several workers) + nginx.
Unicorn is simpler and aims for a fast single-client response. That is, a slow client could force Unicorn to waste a lot of time serving it a response. When backed by nginx, though, it fires the entire response into nginx's buffer and moves on to the next request, waiting only for nginx to accept the response (since nginx is usually on the same machine, this is blazing fast). nginx then hands out the responses. There could be multiple instances of Unicorn if one is not enough, but using only one could eliminate any kind of data race at the app level (which is possible in multithreaded apps).
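To make that concrete, here is a minimal unicorn.rb sketch for this pattern (socket path, worker count and timeout are illustrative, not a recommendation):

    # config/unicorn.rb -- minimal sketch
    worker_processes 4                             # several single-threaded workers
    listen "/tmp/unicorn.myapp.sock", backlog: 64  # Unix socket for nginx to proxy to
    timeout 30                                     # reap workers stuck on one request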
Thin is designed to handle multiple clients concurrently by itself, through the use of threads and workers. Keep in mind, though, that MRI ("classic" Ruby) doesn't have truly concurrent threads because of the GIL. The technologies it's built on (Ruby + C) make it inferior to nginx (pure C) in terms of resource-usage efficiency; nginx is even used sometimes to counter DDoS attacks, so its efficiency is proven in the wild (<1> <2> <3> and many more).
You could benefit from Thin if your app involved concurrent service to multiple clients, such as Server-Sent Events or WebSocket usage, which require maintaining a persistent connection. This one doesn't seem to. Don't count on concurrency where it's not required.

Related

Thin vs Unicorn on Heroku

Just wanted to get people's opinions on using Unicorn vs Thin as a Rails server. Most of the articles/benchmarks I found online seem very incomplete, so it would be nice to have a centralized place to discuss it.
Unicorn is a multi-process server, while Thin is an event-based/non-blocking server. Event-based servers are great... if your code is asynchronous/non-blocking - vanilla Rails is blocking. So unless you use non-blocking Rails libraries, I really don't see the advantage of using Thin. Even worse, in a non-blocking server, if your I/O loop blocks you're going to block the entire loop and be unable to handle any more requests until the blocking call returns. Blocking libraries are going to slow Thin down!
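A tiny EventMachine sketch (assuming the eventmachine gem; purely illustrative) shows why one blocking call is so damaging in an event loop:

    require "eventmachine"

    EM.run do
      EM.add_periodic_timer(1) { puts "tick" }  # should print every second
      EM.defer { sleep 5 }   # blocking work pushed off the loop: ticks continue
      # Calling sleep 5 directly here instead would freeze the reactor --
      # no ticks and no new requests until the blocking call returned.
    end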
Why did Heroku choose Thin as their default server (for Cedar)? They are smart guys, so I'm sure they had a reason.
Below is a link that suggests replacing Thin with 4 Unicorn workers - this makes perfect sense to me.
4 Unicorn workers on Heroku
Thin is easy to configure - not optimal, but it just works in the Heroku environment.
Unicorn can be more efficient, but it needs to be configured: How many workers? Preload App? What do you pick?
I have released Unicorn Heroku apps with workers set to 3, 5 and 8 - just based on how big each app is. How much code, how much memory is used and how much traffic you get all go into picking this number, and you need to monitor over time to make sure you got it right and your app isn't running out of memory.
Preload false - this will make your app start slower, but when Unicorn restarts a worker this is 'safer' with network connections (memcached, Postgres, Mongo, etc.)
Preload true - this is better, but you need to handle server re-connections correctly in the before- and after-fork code (a minimal sketch follows).
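Here is a minimal sketch of those hooks in unicorn.rb (the ActiveRecord calls are the common case; memcached/Mongo clients need the same treatment):

    # unicorn.rb -- re-connection handling that preload_app true requires
    preload_app true

    before_fork do |server, worker|
      # Disconnect in the master so workers don't share its sockets.
      ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
    end

    after_fork do |server, worker|
      # Each forked worker opens its own database connection.
      ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
    end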
Thin has none of these issues out of the box, but you only get one process of execution.
Summary: It's really hard to configure Unicorn out of the box to work well (or at all) for everyone, whereas Thin can just work to get people running with fewer support requests.
Recently (only a few months ago) the folks behind Phusion Passenger added support for Heroku. This is definitely an alternative you should try to see if it fits your needs.
It is blazing fast even with 1 dyno, and the drop in response time is palpable.
A simple Passenger Ruby Heroku Demo is hosted on GitHub.
The main benefits that Passenger on Heroku claims are:
Static asset acceleration through Nginx - Don't let your Ruby app serve static assets, let Nginx do it for you and offload your app for the really important tasks. Nginx will do a much better job.
Multiple worker processes - Instead of running only one worker on a dyno, Phusion Passenger runs multiple workers on a single dyno, thus utilizing its resources to the fullest and giving you more bang for the buck. This approach is similar to Unicorn's. But unlike Unicorn, Phusion Passenger dynamically scales the number of worker processes based on current traffic, thus freeing up resources when they're not necessary.
Memory optimizations - Phusion Passenger uses less memory than Thin and Unicorn. It also supports copy-on-write virtual memory in combination with code preloading, thus making your app use even less memory when run on Ruby 2.0.
Request/response buffering - The included Nginx buffers requests and responses, thus protecting your app against slow clients (e.g. mobile devices on mobile networks) and improving performance.
Out-of-band garbage collection - Ruby's garbage collector is slow, but why bother your visitors with long response times? Fix this by running garbage collection outside of the normal request-response cycle! This concept, first introduced by Unicorn, has been improved upon: Phusion Passenger ensures that only one request at a time runs out-of-band garbage collection, thus eliminating the problems Unicorn's out-of-band garbage collection has. (The Unicorn version is sketched after this list.)
JRuby support - Unicorn's a better choice than Thin, but it doesn't support JRuby. Phusion Passenger does.
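For reference, the Unicorn version mentioned under out-of-band garbage collection is a small Rack middleware; a sketch of enabling it in config.ru (the interval is illustrative, and the exact signature may vary across Unicorn versions):

    # config.ru -- run GC between requests instead of during them.
    require "unicorn/oob_gc"
    use Unicorn::OobGC, 5   # assumption: GC after every 5th request, off-cycle

    require ::File.expand_path("config/environment", __dir__)
    run Rails.application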
Hope this helps.
Heroku does not use intelligent routing - it will randomly assign jobs to dynos regardless of whether the dyno is busy. Thus, if your dyno cannot handle multiple jobs at once, you will get latency (perhaps massive latency) even if you are paying for lots of other dynos that sit idle. "That's right — if your app needs 80 dynos with an intelligent router, it needs 4,000 with a random router."
http://news.rapgenius.com/James-somers-herokus-ugly-secret-lyrics
Heroku says they are working on this, and their plan is to make it easier to use Unicorn. They basically said "Oops, we didn't notice that this was a problem for a few years... and now that we look, it's definitely a problem for Thin... so I guess you need to use a different program than the one we've been pushing all this time."
http://news.rapgenius.com/Jesper-joergensen-routing-performance-update-lyrics
From the official Heroku explanation (second link above):
"Rails, in fact, does not yet reliably support concurrent request handling. This leaves Rails developers unable to leverage the additional concurrency capabilities offered by the Cedar stack, unless they move to a concurrent web server like Puma or Unicorn.
Rails apps deployed to Cedar with Thin can rather quickly end up with request queuing problems. Because the Cedar router no longer does any queuing on behalf of the app, requests queued at the dyno must wait until the single Rails process works its way through the queue. Many customers have run into this issue and we failed to take action and provide them with a better approach to deploying Rails apps on Cedar."
Also of interest is that their performance tools, including New Relic, have not been reporting time spent in the dyno queue.
http://news.rapgenius.com/Lemon-money-trees-rap-genius-response-to-heroku-lyrics
Oops.

How is request processing with rails, redis, and node.js asynchronous?

For web development I'd like to mix Rails and Node.js, since I want to get the best of both worlds (Rails for fast web development and Node for concurrency). I know that some people choose to just use a full Ruby stack with EventMachine integrated into the Rails controllers, so that every request can be non-blocking by using fibers in an event-loop model. I've been able to understand how that works in the big picture.
At the moment, however, I want to try non-blocking request processing with Rails and Node.js using a message-queue concept. I've heard this can be achieved by using Redis as an intermediary, but I'm still having trouble figuring out how it works. From what I can understand: we have two apps, A (Rails) and B (Node.js), plus Redis. The Rails app handles requests from users through controllers in a REST manner, then passes them to Redis; Redis forms queues, and the Node.js app picks up those queued jobs and does whatever is necessary afterwards (writing to or reading from the backend DB).
My questions:
1. So how would that improve concurrency and scalability? From what I know, since Rails handles requests through controllers synchronously and then writes to Redis, the requests will still be blocking, even though the Node.js end can pick up the queue asynchronously. (I have a feeling it's not asynchronous if it's not non-blocking end to end.)
2. Would Node.js be considered a proxy or an application here, if Redis is the intermediary?
3. I'm new to Redis and still learning it. If I'm using a 100% NoSQL solution for my backend database, such as MongoDB or CouchDB, can Redis replace them entirely, or is Redis seen more as a message-queue tool like RabbitMQ?
4. Is a message queue a different concurrency concept from threading or the event-loop model, or is it supposed to supplement them?
Those are all my questions. I'm new to the message-queue concept, and I'd appreciate any help, pointers in the right direction, and articles that help me learn more. Thanks.
You are mixing some things here that don't go together.
Let's first make sure we are on the same page regarding the strengths and weaknesses of the involved technologies.
Rails: Used for its web-development simplicity, and perfect for serving database-backed web applications.
Not very performant when having to serve a large number of long-running requests, as you'd run out of threads on your Ruby workers - but well suited to anything that can scale horizontally with more web nodes (multiple web servers, one DB).
Node.js: Great for high-concurrency scenarios. Not as easy as Rails for writing a regular web application, but able to handle an insane number of long-running, low-CPU tasks efficiently.
Redis: A key-value store that supports operations on its data structures (increment/decrement values, append/prepend and push/pop on lists - all operations that make this DB work consistently with multiple clients writing at once).
Now as you can see, there is no benefit in having Rails AND Node serve the same request, communicating through Redis. Going through the Rails stack would not provide any benefit if the request ends up being handled by the Node server.
And even if you only offload some processing to the Node server, it's still the Rails web server that handles the request and has to wait for a response from Node - killing the desired scalability. It simply makes no sense.
Where you would use a setup with Node and Rails together is in certain areas of your app that have drastically different scaling requirements.
If you are, for example, writing a website that displays live stats for football games, you can easily see that there are two different concerns in your app: the "normal" site that contains signup, billing and profile stuff, which screams for a quick implementation through Rails, and the "live" portion of the site, where users see live results and you expect to handle a lot of clients at once - all waiting for something to happen (low CPU, high concurrency).
In such a case it may be beneficial to actually separate the two parts of the site into a Ruby and a Node app, with the two then sharing data about the user through a store like Redis (really you just need some shared state that both can read and write for synchronization purposes).
So you would, for example, use Rails for the signup/login portions - once the user has signed up, write the session token into Redis alongside the user's permissions (which game they are allowed to follow) and hand the user off to the Node.js app.
There the Node app can read the session information from Redis and serve the user.
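As a minimal sketch of the Rails side of that handoff (REDIS is assumed to be a Redis client created in an initializer; the key names, TTL and URL are illustrative):

    # app/controllers/sessions_controller.rb -- hypothetical handoff action
    class SessionsController < ApplicationController
      def handoff
        token = SecureRandom.hex(16)
        # Shared state the Node app will read; expires after an hour.
        REDIS.setex("session:#{token}", 3600,
                    { user_id: current_user.id,
                      allowed_game: params[:game] }.to_json)
        cookies[:live_token] = token            # Node reads this cookie
        redirect_to "https://live.example.com"  # hypothetical Node app URL
      end
    end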
Word of advice:
You don't get scalability by simply throwing Node.js into your toolbox. You really have to find out what Node.js is good at (low-CPU, high-I/O concurrent operations) and how you can leverage that to remedy some of the problems your currently chosen technology has.
I can answer question 3 for you. Redis does not guarantee that when you perform an operation the result will actually be on disk; also, transaction handling is a bit "different". It also requires the whole database to fit in memory. Depending on the situation this may or may not be an issue. It is, however, incredibly fast. It is not a message queue: you can easily make a queue out of it, but that is not its purpose. If you want a queuing system only, you can probably do better with something else.
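For what it's worth, here is a minimal queue sketch on top of Redis lists (assuming the redis gem; key and payload are illustrative):

    require "redis"
    require "json"

    redis = Redis.new  # assumes Redis on localhost:6379

    # Producer (e.g. the Rails side): push a job onto the list.
    redis.lpush("jobs", { action: "resize", id: 42 }.to_json)

    # Consumer (e.g. a worker process): block until a job arrives (FIFO).
    _list, payload = redis.brpop("jobs")
    job = JSON.parse(payload)
    puts "processing #{job['action']} for item #{job['id']}"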

Why do we need an Apache server when we deploy a Rails app?

I thought we could just deploy it with WEBrick or Mongrel.
Most Ruby application servers will only run a single Ruby process (and Ruby has a global interpreter lock that makes multithreading quite pointless), which means they can only serve one request at a time. To say the least, this will not give you very good performance.
There are two ways around this: either you run several Ruby application servers and put a load balancer or reverse proxy in front of them, e.g. Nginx or Apache in front of a pack of Mongrels or Thin servers (the number of processes you run reflects the number of requests you will be able to handle in parallel). Or you run Passenger, which is an Apache or Nginx module that manages a pool of applications that can dynamically grow and shrink as the load changes. The first option gives you more configuration options, but the second option is easier to manage. Which one you want depends on your use case.
There are of course other solutions too, but they are for more specific use cases. You can, for example, write a very performant application and deploy it with Thin -- but that requires writing an event-driven application. You can't deploy a Rails app and expect the same performance.
Before Phusion Passenger allowed Rails hosting with Apache and nginx, deploying a Rails app was scary and difficult. Apache is a very mature web server which scales easily and is configurable to meet many needs. (nginx is not as mature but is very efficient, also very configurable, and a great alternative to Apache for Rails hosting.) WEBrick and Mongrel are great for development, but unless you are an expert, it is difficult to set them up for production use.
You can, technically, but you don't usually want to, because that will impose a fair bit of overhead when serving static files like CSS or images.
There are any number of ways you can deploy a Rails app without involving Apache, but Apache is the most popular server around, the most mature server around and among the most stable and scalable. WEBrick and Mongrel both have their own merits, but Apache is just the default assumption for Web servers and the path of least resistance in most cases.

How to balance load in an Apache + Mongrel application

I was wondering if someone can explain how a Rails application can be load balanced.
Two questions:
Does it even help to have separate Rails application instances reading from the same database on the same dedicated server?
I understand Apache can balance load by installing some extra modules? Am I right? How can we accomplish this? (Please provide an explanation for dummies.)
I would have a look at using Passenger - it has largely superseded Mongrel and handles running multiple Rails instances.
Rails is single-threaded, so when deploying with Mongrel it is "normal" to run several Mongrel instances in a cluster fronted by Apache with mod_proxy installed. This lets Apache dispatch incoming requests to free application instances.
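A minimal sketch of that Apache configuration, assuming mod_proxy and mod_proxy_balancer are enabled and three Mongrels listen on ports 8000-8002 (all values illustrative):

    # Hypothetical balancer pool of three Mongrel instances.
    <Proxy balancer://mongrel_cluster>
      BalancerMember http://127.0.0.1:8000
      BalancerMember http://127.0.0.1:8001
      BalancerMember http://127.0.0.1:8002
    </Proxy>

    ProxyPass / balancer://mongrel_cluster/
    ProxyPassReverse / balancer://mongrel_cluster/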
Any reasonable database is designed for high levels of concurrent requests, so it should be able to handle a fair number of application instances.
Depending on your server resources, there is great benefit in running multiple Mongrel instances - it is actually the only way to serve concurrent requests.
Even on a small-memory host (say 512MB), if your Rails app uses 100MB of memory you could easily run four or so instances without running out of resources, leaving headroom for the OS - you could then serve as many concurrent requests as you have instances.
Slicehost has some awesome articles like this one: http://articles.slicehost.com/2009/4/17/centos-apache-rails-and-mongrels

Thin + Nginx Production ready combination for RubyOnRails Application

I have recently installed Nginx + Thin on my deployment server, but I am not sure how this will perform under a heavy request/response load, let's say 1000 req per sec.
The speed of Thin is good at 10-100 req per sec, but I wanted to know how higher volumes of requests and responses are handled by the cluster.
Guide me on this :-)
Multiple Thin processes and nginx are capable of providing lots of speed, depending on what your application is doing. So the bottleneck will be your application code, the speed of your application server, and your database server.
Scaling Rails has recently been covered in depth by the Scaling Rails Screencasts. I recommend you start there. My 5-step program for scaling Rails would be:
The first step is to have the tools to see what is slow in your application. Do not spend time optimizing everything in your application when you don't know what the problem is.
The easiest way to be able to handle lots of requests/second is with page caching.
If you can't do that, cache everything possible (fragment caching, using memcached to cache data, etc.) to speed up your application; a minimal sketch follows this list.
After that, optimize your application as best as possible, make SQL queries fast, index everything, etc.
If you still need more speed, throw more hardware at the problem. Get a big, powerful database server, a bunch of app servers, and proxy your requests across them. You can start here, too, but it will only delay the optimization process.
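As a minimal sketch of the data-caching idea from step 3 (the model, key and TTL are illustrative; assumes a configured cache store such as memcached):

    # Low-level Rails caching: compute on a miss, reuse until expiry.
    def homepage_stats
      Rails.cache.fetch("stats/homepage", expires_in: 10.minutes) do
        # This block runs only when the key is absent or expired.
        Order.group(:status).count  # hypothetical expensive query
      end
    end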
If you have a single server, I think the main key, apart from everything already mentioned, is: don't skimp on its specs. Trying to run too much on too little is just a recipe for disaster.
It is also a good idea to get monit or God monitoring your Thin instances. I started out with God, but it leaked memory pretty badly on Ruby 1.8.6, so I stopped using it in favour of monit. Monit is written in C, I believe, and has a tiny memory footprint, so I'd recommend that one.
If all that seems like a bit much to keep nginx and Thin playing nicely, you may want to look into an all-in-one solution like Passenger or LiteSpeed. I have very little experience with these, so I can offer no substantial advice on them.

Resources