How does the Rails server work in production?

I wonder: in general, is it more like PHP (the app loads into memory, executes, and dies for each connection), or like Node.js (a single instance stays in memory and accepts all requests)?

Technically it's the latter, but depending on the application server, it can be made to look like the former because the former is easier to manage. One example is Phusion Passenger. Take a look at https://www.phusionpassenger.com/ and http://www.modrails.com/documentation/Architectural%20overview.html

The second option.
In fact, it's Ruby that boots the application (there can be multiple instances depending on the case, e.g. with Puma you can request multiple workers to handle requests). Then, as soon as it's ready (this depends on the size of your application, e.g. if your routes.rb file, where you define each URL, is huge, it will of course take more time), the application starts to handle requests.
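For illustration, a minimal Puma configuration sketch along those lines (the worker and thread counts are arbitrary examples, not recommendations):

    # config/puma.rb -- a minimal sketch; all values are illustrative
    workers 2          # fork two worker processes to handle requests in parallel
    threads 1, 5       # each worker serves requests with 1..5 threads
    preload_app!       # boot the app once, then fork workers from the loaded copy

    port ENV.fetch("PORT", 3000)
    environment ENV.fetch("RAILS_ENV", "production")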

Related

Is it necessary to use Nginx when deploying a Rails API-only app?

Currently I've read a lot of tutorials about Rails app deployment, and almost every one of them uses Nginx or an alternative like Apache to serve static pages. Also, in this Q&A, Why do we need nginx with thin on production setup?, they said Nginx is used for load balancing.
I can understand the reasons mentioned above, but I'm writing a Rails app as a pure API backend service; its only purpose is to serve JSON-formatted data to other client-side apps, with no page rendering at all. So my questions are:
In my situation, do I really need Nginx just to deploy a pure API Rails app?
If not, how should I deploy my app? Is just running it in the background (with Unicorn in the production environment) good enough, e.g. rails server -e production -d?
I'm really curious about these two questions; I hope someone can explain the details or point me to some good references. Thanks in advance.
Basically, Unicorn and Thin are both single-threaded servers, which in a way means they handle one request at a time, using deferring and other techniques.
For a production setup you would generally run many instances of Unicorn or Thin (depending on your load, etc.), and you would need to load-balance between those Rails application server instances. That's why you need Nginx or something similar on top.
to serve JSON-formatted data to other client-side apps, with no page rendering at all
You see, it makes no difference. These tasks are similar: format certain data into a specific text-based structure, much like rendering a view in HAML, ERB or whatever.
The difference is, you won't be serving static assets. At least, it's not practical for pure JSON APIs.
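For instance, a controller action that returns JSON goes through much the same rendering pipeline as one that returns HTML; a minimal sketch (UsersController is just an example name):

    # app/controllers/users_controller.rb -- illustrative only
    class UsersController < ApplicationController
      def show
        user = User.find(params[:id])
        render json: user   # serializing to JSON, much like rendering ERB/HAML
      end
    end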
If you aim for compact JSON responses, your best bet is Unicorn (several workers) + nginx.
Unicorn is simpler and aims for fast single-client responses. That is, a slow client could force Unicorn to waste a lot of time serving it a response. When backed by nginx, though, it fires the entire response into nginx's buffer and moves on to the next request, waiting only for nginx to accept the response (since nginx is usually on the same machine, this is blazing fast). nginx then hands out the responses to clients. There could be multiple instances of Unicorn if one is not enough, but using only one eliminates any kind of data race at the app level (which is possible in multithreaded apps).
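A minimal Unicorn configuration sketch for that setup (the socket path and worker count are illustrative; nginx would proxy requests to the socket):

    # config/unicorn.rb -- a minimal sketch; all values are illustrative
    worker_processes 4                       # several single-threaded workers
    listen "/tmp/unicorn.sock", backlog: 64  # unix socket for nginx to proxy to
    timeout 30                               # kill workers stuck longer than this
    preload_app true                         # load the app once, then fork workers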
Thin is designed to handle multiple clients concurrently by itself, through the use of threads and workers. Keep in mind, though, that MRI ("classic" Ruby) doesn't have truly concurrent threads because of the GIL. The technologies it's based on (Ruby + C) make it inferior to nginx (pure C) in terms of resource-usage efficiency. nginx is even used sometimes to counter DDoS attacks; its efficiency is proven in the wild.
You could benefit from Thin if your app implied serving multiple clients concurrently, as with Server-Sent Events or WebSockets, which require maintaining a constant connection. This one doesn't seem to. Don't count on concurrency too much where it's not required.

How should I build a VPS that will host multiple small Rails applications?

Up until now, I've always built my VPS dedicated to running just a single application in multiple instances, mostly with Unicorn. That way I could set up the whole environment to fit perfectly for that one specific application and be happy with it.
But now I need to build a VPS, that will host multiple small Ruby applications. Some of them will be Rails and some Sinatra. They will have basically zero traffic (under 100 visits per day), which means I don't even need multiple instances of a single app.
I don't really have experience with servers other than unicorn + nginx, but what I think I need would look something like this:
request to app1, gets loaded into memory and serves the request
request to app2, gets loaded into memory and serves the request
request to app3, there is not enough free memory
app1 gets killed before the app3 is booted to serve the request
I know this isn't exactly a perfect scenario, but imagine having 10 or 20 small apps on a single server, where each app gets 5 hits a day. They don't exactly need to be up and running at all times.
As far as I know, Heroku does this with their free tier, where Dynos get killed after some idle time, and then they get loaded back up when a request comes in. That's basically what I need to do on my own server.
I would recommend using Apache + Passenger. By default, Passenger loads an application only when it is needed, i.e. the first request will take a bit longer (as long as it takes to load your framework and app).
If the application is idle for some predefined time, it will be removed from memory.
Setup is very easy, and adding a new application is just a matter of adding one line to your Apache configuration.

Blocking IO / Ruby on Rails

I'm contemplating writing a web application with Rails. Each request made by the user will depend on an external API being called. This external API can randomly be very slow (2-3 seconds), and so obviously this would impact an individual request.
During this time when the code is waiting for the external API to return, will further user requests be blocked?
Just for further clarification as there seems to be some confusion, this is the model I'm anticipating:
Alice makes request to my web app. To fulfill this, a call to API server A is made. API server A is slow and takes 3 seconds to complete.
During this wait time when the Rails app is calling API server A, Bob makes a request which has to make a request to API server B.
Is the Ruby (1.9.3) interpreter (or something in the Rails 3.x framework) going to block Bob's request, requiring him to wait until Alice's request is done?
If you only use one single-threaded, non-evented server (or don't use evented I/O with an evented server), yes. Among other solutions, using Thin and EM-Synchrony will avoid this.
Elaborating, based on your update:
No, neither Ruby nor Rails is going to cause your app to block. You left out the part that will, though: the web server. You either need multiple processes, multiple threads, or an evented server coupled with doing your web service requests with an evented I/O library.
#alexd described using multiple processes. I personally favor an evented server because I don't need to know/guess ahead of time how many concurrent requests I might have (or use something that spins up processes based on load). A single nginx process fronting a single Thin process can serve tons of parallel requests.
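As an illustration, a minimal sketch of the evented approach using the em-synchrony and em-http-request gems (the URLs are placeholders):

    # parallel, non-blocking HTTP calls inside an EventMachine reactor
    require "em-synchrony"
    require "em-synchrony/em-http"

    EM.synchrony do
      multi = EventMachine::Synchrony::Multi.new
      multi.add :alice, EventMachine::HttpRequest.new("http://api-a.example").aget
      multi.add :bob,   EventMachine::HttpRequest.new("http://api-b.example").aget
      results = multi.perform  # both requests run concurrently; fibers hide the callbacks
      puts results.responses.inspect
      EM.stop
    end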
The answer to your question depends on the server your Rails application is running on. What are you using right now? Thin? Unicorn? Apache+Passenger?
I wholeheartedly recommend Unicorn for your situation -- it makes it very easy to run multiple server processes in parallel, and you can configure the number of parallel processes simply by changing a number in a configuration file. While one Unicorn worker is handling Alice's high-latency request, another Unicorn worker can be using your free CPU cycles to handle Bob's request.
Most likely, yes. There are ways around this, obviously, but none of them are easy.
The better question is, why do you need to hit the external API on every request? Why not implement a cache layer between your Rails app and the external API and use that for the majority of requests?
This way, with some custom logic for expiring the cache, you'll have a snappy Rails app and still be able to leverage the external API service.
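A rough sketch of such a cache layer using Rails.cache (ExternalApi.fetch_quote and the 10-minute expiry are hypothetical placeholders):

    # app/controllers/quotes_controller.rb -- illustrative only
    class QuotesController < ApplicationController
      def show
        # serve from the cache; only hit the slow API on a miss or after expiry
        quote = Rails.cache.fetch("external_api/quote/#{params[:id]}",
                                  expires_in: 10.minutes) do
          ExternalApi.fetch_quote(params[:id])  # the slow 2-3 second call
        end
        render json: quote
      end
    end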

Best way to run Rails with long delays

I'm writing a Rails web service that interacts with various pieces of hardware scattered throughout the country.
When a call is made to the web service, the Rails app then attempts to contact the appropriate piece of hardware, get the needed information, and reply to the web client. The time between the client's call and the reply may be up to 10 seconds, depending upon lots of factors.
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another API call to get the actual results).
I basically see two options: either run JRuby and use multithreading, or run several regular Ruby instances and hope that not many people try to use the service at a time. JRuby seems like the much better solution, but it still doesn't seem to be mainstream or to have out-of-the-box support on Heroku and EngineYard. The multiple-instance solution seems like a total kludge.
1) Am I right about my two options? Is there a better one I'm missing?
2) Is there an easy deployment option for JRuby?
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another API call to get the actual results).
From an engineering perspective, this seems like it would be the best alternative.
Why don't you want to do it?
There's a third option: If you host your Rails app with Passenger and enable global queueing, you can do this transparently. I have some actions that take several minutes, with no issues (caveat: some browsers may time out, but that may not be a concern for you).
If you're worried about browser timeouts, or you cannot control the deployment environment, you may want to process the request in the background (a sketch follows the list below):
User requests data
You enter request into a queue
Your web service returns a "ticket" identifier to check the progress
A background process processes the jobs in the queue
The user polls back, referencing the "ticket" id
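A rough sketch of that ticket pattern with Delayed::Job (HardwareQuery, its run! method, and its status/result columns are hypothetical):

    # app/controllers/queries_controller.rb -- illustrative only
    class QueriesController < ApplicationController
      def create
        query = HardwareQuery.create!(status: "pending")
        query.delay.run!  # Delayed::Job runs this in a background process
        render json: { ticket: query.id }, status: :accepted
      end

      def show
        query = HardwareQuery.find(params[:id])  # the client polls with the ticket id
        render json: { status: query.status, result: query.result }
      end
    end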
As far as hosting on JRuby goes, I've deployed a couple of small internal applications using the glassfish gem, but I'm not sure how much I would trust it for customer-facing apps. Just make sure you set config.threadsafe! in production.rb. I've heard good things about Trinidad, too.
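For reference, that setting goes in the production environment file (YourApp is a placeholder for the application's module name):

    # config/environments/production.rb (Rails 3.x)
    YourApp::Application.configure do
      # allow concurrent requests from multiple threads, as required by
      # threaded JRuby servers such as Trinidad or glassfish
      config.threadsafe!
    end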
You can also run the web service call in a delayed background job so that it's not hogging a web server process, and it can even run on a separate physical box. This is also a much more scalable approach. If you make the web call using AJAX, you can ping the server every second or two to see if your results are ready; that way your client is not held in limbo while the results are being calculated, and the request does not time out.

How does Phusion Passenger reuse threads and processes?

I am setting up an Apache2 webserver running multiple Ruby on Rails web applications with Phusion Passenger. I know that Passenger spawns Ruby processes for handling requests. I have the following questions:
If more than one request has to be handled at the same time, will Passenger spawn multiple processes or multiple (Ruby) threads? How do I configure it so it always spawns single-threaded processes?
If I have two Rails applications, imagine that a request for app A goes to process 1, and later a request for app B arrives. Is it possible that process 1 will handle this request as well? When and how is this possible? In other words, is one process allowed to handle requests for multiple Rails applications?
I have the same Rails application served at multiple URLs and on multiple virtual hosts (such as http:// and https://). Will the same process be able to serve the different virtual hosts? (The answer to this seems to be yes: I set a global variable while answering a request on virtual host A, and I was able to retrieve the value on virtual host B.)
Generally speaking, Passenger spawns new processes by forking an ApplicationSpawner, which has the framework and application code pre-loaded into memory, or a FrameworkSpawner, which just has the framework code.
Passenger, as far as I know, doesn't deal in threads. Instead, as the load on an application increases, it forks that application's ApplicationSpawner and initializes another instance. When load decreases, one or more application instances are killed off.
If Passenger is configured in a certain way (I believe by choosing the "smart" spawn method), it will create a FrameworkSpawner, which loads the Rails code but no application code, and which can then be forked to load an application using that version of Rails.
So to answer your questions:
It will serve them sequentially, then spawn additional processes if it decides the load is high enough.
No. One process can only belong to a single Rails Application.
I'm kind of sketchy on this one, but your experiment makes sense. Passenger should be smart enough to figure out that even though it's running from different places in the server config, you're talking about the same application. It's probably based on the application's filesystem path.
EDIT: I went and read up on this a bit. Turns out I was mostly right, but the technical details were a bit off. See the Passenger documentation
Yup, Burke is right. As for the third question: Phusion Passenger recognizes applications by their application root path. So even if you have two virtual hosts, if they both point to the same DocumentRoot, then Phusion Passenger will think they're the same app.
