Rails - is a new instance of a Rails application created for every HTTP request in nginx/Passenger?

I have deployed a Rails app at Engine Yard in production and staging environments. I am curious to know whether every HTTP request for my app initializes a new instance of my Rails app or not.

Rails is stateless, which means each request to a Rails application has its own environment and variables that are unique to that request. So, a qualified "yes": each request starts a new instance[1] of your app; you can't see what happened in previous requests, or in other requests happening at the same time. But bear in mind the app will be served from a fixed set of workers.
With Rails on Engine Yard, you will be running something like Thin or Unicorn as the application server. This will have a defined number of workers, let's say 5. Each worker can handle only one request at a time, because that's how Rails works. So if your requests take 200ms each, that means each worker can handle approximately 5 requests per second. If one request takes a long time (a few seconds), that worker is not available to take any other requests. Workers are typically not created and removed on Engine Yard; they are set up and run continuously until you re-deploy. On something like Heroku, by contrast, your app may have no running workers (dynos), and if no requests have been coming in, the first one has to wait for a dyno to spin up.
[1] I'm defining "instance" as in a new instance of the application class. Each model and controller object is re-instantiated, and the #request and #session are built from scratch.
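To make that footnote concrete, here is a minimal sketch (CounterController is a hypothetical name; the render plain: syntax assumes Rails 4.1+): the controller instance is new on every request, but class-level state lives as long as the worker process.

class CounterController < ApplicationController
  @@hits = 0   # class-level state: survives across requests within one worker

  def show
    @@hits += 1
    # object_id differs on every request (a fresh instance), while @@hits
    # and the PID persist for the life of this worker process.
    render plain: "instance #{object_id}, hits in this worker: #{@@hits}, pid #{Process.pid}"
  end
end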

From what I understand: no, it will definitely not initialize a new instance for every request. Two questions might then arise.
How can multiple users simultaneously log in and access my system without interference?
If one user takes up a lot of processing time, how can another user still access other features?
The answer to the first question is that HTTP is stateless: everything is stored in the session, which lives in a cookie on the client machine, not on the server. So when a logged-in user sends an HTTP request, the browser automatically includes the required credentials/user information from the client's cookies without the user knowing it. Multiple requests are simply queued and served in order; since servers are very fast, the processing feels instant.
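For illustration, a minimal sketch of cookie-based sessions in a Rails controller (SessionsController and User are hypothetical names, and user.authenticate assumes has_secure_password):

class SessionsController < ApplicationController
  def create
    user = User.find_by_email(params[:email])
    if user && user.authenticate(params[:password])
      # Rails serializes this into a signed cookie sent to the browser;
      # the server itself keeps no per-user state between requests.
      session[:user_id] = user.id
      redirect_to root_path
    else
      render :new
    end
  end
end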
For the second query, the answer is concurrency. The servers you are using (nginx, Passenger) have the capacity to serve multiple requests at the same time. Even if the server is busy with a particular user (say, for video processing), it can serve another request through another worker or thread, so multiple users can access the system simultaneously.
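The answer above mentions nginx/Passenger; as a rough illustration of the same idea, here is a hedged sketch of a Puma config (another common Rails app server), where both processes and threads provide concurrency. The numbers are arbitrary, for illustration only:

# config/puma.rb
workers 2        # 2 separate OS processes
threads 1, 5     # each process runs up to 5 threads
# With 2 workers x 5 threads, up to 10 requests can be in flight at once;
# a long-running request ties up only one of those 10 slots.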

Related

Rails HTTP request to itself hangs in tests

Problem
Making an HTTP request from a model to a route on the same app results in a request timeout.
Background
Why would you want the app to make an HTTP request to itself rather than just call a method?
Here is my story: there is a Rails app A (let's call it shop) and a Rails app B (let's call it warehouse) that talk to each other over HTTP.
I'd like to be able to run both of them in a single system test to test the end-to-end workflow. Rails only runs a single service, but one can mount app B as a Rails engine into app A, effectively having two apps in a single service. However, they still talk to each other over HTTP, and that's the bit that does not work.
Thoughts
It looks as if the second request hits some kind of a thread lock around Active Record or something. The reason I suspect Active Record is that I was able to make an HTTP call to the app itself from the controller (that is, before any Active Record-related code kicked in).
Question
Is it possible to work around that?
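No answer is recorded here, but one hedged workaround sketch, assuming the symptom is a single-threaded test server: if the server has only one thread, the in-flight request occupies it, so the app's HTTP call to itself can never be served and times out. Allowing the bundled Puma test server several threads may unblock it (the option names below are Puma's, passed through Capybara; treat this as an assumption to verify, and note that older Rails versions also wrap system tests in a transaction on a shared DB connection):

# in rails_helper.rb or equivalent test setup
Capybara.server = :puma, { Threads: "1:4", Silent: true }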

Is there a way to create a connection timeout to activate a service worker?

I'm using Electron, which is based on Chromium, to create an offline desktop application.
The application uses a remote site, and we are using a service worker to make parts of the site work offline. Everything is working great, except for a certain situation that I call the "airplane wifi situation".
Using Charles, I have restricted download bandwidth to 100 bytes/s. The connection is sent through webview.loadURL, which eventually calls LoadURLWithParams in Chromium. The problem is that it does not fail and then activate the service worker, the way having no connection at all would. Once the request is sent, it waits forever for the response.
My question is: how do I time out the request after a certain amount of time and load everything from the service worker as if the user were truly offline?
An alternative to writing this yourself is to use the sw-toolbox library, which provides routing and runtime caching strategies for service workers, along with some built-in options for handling these sorts of advanced use cases. In particular, you'd want to use the networkTimeoutSeconds parameter to configure the amount of time to wait for a response from the network before falling back to a previously cached response.
You can use it like the following:
toolbox.router.get(
  new RegExp('my-api\\.com'),
  toolbox.networkFirst,
  { networkTimeoutSeconds: 10 }
);
That configures a route matching GET requests whose URLs contain my-api.com, and applies a network-first strategy that automatically falls back to the previously cached response after 10 seconds.

How might Apache cause duplicate requests?

I have two Rails apps that talk to one another. A few times a day, requests from app A show up in duplicate (or triplicate/quadruplicate) at app B. All outbound and inbound requests are logged. The logs show that app A is sending one outbound request and that app B receives that request twice or more during the same second.
App B sits behind Apache and an Amazon Elastic Load Balancer.
I am not sure where to look or even what questions to ask to home in on what might be causing this issue. If you need more data, I would be happy to provide it.
The retries are likely coming out of the Amazon Elastic Load Balancer or some network component (like a router, for example). I've seen similar behavior when using other load balancers (like Citrix NetScaler) as well.
Basically, the request gets an idle timeout at some level in the request chain. If that timeout doesn't send a proper HTTP 5xx status back to the client (for example it could just silently close the connection) then any components between the source of the timeout and the client can potentially decide to retry the request depending on how they are configured.
Tracking down which components cause the retries can be very challenging. My recommendation is to make sure your Rails applications always respond quickly to each other. If the requests can't complete quickly, consider perhaps a background/polling solution or a non-HTTP communication method (WebSockets for example).
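As a hedged sketch of that advice: the calling app can set an explicit client timeout shorter than the load balancer's idle timeout, and send an idempotency key so app B can detect duplicate deliveries. The URL and header name here are assumptions, not anything from the original logs:

require 'net/http'
require 'securerandom'

uri = URI('https://app-b.example.com/api/orders')  # hypothetical endpoint
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.open_timeout = 5    # seconds allowed to establish the connection
http.read_timeout = 30   # keep below the ELB's 60s default idle timeout

request = Net::HTTP::Post.new(uri.path)
request['Idempotency-Key'] = SecureRandom.uuid     # app B can dedupe on this
request.set_form_data('order_id' => '123')

response = http.request(request)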

How does Rails handle concurrent request on the different servers?

This has been asked before, but never answered particularly exhaustively.
Let's say you have Rails running on one of the several web servers that support it, such as WEBrick, Mongrel, Apache or Nginx (through Phusion Passenger). The server receives two concurrent GETs; what happens? Is this clearly documented anywhere?
Basically I'm curious:
Is a new instance of Rails created by the server every time?
Does it somehow try to re-use existing instances (Ruby processes with Rails already loaded) to handle the request?
Isn't starting a new Ruby process and re-loading Rails into it pretty slow?
Thanks! Any links to exhaustive clarifications would be greatly appreciated.
Some use workers (Apache, Phusion Passenger, Unicorn), some don't. If you don't use workers, it really depends on whether your application is threadsafe or not. If it is, more than one request may be served at a time; otherwise Rack::Lock blocks that. If there are workers (separate processes), each of them handles one request and then goes back to the pool, where the master assigns it a new request.
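A minimal sketch of the non-threadsafe case (assumes Rack 2.x, where Rack::Lock ships with the rack gem; historically Rails inserted it automatically for apps that didn't declare config.threadsafe!):

# config.ru
require 'rack'

use Rack::Lock    # wraps each request in a mutex
run lambda { |env|
  sleep 1         # simulate slow work; a concurrent request queues behind the lock
  [200, { 'Content-Type' => 'text/plain' }, ["served by PID #{Process.pid}\n"]]
}

Running this under a single process and issuing two simultaneous requests shows the second one waiting roughly a full second for the lock.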

Suggestions for how to write a service in Rails 3

I am building an application which will send status requests to users (via email & SMS) on a regular basis. I want to execute the service each hour, which will:
Query the database for all requests that need to be sent (based on some logic)
Send the requests through Amazon's Simple Email Service (this is already working)
Write a record of the status request notification back to the data store
I am considering wrapping up this series of operations into a single controller with an endpoint that can be called remotely to kick off the process within the Rails app.
Longer term, I will break this process out into an app that can be run independently of my rails app, but for now I'm just trying to keep it simple.
My first inclination is to build the following:
Controller with the following elements:
A method which will orchestrate the steps outlined above (and can be called externally)
A call to the status_request model which will bring back a collection of requests needing to be sent
A loop to iterate through the pending requests, which will:
Make a call to my AWS Simple Email Service module to actually send the email, and
Make a call to the status_request model to log the request back to the database
Model:
A method on my status_request model which will bring back a collection of requests that need to be sent
A method in my status_request model which will log that a notification was sent
Since this will behave as a service that gets called periodically from an outside scheduler I don't think I'll need a view for this operation. (Will, of course, need views to show users and admins what requests have been sent, but that's later...).
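A minimal sketch of the outline above (StatusRequest and AwsSesMailer are the question's own or hypothetical names, the notified_at column is an assumption, and error handling is omitted):

class StatusRequestsController < ApplicationController
  def deliver_pending
    StatusRequest.pending.find_each do |status_request|
      AwsSesMailer.deliver(status_request)   # the existing SES module
      status_request.mark_notified!          # log back to the data store
    end
    head :ok
  end
end

class StatusRequest < ActiveRecord::Base
  scope :pending, lambda { where(notified_at: nil) }

  def mark_notified!
    update_attributes!(notified_at: Time.now)
  end
end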
As someone new to Rails, I'm asking for review of this approach and any suggestions you may have.
Thanks!
Instead of a controller, which as Jeff pointed out exposes a security risk, you may just want to expose a rake task and use cron to invoke it on an hourly basis.
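A hedged sketch of that approach, reusing the hypothetical model methods from the question's outline (task and file names are assumptions):

# lib/tasks/status_requests.rake
namespace :status_requests do
  desc "Send all pending status request notifications"
  task :deliver => :environment do
    StatusRequest.pending.find_each do |status_request|
      AwsSesMailer.deliver(status_request)
      status_request.mark_notified!
    end
  end
end

# crontab entry to run it hourly:
# 0 * * * * cd /path/to/app && RAILS_ENV=production bundle exec rake status_requests:deliver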
If you are still interested in building a controller, look at devise gem and its single access token, token_authenticatable, for securing the methods you are exposing.
You may also want to look at delayed_job or resque to offload the status_request call and the AWS Simple Email Service loop to a background worker process.
You may want a separate controller and view for the log file so you can review progress on demand.
And if you want to get real fancy use Amazon SNS to send you alerts when the service reaches some unacceptable level of failures, backlog, etc.
Since you are trying to invoke this from an outside process, your approach should work. You could also have a worker process that processes tasks as they come in.
You will need routes to expose your service, and you may also want to make security decisions: how will the service that invokes your application authenticate, so that others can't hit it at will?
Another consideration is how many emails you are sending. If there are enough, bear in mind that this sort of loop is going to be extremely top-heavy, and may affect users of the current system if it's a web application.
In the end, there are many ways to do this. I would focus on the performance and usage you expect, as well as security. There's never one perfect way to solve a problem like this; your solution just needs to be aware of the constraints it will be operating within.
Resque and Redis might be helpful to you in scheduling and performing these operations. They are simple and very fast; there is a short tutorial on this at http://railscasts.com/episodes/271-resque.
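For reference, a minimal Resque job sketch (assumes the resque gem and a running Redis; it reuses the same hypothetical model methods as the sketches above):

class DeliverStatusRequestsJob
  @queue = :notifications   # Resque queue this job is placed on

  def self.perform
    StatusRequest.pending.find_each do |status_request|
      AwsSesMailer.deliver(status_request)
      status_request.mark_notified!
    end
  end
end

# Enqueue it (e.g. from a scheduler):
Resque.enqueue(DeliverStatusRequestsJob)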
