This has been asked before, but never answered particularly exhaustively.
Let's say you have Rails running on one of the several web servers that support it, such as WEBrick, Mongrel, Apache or Nginx (through Phusion Passenger). The server receives two concurrent GETs; what happens? Is this clearly documented anywhere?
Basically I'm curious:
Is a new instance of Rails created by the server every time?
Does it somehow try to re-use existing instances (Ruby processes with Rails already loaded in them) to handle the request?
Isn't starting a new Ruby process and re-loading Rails into it pretty slow?
Thanks! Any links to exhaustive clarifications would be greatly appreciated.
Some use workers (Apache, Phusion Passenger, Unicorn), some don't. If you don't use workers, it really depends on whether your application is thread-safe or not. If it is, more than one request may be served at a time; otherwise Rack::Lock blocks that. If there are workers (separate processes), each of them handles one request, then goes back to the pool where the master assigns it a new request. Read on.
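For example, in Rails 3 you can remove Rack::Lock by declaring the app thread-safe. A minimal sketch (exactly where this goes depends on your Rails version and environment):

# config/environments/production.rb (Rails 3.x)
config.threadsafe!   # drops Rack::Lock from the middleware stack so requests can be served concurrently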
I'm pretty new to Ruby on Rails web development, and I've got the following question:
In my JavaScript I launch multiple calls to my controller at once with AJAX; however, I'm under the impression these requests get handled one by one, which results in a very slow experience (some of the requests are quite intense and can take a while to process). I'd expect the server to spawn a separate thread for each request. As far as I'm aware I'm using WEBrick as the server on which my application is running. Online I found some posts indicating that WEBrick is by definition single-threaded, so I'm out of luck, while other posts claim it supports multithreading but that it is prevented by a mutex in Rails. Most posts seem to refer to Rails 4.1-4.2; I'm currently running 5.0.1.
Use Puma instead of WEBrick in development and Unicorn in production and you will be all right.
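A minimal sketch of that setup, assuming a standard Gemfile (the gem names are the only thing that matters here):

# Gemfile
gem "puma", group: :development      # rails server picks this up in development
gem "unicorn", group: :production    # used by your production server setup

After bundle install, rails server should boot Puma rather than WEBrick in development.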
I have deployed a Rails app at Engine Yard in production and staging environments. I am curious to know whether every HTTP request for my app initializes a new instance of my Rails app or not.
Rails is stateless, which means each request to a Rails application has its own environment and variables that are unique to that request. So, a qualified "yes", each request starts a new instance[1] of your app; you can't determine what happened in previous requests, or other requests happening at the same time. But, bear in mind the app will be served from a fixed set of workers.
With Rails on EY, you will be running something like Thin or Unicorn as the web server. This will have a defined number of workers, let's say 5. Each worker can handle only one request at a time, because that's how Rails works. So if your requests take 200 ms each, that means you can handle approximately 5 requests per second for each worker. If one request takes a long time (a few seconds), that worker is not available to take any other requests. Workers are typically not created and removed on Engine Yard; they are set up and run continuously until you re-deploy. For something like Heroku, on the other hand, your app may not have any workers (dynos), and if there are no requests coming in it will have to spin up.
[1] I'm defining instance, as in, a new instance of the application class. Each model and class will be re-instantiated and the #request and #session built from scratch.
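As an illustration of that fixed worker pool, a typical Unicorn configuration might look like this (a sketch with made-up numbers, not your actual Engine Yard config):

# config/unicorn.rb (example values only)
worker_processes 5    # five single-threaded workers, so roughly five requests in flight at once
timeout 30            # a worker stuck for longer than 30 s is killed and re-forked
preload_app true      # load Rails once in the master, then fork the workers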
From what I have understood: no, it will definitely not initialize a new instance for every request. Then again, two questions might arise:
How can multiple users simultaneously log in and access my system without interference?
Even when one user takes up a lot of processing time, how is another user able to access other features?
The answer to the first question is that HTTP is stateless: everything is stored in the session, which is in a cookie, which lives on the client machine and not on the server. So when a logged-in user sends an HTTP request, the browser actually attaches the required credentials/user information from the client's cookies without the user knowing it. Multiple requests are simply queued and served accordingly; since servers are very fast, it feels as though they are processed instantly.
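That cookie-based session is the Rails default; it is set up with something like the following (the key name is just an example):

# config/initializers/session_store.rb
Rails.application.config.session_store :cookie_store, key: "_myapp_session"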
For the second query, the answer is concurrency. The server you are using (Nginx, Passenger) has the capacity to serve multiple requests at the same time. Even if the server is busy with one particular user (say, for video processing), it can serve another request through another thread or worker process, so multiple users can access the system simultaneously.
I am building a pool of PhantomJS instances, and I am trying to make it so that each instance is autonomous (it fetches the next job to be done).
My concern is to choose between these two:
Right now I have a Rails app that can tell PhantomJS which URL needs to be parsed next. So I could make an HTTP GET call from PhantomJS to my Rails app, and Rails would respond with a URL that is pending to be processed (most likely Rails would get that from a queue).
I am thinking of building a standalone Redis server that PhantomJS would access via Webdis, so Rails would push the jobs there and the PhantomJS instances would fetch them directly.
I am trying to work out what would be the better decision in terms of performance: PhantomJS hitting the Rails server (so Rails needs to get the job from the queue and send it to PhantomJS), or having PhantomJS access a Redis server directly.
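A rough sketch of the first option, assuming the pending URLs sit in a Redis list (the route, queue name and controller are all made up for illustration):

# app/controllers/jobs_controller.rb (illustrative only)
class JobsController < ApplicationController
  # GET /jobs/next : hand the next pending URL to a PhantomJS worker
  def next_job
    url = redis.rpop("phantomjs:pending_urls")   # nil when the queue is empty
    if url
      render json: { url: url }
    else
      head :no_content
    end
  end

  private

  def redis
    @redis ||= Redis.new   # assumes the redis gem is available and configured
  end
end

The second option skips this controller entirely and lets PhantomJS issue the equivalent RPOP itself through Webdis.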
Maybe I need more info, but why isn't the performance answer obvious? PhantomJS hitting the Redis server directly means fewer layers to go through.
I'd consider developing whatever is easier to maintain. What's the ballpark req/minute? What sort of company (how funded / resource-strapped are you)?
There are also more out-of-the-box solutions like IronMQ that may ease the pain.
I have inherited the maintenance of a legacy web-application with an "interesting" way to manage concurrent access to the database.
The application is based on ruby-on-rails 2.3.8.
I'd like to set up a development environment and, from there, have two web browsers make simultaneous requests, just to get the gist of what is going on.
Of course this is not going to work if I use WEBrick, since it services just one HTTP request at a time, so all the requests are effectively serialized by it.
I thought that mongrel could help me, but
mongrel_rails start -n 5
is actually spawning a single process and it seems to be single-threaded, too.
What is the easiest way of setting up my development environment so that it responds to more than one request at a time? I'd like to avoid Apache and mod_passenger because, this being development, I'd like to be able to change the code and have it reloaded automatically on the next request.
In development mode, mod_passenger does reload classes and views. I use passenger exclusively for both development and deployment.
In production, you can run (from the root of the Rails app):
touch tmp/restart.txt
and passenger will reload the app.
Take a look at thin
http://code.macournoyer.com/thin/
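Thin can also start several single-threaded servers on consecutive ports, which is enough to watch two requests overlap in development (a sketch; check thin --help for the exact flags in your version):

thin start --servers 3 --port 3000

That gives you three processes on ports 3000 to 3002; point each browser at a different port, or put a trivial proxy in front of them.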
I would like to use the plugin em-eventsource (https://github.com/AF83/em-eventsource) for server-sent events in a Rails 3.1 project. My problem is that it only explains how to listen for events and receive messages, not how to fire a specific event and send the message. I would like to produce the event in an Active Record observer. Am I right in thinking that I have to defer an operation with EventMachine to produce this event, or how else can I solve this?
And yes, it has to be Ruby on Rails. If I don't get this to work with EventMachine, I will try to bypass the whole Ruby part with Node.js.
Actually I worked on this library a little with the maintainer. I think you mixed up the client part with the server one: em-eventsource is a client library which you can use to consume a Server-Sent Events API; it's not meant to fire SSE.
On the server side, it doesn't really matter whether you are using Rails or any other stack (Node.js, PHP…) as long as the server you are running on supports streaming. The default web server shipped with Rails (WEBrick) does not, but there are many others which do: Thin, Puma, Goliath…
In order to fire SSE in Rails, you would have to use a streaming-capable server among those cited, and abide by the SSE specification. It mostly comes down to, first, responding with the proper Content-Type header ("text/event-stream") so that the client (browser) knows it should hang on, and then starting to stream on the socket. That latter part is not easily possible as of today in Rails 3 (yet not impossible!); Rails 4 now supports streaming in an easy way, with a clean and simple internal API, so it's definitely coming.
In the meantime, you'd either:
mess with Rack's API in Rails (using EventMachine, I guess; there are some examples in the wild)
or be smart and make use of the streaming feature provided by Sinatra, built on top of Rack (see https://gist.github.com/1476463 for an example of a Sinatra app which can be mounted inside a Rails one!)
or use an external service such as Pusher
or leverage an entirely different stack…
A good overview: http://blog.phusion.nl/2012/08/03/why-rails-4-live-streaming-is-a-big-deal/
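For reference, the Rails 4 approach mentioned above looks roughly like this (a minimal sketch; the controller and event names are made up, and it needs one of the streaming-capable servers listed above):

# Rails 4+ only
class EventsController < ApplicationController
  include ActionController::Live

  def index
    response.headers["Content-Type"] = "text/event-stream"
    sse = ActionController::Live::SSE.new(response.stream, event: "update")
    sse.write(message: "hello")   # sent as "event: update" plus a JSON data line
  ensure
    sse.close if sse
  end
end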
Maybe I'm wrong, but IIRC Rails can't support long polling. Rails blocks the whole server (or a thread, if you have more than one running inside the server) for each request, and can't reuse it until the whole response has been sent. That's why you should set up a reverse proxy (like Nginx) in front of the Rails application if you suspect there could be many concurrent connections: it buffers slow client requests and only hands them to Rails once the whole request has been received. It's just how Rack works; there's probably not much you can do about it.