Improve concurrency on a simple AJAX call in rails - ruby-on-rails

I have created a simple ajax call with the following code:
controller.rb
def locations
  sleep 1.2
  some_data = [{ "name" => "chris", "age" => "14" }]
  render json: some_data
end
view.js
function getLocation() {
  $.get('/location').success(function(data) { console.log(data); });
}

$(".button").click(function() { getLocation(); });
Routes.rb
get '/location' => 'controller#locations'
Note that the sleep 1.2 in the controller is a stand-in for background jobs or database calls.
The screenshot below is from the devtools Network tab; it shows that I clicked the button 8 times and each subsequent call is stalled until the previous one finishes. I think this is because Rails is single-threaded? Would it be different if the server were built with NodeJS? And how can I achieve concurrency for AJAX calls like this with Rails?
Thanks!!

Actually, it is not due to Rails, but to the Rails server you are using. Some are single-threaded, and others can be launched as multithreaded.
For instance, if you use Phusion Passenger, you can configure it to run several threads and so improve concurrency. You should look for Rails "server" comparisons instead of trying to find a solution or a problem in the Rails "framework".
Popular servers are Thin, Unicorn, Puma, and Phusion Passenger. The default development server is called WEBrick.
There are a lot of other Stack Overflow questions about the differences between these servers, so I think you should look into them.
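For example, with Puma the process and thread counts are set in its config file. A minimal sketch (the values are illustrative, not recommendations):

```ruby
# config/puma.rb -- illustrative values, tune for your app
workers 2        # separate OS processes (cluster mode)
threads 1, 5     # min and max threads per worker
port 3000
```

With a setup like this, several of the AJAX calls above can be served at once instead of queueing behind the first one.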

Related

Multithreading requests in Rails

I'm pretty new to Ruby on Rails web development, and I've got the following question:
In my JavaScript I launch multiple calls to my controller at once with AJAX, but I'm under the impression these requests get handled one by one, which results in a very slow experience (some of the requests are quite heavy and can take a while to process). I'd expect the server to spawn a separate thread for each request. As far as I'm aware, I'm using WEBrick as the server my application runs on. Online I found some posts indicating that WEBrick is by definition single-threaded, so I'm out of luck; however, other posts claim it supports multithreading but is blocked by a mutex in Rails. Most posts seem to refer to Rails 4.1-4.2; I'm currently running 5.0.1.
Use Puma instead of WEBrick in development and Unicorn in production and you will be alright.
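In Gemfile terms, that suggestion looks roughly like this (a sketch, assuming the standard puma and unicorn gems):

```ruby
# Gemfile -- sketch of the suggested server split
group :development do
  gem 'puma'       # multithreaded server for local work
end

group :production do
  gem 'unicorn'    # multi-process server for deployment
end
```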

How does Rails handle concurrent request on the different servers?

This has been asked before, but never answered particularly exhaustively.
Let's say you have Rails running on one of the several web servers that support it, such as WEBrick, Mongrel, Apache or Nginx (through Phusion Passenger). The server receives two concurrent GETs; what happens? Is this clearly documented anywhere?
Basically I'm curious:
Is a new instance of Rails created by the server every time?
Does it somehow try to re-use existing instances (ruby processes with Rails already loaded in it?) to handle the request?
Isn't starting a new ruby process and re-loading Rails in it pretty slow?
Thanks! Any links to exhaustive clarifications would be greatly appreciated.
Some use workers (Apache, Phusion, Unicorn), some don't. If you don't use workers, it really depends on whether your application is threadsafe or not. If it is, more than one request may be served at a time; otherwise there's Rack::Lock, which blocks that. If there are workers (separate processes), each of them handles a request, then goes back to the pool where the master assigns it a new request.
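To illustrate the Rack::Lock point: that middleware is essentially one mutex around the whole app, so a non-threadsafe app serves a single request at a time no matter how many threads the server has. A simplified sketch (not the real Rack source):

```ruby
# Simplified sketch of what Rack::Lock does: wrap every request in one
# shared mutex, so only a single request runs through the app at a time.
class NaiveLock
  def initialize(app)
    @app = app
    @mutex = Mutex.new
  end

  def call(env)
    @mutex.synchronize { @app.call(env) }
  end
end

# A trivial Rack app wrapped in the lock:
app = NaiveLock.new(->(env) { [200, { "Content-Type" => "text/plain" }, ["ok"]] })
status, _headers, body = app.call({})
```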

Rails development: how to respond to several requests at once?

I have inherited the maintenance of a legacy web-application with an "interesting" way to manage concurrent access to the database.
The application is based on ruby-on-rails 2.3.8.
I'd like to set up a development environment and from there have two web browser make simultaneous requests, just to get the gist of what is going on.
Of course this is not going to work if I use WEBrick, since it services just one HTTP request at a time, so all the requests are effectively serialized by it.
I thought that mongrel could help me, but
mongrel_rails start -n 5
is actually spawning a single process and it seems to be single-threaded, too.
What is the easiest way of setting my development environment so that it responds to more than one request at a time? I'd like to avoid using apache and mod_passenger because, this being development, I'd like to be able to change the code and have it reloaded automatically on the next request.
In development mode, mod_passenger does reload classes and views. I use passenger exclusively for both development and deployment.
In production, you can (from the root of the rails app):
touch tmp/restart.txt
and passenger will reload the app.
Take a look at thin
http://code.macournoyer.com/thin/

how to run multiple nokogiri screen scrape threads at once

I have a website that requires using Nokogiri on many different websites to extract data. This process is run as a background job using the delayed_job gem. However, it takes around 3-4 seconds per page because it has to pause and wait for other websites to respond.
I am currently just running them by basically saying
Websites.all.each do |website|
# screen scrape
end
I would like to execute them in batches rather than one at a time, so that I don't have to wait for a server response from every site in turn (which can take up to 20 seconds on occasion).
What would be the best ruby or rails way to do this?
Thanks for your help in advance.
You might want to check out Typhoeus which enables you to make parallel http requests.
I found a short blawg post here about using it with Nokogiri, but I haven't tried this myself.
Wrapped in a DJ, this should do the trick with little client-side latency.
You need to use delayed job. Check out this Railscast.
Keep in mind most hosts charge for this type of thing.
You can also use the spawn plugin if you don't care about managing threads, and it is much much easier!!!
This is literally all you need to do:
rails plugin install https://github.com/tra/spawn.git
Then in your controller or model, wrap the code in a spawn block. For example:
spawn do
#execute your code here :)
end
http://railscasts.com/episodes/171-delayed-job
https://github.com/tra/spawn
I'm using EventMachine to do something similar to this for a current project. There is a terrific plugin called em-http-request that allows you to make multiple HTTP requests in parallel, as well as providing options for synchronising the responses.
From the em-http-request github docs:
EventMachine.run {
  http1 = EventMachine::HttpRequest.new('http://google.com/').get
  http2 = EventMachine::HttpRequest.new('http://yahoo.com/').get

  http1.callback { }
  http2.callback { }
}
So in your case, you could have:
requests = []
Websites.all.each do |website|
  requests << EventMachine::HttpRequest.new(website.url).get
end

requests.each do |http|
  http.callback { }
end
Run your rails application with the thin webserver in order to get a functioning EventMachine loop:
bundle exec rails server thin
You'll also need the eventmachine and em-http-request gems. Good luck!
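If you'd rather avoid an extra dependency, plain Ruby threads can also batch the work so a slow site only delays its own batch rather than the whole run. A sketch (the per-site scrape is passed in as a block; the upcase "scrape" in the usage line is a stand-in for the real Nokogiri call):

```ruby
# Run the slow per-URL work in batches of threads instead of serially.
# The block stands in for the Nokogiri scrape of one site.
def scrape_in_batches(urls, batch_size: 5)
  results = {}
  mutex = Mutex.new
  urls.each_slice(batch_size) do |batch|
    batch.map { |url|
      Thread.new do
        page = yield(url)                        # e.g. Nokogiri::HTML(fetch(url))
        mutex.synchronize { results[url] = page }
      end
    }.each(&:join)                               # wait for the whole batch
  end
  results
end

# Usage sketch: here the "scrape" just upcases the URL string.
pages = scrape_in_batches(%w[a b c], batch_size: 2) { |url| url.upcase }
```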

Automatically Refreshing Rails Metal In Development Mode

I am trying to develop a Rails Metal endpoint using Sinatra, but it is proving to be a pain because I have to restart the server every time I change the code. I am on JRuby, running from within a larger Java app. Is there an easy way to make this code reload on every request?
Just because I like abstract abstraction, this is Ryan's code v2:
def every(s)
  loop do
    sleep s
    yield
  end
end

every(1) { `touch tmp/restart.txt` }
I don't think there is a way to automatically reload Sinatra code, however:
If you were running Passenger, you could try running in irb:
loop do
  `touch tmp/restart.txt`
  sleep 1
end
Which will then tell the passenger instance to restart the application.
