Asynchronous GET request in Rails

I'm working on a Ruby on Rails app that relies on making some simple URL calls for user metrics. As part of the tracking I need to make a server-side call before my index page renders, by requesting a specially formatted URL. Currently I'm doing it this way:
require 'net/http'

url = URI.parse('https://example.tracking.url')
result = Net::HTTP.start(url.host, url.port, use_ssl: true,
                         verify_mode: OpenSSL::SSL::VERIFY_NONE) do |http|
  # note: VERIFY_NONE disables certificate checking
  http.get(url.request_uri, 'User-Agent' => 'MyLib v1.2')
end
The loading of my page is sometimes noticeably delayed. Short of it being a database latency issue, I assume the URL sometimes takes extra time to respond, and that this is a synchronous request. What is the best way to make asynchronous requests in Rails? Threads, maybe? Thanks.

Have you looked into using delayed_job or Thread.new?
I would move it to a helper method and then call Thread.new around the helper call. Personally, I like using delayed_job for handling things that may delay the user interface.
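For illustration, a minimal sketch of the Thread.new route, reusing the tracking call from the question; the fire_tracking_call helper name is made up, and a Rails logger is assumed:

require 'net/http'

def fire_tracking_call
  # Fire-and-forget: the thread runs the tracking request while the
  # controller continues rendering.
  Thread.new do
    begin
      url = URI.parse('https://example.tracking.url')
      Net::HTTP.start(url.host, url.port, use_ssl: true) do |http|
        http.get(url.request_uri, 'User-Agent' => 'MyLib v1.2')
      end
    rescue StandardError => e
      Rails.logger.warn("Tracking request failed: #{e.message}") # assumes Rails
    end
  end
end

Note the rescue: an exception in a bare thread dies silently, and nothing ever waits on the thread's result, so failures need their own logging.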

Related

em-synchrony not asynchronous for identical requests

In a Rails app I have an asynchronous method which only works asynchronously when the requests are different.
In my controller I have this method:
require "em-synchrony/em-http"
def test
EventMachine.synchrony do
page = EventMachine::HttpRequest.new("http://127.0.0.1:8081/").get
render :json => {result: page.response}
request.env['async.callback'].call(response)
end
throw :async
end
In my page I call this method like this:
//Not asynchronous. :(
//The second request takes twice as long as the first one
$.get("/test");
$.get("/test");
However, to make the calls asynchronous, I need the requests to be different, like so:
//Asynchronous. :D
$.get("/test?a");
$.get("/test?b");
Why?
I would like my code to always be asynchronous, even for identical requests. FYI, I'm using the Thin server.
I found your question really interesting, because I'm about to implement my first Reactor-pattern-based web server, and of course I went through em-synchrony.
Have you also tried using aget instead of get?
page = EventMachine::HttpRequest.new("http://127.0.0.1:8081/").aget
Let me know if it makes any difference :)!

Simple way to do async/event-driven requests in a Ruby on Rails site

I am trying to implement a simple solution for handling incoming callbacks. Basically, I want to create a listener on a particular URL, like
localhost:3000/listen
I have a callback with a 3rd-party service that sends JSON as a POST request. In my Rails routes I have the route set up to accept a POST request to that namespace.
What I want to happen is for some logic to run any time a new POST comes in, and for that logic to run asynchronously without any disruption to the normal web service. For example, the POST request will contain some data; if the data has a boolean "true", we need to fire off a Rails mailer. I could normally do this with a simple Rails controller action, but blocking the request while that work runs is not correct.
Any thoughts on the best approach to handle this? Would this best with a gem like eventmachine? If anyone could give their feedback to implement a simple solution that would be great!
I would look at background jobs. I am a Sidekiq fan; popular alternatives are Resque and DelayedJob.
Your controller will receive the request and schedule the work to be performed in the background. That will send out the mail (or whatever you need it to do) asynchronously.
class CallbackController < ApplicationController
  def listen_third_party
    # slice takes the keys directly (not wrapped in an array)
    data = params.slice(:boolean_field, :you, :care, :about)
    if data[:boolean_field] == true
      CallbackMailer.perform_async(data)
    end
  end
end
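For completeness, a rough sketch of what the worker side might look like, assuming Sidekiq's standard Worker API; the NotificationMailer class and method are placeholders, not anything from the question:

require 'sidekiq'

class CallbackMailer
  include Sidekiq::Worker

  # Runs in a Sidekiq process, outside the web request cycle.
  # Arguments must be simple JSON-serializable values.
  def perform(data)
    NotificationMailer.third_party_alert(data).deliver # placeholder mailer
  end
end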

How to Make the Controller wait for a Delayed Job while the rest of the App continues on?

(This question is a follow-up to How do I handle long requests for a Rails App so other users are not delayed too much?)
A user submits an answer to my Rails app, and checking it in the back-end can take up to 10 seconds. This would delay all other users, so I'm trying out the delayed_job gem to move the checking to a Worker process. The Worker code returns the results back to the controller; however, the controller doesn't realize it's supposed to wait patiently for the results, so it raises an error.
How do I get the controller to wait for the results and let the rest of the app handle simple requests meanwhile?
In JavaScript, one would use callbacks instead of returning a value. Should I do the same thing in Ruby and have the Worker call back into the controller?
Update:
Alternatively, how can I call a controller method from the Worker? Then I could just call the relevant actions when it's done.
This is the relevant code:
Controller:
def submit
  question = Question.find(params[:question])
  user_answer = params[:user_answer]
  @result, @other_stuff = SubmitWorker.new.check(question, user_answer)
  render_ajax
end
submit_worker.rb:

class SubmitWorker
  def check(question, user_answer)
    # lots of code...
  end
  handle_asynchronously :check
end
Using DJ to offload the work is absolutely fine and normal, but making the controller wait for the response rather defeats the point.
You can add some form of callback to the end of your check method so that when the job finishes your user can be notified.
You can find some discussion on performing notifications in this question: push-style notifications similar to Facebook with Rails and jQuery
Alternatively, you can have your browser periodically call a controller action that checks for the results of the job; the results would ideally be an ActiveRecord object. Again, you can find discussion on periodic JavaScript in this question: Rails 3 equivalent for periodically_call_remote
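As a rough illustration of that polling idea (the controller, model, and result column are all hypothetical):

class SubmissionsController < ApplicationController
  # The browser calls this every few seconds until the delayed job
  # has stored its result on the record.
  def status
    submission = Submission.find(params[:id]) # hypothetical model
    if submission.result.present?
      render :json => { done: true, result: submission.result }
    else
      render :json => { done: false } # client polls again shortly
    end
  end
end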
I think what you are trying to do here is a little contradictory: you use delayed_job when you don't want to interrupt the control flow (so your users don't have to wait until the request completes).
But if you want your controller to wait until you get the results, then you don't want to use background processes like delayed_job.
You might want to think of a different way of notifying the user after you have done your checking, while keeping the background process as it is.

How to do parallel HTTP requests on Heroku?

I'm building a Ruby on Rails app that accesses about 6-7 APIs, grabs information from them based on a user's input, compares the results, and displays them to the user (the information is not saved in the database). I will be using Heroku to deploy the app. I would like the HTTP requests to the APIs to be done in parallel so the response time is better than doing them sequentially. What do you think is the best way to achieve this on Heroku?
Thank you very much for any suggestions!
If you want to actually do the requests on the server side (tfe's JavaScript suggestion is a good idea too), your best bet would be EventMachine. EventMachine gives you a simple way to do non-blocking IO.
Also check out EM-Synchrony for a set of Ruby 1.9 fiber-aware clients (including HTTP).
All you need to do for a non-blocking HTTP request is something like:
require "em-synchrony"
require "em-synchrony/em-http"
EM.synchrony do
concurrency = 2
urls = ['http://url.1.com', 'http://url2.com']
# iterator will execute async blocks until completion, .each, .inject also work!
results = EM::Synchrony::Iterator.new(urls, concurrency).map do |url, iter|
# fire async requests, on completion advance the iterator
http = EventMachine::HttpRequest.new(url).aget
http.callback { iter.return(http) }
http.errback { iter.return(http) }
end
p results # all completed requests
EventMachine.stop
end
Good luck!
You could always make the requests client-side using JavaScript. Then not only can you run them in parallel, but you won't even need the round trip to your own server.
I haven't tried parallelizing requests like that, but I've tried the parallel gem on Heroku and it works like a charm! This is my simple blog post about it:
http://olemortenamundsen.wordpress.com/2010/10/17/spawning-multiple-threads-at-heroku-using-parallel/
Have a look at creating each request as a background job:
http://blog.heroku.com/archives/2009/7/15/background_jobs_with_dj_on_heroku/
The more 'Workers' you buy from Heroku, the more background jobs can be processed concurrently, leaving your 'Dynos' to serve your users.
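As a sketch of that approach, assuming delayed_job's classic enqueue-an-object API (the job class and URLs are illustrative):

require 'net/http'

# delayed_job will run any enqueued object that responds to perform.
class ApiFetchJob < Struct.new(:url)
  def perform
    response = Net::HTTP.get_response(URI.parse(url))
    # persist response.body somewhere the web process can pick it up
  end
end

['http://api1.example/data', 'http://api2.example/data'].each do |url|
  Delayed::Job.enqueue(ApiFetchJob.new(url))
end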

My web site needs to read a slow web site; how do I improve the performance?

I'm writing a web site with Rails which lets visitors input some domains and checks whether they have been registered.
When a user clicks the "Submit" button, my web site will post some data to another web site and read the result back. But that web site is slow for me; each request needs 2 or 3 seconds. So I'm worried about the performance.
For example, if my web server allows 100 processes at most, then only 30 or 40 users can visit my web site at the same time. This is not acceptable. Is there any way to improve the performance?
PS:
At first I wanted to use AJAX to read that web site, but because of the "cross-domain" problem it doesn't work, so I have to use this "AJAX proxy" solution.
It's a bit more work, but you can use something like DelayedJob to process the requests to the other site in the background.
DelayedJob creates separate worker processes that look at a jobs table for stuff to do. When the user clicks submit, such a job is created, and starts running in one of those workers. This off-loads your Rails workers, and keeps your website snappy.
However, you will have to create some sort of polling mechanism in the browser while the job is running. Perhaps using a refresh or some simple AJAX. That way, the visitor could see a message such as “One moment, please...”, and after a while, the actual results.
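A minimal sketch of that DelayedJob approach, using its handle_asynchronously hook; the model, column, and registrar URL are made up:

require 'net/http'

class DomainCheck < ActiveRecord::Base
  def check_registration
    # the slow call to the external registrar happens in a worker
    uri = URI.parse("https://registrar.example/check?domain=#{name}")
    response = Net::HTTP.get_response(uri)
    update_attribute(:registered, response.is_a?(Net::HTTPSuccess))
  end
  handle_asynchronously :check_registration # delayed_job takes it from here
end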
Rather than posting some data to the websites, you could use an HTTP HEAD request, which (I believe) should return only the header information for that URL.
I found this code by googling around a bit:
require "net/http"
req = Net::HTTP.new('google.com', 80)
p req.request_head('/')
This will probably be faster than a POST request, and you won't have to wait to receive the entire contents of that resource. You should be able to determine whether the site is in use based on the response code.
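Building on that snippet, a small sketch of acting on the response code (the host and the success/failure split are illustrative):

require 'net/http'

res = Net::HTTP.new('google.com', 80).request_head('/')
case res
when Net::HTTPSuccess, Net::HTTPRedirection
  puts "domain responds (#{res.code})"
else
  puts "no usable response (#{res.code})"
end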
Try using Typhoeus rather than AJAX to get the body. You can POST the domain names to check to that site using Typhoeus and parse the response. It's extremely fast compared to the other solutions. The snippet below, adapted from the wiki page of the GitHub repo http://github.com/pauldix/typhoeus, shows that you can run requests in parallel (which is probably what you want, considering that an AJAX request takes 1 to 2 seconds!):
require "json"
require "typhoeus"

hydra = Typhoeus::Hydra.new

first_request = Typhoeus::Request.new("http://localhost:3000/posts/1.json")
first_request.on_complete do |response|
  post = JSON.parse(response.body)
  third_request = Typhoeus::Request.new(post['links'].first) # get the first url in the post
  third_request.on_complete do |response|
    # do something with that
  end
  hydra.queue third_request
  post # the block's value becomes handled_response (a bare `return` here would raise)
end

second_request = Typhoeus::Request.new("http://localhost:3000/users/1.json")
second_request.on_complete do |response|
  JSON.parse(response.body)
end

hydra.queue first_request
hydra.queue second_request

hydra.run # this is a blocking call that returns once all requests are complete

first_request.handled_response  # the value returned from the on_complete block
second_request.handled_response # the value returned from the on_complete block (parsed JSON)
Also Typhoeus + delayed_job = AWESOME!
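If you do combine them, one plausible shape is a delayed_job that runs the parallel fetches in a worker (all names illustrative; assumes Typhoeus requests expose their response after hydra.run):

require 'typhoeus'

class ParallelFetchJob < Struct.new(:urls)
  def perform
    hydra = Typhoeus::Hydra.new
    requests = urls.map do |url|
      request = Typhoeus::Request.new(url)
      hydra.queue(request)
      request
    end
    hydra.run # blocks the worker process, not the web process
    bodies = requests.map { |r| r.response.body }
    # store bodies wherever the app expects to find them
  end
end

Delayed::Job.enqueue(ParallelFetchJob.new(['http://url1.example', 'http://url2.example']))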
