In a Rails app I have an asynchronous method which only works asynchronously when the requests are different.
In my controller I have this method:
require "em-synchrony/em-http"
def test
  EventMachine.synchrony do
    page = EventMachine::HttpRequest.new("http://127.0.0.1:8081/").get
    render :json => { result: page.response }
    request.env['async.callback'].call(response)
  end
  throw :async
end
In my page I call this method like this:
// Not asynchronous. :(
// The second request takes twice as long as the first one
$.get("/test");
$.get("/test");
However, to make the calls asynchronous, I need the requests to be different, like so:
//Asynchronous. :D
$.get("/test?a");
$.get("/test?b");
Why?
I would like my code to always be asynchronous, even for identical requests. FYI, I'm using the Thin server.
I found your question really interesting, because I'm going to implement my first Reactor-pattern-based web server and of course I went through em-synchrony.
Have you also tried using aget instead of get?
page = EventMachine::HttpRequest.new("http://127.0.0.1:8081/").aget
Let me know if it makes any difference :)!
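If it helps, here is a rough sketch of how aget is typically used with em-http-request's callback API; rendering inside the callback is my assumption and is untested with Thin's async path:
http = EventMachine::HttpRequest.new("http://127.0.0.1:8081/").aget
http.callback do
  # aget returns immediately; the response is only available inside the callback
  render :json => { result: http.response }
end
http.errback do
  render :json => { error: "request failed" }, :status => 502
end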
I'm attempting to stub a request to a Controller using WebMock. However, even though I'm creating the stub, the request isn't being intercepted the way I'd expect or want it to be.
The Controller does nothing but render JSON based on the query parameter:
def index
  render json: MyThing.search(params[:query]).as_json(only: [:id], methods: [:name_with_path])
end
And the stubbing goes as follows:
mything_val = { ...json values... }
stub_request(:any, mything_path).with(query: { "query" => "a+thing" }).to_return(body: mything_val, status: 200)
page.find('.MyThingInput > input').set('a thing')
# Note: I've tried this with and without the `query:` parameter, as well as
# with and without specifying header info.
This is triggering a React component. What it does is, when a word or words are entered into the input, it sends an AJAX request to mything_path with the inputted value, which returns as JSON several suggestions as to what the user might mean. These are in a li element within .MyThingInput-wrapper.
In the spec file, I include:
require 'support/feature_helper'
require 'support/feature_matchers'
require 'webmock/rspec'
WebMock.disable_net_connect!
What's actually happening when I input the text into the React component, however, is that regardless of the WebMock stub, it hits the Controller, makes the DB request, and fails due to some restrictions of the testing environment. My understanding of how this should work is that when the request is made to mything_url, it should be intercepted by WebMock, which would return the values I pre-defined, and never hit the Controller at all.
My guess is that somehow I'm mocking the wrong URI, but honestly, at this point I'm really uncertain. Any and all input is appreciated, and I'm happy to clarify any points I've made here. Thanks massively!
What ended up solving my problem was stubbing out the model. I'd tried stubbing the Controller but ran into issues; however, this code did the trick:
before do
  mything_value = [{ "id" => "fb6135d12-e5d7-4e3r-b1h6-9bhirz48616", "name_with_path" => "New York|USA" }]
  allow(MyThing).to receive(:search).and_return(mything_value.to_json)
end
This way, it still hits the controller but stubs out the DB query, which was the real problem because it made use of Elasticsearch (which isn't running in test mode).
I'm not super happy about hard-coding the JSON like that, but I've tried a few other methods without success. Honestly, at this point I'm just going with what works.
Interestingly enough, I'd tried this method before Infused's suggestion, but couldn't quite get the syntax right; same went with stubbing out the Controller action. Went to bed, woke up, tried it again with what I thought was the same syntax, and it worked. I'm just going to slowly back away and thank the code gods.
If Elasticsearch is the problem, then maybe try:
installing WebMock
# in your Gemfile
group :test do
  gem 'webmock'
end
and stubbing out the requests to Elasticsearch, returning the JSON yourself.
Something like this in spec_helper:
config.before(:each) do
  WebMock.enable!
  WebMock.stub_request(:get, /#{ELASTICSEARCH_URL}/).to_return(body: File.read('spec/fixtures/elasticsearch/search-res.json'))
  # and presumably, if you are using elasticsearch-rails, you'd want to stub out the updating as well:
  WebMock.stub_request(:post, /#{ELASTICSEARCH_URL}/).to_return(status: "200")
  WebMock.stub_request(:put, /#{ELASTICSEARCH_URL}/).to_return(status: "200")
  WebMock.stub_request(:delete, /#{ELASTICSEARCH_URL}/).to_return(status: "200")
end
Of course, this stubs out all calls to Elasticsearch and returns the same JSON for every query. Dig into the WebMock documentation if you need a different response for each query.
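For instance, something along these lines should work; the body pattern and fixture file here are purely illustrative:
# hypothetical: match on the search body and return a matching fixture
WebMock.stub_request(:get, /#{ELASTICSEARCH_URL}/).
  with(body: /new york/i).
  to_return(body: File.read('spec/fixtures/elasticsearch/search-res-new-york.json'))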
(This question is a follow-up to How do I handle long requests for a Rails App so other users are not delayed too much? )
A user submits an answer to my Rails app and it gets checked in the back-end for up to 10 seconds. This would cause delays for all other users, so I'm trying out the delayed_job gem to move the checking to a Worker process. The Worker code returns the results back to the controller. However, the controller doesn't realize it's supposed to wait patiently for the results, so it causes an error.
How do I get the controller to wait for the results and let the rest of the app handle simple requests meanwhile?
In Javascript, one would use callbacks to call the function instead of returning a value. Should I do the same thing in Ruby and call back the controller from the Worker?
Update:
Alternatively, how can I call a controller method from the Worker? Then I could just call the relevant actions when it's done.
This is the relevant code:
Controller:
def submit
  question = Question.find params[:question]
  user_answer = params[:user_answer]
  @result, @other_stuff = SubmitWorker.new.check(question, user_answer)
  render_ajax
end
submit_worker.rb:
class SubmitWorker
  def check
    # lots of code...
  end
  handle_asynchronously :check
end
Using DJ to offload the work is absolutely fine and normal, but making the controller wait for the response rather defeats the point.
You can add some form of callback to the end of your check method so that when the job finishes your user can be notified.
You can find some discussion on performing notifications in this question: push-style notifications similar to Facebook with Rails and jQuery
Alternatively you can have your browser periodically call a controller action that checks for the results of the job - the results would ideally be an ActiveRecord object. Again you can find discussion on periodic javascript in this question: Rails 3 equivalent for periodically_call_remote
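As a rough illustration of the polling approach (the AnswerCheck model and its columns are hypothetical, just to show the shape of the endpoint):
def check_status
  # hypothetical AnswerCheck record written by the worker when it finishes
  check = AnswerCheck.find(params[:id])
  if check.completed?
    render json: { done: true, result: check.result }
  else
    render json: { done: false }
  end
end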
I think what you are trying to do here is a little contradictory, because you use delayed_job when you don't want to interrupt the control flow (so your users don't have to wait until the request completes).
But if you want your controller to wait until you get the results, then you don't want to use background processes like delayed_job.
You might want to think of a different way of notifying the user after you have done your checking, while keeping the background process as it is.
I'm working on a Ruby on Rails app that relies on making some simple URL calls for user metrics. For part of the tracking I need to make a server-side call prior to the rendering of my index page. This is achieved by calling a specially formatted URL. Currently I'm achieving this in the following way:
url = URI.parse('https://example.tracking.url')
result = Net::HTTP.start(url.host, use_ssl: true, verify_mode: OpenSSL::SSL::VERIFY_NONE) do |http|
  http.get url.request_uri, 'User-Agent' => 'MyLib v1.2'
end
The loading of my page seems to be, at times, somewhat delayed. Short of it being a database latency issue, I assume the URL sometimes just takes extra time to respond and that this is a synchronous request. What is the best way to make asynchronous requests in Rails? Threads, maybe? Thanks.
Have you looked into using a delayed job or Thread.new?
I would move it to a helper method and kick it off with Thread.new. Personally, I like using delayed_job for handling things that may present a delay with the user interface.
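For example, a minimal sketch of the Thread.new idea; the helper name is an assumption and error handling is omitted:
# hypothetical helper called from the controller before rendering
def fire_tracking_call
  Thread.new do
    url = URI.parse('https://example.tracking.url')
    Net::HTTP.start(url.host, use_ssl: true, verify_mode: OpenSSL::SSL::VERIFY_NONE) do |http|
      # runs in a background thread, so rendering the page is not blocked
      http.get url.request_uri, 'User-Agent' => 'MyLib v1.2'
    end
  end
end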
I have an action that takes a long time. I want to be able to provide updates during the process so the user is not confused as to whether he lost the connection or something. Can I do something like this:
class HeavyLiftingController < ApplicationController
  def data_mine
    render_update :js => "alert('Just starting!')"
    # do some complicated find etc.
    render_update :js => "alert('Found the records!')"
    # do some processing ...
    render_update :js => "alert('Done processing')"
    # send @results to view
  end
end
No, you can only issue ONE render within a controller action. The render does NOTHING until the controller action terminates. When data_mine terminates, there will have been THREE renders, which will result in a DoubleRenderError.
UPDATE:
You'll likely have to set up a JavaScript (jquery) timer in the browser, then periodically send an AJAX request to the server to determine the current status of your long running task.
For example the long running task could write a log as it progresses, and the periodic AJAX request would read that log and create some kind of status display, and return that to the browser for display.
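A rough sketch of that idea; the MiningStatus model and its message column are hypothetical:
# long-running action: records its progress as it goes
def data_mine
  status = MiningStatus.create!(message: "Just starting")
  # do some complicated find etc.
  status.update_attribute(:message, "Found the records")
  # do some processing ...
  status.update_attribute(:message, "Done processing")
end

# lightweight action polled by the browser every few seconds
def data_mine_status
  render json: { message: MiningStatus.last.message }
end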
It is impossible to handle the request that way: for each request, you get exactly one response.
If your action takes a long time, then maybe it should be performed asynchronously. You could send the user e-mails during the process to notify them of the progress.
I suggest that you take a look at the DelayedJob gem:
http://rubygems.org/gems/delayed_job
It will handle the most difficult parts of dealing with async stuff for you (serializing/deserializing your objects, storage, and so on).
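As a small illustration of offloading with delayed_job's delay proxy; the DataMiner class, its run method, and the parameter are hypothetical:
def data_mine
  # enqueue the slow work instead of doing it in the request cycle
  DataMiner.new.delay.run(params[:dataset_id])
  render json: { status: "queued" }
end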
Hope it helps you!
I have been looking into the possibility of backgrounding some jobs with EventMachine. In Sinatra this appears to work great but Rails 3 appears to execute all ticks before rendering a view.
When I run the following code under the thin webserver it behaves as expected. The first request returns immediately and the second request is waiting for the 3 second sleep call to finish. This is the expected behavior.
class EMSinatra < Sinatra::Base
  get "/" do
    EM.next_tick { sleep 3 }
    "Hello"
  end
end
Whereas in Rails 3 I am trying to do the same thing (also running under Thin):
class EmController < ApplicationController
  def index
    EM.next_tick {
      sleep(3)
    }
  end
end
In Rails the sleep call happens before rendering the view to the browser. The result is that I am waiting for 3 seconds for the initial page to render.
Does anybody know why this is happening? I am not looking for comments on whether this is a good practice or not; I am simply experimenting. Throwing small tasks into the reactor loop seems like an interesting thing to look into. Why should the client have to wait if I am going to make some non-blocking HTTP requests?
I'm not sure this is the answer you are looking for, but I did some research on this before.
Let me give you a little bit of background information:
What we wanted to achieve was that Rails already flushed parts of the template tree (e.g. the first part of the layout) even when the controller action was taking a long while to load.
The effect of this is that the user already sees something in their browser while the web server is still doing work.
Of course the main view has to wait with rendering, because it probably needs data from the controller action.
This technique is also known as BigPipe, and Facebook wrote a nice blog post about it:
http://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919
Anyway, after doing some research on how to achieve this for Rails 3, I found this blog post by Yehuda Katz:
http://yehudakatz.com/2010/09/07/automatic-flushing-the-rails-3-1-plan/
So for now I think you really have to stick with waiting for the controller.
Using EM.defer instead of EM.next_tick causes the sleep to happen after the response is sent back.
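For reference, a minimal sketch of that change, with the same caveats as the experiment above:
class EmController < ApplicationController
  def index
    # EM.defer runs the block on EventMachine's thread pool,
    # so the reactor (and this response) is not blocked by the sleep
    EM.defer { sleep(3) }
  end
end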