Rails: HTTP Get request from Callback Method (in model) - ruby-on-rails

I want to create a callback in my User model. After a user is created, the callback is initiated to run get_followers, which gets that user's Twitter followers (via the FullContact API).
This is all a bit new to me...
Is putting the request in a callback the correct approach, or should it be in the controller somewhere? And how do I make the request to the endpoint in Rails, and where should I process the data that is returned?
EDIT... Is something like this okay?
User.rb
require 'open-uri'
require 'json'

class Customer < ActiveRecord::Base
  after_create :get_twitter

  private

  def get_twitter
    source = "url-to-parse.com"
    # open-uri fetches the URL and JSON.parse turns the body into a hash
    @data = JSON.parse(open(source).read)
  end
end

A few things to consider:
The callback will run for every Customer that is created, not just those created in the controller. That may or may not be desirable, depending on your specific needs. For example, you will need to handle this in your tests by mocking out the external API call.
Errors could occur in the callback if the service is down, or if a bad response is returned. You have to decide how to handle those errors.
You should consider having the code in the callback run in a background process rather than in the web request, if it is not required to run immediately. That way errors in the callback will not produce a 500 page, and will improve performance since the response can be returned without waiting for the callback to complete. In such a case the rest of the application must be able to handle a user for whom the callback has not yet completed.
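If you go the background route, a minimal sketch with ActiveJob might look like the following. This assumes Rails 4.2+ (or the activejob gem); FetchTwitterFollowersJob is an illustrative name, and on older apps a Resque or Sidekiq worker plays the same role:
require 'open-uri'
require 'json'

class Customer < ActiveRecord::Base
  after_create :queue_twitter_fetch

  private

  def queue_twitter_fetch
    # enqueue the fetch instead of calling the API inline
    FetchTwitterFollowersJob.perform_later(id)
  end
end

# app/jobs/fetch_twitter_followers_job.rb
class FetchTwitterFollowersJob < ActiveJob::Base
  queue_as :default

  def perform(customer_id)
    customer = Customer.find(customer_id)
    source   = "url-to-parse.com" # whatever endpoint you actually call
    data     = JSON.parse(open(source).read)
    # process data and update the customer here; an error raised in the
    # job does not turn the original web request into a 500
  end
end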

Related

How can I reuse the same watir object in the next controller action

I am using Watir with a headless browser. I need to perform three steps (add location, add vehicle, and fetch products) against another website to get the information I want from it.
Currently I submit these three details from my server and perform all three steps in one HTTP request with the help of Watir and Headless.
I want to break that one HTTP request into three HTTP requests on my server. The requests would be:
1) add_location: fire an HTTP request which opens the headless browser and selects the location.
2) add_vehicle: fire an HTTP request which reuses the headless browser in which the location was added, and selects the vehicle.
3) fetch_product: fire an HTTP request which reuses the headless browser in which the location and vehicle were added, and fetches the product list.
I cannot find a way to reuse the Watir/Headless session that is already open in the next HTTP request on the Rails side.
Code Sample:
class TestsController < ApplicationController
  def add_location
    @headless = Headless.new
    @headless.start

    @watir = Watir::Browser.new
    @watir.goto('www.google.com')
    @watir.text_field(id: 'findstore-input')
          .wait_until(&:present?).set(params[:zip_code])
    @watir.a(id: 'findstore-button').click
    @watir.div(class: 'notifier').wait_while(&:present?)
  end

  def add_vehicle
    # need to reuse the @watir object from add_location in this action
  end
end
The design change from one request to three has a big impact on your API, as even this simple part is now stateful, i.e. you need to keep state between each of the three requests.
Once you understand that, you have different possibilities.
Build up your information request after request, and only when it is complete, use Watir to get the information you need.
This is basically just changing the API: you store the accumulated data in a session, cookie, database or whatever, and only run the browser at the last step.
It does not require big changes, but it also does not bring any real advantage, since the last request still does all of the slow work (a minimal sketch of this option follows after the next two points).
Forget this next point as soon as you have read it, but: you could pass around a global reference to your browser object keyed by the session. It has a HUGE memory impact and you can run into race conditions.
NEVER do this, please.
In case you really want to split the Watir work into three different steps (e.g. because it is too slow), you can use a background job to which you transmit the user's data as it arrives (using a dedicated database table, websockets, or whatever), then wait for the job to finish (i.e. poll for its result until it is available).
This solution requires a lot more work, but it keeps the HTTP requests from your client lightweight and allows you to do any kind of complex task in the background, which would otherwise probably time out.
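A minimal sketch of the first option (accumulate the parameters, drive the browser only on the last request), assuming the data fits in the Rails session; the controller, action, and parameter names are illustrative:
class TestsController < ApplicationController
  # each step just records its input; nothing slow happens yet
  def add_location
    session[:zip_code] = params[:zip_code]
    head :ok
  end

  def add_vehicle
    session[:vehicle] = params[:vehicle]
    head :ok
  end

  # only the last step opens the headless browser and does the real work
  def fetch_product
    headless = Headless.new
    headless.start
    browser = Watir::Browser.new
    begin
      browser.goto('www.google.com')
      browser.text_field(id: 'findstore-input')
             .wait_until(&:present?).set(session[:zip_code])
      # ... select session[:vehicle], then collect the product list ...
      render json: { products: [] } # placeholder for the scraped list
    ensure
      browser.close
      headless.destroy
    end
  end
end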
You can make use of a hooks/initialization file to start the browser in headless mode and assign it to an instance variable, then call that variable from separate methods to pass URLs to the browser.
For example, in your hooks you can add it as below:
@browser = Watir::Browser.new :chrome, options: {args: ['--headless']}
You can then reuse @browser.goto('www.google.com') in one method and use the same instance in some other call as well:
def example1
  @browser.goto('www.google.com')
end

def example2
  @browser.goto('www.facebook.com')
end
...and so on.
Hope this helps.

Webhook firing multiple times, causing heavy API calls

My app has some heavy callback validations when I create a new customer. Basically I check multiple APIs to see if there's a match before creating a new customer record. I don't want this to happen after create, because I'd rather not save the record in the first place if there aren't any matches.
I have a webhook setup that creates a new customer. The problem is that, because my customer validations take so long, the webhook continues to fire because it doesn't get the immediate response.
Here's my Customer model:
class Customer < ActiveRecord::Base
  validates :shopify_id, uniqueness: true, if: 'shopify_id.present?'
  before_validation :get_external_data, :on => :create

  def get_external_data
    ## heavy API calls that I don't want to perform multiple times
  end
end
My hook:
customer = shop.customers.new(:first_name => first_name, :last_name => last_name, :email => email, :shopify_url => shopify_url, :shopify_id => id)
customer.save
head :ok
customer.save is taking about 20 seconds.
To clarify, here's the issue:
Webhook is fired
Heavy API Calls are made
Second Webhook is fired (API calls still being made from first webhook). Runs Heavy API Calls
Third Webhook is fired
This happens until finally the first record is saved so that I can now check to make sure shopify_id is unique
Is there a way around this? How can I defensively program to make sure no duplicate records start to get processed?
What an interesting question, thank you.
Asynchronicity
The main issue here is the dependency on the external API calls.
The latency required to check them will not only affect your save times, it can also prevent your server from handling other requests (unless you're using some sort of multi-process setup).
It's generally not a good idea to have your flow depend on more than one external resource; in this case, though, it is legitimate.
The only real suggestion I have is to make it an asynchronous flow...
--
Asynchronous vs synchronous execution, what does it really mean?
When you execute something synchronously, you wait for it to finish
before moving on to another task. When you execute something
asynchronously, you can move on to another task before it finishes.
In JS, the most famous example of making something asynchronous is to use an Ajax callback, i.e. sending a request through Ajax, using some sort of "waiting" process to keep the user updated, then handling the response when it arrives.
I would propose implementing this on the front-end. The back-end would have to ensure the server's hands are not tied whilst processing the external API calls. That would either have to be done by some other part of the system (so the web server process is not tied up), or by separating the functionality out into another process.
Ajax
I would most definitely use Ajax on the front-end, or another asynchronous technology (web sockets?).
Either way, when a user creates an account, I would show a "pending" screen. Using Ajax is the simplest example of this; however, it is massively limited in scope (i.e. if the user refreshes the page, he loses his connection).
Maybe someone could suggest a way to regain state in an asynchronous system?
You could handle it with Ajax callbacks:
#app/views/users/new.html.erb
<%= form_for @user, remote: true do |f| %>
  <%= f.text_field ... %>
  <%= f.submit %>
<% end %>

#app/assets/javascripts/application.js
$(document).on("ajax:beforeSend", "#new_user", function(event, xhr, settings){
  // start "pending" screen
}).on("ajax:send", "#new_user", function(event, xhr){
  // keep user updated somehow
}).on("ajax:success", "#new_user", function(event, data, status, xhr){
  // remove "pending" screen, show response
});
This will give you a front-end flow which does not jam up the server, i.e. you can still do "stuff" on the page whilst the request is processing.
--
Queueing
The second part of this will be to do with how your server processes the request.
Specifically, how it deals with the API requests, as they are what are going to be causing the delay.
The only way I can think of at present will be to queue up requests, and have a separate process go through them. The main benefit here being that it will make your Rails app's request asynchronous, instead of having to wait around for the responses to come.
You could use a gem such as Resque to queue the requests (it uses Redis), allowing you to send the request to the Resque queue & capture its response. This response will then form your response to your ajax request.
You'd probably have to set up a temporary user before doing this:
#app/models/user.rb
class User < ActiveRecord::Base
  after_create :check_shopify_id

  private

  def check_shopify_id
    # send to resque/redis
  end
end
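For the worker side, a rough sketch assuming Resque; ShopifyCheckJob and the external_check_done column are hypothetical names:
# enqueue from the callback with Resque.enqueue(ShopifyCheckJob, id)
class ShopifyCheckJob
  @queue = :shopify_checks

  def self.perform(user_id)
    user = User.find(user_id)
    # run the heavy external API calls here, outside the web request,
    # then mark the record as checked (or merge/remove duplicates)
    user.update_column(:external_check_done, true)
  end
end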
Of course, this is a very high level suggestion. Hopefully it gives you some better perspective.
This is a tricky issue since your customer creation is dependent on an expensive validation. I see a few ways you can mitigate this, but it will be a "lesser of evils" type decision:
Can you pre-call/pre-load the customer list? If so you can cache the list of customers and validate against that instead of querying on each create. This would require a cron job to keep a list of customers updated.
Create the customer and then perform the customer check as a "validation" step. That is, set a validated flag on the customer and then run the check once in a background task. If the customer already exists, merge with the existing customer; if not, mark the customer as valid (a rough sketch of this approach follows below).
Either choice will require workarounds to avoid the expensive calls.
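A rough sketch of the second option. This assumes a boolean validated column (hypothetical) on customers, a Sidekiq-style worker named CustomerCheckJob, Rails 4's find_or_initialize_by, and that the heavy get_external_data work has been moved out of before_validation and into the worker. The point is that the webhook handler becomes cheap and idempotent, so repeated deliveries for the same shopify_id cannot create duplicates:
customer = shop.customers.find_or_initialize_by(shopify_id: id)
if customer.new_record?
  customer.assign_attributes(first_name: first_name, last_name: last_name,
                             email: email, shopify_url: shopify_url,
                             validated: false)
  customer.save!                              # fast: no external API calls here
  CustomerCheckJob.perform_async(customer.id) # heavy checks run in the background
end
head :ok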

Simple way for async/event driven request Ruby on Rails site

I am trying to implement a simple solution to help with some behavior. Basically I want to create a listener on a particular url, like
localhost:3000/listen
I have a callback with a 3rd-party service that sends JSON as a POST request. In my Rails routes I have the route set up to accept a POST request to that path.
What I want is for some logic to run any time a new POST comes in, and for that logic to run asynchronously without any disruption to the normal web service. For example, the POST request will contain some data; if the data has a boolean set to true, we need to fire off a Rails mailer. I could normally do this with a simple Rails controller action, but that does not feel like the right approach.
Any thoughts on the best approach to handle this? Would this be best done with a gem like EventMachine? If anyone could give their feedback on a simple way to implement this, that would be great!
I would look at your background jobs. I am a Sidekiq fan, and popular alternatives are Resque and DelayedJob.
Your controller will receive the response, and schedule it to be performed in the background. That will send out the mail (or whatever you need it to do) asynchronously.
class CallbackController < ApplicationController
  def listen_third_party
    # keep only the parameters you care about (the names here are placeholders)
    data = params.slice(:you, :care, :about)

    if data[:boolean_field] == true
      # CallbackMailer is a Sidekiq worker here; its perform method builds
      # and delivers the actual mail (see the sketch below)
      CallbackMailer.perform_async(data)
    end

    head :ok
  end
end
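For completeness, a minimal sketch of the background piece, assuming Sidekiq; the CallbackMailer worker shown here and the NotificationMailer it calls are illustrative names:
# app/workers/callback_mailer.rb
class CallbackMailer
  include Sidekiq::Worker

  def perform(data)
    # data arrives as a plain hash after Sidekiq's JSON round trip;
    # this runs outside the web request, so the webhook gets its 200 immediately
    NotificationMailer.third_party_event(data).deliver
  end
end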

Rails HTTParty Getting Timeout::Error

I have an API-centric application (/api/v1/users); it simply returns all users RESTfully in JSON format.
My problem is that when I call that route from the controller below, it raises Timeout::Error.
What is the problem?
class BaseController < ApplicationController
  def index
    return HTTParty.get('http://localhost:3000/api/v1/users').body
  end
end
Update
users_controller.rb (/api/v1/users)
application_controller.rb
https://gist.github.com/4359591
Logs
http://pastie.org/5565618
If I understand correctly, you have an API endpoint at /api/v1/users, and your BaseController#index is calling that endpoint?
If that is correct, and this all runs inside the same Rails process while you are testing in development mode (as I can tell from your URL), then you only have a single server process, which can handle only one request at a time. So when a request comes in to BaseController#index, it makes another HTTP request to your own development server, which is still busy serving the first request, and it just waits until it times out.
If you want to test your API, I would look at a client tool such as Postman.
HTH.
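As a side note, if the goal is just to reuse the API's data inside another controller, the usual fix is to skip the HTTP round trip to your own server entirely. A sketch, assuming the API action simply serializes User records:
class BaseController < ApplicationController
  def index
    # query the same data source the API uses instead of calling ourselves over HTTP
    render json: User.all
  end
end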

Providing updates during a long Rails controller action

I have an action that takes a long time. I want to be able to provide updates during the process so the user is not confused as to whether he lost the connection or something. Can I do something like this:
class HeavyLiftingController < ApplicationController
  def data_mine
    render_update :js => "alert('Just starting!')"
    # do some complicated find etc.
    render_update :js => "alert('Found the records!')"
    # do some processing ...
    render_update :js => "alert('Done processing')"
    # send @results to view
  end
end
No, you can only issue ONE render within a controller action. The render does NOTHING until the controller terminates. When data_mine terminates, there will be THREE renders, which will result in an error.
UPDATE:
You'll likely have to set up a JavaScript (jquery) timer in the browser, then periodically send an AJAX request to the server to determine the current status of your long running task.
For example the long running task could write a log as it progresses, and the periodic AJAX request would read that log and create some kind of status display, and return that to the browser for display.
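A rough sketch of that polling approach; it assumes some background job library is available, and DataMineJob, the cache key, and the status action are illustrative:
class HeavyLiftingController < ApplicationController
  # kicks off the work and returns immediately
  def data_mine
    job_id = SecureRandom.hex(8)
    Rails.cache.write("data_mine/#{job_id}", "starting")
    DataMineJob.perform_later(job_id) # hypothetical job that updates the cache as it progresses
    render json: { job_id: job_id }
  end

  # the browser polls this every few seconds via AJAX
  def status
    render json: { status: Rails.cache.read("data_mine/#{params[:job_id]}") }
  end
end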
It is impossible to handle the request that way: for each request, you send exactly one response.
If your action takes a long time, then maybe it should be performed asynchronously. You could send the user e-mails during the process to notify him of the progress.
I suggest you take a look at the DelayedJob gem:
http://rubygems.org/gems/delayed_job
It will handle the most difficult parts of dealing with async work for you (serializing/deserializing your objects, storage, and so on).
Hope it helps you!
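A rough sketch of what that can look like with DelayedJob's delay helper; DataMiner and its mine method are illustrative names:
class HeavyLiftingController < ApplicationController
  def data_mine
    # enqueue the slow work instead of doing it in the request
    DataMiner.new(params[:user_id]).delay.mine
    render :js => "alert('Your data mining job has been queued!')"
  end
end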
