On a high-traffic Twitter app site, where the app sends tweets via the user's OAuth credentials, should the tweets be sent in the background, via a background worker (Resque, Delayed Job, etc.)? Or should the web process handle it?
It really depends on your use case. Twitter itself, I think, sends an AJAX request to the API. You could do the same if it makes sense in your interface, but it does mean that you're using a web process to do this. One of the benefits is that you can verify that the request was successful before returning a response to the user. This is much easier than a scenario where you queue something in the background, it fails, and you then want to alert the user (e.g. through a "real-time" AJAX/socket-based message system or a flash notice on a later request).
If you aren't worried about showing the Tweets (e.g. your application sends them as part of a larger action), then doing it in the background is definitely the way to go.
Resque is great and its jobs are really lightweight, so you could put together a quick integration to process these in the background.
# app/jobs/send_tweet.rb
class SendTweet
  @queue = :tweets

  def self.perform(user_id, content)
    user = User.find(user_id)
    # send Tweet
  end
end
# app/controllers/tweets_controller.rb
def create
  # assuming some things here, like validation and a `current_user` method
  Resque.enqueue(SendTweet, current_user.id, params[:tweet][:message])
  redirect_to :index
end
I have the following system: my Rails server issues commands to a Flask server, and the latter responds immediately with status 200. After that, the Flask server runs a background task with some time-consuming function. After a little while it comes up with some results, and it is designed to send the data back to the Rails server via HTTP (see diagram).
Each Flask data portion can affect several Rails models (User, Post, etc.). Here I am faced with two questions:
How should I structure my controllers/actions on the Rails side in this case? Currently I am thinking of one controller, with each of its actions corresponding to a Python 'delayed' data portion.
Is this a normal way for microservices to communicate? Or can I organize it in a different, simpler way?
This sounds like pretty much your standard webhook process: Rails pokes Flask with a GET or POST request, and Flask pokes back after a while.
For example, let's say we have reports, and after creating a report we need Flask to verify it:
class ReportsController < ApplicationController
  # POST /reports
  def create
    @report = Report.new(report_params)
    if @report.save
      FlaskClient.new.verify(@report) # this could be delegated to a background job
      redirect_to @report
    else
      render :new
    end
  end

  # PATCH /reports/:id/verify
  def verify
    # process request from flask
  end
end
class FlaskClient
  include HTTParty
  base_uri 'example.com/api'
  format :json

  def verify(report)
    self.class.post('/somepath', body: { id: report.id, callback_url: "/reports/#{report.id}/verify" })
  end
end
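For completeness, the verify stub might end up looking something like this; it assumes Flask posts back a verified flag and that Report has a verified column (both are assumptions):

# PATCH /reports/:id/verify
def verify
  report = Report.find(params[:id])
  report.update(verified: params[:verified]) # `verified` param and column are assumed
  head :ok
end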
Of course, the Rails app does not actually know when Flask will respond, or that Flask and its background service are separate things. It just sends and responds to HTTP requests. And you definitely don't want Rails to wait around, so save what you have and let the hook update the data later.
If you have to update the UI on the Rails side without the user refreshing manually, you can use polling, or WebSockets in the form of ActionCable.
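As a minimal sketch of the ActionCable route (the channel and stream names are invented for illustration):

# app/channels/report_channel.rb
class ReportChannel < ApplicationCable::Channel
  def subscribed
    stream_from "report_#{params[:id]}"
  end
end

# then, wherever the webhook updates the report (e.g. in ReportsController#verify):
ActionCable.server.broadcast("report_#{report.id}", status: "verified")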
Hi, I am processing some background jobs, and I need to redirect to a URL from a module or directly from the worker. As far as I know there is only one method, i.e. redirect_to, but it is not available in a module or a worker under the Rails MVC architecture, and I still need to do this.
Please see my code below:
@oauth = Koala::Facebook::OAuth.new(Figaro.env.fb_app_id, Figaro.env.fb_secret_token, Figaro.env.fb_callback_url)
oauth_code_url = @oauth.url_for_oauth_code
redirect_to oauth_code_url
I have also tried including ActionController::UrlFor to get the redirect_to method in the module and the worker, but it throws an error again, and I was not able to call controller methods from the module or worker. Could anyone please suggest the best approach to do this?
Redirects only make sense inside the request/response cycle. Workers usually run in the background and asynchronously, so the user who initiated the job isn't waiting for the worker's response.
If you do want to wait (run the worker synchronously), it's still not up to the worker to redirect; the worker should simply "signal" the controller to perform the redirect (this keeps a separation of concerns).
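One way to "signal", sketched under the assumption that the worker persists the URL somewhere the web process can read (the OauthRedirect model here is hypothetical):

# Inside the worker: compute the URL and store it instead of redirecting.
oauth = Koala::Facebook::OAuth.new(Figaro.env.fb_app_id, Figaro.env.fb_secret_token, Figaro.env.fb_callback_url)
OauthRedirect.create!(user_id: user_id, url: oauth.url_for_oauth_code) # hypothetical model

# In a controller action that the front end polls:
def oauth_status
  record = OauthRedirect.find_by(user_id: current_user.id)
  if record
    render json: { redirect_url: record.url } # front-end JS sets window.location
  else
    head :no_content
  end
end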
I have a Rails production application that is down several times per day. This application, in addition to serving its users, is the endpoint for a 3rd party website that sends it updates.
Occasionally, these updates come flooding in so fast that the requests back up and the application becomes unavailable for long periods of time. It is legitimate usage that ends up causing a denial of service.
The request from the 3rd party is pretty simple:
class NotificationsController < ApplicationController
  def notify
    begin
      notification_xml = request.body.read
      notification_hash = Hash.from_xml(notification_xml)['Envelope']['Body']['NotificationResponse']
      user = User.find(notification_hash['UserID'])
      user.delay.set_notification(notification_hash)
    rescue Exception => bang
      logger.error bang.backtrace
      unless user.blank?
        alert_file_name = "#{user.id}_#{notification_hash['Message']['MessageID']}_#{notification_hash['NotificationEventName']}_#{notification_hash['Timestamp']}.xml"
        File.open(alert_file_name, 'w') { |f| f.write(notification_xml) }
      end
    end
    render nothing: true, status: 200
  end
end
I have two app servers against a very large database. However, when this 3rd party website really hits us with notification requests, over 200 per minute and up to close to 1,000 per minute, both web servers get completely tied up.
You can also see above that I'm using the .delay call, since I'm using Sidekiq. I thought that would help, and it did for a while, but the application still can't handle that many requests.
Other than handling the requests in a separate application, which I'm not sure is really possible in my EngineYard installation, is there something I can do to speed up the handling of this request?
If it takes too much time to process all those requests, try a different approach.
Create a new model (I will call it Request) with only one field (I'll name it message): the XML sent to you by that 3rd party app.
Rewrite your notify action to be very simple and fast:
def notify
  Request.create(message: request.body.read)
  render nothing: true, status: 200
end
Create a new action, let's say process_requests like this:
def process_requests
  Request.order('id ASC').find_in_batches(batch_size: 100) do |group|
    group.each do |req|
      process_request(req)
      req.destroy
    end
  end
end
def process_request(req)
  notification_xml = req.message
  notification_hash = Hash.from_xml(notification_xml)['Envelope']['Body']['NotificationResponse']
  user = User.find(notification_hash['UserID'])
  user.set_notification(notification_hash)
rescue Exception => bang
  logger.error bang.backtrace
  unless user.blank?
    alert_file_name = "#{user.id}_#{notification_hash['Message']['MessageID']}_#{notification_hash['NotificationEventName']}_#{notification_hash['Timestamp']}.xml"
    File.open(alert_file_name, 'w') { |f| f.write(notification_xml) }
  end
end
Create a cron job and call process_requests at a defined interval (every few minutes); one way to wire this up is sketched below.
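For the scheduling piece, a hedged sketch using the whenever gem; the rake task name is invented, and extracting the processing into a service object (rather than a controller action) is just a suggestion:

# lib/tasks/requests.rake
namespace :requests do
  desc 'Process queued notification requests'
  task process: :environment do
    RequestProcessor.new.process_requests # hypothetical object holding the logic above
  end
end

# config/schedule.rb (whenever gem)
every 5.minutes do
  rake 'requests:process'
end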
I have never used Sidekiq, so I preferred to use find_in_batches (I used a batch size of 100 just for the sake of the example).
The notify action shouldn't run for more than a few milliseconds (inserts are pretty fast), so it should be able to handle the incoming traffic in your critical moments.
If you try something similar and it helps your servers reduce the load in critical moments, let me know :D
If this turns out to be useful and you add background processing here too, please post it for others to see.
If you're monitoring this app with New Relic/AppNet/something else, checking your reports might give you an idea of some low-hanging fruit. We've only got a small picture of the application here; it's possible that enhancements elsewhere in the app might help as well.
With that said, here are a few ideas which can be applied separately or together:
Do Less Work on Intake
Right now you're doing a bunch of XML processing, which is expensive, before you pass the job off to Sidekiq. That's a choke point, and because it runs in the app process it ties up your application.
If your Redis instance has enough memory, consider refactoring notify so the whole XML payload gets passed off to Sidekiq. You're already always returning a 200 response to the API consumer, so there's no impact on your existing external API.
Your worker instances can then process the XML payloads at their own pace without impacting the application.
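A minimal sketch of that refactor, with a made-up worker name and the same parsing logic as the original action:

# app/workers/notification_worker.rb
class NotificationWorker
  include Sidekiq::Worker
  sidekiq_options queue: :notifications

  def perform(raw_xml)
    notification_hash = Hash.from_xml(raw_xml)['Envelope']['Body']['NotificationResponse']
    user = User.find(notification_hash['UserID'])
    user.set_notification(notification_hash)
  end
end

# the controller action then shrinks to:
def notify
  NotificationWorker.perform_async(request.body.read)
  render nothing: true, status: 200
end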
Implement API Throttling
The third-party site is hammering you at a tremendous rate not normally permitted even by huge sites. That's a problem.
If you can't get them to address it on their end, play like the big dogs: Implement request throttling on your end. You likely have some ability to do this at the Rack level on EngineYard (though a quick search of their docs didn't immediately yield anything), but even doing it at the application level is likely to improve things.
There's a previous Stack Overflow discussion that may offer a couple options.
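One concrete option is the rack-attack gem, which implements throttling as Rack middleware; the path, limit, and period below are purely illustrative:

# config/initializers/rack_attack.rb (rack-attack gem; older apps may also
# need `config.middleware.use Rack::Attack` in application.rb)
Rack::Attack.throttle('notifications', limit: 300, period: 1.minute) do |req|
  # Throttle only the notification endpoint, keyed by client IP.
  req.ip if req.post? && req.path == '/notifications/notify'
end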
Proxy the API
A few services exist that will proxy your API for you, allowing you to easily implement features like rate limiting, throttling, and quotas that might otherwise be difficult to add.
The one I'm familiar with off the top of my head is Azure's API Management service. If this isn't a revenue-generating project, the cost might be prohibitive. ($49/month postpaid, though it would be cheaper prepaid, or could even be free if you qualify for BizSpark.)
Farm the API Out
The more advanced cousin of API proxies, "API as a Service" actually lets you run your API on its own VM instance—as well as offering the features a proxy does. If your database isn't a choke point, this can be a way to spread the load out and help prevent machine clients from affecting the experience of human clients.
The ten thousand pound gorilla is Apigee, though there are a variety of other established and startup options.
There is a catch: Most of these services are built around Node.js. If your Rails app is already leaning toward service-oriented architecture, and if you know and like JavaScript, this may not be an issue for you. Otherwise, the need to build an interface between services and maintain a service in a second language may be a bridge too far.
I have a Rails app where every user can connect his Facebook account and give permission to send messages from the app he is using. So every logged-in user with a connected Facebook account must have one Jabber client authorized with his Facebook ID, token, etc. I'm doing it with the xmpp4r gem.
The connected Facebook account, with its token and Facebook data, is stored in the database as a Mailman object. The Mailman class also has methods to control the Jabber client, like run_client, connect_client, authorize_client, stop_client, get_client, etc. The most important methods for me are connect_client and get_client.
class Mailman < ActiveRecord::Base
  @@clients = {} unless defined? @@clients

  def connect_client
    # some code
    @@clients[self.id] = Jabber::Client.new Jabber::JID.new(facebook_chat_id)
    # some code
  end

  def get_client
    @@clients[self.id]
  end

  # other stuff
end
As you can see in the code, every Mailman object has a get_client method which should return a Jabber::Client object, and it does work, but only within the scope of the running application, because the @@clients variable exists only in that specific running app.
This is a problem for me because I would like to use a cron task to close idle clients, and the cron task uses a different initialization of the app, so Mailman.find(x).get_client will always return nil, even though it returns a Jabber::Client object in the production app.
How do you deal with such issues? For example, is it possible to get a pointer to the memory for the Jabber::Client object and save it to the database, so that any other initialization of the app could use it? I have no idea how to achieve that. Thank you for any advice!
Even if you manage to store a "pointer to memory" in your database, it will be of no use to a cron job. The cron job is started as a new process, and the OS ensures that it won't have access to the memory space of any other process.
The best way is to create a controller to manage your running XMPP clients. This provides a RESTful API for your cron job, allowing it to terminate idle clients via HTTP requests.
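A rough sketch of what that could look like; the route, the idle? helper, and how Mailman tracks activity are all assumptions:

# app/controllers/xmpp_clients_controller.rb
# Hit by the cron job over HTTP, e.g.:
#   curl -X DELETE http://localhost:3000/xmpp_clients/idle
class XmppClientsController < ApplicationController
  def destroy_idle
    Mailman.find_each do |mailman|
      client = mailman.get_client
      # `idle?` is an assumed helper, e.g. comparing a last_activity_at
      # timestamp against Time.now
      mailman.stop_client if client && mailman.idle?
    end
    head :ok
  end
end

Because this action runs inside the web process, it actually has access to the @@clients hash, which a separate cron process never will.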
I have a Rails 4 application that does a background calculation. The user presses a button, which launches a background job for the long calculation (using delayed_job), and arrives on a waiting page. Even though the calculation is done asynchronously, I would like to find a way to notify the user and automatically reload the page once the calculation is finished.
I have looked at several solutions:
Faye or private_pub => I can't afford to run another server for this one specific job
ActionController::Live => I am not sure how to trigger the action once the job is finished
Polling => the best solution at the moment, but I expect there is something less greedy than polling
If you have any smart solution to this problem, or any advice on the limitations I listed, please share.
I'll explain how this works for you
HTTP
When you send a request to Rails, your browser doesn't have any way to "listen" for what happens beyond the direct response. The HTTP request is "sent" and has no mechanism to wait until an asynchronous process is complete.
Your job is to give your app a "listener" so that your front-end can receive updates as they are generated (outside the scope of HTTP); hence the WebSocket or SSE stuff.
We've set up what you're looking for before, using Pusher
"Live"
Achieving "live" functionality is what you need
This basically means keeping a request open "perpetually" to the server, listening for any "events" the server sends back. This is done with JS on the front-end, where the client "subscribes" to a channel (typically a user-centric one) and then has all the data sent to it.
We used a third-party system called Pusher to achieve this recently:
# Gemfile
gem "pusher", "~> 0.12.0"

# app/models/message.rb
def send_message
  public_key = self.user.public_key
  Pusher['private-user-' + public_key].trigger('message_sent', {
    message: "Message Sent"
  })
end
# app/assets/javascripts/application.js.coffee
pusher = new Pusher("***********",
  cluster: 'eu'
)

channel = pusher.subscribe("private-user-#{gon.user}")
channel.bind "message_sent", (data) ->
  alert data.message
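One caveat: private- channels require Pusher's JS client to authenticate against your server (the path is configured via pusher-js's authEndpoint option). A minimal sketch of that endpoint, with the route name left up to you:

# in some controller; pusher-js POSTs channel_name and socket_id here
def pusher_auth
  if current_user # restrict to the signed-in user's own channel as needed
    render json: Pusher[params[:channel_name]].authenticate(params[:socket_id])
  else
    head :forbidden
  end
end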
Hope this gives another option for you