I am currently using gon gem to load the client_token in braintree.
Below shows the controller methods:
def new
  # rental_info = display_rental_info(params[:rental_request_new_form])
  # product = Product.find(params[:rental_request_new_form][:product_id])
  gon.client_token = generate_client_token
end

private

def generate_client_token
  Braintree::ClientToken.generate(customer_id: current_user.braintree_customer_id)
end
Soon enough, I realised the potential problem with this approach. If the connection to Braintree is slow, it just holds the request and blocks all other requests. Sometimes (rarely) it takes 6-10s to complete, and one time it actually resulted in a Net::OpenTimeout - execution expired error after waiting for 60 seconds.
I wonder what a good way to work around this is, to prevent it from blocking other requests.
I work at Braintree. The response times that you are seeing for our ClientToken.generate endpoint are unusual for our production environment, but may be experienced in our sandbox environment. I would suggest that you reach out to our support team to further diagnose this issue.
Further, calling out to Braintree in a single request should not block other requests to your web server. Web servers handle multiple requests concurrently. If you attempted to make the call to ClientToken.generate asynchronously, it would allow you to do other server side processing for this request while the Braintree token is being received, but I would weigh the benefits of parallelizing the processing for a single request before committing to the additional complexity.
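For what it's worth, here is a minimal sketch of that parallelization with concurrent-ruby (which Rails already depends on via ActiveSupport); the 10-second wait and the placement of the other work are assumptions, not a recommendation:

require 'concurrent' # already loaded in a Rails app via ActiveSupport

def new
  customer_id = current_user.braintree_customer_id
  # Kick off the Braintree call on a background thread.
  token_future = Concurrent::Future.execute do
    Braintree::ClientToken.generate(customer_id: customer_id)
  end

  # ... do other per-request work here while the token is fetched ...

  # Block only at the end; value(10) waits up to 10 seconds and
  # returns nil if the token hasn't arrived (or the call failed).
  gon.client_token = token_future.value(10)
end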
I am using the Stripe gem in my Rails application. It works fine in the development environment, but in my production environment I get an exception:
Stripe::APIError: (Status 409) with message There is currently another in-progress request using this Idempotent Key (that probably means you submitted twice, and the other request is still going through).
How can I rescue this or handle this exception?
Any help would be appreciated.
The retry logic is meant for when your application doesn't know Stripe's response, primarily during network issues like timeouts. In this case your server received a response from Stripe, and one of two things could be happening: either you're retrying the same event that is currently in progress, or the event in progress is actually different from the one you're retrying, but due to some issue in your application stack you chose the same idempotency key for two API requests.
For further details, see https://stripe.com/docs/api/idempotent_requests
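As a hedged sketch of rescuing it (the charge parameters and the order-derived idempotency key are illustrative, not taken from your app):

begin
  # Hypothetical charge; the key is derived from your own order ID so a
  # genuine retry reuses it, but distinct requests never collide.
  Stripe::Charge.create(
    { amount: 1000, currency: 'usd', customer: customer_id },
    idempotency_key: "order-#{order.id}"
  )
rescue Stripe::APIError => e
  if e.http_status == 409
    # The earlier request with this key is still in flight and will finish
    # on its own, so don't blindly retry with the same key.
    Rails.logger.warn("Duplicate in-flight Stripe request: #{e.message}")
  else
    raise
  end
end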
I have a Rails production application that is down several times per day. This application, in addition to serving its users, is the endpoint for a 3rd party website that sends it updates.
Occasionally, these updates will come flooding in so fast that the requests back up and the application becomes unavailable for long periods of time. It is a legitimate usage which ends up causing a Denial of Service.
The request from the 3rd party is pretty simple:
class NotificationsController < ApplicationController
  def notify
    begin
      notification_xml = request.body.read
      notification_hash = Hash.from_xml(notification_xml)['Envelope']['Body']['NotificationResponse']
      user = User.find(notification_hash['UserID'])
      user.delay.set_notification(notification_hash)
    rescue Exception => bang
      logger.error bang.backtrace
      unless user.blank?
        alert_file_name = "#{user.id}_#{notification_hash['Message']['MessageID']}_#{notification_hash['NotificationEventName']}_#{notification_hash['Timestamp']}.xml"
        File.open(alert_file_name, 'w') { |f| f.write(notification_xml) }
      end
    end
    render nothing: true, status: 200
  end
end
I have two app servers against a very large database. However, when this 3rd-party website really hits us with notification requests, over 200 per minute and approaching 1,000 per minute at peak, both web servers get completely tied up.
You can also see above that I'm using the .delay call since I'm using Sidekiq. I thought that would help, and it did for a while, but the application can't handle that many requests.
Other than handling the requests in a separate application, which I'm not sure is really possible in my EngineYard installation, is there something I can do to speed up the handling of this request?
If it takes too long to process all those requests, try a different approach.
Create a new model (I will call it Request) with only one field (I'll name it message): the XML sent to you by that 3rd-party app.
Rewrite your notify action to be very simple and fast:
def notify
  Request.create(message: request.body.read)
  render nothing: true, status: 200
end
Create a new action, let's say process_requests like this:
def process_requests
  # find_in_batches already walks the records in primary key (id) order
  Request.find_in_batches(batch_size: 100) do |group|
    group.each do |request|
      process_request(request)
      request.destroy
    end
  end
end
def process_request(req)
  notification_xml = req.message
  notification_hash = Hash.from_xml(notification_xml)['Envelope']['Body']['NotificationResponse']
  user = User.find(notification_hash['UserID'])
  user.set_notification(notification_hash)
rescue Exception => bang
  logger.error bang.backtrace
  unless user.blank?
    alert_file_name = "#{user.id}_#{notification_hash['Message']['MessageID']}_#{notification_hash['NotificationEventName']}_#{notification_hash['Timestamp']}.xml"
    File.open(alert_file_name, 'w') { |f| f.write(notification_xml) }
  end
end
Create a cron job and call process_requests at a defined interval (every few minutes).
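For example, a sketch using the whenever gem (just one scheduling option; the rake task name is hypothetical and would wrap the process_requests logic above):

# config/schedule.rb
every 5.minutes do
  rake 'notifications:process_requests'
end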
I have never used Sidekiq, so I preferred find_in_batches here (the batch size of 100 is just for the sake of example).
The notify action shouldn't run for more than a few milliseconds (inserts are pretty fast), so it should be able to handle the incoming traffic in your critical moments.
If you try something similar and it helps your servers reduce the load in critical moments, let me know :D
If this proves useful and you add background processing here too, please post that for others to see.
If you're monitoring this app with New Relic/AppNet/something else, checking your reports might point you at some low-hanging fruit. We've only got a small picture of the application here; it's possible that enhancements elsewhere in the app might help as well.
With that said, here are a few ideas which can be applied separately or together:
Do Less Work on Intake
Right now you're doing a bunch of XML processing—which is expensive—before you pass the job off to Sidekiq. That's a choke point, and by running in the app process it's tying up your application.
If your Redis instance has enough memory, consider refactoring notify so the whole XML payload gets passed off to Sidekiq. You're already always returning a 200 response to the API consumer, so there's no impact on your existing external API.
Your worker instances can then process the XML payloads at their own pace without impacting the application.
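A minimal sketch of that refactoring (NotificationWorker is a hypothetical name; error handling is omitted for brevity):

class NotificationsController < ApplicationController
  def notify
    # Enqueue the raw payload; no XML parsing in the request cycle.
    NotificationWorker.perform_async(request.body.read)
    render nothing: true, status: 200
  end
end

class NotificationWorker
  include Sidekiq::Worker

  def perform(notification_xml)
    notification_hash = Hash.from_xml(notification_xml)['Envelope']['Body']['NotificationResponse']
    user = User.find(notification_hash['UserID'])
    user.set_notification(notification_hash)
  end
end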
Implement API Throttling
The third-party site is hammering you at a tremendous rate not normally permitted even by huge sites. That's a problem.
If you can't get them to address it on their end, play like the big dogs: Implement request throttling on your end. You likely have some ability to do this at the Rack level on EngineYard (though a quick search of their docs didn't immediately yield anything), but even doing it at the application level is likely to improve things.
There's a previous Stack Overflow discussion that may offer a couple options.
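As one concrete option at the application level, here is a sketch using the rack-attack gem (the endpoint path and the 300 requests/minute limit are illustrative assumptions):

# config/initializers/rack_attack.rb
Rack::Attack.throttle('notifications', limit: 300, period: 60) do |req|
  # Returning a value (here the client IP) counts the request against the limit.
  req.ip if req.post? && req.path == '/notifications/notify'
end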
Proxy the API
A few services exist that will proxy your API for you, allowing you to easily implement features like rate limiting, throttling, and quotas that might otherwise be difficult to add.
The one I'm familiar with off the top of my head is Azure's API Management service. If this isn't a revenue-generating project, the cost might be prohibitive. ($49/month postpaid, though it would be cheaper prepaid, or could even be free if you qualify for BizSpark.)
Farm the API Out
The more advanced cousin of API proxies, "API as a Service" actually lets you run your API on its own VM instance—as well as offering the features a proxy does. If your database isn't a choke point, this can be a way to spread the load out and help prevent machine clients from affecting the experience of human clients.
The ten thousand pound gorilla is Apigee, though there are a variety of other established and startup options.
There is a catch: Most of these services are built around Node.js. If your Rails app is already leaning toward service-oriented architecture, and if you know and like JavaScript, this may not be an issue for you. Otherwise, the need to build an interface between services and maintain a service in a second language may be a bridge too far.
I'm building an API using Rails where requests come in and they need to be executed by a cluster of workers running on a different server (these workers call remote APIs and parse the data, etc...). I'm going to be using Sidekiq or Resque to handle the queueing/processing of that.
My issue is that the client needs to wait while this happens, and the controller needs to return the response to the client once it's complete. How would I handle this in the controller? We're using a Redis backend, so I was thinking of something along the lines of subscribing to a pub/sub channel and waiting for the worker to publish a status message. The controller would wait for a set time period and then return a 'check back later' response to the client if it doesn't receive a message in time. What would be the best way to implement that, or is there a better solution?
Do not make your clients wait! There are a lot of issues if you make the controller block for a long-running job:
Other programs may assume the request timed out (proxies, browsers, scripts, etc.)
It makes your API endpoints become a source for denial of service
It requires you to put more engineering work into web servers (since a Rails process can't handle another web request while it's handling the blocking call)
Part of the reason for using Sidekiq or Resque is to avoid controllers that do heavy lifting during the HTTP request.
Instead, background jobs should report their status to the database, and the web server should query the database and return the latest status to the client.
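A minimal sketch of that pattern, assuming a Job model with status and result columns that the worker updates (all names here are illustrative):

class JobsController < ApplicationController
  def create
    job = Job.create!(status: 'queued')
    RemoteApiWorker.perform_async(job.id)
    # Return immediately; the client polls GET /jobs/:id for progress.
    render json: { job_id: job.id, status: job.status }, status: :accepted
  end

  def show
    job = Job.find(params[:id])
    render json: { job_id: job.id, status: job.status, result: job.result }
  end
end

class RemoteApiWorker
  include Sidekiq::Worker

  def perform(job_id)
    job = Job.find(job_id)
    job.update!(status: 'running')
    result = call_remote_api_and_parse # hypothetical long-running work
    job.update!(status: 'done', result: result)
  end
end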
If clients need more immediate feedback, you can:
make clients poll constantly
post a request back to the client (if the API consumer is another web server)
use another protocol mechanism (e.g. WebSockets)
I'm working on a project that aims to charge users per request. Each user will have an account, and a monthly receipt should be sent describing how much of the service they used that month. Is there some gem or documentation about this?
My concern is about only registering a request as legitimate use when it completes correctly, so as to avoid charging for failed requests. I believe it's a very delicate part of the application. Has anyone seen anything about this?
I don't think you'll find a gem for that. You'll probably end up creating a Rack middleware, adding it to the end of the middleware chain, checking whether the status code is 200, and incrementing the user's usage counter.
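A sketch of such a middleware (UsageCounter and the env key used to identify the user are assumptions; your auth layer would have to set it):

class UsageMeter
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    # Only successful requests count against the user's quota.
    if status == 200 && (user_id = env['metered.user_id'])
      # Atomic SQL increment so concurrent requests don't race.
      UsageCounter.where(user_id: user_id).update_all('count = count + 1')
    end
    [status, headers, body]
  end
end

# Then append it to the chain in config/application.rb:
#   config.middleware.use UsageMeter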
I noticed that in a standard grails environment, a request is always executed to the end, even when the client connection is lost and the result can't be delivered anymore.
Is there a way to configure the environment in such a way that execution of a request is canceled as soon as the client connection is lost?
Update: Thanks for the answers. Yes, most of the problems I am trying to avoid can be avoided by better coding:
caching can make nearly every page fast
a token can help to avoid submitting something twice
but there are some requests which could still consume some time. Let's take a map service as an example: calculating a route takes a while. One way to avoid resubmitting the request could be a "calculationInProgress" flag together with a message to the user. But then it is still possible to create a lot of sessions, and thus a lot of requests, in order to mount a DoS attack...
I am still curious: is there no way to configure the server to cancel the request? I used to develop on a system where the server behaved this way and it was great :-)
Probably there is no such way. And I'm sure Grails (and your web container) is designed to:
accept incoming request
process it on server side
send response
If something happens during phase 2, you'll only find out about it in the send-response phase. You can write data to the HttpServletResponse yourself, handle IOException, etc., but that is too low-level a route, I think. And it will not help you cancel your DB operations while you're preparing the data to send.
Btw, it's a common pattern to use a web frontend like nginx that accepts the incoming request and handles all these problems with cancelled requests, slow requests (I guess that's the real problem?), etc.
According to your comment, it is reloads and multiple clicks that you are trying to avoid. The proper technique is to use Grails' support for handling duplicate form submissions:
http://grails.org/doc/2.0.x/guide/theWebLayer.html#formtokens