How do I handle timeouts when using ActiveMerchant?

Sometimes when developing locally my connection drops, and while this hopefully shouldn't happen in production, it raises the issue that I should probably handle timeouts from ActiveMerchant when it calls out to the payment gateway, in my case SagePay.
I can't see a way in the documentation to do this, so I was wondering whether there is a best-practice way to handle it?

I believe the question concerns the Ruby implementation of ActiveMerchant (correct me if I'm wrong, please).
ActiveMerchant raises ActiveMerchant::ConnectionError when a timeout occurs (link to source), so we can simply rescue the exception, e.g.:
begin
  # your ActiveMerchant stuff here
rescue ActiveMerchant::ConnectionError => e
  # timeout handler
end
It can also be useful to control the timeouts with the open_timeout and read_timeout class attributes (link to source); in the case of SagePay:
ActiveMerchant::Billing::SagePayGateway.open_timeout = 5  # timeout for opening the connection, in seconds
ActiveMerchant::Billing::SagePayGateway.read_timeout = 10 # timeout for reading from the opened connection, in seconds
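Putting the two together, here is a minimal sketch rather than a drop-in implementation; the vendor login, card details, amount, and order id are placeholders, not values from the question:

require 'active_merchant'

# Fail fast instead of letting a dropped connection hang the request.
ActiveMerchant::Billing::SagePayGateway.open_timeout = 5
ActiveMerchant::Billing::SagePayGateway.read_timeout = 10

gateway = ActiveMerchant::Billing::SagePayGateway.new(login: 'VENDOR_NAME') # placeholder vendor

credit_card = ActiveMerchant::Billing::CreditCard.new(
  number: '4929000000006', month: 12, year: 2030,
  first_name: 'Jane', last_name: 'Doe', verification_value: '123'
)

begin
  response = gateway.purchase(1000, credit_card, order_id: '123') # amount in pence
  # inspect response.success? and response.message as usual
rescue ActiveMerchant::ConnectionError => e
  # Timed out or dropped mid-request: log and ask the user to retry,
  # since we cannot know whether the payment actually went through.
end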

Related

HBase 0.98.1 Put operations never timeout

I am using version 0.98.1 of the HBase server and client. My application has strict response-time requirements: as far as HBase is concerned, I would like to abort an HBase operation if its execution exceeds 1 or 2 seconds. Such a task timeout is useful in case the Region-Server is non-responsive or has crashed.
I tried configuring
1) HBASE_RPC_TIMEOUT_KEY = "hbase.rpc.timeout";
2) HBASE_CLIENT_RETRIES_NUMBER = "hbase.client.retries.number";
However, the Put operations never time out (I am using sync flush); they return only after the Put succeeds.
I looked through the code and found that the receiveGlobalFailure method in the AsyncProcess class keeps resubmitting the task without any check on the retries. This is in version 0.98.1.
I do see that in 0.99.1 there have been some changes to AsyncProcess class that might do what I want. I have not verified it though.
My questions are:
1) Is there any other configuration I missed that can give me the desired functionality?
2) Do I have to use the 0.99.1 client to solve my problem? Does 0.99.1 actually solve it?
3) If I have to use the 0.99.1 client, must I also use a 0.99.1 server, or can I keep my existing 0.98.1 region-server?

Ruby on Rails with IMAP IDLE for multiple accounts

I'm currently building a Ruby on Rails app that lets users sign in via Gmail and then holds a constant IDLE connection to their inbox. Emails need to arrive in the app as soon as they reach the user's Gmail inbox.
Here is my current implementation, along with some issues that I really need help figuring out.
At the moment, when the Rails app boots up, it creates a thread per user which authenticates and runs in a loop to keep the IDLE connection alive.
Every 10-15 minutes, the thread will "bounce IDLE", so that a little data is transferred to make sure the IDLE connection stays alive.
The major issue, I think, is scalability and the number of connections the app holds to Postgres. Each thread seems to require its own Postgres connection, which on Heroku is heavily limited by the maximum connection count (20 on the basic plan, 500 on the plans above that).
I really need help with the following:
What's the best way to keep all these IDLE connections alive while reducing the number of threads and database connections needed?
Note: a token refresh may be needed if the user's Gmail refresh token runs out, and this requires access to the database.
Are there any other suggestions for how this may be implemented?
EDIT:
I have implemented something similar to the OP in this question: Ruby IMAP IDLE concurrency - how to tackle?
There is no need to spawn a new thread for each IMAP session; they can all be handled in a single thread.
Maintain an Array (or Hash) of all users and their IMAP sessions. Spawn one thread, and in that thread send an IDLE keep-alive to each connection, one after the other; run that loop periodically. This will give you far more concurrency than your current approach.
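A minimal sketch of that idea, assuming a sessions Hash of user id => authenticated Net::IMAP connection and a NOOP-based keep-alive (the names here are illustrative, not from the answer above):

require 'net/imap'

sessions = {} # hypothetical registry: user_id => authenticated Net::IMAP session

Thread.new do
  loop do
    dead = []
    sessions.each do |user_id, imap|
      begin
        imap.noop # cheap keep-alive so the server holds the connection open
      rescue Net::IMAP::Error, IOError
        dead << user_id # connection died; reconnect logic lives elsewhere
      end
    end
    dead.each { |user_id| sessions.delete(user_id) }
    sleep 600 # sweep every 10 minutes, matching the window described above
  end
end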
A longer-term approach is to use EventMachine, which allows many IMAP connections to be handled in the same thread. If you are processing web requests in the same process, you should run EventMachine in a separate thread. This approach can give you phenomenal concurrency; see https://github.com/ConradIrwin/em-imap for an EventMachine-compatible IMAP library.
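Roughly, em-imap chains deferrables on the reactor thread instead of blocking a thread per user; a sketch adapted from that library's README (the credentials are placeholders):

require 'em-imap'

EM.run do
  client = EM::IMAP.new('imap.gmail.com', 993, true) # host, port, ssl
  client.connect.bind! do
    client.login('user@gmail.com', 'app-password')
  end.bind! do
    client.select('INBOX')
  end.bind! do
    client.search('UNSEEN')
  end.callback do |results|
    puts results.inspect # message sequence numbers, with no thread tied up
  end.errback do |error|
    puts "IMAP connection failed: #{error}"
  end
end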
Start an EventMachine in Rails
Since you are on Heroku, you are probably using Thin, which already starts an EventMachine for you. However, should you ever move to another host and use a different web server (e.g. Phusion Passenger), you can start an EventMachine with a Rails initializer:
module IMAPManager
  def self.start
    if defined?(PhusionPassenger)
      PhusionPassenger.on_event(:starting_worker_process) do |forked|
        # for Passenger, we need to avoid orphaned threads
        if forked && EM.reactor_running?
          EM.stop
        end
        Thread.new { EM.run }
        die_gracefully_on_signal
      end
    else
      # facilitates debugging
      Thread.abort_on_exception = true
      # just spawn a thread and start it up, unless Thin is in use:
      # Thin is built on EventMachine and doesn't need this thread
      Thread.new { EM.run } unless defined?(Thin)
    end
  end

  def self.die_gracefully_on_signal
    Signal.trap("INT")  { EM.stop }
    Signal.trap("TERM") { EM.stop }
  end
end

IMAPManager.start
(adapted from a blog post by Joshua Siler.)
Share 1 connection
What you have is a good start, but having O(n) threads with O(n) connections to the database will probably be hard to scale. Since most of these database connections are idle most of the time, one might consider sharing a single database connection.
As @Deepak Kumar mentioned, you can use the EM IMAP adapter to maintain the IMAP IDLE connections. In fact, since you are running EM within Rails, you might be able to simply use Rails' database connection pool by making your changes through the Rails models. More information on configuring the connection pool can be found here.
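For instance, a brief sketch of borrowing a pooled connection only while the database is actually needed; the model and attribute names are hypothetical:

# Inside an IMAP callback: check a connection out of Rails' pool just
# long enough to persist the refreshed Gmail token, then hand it back.
ActiveRecord::Base.connection_pool.with_connection do
  user = User.find(user_id)                  # hypothetical model
  user.update_attribute(:gmail_token, token) # hypothetical attribute
end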

end of file reached EOFError (Databasedotcom + Rails + Heroku)

After much frustration trying to figure it out myself, I am reaching out to you SO folks to help me trace this formidable error:
Message: end of file reached (EOFError)
Backtrace: /app/vendor/ruby-1.9.3/lib/ruby/1.9.1/openssl/buffering.rb:174:in `sysread_nonblock'
Background: My app is a Rails 3 app hosted on Heroku and is 100% back-end. It uses Redis/Resque workers to process its payload, received from Salesforce via the Chatter REST API.
Trouble: Unlike other similar EOF errors in HTTPS/OpenSSL in Ruby, mine happens at random (I can't yet predict when it will come up).
Usual Suspects: The error shows up most frequently when I create 45 Resque workers and try to sync data from 45 different Salesforce Chatter REST API connections all at once. It is so frequent that 20% or more of my processing fails, all due to this error.
Remedy Steps:
I am using the Databasedotcom gem, which uses HTTPS and follows all the required steps to create a sane HTTPS connection.
So...
SSL in HTTPS - checked
URI encoding - checked
Ruby 1.9.3 - checked
HTTP read timeout set to 900 seconds (15 minutes)
I retry on this EOF error a maximum of 30 times, sleeping 30 seconds before each retry!
Still, it fails for some of the data.
Any help here, please?
Have you considered that Salesforce might not like so many simultaneous connections from a single source, and that you are being blocked by a DDoS preventer?
Also, setting such long timeouts is quite useless. If a connection fails, drop it and reschedule a new attempt yourself; that is something Resque does well, whereas your current approach just piles up waiting time whenever the problems persist...
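To illustrate that advice, a sketch of failing fast and rescheduling instead of sleeping in-process; the job class, queue name, and credentials are placeholders, and enqueue_in assumes the resque-scheduler gem:

require 'resque'
require 'databasedotcom'

class ChatterSyncJob
  @queue = :chatter_sync

  def self.perform(user_id)
    client = Databasedotcom::Client.new # reads databasedotcom.yml or ENV config
    client.authenticate username: ENV['SFDC_USER'], password: ENV['SFDC_PASS']
    # ... fetch and process the Chatter payload for user_id ...
  rescue EOFError, Errno::ECONNRESET
    # Drop the dead connection and retry later rather than sleeping here.
    Resque.enqueue_in(30, ChatterSyncJob, user_id)
  end
end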

Ruby mod_passenger process timeout

A few Ruby apps I've worked with hang for a long time on slow calls, causing processes to back up on the machine and eventually requiring a reboot. Is there a quick and easy way in Passenger to limit the execution time of a single Apache call?
In PHP, if a process exceeds the max_execution_time setting in php.ini, it returns an error to Apache and the server keeps merrily plugging away.
I would take a look at fixing the application. Cutting off requests at the web-server level is really just a band-aid that doesn't address the core problem: requests failing, one way or another. If the Ruby app depends on another service that is timing out, you can patch the code using the timeout library:
require 'timeout'

begin
  status = Timeout.timeout(5) do
    # something that should be interrupted if it takes too much time...
  end
rescue Timeout::Error
  # give up: log the failure and return an error response
end
This lets the code give up and close out the request gracefully when needed; note that Timeout raises Timeout::Error once the limit is hit, so be sure to rescue it.

Rails Resque workers fail with PGError: server closed the connection unexpectedly

I have a site running a Rails application and Resque workers in production mode on Ubuntu 9.10, with Rails 2.3.4, ruby-ee 2010.01, and PostgreSQL 8.4.2.
The workers constantly raise errors: PGError: server closed the connection unexpectedly.
My best guess is that the master Resque process establishes a connection to the database while loading the Rails app classes (e.g. Authlogic does that when you use User.acts_as_authentic), and that connection becomes corrupted in the fork()ed process (on exit?), so subsequent forked children get a broken global ActiveRecord::Base.connection.
I could reproduce very similar behaviour with this sample code imitating the fork/processing in a Resque worker. (AFAIK, users of libpq are advised to recreate connections in the forked process anyway; otherwise it's not safe.)
But, the odd thing is that when I use pgbouncer or pgpool-II instead of direct pgsql connection, such errors do not appear.
So, the question is where and how should I dig to find out why it is broken for plain connection and is working with connection pools? Or reasonable workaround?
After a bit of research and trial and error, for anyone coming across the same issue, here is a clarification of what gc mentioned.
Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }
The above code should be placed in /lib/tasks/resque.rake. For example:
require 'resque/tasks'

task "resque:setup" => :environment do
  ENV['QUEUE'] = '*'
  Resque.after_fork do |job|
    ActiveRecord::Base.establish_connection
  end
end

desc "Alias for resque:work (to run workers on Heroku)"
task "jobs:work" => "resque:work"
Hope this helps someone, as much as it did for me.
When I created Nestor, I had the same kind of problem. The solution was to re-establish the connection in the forked process. See the relevant code at http://github.com/francois/nestor/blob/master/lib/nestor/mappers/rails/test/unit.rb#L162
From my very limited look at Resque code, I believe a call to #establish_connection should be done right about here: https://github.com/resque/resque/blob/master/lib/resque/worker.rb#L123
You cannot pass a libpq connection across a fork() (or to a new thread) unless your application takes very close care not to use it in conflicting ways (for instance, a mutex around every single attempt to use it, and never closing it). This is the same for both direct connections and pgbouncer. If it worked through pgbouncer, that was pure luck in missing a race condition, and it will eventually break.
If your program uses forking, you must create the connection after the fork.
Change the Apache configuration and add:
PassengerSpawnMethod conservative
I had this issue with all of my mailer classes, and I needed to call ActiveRecord::Base.verify_active_connections! within the mailer methods in order to ensure a working connection.
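For instance, in a Rails 2.3-era mailer this might look like the following (the class, method, and field names are hypothetical):

class UserMailer < ActionMailer::Base
  def welcome_email(user)
    # Re-verify the possibly stale connection before touching the DB.
    ActiveRecord::Base.verify_active_connections!

    recipients user.email
    subject    "Welcome!"
    body       :user => user
  end
end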
