I am using the rack-timeout gem and a Rack::Timeout::RequestTimeoutException occurs. I did no configuration beyond putting the gem into my Gemfile.
How do I handle these exceptions so that they don't stop the normal app's procedure but instead just log and let me know about them?
You can catch the exception in your application with
# in your app/controllers/application_controller.rb
rescue_from Rack::Timeout::RequestTimeoutException do |exception|
  # do something
end
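For the "just log and let me know about them" part, the block might look roughly like this (a minimal sketch; Airbrake is only an example notifier, swap in whatever you actually use):
# app/controllers/application_controller.rb
rescue_from Rack::Timeout::RequestTimeoutException do |exception|
  # log the timeout and report it, then render something so the
  # exception doesn't bubble any further
  Rails.logger.warn("Request timed out: #{exception.message}")
  Airbrake.notify(exception) if defined?(Airbrake)
  head :service_unavailable
end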
But as it's an exception I don't believe it's possible to return execution to where it was interrupted.
However, rack-timeout also drops a log message every second while the request is running, like this:
source=rack-timeout id=1123e70d486cbca9796077dc96279126 timeout=20000ms service=1018ms state=active
Perhaps you could increase the interval of these to, say, 5 seconds, and raise the timeout to something high like 120 seconds. That way it's unlikely to actually interrupt anything, but you will still get log messages telling you that something is running long.
The whole purpose of that gem is to raise exceptions after a timeout:
"Abort requests that are taking too long; an exception is raised."
If that's not what you want to do, perhaps you shouldn't be using that particular gem? Random Google hit: https://github.com/moove-it/rack-slow-log
To change the timeout, run:
export RACK_TIMEOUT_SERVICE_TIMEOUT=30
I'm trying to program a job that, after 10 retries (from all exception types), will report a failure and die. I can't get it to work. I tried this answer and this one too; neither worked.
The best solution would be to access retry_count from within the perform method.
I think what you're asking for is the sidekiq_retries_exhausted hook. It will be called once your retries are used up and the job is about to move to the dead queue. Just set retry: 10 and implement that hook.
config.death_handlers might also be interesting.
See docs here: https://github.com/mperham/sidekiq/wiki/Error-Handling#configuration
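A minimal sketch of how that hook is typically wired up (MyJob is a placeholder; in recent Sidekiq versions the block also receives the final exception):
class MyJob
  include Sidekiq::Worker
  sidekiq_options retry: 10

  # Runs once all 10 retries are used up, right before the job is moved to the dead set.
  sidekiq_retries_exhausted do |msg, ex|
    Sidekiq.logger.error("#{msg['class']} #{msg['jid']} failed permanently: #{ex.message}")
    # report the failure / notify yourself here
  end

  def perform(*args)
    # job body goes here
  end
end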
I want to stop the app's startup from an initializer.
Something like: if a config isn't present, stop the server/console, etc.
I also want to print a message explaining the error.
Is there a way to do that?
I looked into initialization events but I couldn't make it work.
Thanks in advance.
Yeah, just raise an exception like you normally would:
raise StandardError, "Stopping app start up because something is missing"
If you're doing this because some config is missing, consider using something like Figaro which does this for you.
Figaro.require_keys("pusher_app_id", "pusher_key", "pusher_secret")
https://github.com/laserlemon/figaro
You can use the Kernel#abort method to do it. It'll stop the application, print your provided message, and won't dump a backtrace.
Example:
abort('You need to pass more info to start the application') if some_check_fails?
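As a minimal sketch of that in an initializer (the file name and the list of keys are just placeholders for whatever your app actually requires):
# config/initializers/check_config.rb  (hypothetical file name)
REQUIRED_KEYS = %w[PUSHER_APP_ID PUSHER_KEY PUSHER_SECRET]
missing = REQUIRED_KEYS.reject { |key| ENV[key] }
abort("Missing required configuration: #{missing.join(', ')}") if missing.any?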
I'm using sidekiq to process thousands of jobs per hour - all of which ping an external API (Google). One out of X thousand requests will return an unexpected (or empty) result. As far as I can tell, this is unavoidable when dealing with an external API.
Currently, when I encounter such a response, I raise an exception so that the retry logic will automatically take care of it on the next try. Something is only really wrong when the same job fails over and over many times. Exceptions are handled by Airbrake.
However, my Airbrake gets clogged up with these mini-outages that aren't really 'issues'. I'd like Airbrake to be notified only if the same job has already failed X times.
Is it possible to either:
Disable the automated Airbrake integration so that I can use sidekiq_retries_exhausted to report the error manually via Airbrake.notify?
Rescue the error somehow so it doesn't notify Airbrake but still gets retried?
Do this in a different way that I'm not thinking of?
Here's my code outline
class GoogleApiWorker
  include Sidekiq::Worker
  sidekiq_options queue: :critical, backtrace: 5

  def perform
    # Do stuff interacting with the Google API
  rescue Exception => e
    if is_a_mini_google_outage?(e)
      # How do I make it so this harmless error DOES NOT get reported to Airbrake but still gets retried?
      raise e
    end
  end

  def is_a_mini_google_outage?(e)
    # check to see if this is a harmless outage
  end
end
As far as I know, Sidekiq has API classes for retries and jobs. You can find your current job through its arguments (comparing them, which may not be effective) or through its jid (in which case you'd need to record the jid somewhere), check the number of retries, and then decide whether or not to notify Airbrake (a rough sketch follows the links).
https://github.com/mperham/sidekiq/wiki/API
https://github.com/mperham/sidekiq/blob/master/lib/sidekiq/api.rb
(I just can't give more detail than that.)
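As a rough, non-authoritative sketch of that idea using the Sidekiq API (recorded_jid and last_error are placeholders for values you'd have to track yourself):
require 'sidekiq/api'

# Look the job up in the retry set by its jid and read its retry count.
entry = Sidekiq::RetrySet.new.find { |job| job.jid == recorded_jid }
retry_count = entry && entry['retry_count']
# Only notify Airbrake once the job has failed enough times.
Airbrake.notify(last_error) if retry_count && retry_count >= 5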
If you're looking for a Sidekiq-side solution, see https://blog.eq8.eu/til/retry-active-job-sidekiq-when-exception.html
If you're more interested in configuring Airbrake so you don't get these errors until a certain number of retries, check Airbrake::Sidekiq::RetryableJobsFilter:
https://github.com/airbrake/airbrake#airbrakesidekiqretryablejobsfilter
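If I'm reading the Airbrake README correctly, the filter is registered in your Airbrake initializer, roughly like this (the configure block is just the usual setup, shown for context):
# config/initializers/airbrake.rb
Airbrake.configure do |c|
  c.project_id  = ENV['AIRBRAKE_PROJECT_ID']
  c.project_key = ENV['AIRBRAKE_PROJECT_KEY']
end

# Ignore errors from jobs that will still be retried; only the final
# failure (retries exhausted) gets reported.
Airbrake.add_filter(Airbrake::Sidekiq::RetryableJobsFilter.new)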
I am using Unicorn as my app server for my Rails app, and am trying to figure out why there is sometimes a non-trivial (> 5 second) delay between the start of a request and when it reaches my controller.
This is what my production.log prints out:
Started GET "/search/articles.json?q=mashable.com" for 138.7.7.33 at 2015-07-23 14:59:19 -0400
  Parameters: {"q"=>"mashable.com"}
Searching articles for keyword: mashable.com, format: json, Time: 2015-07-23 14:59:26 -0400
Notice the 7-second delay between "Started GET" and "Searching articles for keyword", which is the first thing the controller method does.
articles.json is routed to my controller method "articles" which simply does this for now:
def articles
  format = params[:format]
  keyword = params["q"]
  Rails.logger.info "Searching articles for keyword: #{keyword}, format: #{format}, Time: #{Time.now.to_s}"
end
This is my routes.rb
MyApp::Application.routes.draw do
  match '/search/articles' => 'search#articles'
  # more routes here, but articles is the first route
end
What could possibly cause this delay? Is it because a Unicorn worker is busy? Is it because a Unicorn worker is taking up too much memory, which makes the system slow?
Note: I don't believe the delay is in making any database connections but I could be wrong. The code doesn't need to make a database call, and the max connections for my database is 1000, and there are usually at most 1-2 connections.
Three thoughts:
You'll probably be better served using Puma instead of Unicorn
It could be that your system is running out of memory, or it could have plenty of memory available: install New Relic to troubleshoot where the bottleneck is
It could also be that you have more Unicorn instances than the number of connections your DB allows, in which case the instance is having to wait for others to disconnect before it can connect. This would likely manifest itself with irregular 5-second delays rather than happening every time.
Actually, it might be caused by a before_filter callback; you should check that.
I think it could be a lack of memory and thus frequent garbage collection, which freezes the whole system.
If it's a production problem, it could be caused by slow clients sending requests. New Relic and Monit are good options. You could consider sending signals to Unicorn workers to restart them to better understand the problem.
You could also try adding preload_app true in your Unicorn config to speed up the startup time of worker processes.
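For reference, a minimal config/unicorn.rb sketch with preload_app true (worker count and timeout are example values only):
# config/unicorn.rb
worker_processes 3
timeout 30
preload_app true

before_fork do |server, worker|
  # with preload_app, disconnect shared connections before forking
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # each worker needs its own database connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end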
I sought, but did not find, a max-requests-per-worker option in Unicorn similar to gunicorn's max_requests or Apache's MaxRequestsPerChild.
Does it exist?
If not, has anyone implemented it?
I'm thinking of putting it in the file where I have oobgc, since that gets control after every request anyway. Does that sound about right?
The problem is that my unicorn workers are getting big and fat, and garbage collection is taking more and more of my CPU.
I've just released the 'unicorn-worker-killer' gem. It enables you to kill a Unicorn worker based on 1) the max number of requests served and 2) process memory size (RSS), without affecting any in-flight request. It's really easy to use. First, add this line to your Gemfile:
gem 'unicorn-worker-killer'
Then add the following lines to your config.ru:
# Unicorn self-process killer
require 'unicorn/worker_killer'
# Max requests per worker
use Unicorn::WorkerKiller::MaxRequests, 3072, 4096
# Max memory size (RSS) per worker
use Unicorn::WorkerKiller::Oom, (256*(1024**2)), (384*(1024**2))
It's highly recommended to randomize the threshold to avoid killing all workers at once.
Unicorn doesn't offer a max-requests option.
The Unicorn master will re-spawn any worker that exits, and a worker will gracefully exit at the end of the current request when it receives a QUIT signal, so you can easily roll your own max-request logic into your worker request life-cycle.
With Rails, put something like the following in your application controller (alternatively, similar logic could go in a Rack middleware; see the sketch after this example):
after_filter do
  @@request_count ||= 0
  Process.kill('QUIT', $$) if (@@request_count += 1) > MAX_REQUESTS
end
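The Rack middleware variant might look roughly like this (a sketch; the class name and MAX_REQUESTS value are up to you):
# lib/middleware/max_requests.rb
class MaxRequests
  def initialize(app, max_requests = 1000)
    @app = app
    @max_requests = max_requests
    @count = 0
  end

  def call(env)
    response = @app.call(env)
    @count += 1
    # ask this worker to exit gracefully once it has served enough requests;
    # the Unicorn master will spawn a fresh one
    Process.kill('QUIT', Process.pid) if @count >= @max_requests
    response
  end
end

# in config.ru:
#   use MaxRequests, 1000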