Raise exception if RSpec fails - ruby-on-rails

I'd like to write a rake task that runs all the RSpec tests in my application. If any of the tests fail, I'd like to raise an exception in the task (later on I will catch this exception in the NewRelic alert system, which I use for other tasks as well).
Is it possible?

First, you don't need to raise an exception to let NewRelic know about a failure. You can notify NewRelic by posting error details directly to its API: https://docs.newrelic.com/docs/agents/ruby-agent/troubleshooting/sending-new-relic-handled-errors
notice_error(exception, options = { })
where exception can be an exception object (StandardError.new, for example) or a message.
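For instance, a minimal sketch of reporting a handled error through the agent's public API (the error message and custom params here are purely illustrative):

require 'newrelic_rpm'

# Report a handled error to NewRelic without raising it ourselves.
NewRelic::Agent.notice_error(
  StandardError.new('RSpec suite failed'),
  custom_params: { task: 'spec' }
)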
Also, you can skip the exception business entirely and check the exit code of the rspec command-line tool. If the tests are green, it will be zero; if there are failures, it will be non-zero. Something like this:
if system('rspec spec') # system returns true if the command exited successfully, false otherwise
  # green: all specs passed
else
  # red: at least one spec failed
end
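Putting the two together, the rake task could look something like this sketch (the task name and failure message are mine, not from the question):

# lib/tasks/specs.rake
desc 'Run the RSpec suite and raise if it fails'
task :run_specs => :environment do
  unless system('rspec spec')
    raise 'RSpec suite failed' # caught later by the NewRelic alerting setup
  end
end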

Related

Rails formatting logs to use with aws-logs and CloudWatch

AWS has this very cool log collection tool, awslogs.
However, I do not understand how I can format my logs / configure the tool to be smarter and regroup multi-line error messages. Right now AWS shows one message per line (because every line is timestamped).
My current log configuration indeed captures one new log entry per line. How can I get around that?
[rails/production.log]
file = /var/www/xxx/shared/log/production.log
log_group_name = /rails/production.log
log_stream_name = {instance_id}
time_zone = LOCAL
datetime_format = %Y-%m-%dT%H:%M:%S
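As an aside, the awslogs agent also documents a multi_line_start_pattern setting that treats lines not matching the pattern as continuations of the previous event, which is one way to keep stack-trace lines attached to the preceding timestamped line (untested with this exact setup, so treat it as a suggestion):

multi_line_start_pattern = {datetime_format}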
I actually partly solved the problem using lograge and JSON output, which is parsed correctly by Amazon and lets you regroup most requests correctly.
However, I still have some problems with errors, which are not output the same way and still generate one awslogs entry per stack-trace line.
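For reference, the lograge side of that partial solution amounts to something like this minimal sketch (exact options depend on your lograge/Rails versions):

# config/environments/production.rb
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Json.new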
EDIT: We are now using a Rails API, and regular exceptions thrown during JSON requests are rescued and rendered by a JSON:API error handler. Furthermore, we are using Rollbar to log the actual errors, so having the full error log becomes irrelevant.
In our API::ApplicationController:
# We don't want error reports for these errors
RESCUABLE_ERRORS = [
  ActionController::ParameterMissing,
  ActiveModel::ForbiddenAttributesError,
  StrongerParameters::InvalidParameter,
  Mongoid::Errors::Validations,
  Mongoid::Errors::DocumentNotFound
]

# In tests we do not want to rescue non-RuntimeError exceptions straight away,
# because they most likely indicate a real bug that should be fixed; in
# production we rescue any error so the frontend gets a JSON:API error payload
# instead of the default HTML response.
rescue_from(Rails.env.test? ? RuntimeError : Exception) do |e|
  handle_exception(e)
  notify_exception(e, 'Rescued from API controller - Rendering JSONAPI Error')
end

rescue_from(*RESCUABLE_ERRORS) do |e|
  handle_exception(e)
end
In the controllers that inherit from API::ApplicationController, we add as many rescue_from lines as needed, depending on whether we want to report the exception as an error (notify_exception) or just convert it to a JSON payload (handle_exception):
rescue_from(SPECIFIC_ERROR_CLASS) do |exception|
  handle_exception(exception) # renders a json:api error payload
  # notify_exception(exception) # optional: use ExceptionNotifier to broadcast the
  # error to email/Rollbar, etc. if this error should never happen
end
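The post doesn't show handle_exception itself; as a rough sketch of what such a helper might do (the status mapping and payload shape here are assumptions, not the author's code):

def handle_exception(exception)
  status = exception.is_a?(Mongoid::Errors::DocumentNotFound) ? 404 : 422
  render status: status, json: {
    errors: [{ status: status.to_s, title: exception.class.name, detail: exception.message }]
  }
end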

How to mark a sidekiq task/job for retry without raising an error?

I use a Sidekiq queue to process communications with an unreliable, 3rd-party API. Since this API is often down for a couple of minutes at a time and then back up again, Sidekiq has been handy. When a connection issue happens, an error is raised and Sidekiq throws the job back in the queue to be retried again later, after some time has passed.
I use NewRelic not only to help debug crashes, but also for monitoring. My problem is that the methodology above creates errors in NewRelic. If the 3rd-party API is down for more than a couple of minutes, the error count accumulates enough to trigger notifications through NewRelic.
What I'd like to do is only raise an error from my worker when a certain number of retries have occurred for a job. I'm using sidekiq_retries_exhausted to do this. My problem is that I'm not quite sure how to put jobs back in the queue after they have an error without raising an error.
Does Sidekiq provide any facilities to return a job to a queue, increment the number of retries for the job, and have it sit there until it's due to run again, as if an exception was raised in the worker class?
You raise a specific error and tell the error service to ignore errors of that type. For NewRelic:
https://docs.newrelic.com/docs/agents/ruby-agent/installation-configuration/ruby-agent-configuration#error_collector.ignore_errors
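For example, one way to do that from Ruby, assuming a custom RetryNotAnError class like the one defined below (newer agents also accept ignore lists in newrelic.yml, per the link above):

# config/initializers/newrelic.rb
# Return nil to drop an error, or return the error to report it as usual.
NewRelic::Agent.ignore_error_filter do |error|
  error.is_a?(RetryNotAnError) ? nil : error
end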
Here is what I did to keep intentional retry errors out of AirBrake:
class TaskWorker
  include Sidekiq::Worker

  class RetryNotAnError < RuntimeError
  end

  def perform(task_id)
    task = Task.find(task_id)
    task.do_cool_stuff

    if task.finished?
      logger.debug "Task #{task_id} was successful."
      return false
    else
      logger.debug "Task #{task_id} will try again later."
      raise RetryNotAnError, task_id
    end
  end
end
Tell Airbrake to ignore it:
Airbrake.configure do |config|
  config.ignore << 'RetryNotAnError'
end
It's good to make your exception name OBVIOUSLY not an error (e.g. RetryLaterNotAnError), as it will still show up in logs and such, and you don't want to freak people out when they see a bunch of them.
P.S. That said, I would really like to see Sidekiq provide an explicit, errorless retry mechanism.
If you are using Sidekiq Enterprise, another option is to use the optional set of additional error types that will then be treated as Sidekiq::Limiter::OverLimit violations.
For my purposes, I've used a new error class and then added it to the list in the config. Here are the notes from the sidekiq-ent code (not in the public sidekiq repo) on how to modify your config file:
# An optional set of additional error types which would be
# treated as a rate limit violation, so the job would automatically
# be rescheduled as with Sidekiq::Limiter::OverLimit.
#
# Sidekiq::Limiter.errors << MyApp::TooMuch
# Sidekiq::Limiter.errors = [Foo::Error, MyApp::Limited]
Inside the specific job you can specify the max_retries, or it will default to 20:
sidekiq_options max_limiter_retries: 10
Inside the job, I'll rescue the "expected" intermittent error that I'd rather not ignore completely and then raise the error I've added to the list, something like this:
rescue RestClient::RequestTimeout => e
  raise SidekiqSoftRetry.new(e.inspect)
end
Here's what that looks like in my initializer file. (Mike Perham was kind enough to respond with the option to update the global retry limit.)
class SidekiqSoftRetry < RuntimeError
end

Sidekiq::Limiter::DEFAULT_OPTIONS[:reschedule] = 10

Sidekiq::Limiter.configure do |config|
  config.errors.concat([SidekiqSoftRetry])
end
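Tying it together, a job using this setup might look like the following sketch (the worker name and endpoint are illustrative):

class SyncWorker
  include Sidekiq::Worker
  sidekiq_options max_limiter_retries: 10

  def perform(record_id)
    RestClient.get("https://example.com/api/records/#{record_id}")
  rescue RestClient::RequestTimeout => e
    # Re-raised as a limiter error, so Sidekiq reschedules the job
    # rather than recording an ordinary retry/failure.
    raise SidekiqSoftRetry.new(e.inspect)
  end
end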

How to retry a rake task if you get a Bad Gateway error response from a web source

I am trying to run a rake task to get all the data with a specific tag from Instagram, and then input some of the data into my server.
The task runs just fine, except sometimes I'll get an error response. It seems to happen at random, and since it's a fairly long-running task, it happens eventually.
This is the error on my console:
Instagram::BadGateway: GET https://api.instagram.com/v1/tags/xxx/media/recent.json?access_token=xxxxx&max_id=996890856542960826: 502: The server returned an invalid or incomplete response.
When this happens, I don't know what else to do except run the task again starting from that max_id. However, it would be nice if the whole thing could automate itself and retry from that point when it hits that error.
My task looks something like this:
task :download => :environment do
  igs = Instagram.tag_recent_media("xxx")
  begin
    sleep 0.2
    igs.each do |ig|
      dl = Instadownload.new
      dl.instagram_url = ig.link
      dl.image_url = ig.images.standard_resolution.url
      dl.caption = ig.caption.text if ig.caption
      dl.taken_at = Time.at(ig.created_time.to_i)
      dl.save!
    end
    if igs.pagination.next_max_id?
      igs = Instagram.tag_recent_media("xxx", max_id: igs.pagination.next_max_id)
      moreigs = true
    else
      moreigs = false
    end
  end while moreigs
end
Chad Pytel and Tammer Saleh call this the "Fire and Forget" antipattern in their book Rails Antipatterns:
Assuming that the request always succeeds or simply not caring if it fails may be valid in rare circumstances, but in most cases it's insufficient. On the other hand, rescuing all the exceptions would be a bad practice as well. The proper solution would be to understand the actual exceptions that will be raised by the external service and rescue those only.
So what you should do is wrap your code block in a begin/rescue block with the appropriate set of errors raised by Instagram (the list of those errors can be found here). I'm not sure which particular line of your code snippet ends with the 502 code, so just to give you an idea of what it could look like:
begin
  dl = Instadownload.new
  dl.instagram_url = ig.link
  dl.image_url = ig.images.standard_resolution.url
  dl.caption = ig.caption.text if ig.caption
  dl.taken_at = Time.at(ig.created_time.to_i)
  dl.save!
rescue Instagram::BadGateway => e # list of acceptable errors can be expanded
  retry # restart from the beginning of the begin block
end
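Note that a bare retry will loop forever if the 502s persist; a capped variant (the attempt limit and back-off are arbitrary values of mine) might look like:

attempts = 0
begin
  dl = Instadownload.new
  # ... same attribute assignments as above ...
  dl.save!
rescue Instagram::BadGateway
  attempts += 1
  raise if attempts >= 5 # give up and surface the error after five tries
  sleep 2**attempts      # simple exponential back-off
  retry
end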

What kind of exception is raised by ActiveRecord::lock!?

I am using lock! in my code and want to catch the exception thrown if lock! fails for some reason (e.g. it cannot get the lock). What kind of exceptions can lock! throw? I checked the Ruby docs but couldn't find the specific exception classes.
Thanks.
When in doubt, probe.
Consider the following pair of methods:
def long_hold
  ActiveRecord::Base.transaction do
    u = User.find(220)
    u.lock!
    sleep 100.seconds
    u.email = "foo@bar.com"
    u.save!
  end
end

def short_hold
  ActiveRecord::Base.transaction do
    u = User.find(220)
    u.lock!
    u.email = "foo@bar.com"
    u.save!
  end
end
In my setup (OSX 10.11, ruby 2.2.4, rails 4.2, postgres 9.5), running long_hold in one rails console and then running short_hold in a second console, I observe short_hold blocks until long_hold completes; moreover, instrumenting the code with puts, we see that while long_hold is sleeping, short_hold is waiting to acquire the lock.
Assuming no caveats about the independence of rails consoles, this suggests that no exceptions are thrown if a second process tries to lock a row that is already locked, but that process blocks until the first completes.
Here is the source for that locking call. It calls reload, whose source looks like this:
# File lib/active_record/base.rb, line 2333
def reload(options = nil)
  clear_aggregation_cache
  clear_association_cache
  @attributes.update(self.class.find(self.id, options).instance_variable_get('@attributes'))
  @attributes_cache = {}
  self
end
So when you call reload(:lock => lock), as the call to lock! does, it is really updating the attributes of that record.
There are a lot of different situations here. You could try to lock a record that doesn't exist, or lock one that has been locked elsewhere. What error are you interested in catching?
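To make those two situations concrete: a missing record raises ActiveRecord::RecordNotFound, and on Postgres you can opt out of blocking with a NOWAIT locking clause, in which case an already-held lock raises instead of waiting. A sketch (the rescue handling is illustrative):

begin
  ActiveRecord::Base.transaction do
    u = User.find(220)            # raises ActiveRecord::RecordNotFound if missing
    u.lock!("FOR UPDATE NOWAIT")  # raises instead of blocking if already locked
    u.update!(email: "foo@bar.com")
  end
rescue ActiveRecord::RecordNotFound
  # no such row
rescue ActiveRecord::StatementInvalid => e
  # On Postgres, e.cause is PG::LockNotAvailable when the row is locked elsewhere.
  Rails.logger.warn("Could not acquire lock: #{e.message}")
end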

How can we tell delayed_job when a delayed task fails so it will auto-retry?

Our app is hosted on Heroku and we use Delayed::Job when sending info to a remote system (via a GET to a URL with some URL params).
The remote system usually returns a success code, but when it's really busy it returns a try-again code.
Suppose our method is:
require 'open-uri' # provides the open(url) used below

def send_info
  the_url = "http://mydomain.com/dosomething?arg=#{self.someval}"
  the_result = open(the_url).read
  successflag = get_success_flag_from(the_result)
end
and somewhere in our code we do
@widget.delay.send_info
and that all works fine.
Except it does not automatically handle the case where the remote said to try back later.
Is there any way for the send_info method (which is what delayed job will execute) to "tell" delayed_job "retry me again"? Do we need to throw some custom exception or something?
Raising any kind of exception ought to cause delayed_job to requeue it (subject to only-trying-so-many-times); if you don't especially need a custom exception you can just raise a RuntimeError.
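In other words, the simplest route is to raise when the remote answers with the try-again code; a sketch (TryAgainLater is an illustrative name of mine, and get_success_flag_from is the question's helper, not Delayed::Job API):

class TryAgainLater < RuntimeError; end

def send_info
  the_url = "http://mydomain.com/dosomething?arg=#{self.someval}"
  the_result = open(the_url).read
  # Raising makes delayed_job reschedule the job with its usual back-off,
  # up to the configured max_attempts.
  raise TryAgainLater, 'remote busy' unless get_success_flag_from(the_result)
end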
