How to prevent DelayedJob from re-queuing for certain exceptions - ruby-on-rails

We have a delayed_job class that gets unprocessable responses for certain API requests. The responses aren't going to change, so there is no point in re-queuing the job when it fails. However, we still want to re-queue other failures, just not this one.
I can avoid the re-queue by capturing the exception in perform, but then the failure isn't logged at all. I would like it logged as an error, not as a success.
It seems to me there would be a way to indicate certain exception classes as being exempt from re-queuing - is there?
I thought of using the error hook to simply delete the job, but my guess would be that something would explode if the job disappeared out from under DJ while it was processing it.
Any ideas?
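One approach (a sketch, not a built-in delayed_job feature; the `UnprocessableResponse` class and the logger wiring are hypothetical stand-ins): rescue only the exceptions you consider permanent inside `perform`, log them as errors yourself, and swallow them so DJ sees a successful run and never re-queues. All other exceptions still propagate and retry as usual.

```ruby
require "logger"

# Hypothetical exception for API responses that will never succeed.
class UnprocessableResponse < StandardError; end

class ApiRequestJob
  PERMANENT_FAILURES = [UnprocessableResponse].freeze

  def initialize(logger = Logger.new($stderr))
    @logger = logger
  end

  def perform
    call_api
  rescue *PERMANENT_FAILURES => e
    # Logged as an error, but swallowed: delayed_job sees a successful
    # run and does not re-queue. Any other exception still propagates
    # and follows DJ's normal retry behaviour.
    @logger.error("Not re-queuing: #{e.class}: #{e.message}")
  end

  def call_api
    # Stand-in for the real API call.
    raise UnprocessableResponse, "422 from upstream"
  end
end
```

The trade-off is that the job row is recorded as completed, so the only trace of the failure is your own log line.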

Related

Boundary error events do not seem to work in Processmaker 4

I have tried defining a process like in the image:
My understanding was that boundary error events would trigger if an error occurs in the task they are bound to. In this case, tasks A and B are scripts that make an HTTP request. When for some reason the service they call is not available, the code throws an exception (either a timeout or an empty response). If I do not use the boundary error events, the process simply fails and reports an error.
The idea behind this workflow was that if an error of this sort occurred, the boundary error event would direct the flow of the process to a task assigned to the administrator. The administrator could then check whether the services were running and, once the error was corrected, proceed with the process by executing those tasks again.
Unfortunately, when I use the boundary error event, instead of the process failing as before, it just stays in "in progress" status, but no task gets assigned to the administrator.
Am I using the boundary error events wrong? Or are they simply not working in Processmaker 4?
The boundary error event is definitely catching the error, because the task is not failing, but it is not directing the flow to the form that I designed, so the process cannot continue.
Well, it turns out that boundary error events do work. The issue seems to be related to the task's assignment. I had assigned it to an "Administrators" group and it didn't work, but when I assigned it directly to the Admin user, it worked perfectly.

In Rails 3, how do I call some code via a controller but completely after the Request/Response cycle is done?

I have a very weird situation: I have a system where a client app (Client) makes an HTTP GET call to my Rails server, and that controller does some handling and then needs to make a separate call to the Client via a different pathway (i.e. it actually goes via Rabbit to a proxy and the proxy calls the Client). I can't change the pathway for that different call and I can't change the Client at all (it's a 3rd party system).
However: the issue is: the call via the different pathway fails UNLESS the HTTP GET from the client is completed.
So I'm trying to figure out: is there a way to have Rails finish the HTTP GET response and then make this additional call?
I've tried:
1) after_filter: this doesn't work because the after filter is apparently still within the Request/Response cycle so the TCP/HTTP response back to the Client hasn't completed.
2) enqueuing a worker: this works, but it is not ideal because if the workers are backed up, the call back to the Client may not happen right away, and it really needs to happen right after the Client calls the Rails app
3) starting a separate thread: this may work, but it makes me nervous: adding threading explicitly in Rails could be fraught with peril.
I welcome any ideas/suggestions.
Again, in short, the goal is: process the HTTP GET call to the Rails app, return a 200 OK to the Client to completely finish the HTTP request/response cycle, and then call some extra code.
I can provide any further details if that would help. I've found both #1 and #2 as recommended options but neither of them are quite what I need.
Ideally, there would be some "after_response" callback in Rails that allows some code to run but after the full request/response cycle is done.
Possibly use an around filter? Around filters let you define methods that wrap every action Rails calls. With an around filter on the above controller, I could control the execution of every action: run code before calling the action and after it, and even skip the action entirely under certain circumstances.
So what I ended up doing was using a gem that I had long ago helped with: Spawnling
It turns out this works well, although it required a tweak to get it working with Rails 3.2. It lets me spawn a thread to make the extra, out-of-band callback to the Client while the normal controller process completes, and I don't have to worry about thread management or ActiveRecord connection management; Spawnling handles that.
It's still not ideal, but pretty close. And it's slightly better than enqueuing a Resque/Sidekiq worker as there's no risk of worker backlog causing an unexpected delay.
I still wish there was an "after_response_sent" callback or something, but I guess this is too unusual a request.
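The thread-spawning idea (option 3, which is essentially what Spawnling does under the hood) reduces to the plain-Ruby sketch below; the function and parameter names are illustrative. In a real Rails app you would also need to manage ActiveRecord connections across threads, which is exactly what Spawnling takes care of.

```ruby
# Do the main work, hand the out-of-band callback to a background
# thread, and return immediately so the HTTP response can complete.
def respond_then_notify(payload, notifier)
  result = "processed #{payload}"                # normal controller work
  worker = Thread.new { notifier.call(result) }  # out-of-band call
  [result, worker]                               # return without waiting
end
```

The caller gets its result back without waiting on the notifier; the notification happens concurrently on the spawned thread.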

Heroku - Issue due to multiple Dynos

In my Rails app, I'm using the SendGrid Parse API, which posts incoming mail to my server. Every now and then SendGrid's Parse API submits the same email twice.
When I get a posted mail I place it in the IncomingMail model. So, to prevent this double-submission issue, I check each IncomingMail during processing for a duplicate in the table within the last minute. That tested great in development, where it caught all the double submissions.
Then I pushed it live to Heroku, where I have 2+ dynos, and it didn't work. My guess is that it has something to do with replication. That being the case, how do scalable sites with multiple servers deal with something like this?
Thanks
You should look at using a background job queue. Heroku has "workers" (formerly Delayed Job). Rather than sending the email immediately, you push it onto the queue. Then one or more Heroku workers need to be added to your account, and each one will pull jobs in sequence. This means there can be a short delay (depending on load) before the email is sent, but the delay is not visible to the user, and if there is a lot of email to send you just add more workers.
Waiting on an external service like an email provider during each user action is dangerous, because any network problem can take down your site as users "wait" for their HTTP requests to be answered while the app is blocked on these third-party calls.
In this situation with workers each job would fail but would be retried and eventually succeed.
This sounds like it could be a transaction issue. If you have multiple workers running simultaneously, their operations may be interleaved. For instance, this sequence of events would result in two mails being sent:
Worker A : Checks for an existing record and doesn't find one
Worker B : Checks for an existing record and doesn't find one
Worker A : Post to Sendgrid
Worker B : Post to Sendgrid
You could wrap everything in a transaction to keep this from happening. Something like this should do it:

class IncomingMail < ActiveRecord::Base
  def check_and_send(email_address)
    transaction do
      # your existing code for preventing duplicates and sending
      # (note: for safety across separate dynos, pair the duplicate
      # check with a unique database index or a row lock, since a
      # plain transaction does not serialize concurrent reads)
    end
  end
end
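The race in the worker A/B sequence above is a classic check-then-act problem: the duplicate check and the insert must happen atomically. Across dynos only the database can provide that atomicity (for example, a unique index on a message-id column), but the shape of the fix can be illustrated in-process with a mutex. All names below are made up for illustration.

```ruby
require "set"

# Process-local illustration of the check-then-act race: the check and
# the record must be a single atomic step. In a multi-dyno deployment,
# a unique database index plays the role of this mutex-guarded set.
class DuplicateGuard
  def initialize
    @seen = Set.new
    @lock = Mutex.new
  end

  # Returns true exactly once per key, even under concurrency:
  # Set#add? inserts and returns the set, or nil if already present.
  def first_time?(key)
    @lock.synchronize { @seen.add?(key) ? true : false }
  end
end
```

With this shape, "send the mail" happens only when `first_time?` returns true, so interleaved workers can no longer both pass the check.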

Delayed::Jobs keeps sending emails if it fails

I use delayed_job to send emails, but I think that if it fails to send to even a single email address, it reruns the entire batch.
How do I make it skip an email address if it's not correct?
If an exception occurs, delayed_job will treat the job as failed and keep rerunning it.
You should capture exceptions to make sure that, at the end, the job is always considered successful.
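One way to apply that advice (a sketch; `BatchMailerJob` and the injected deliverer are made up for illustration): rescue per address, so one bad recipient doesn't fail, and therefore re-run, the whole batch.

```ruby
class BatchMailerJob
  def initialize(addresses, deliverer)
    @addresses = addresses
    @deliverer = deliverer # callable that sends to one address
  end

  def perform
    failures = []
    @addresses.each do |addr|
      begin
        @deliverer.call(addr)
      rescue StandardError => e
        # Record the bad address and keep going. Because perform never
        # raises, delayed_job counts the run as successful and will not
        # re-run the whole batch.
        failures << [addr, e.message]
      end
    end
    failures
  end
end
```

An even cleaner variant is to enqueue one job per address, so each recipient gets its own retry behaviour.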

executing code in rails after response sent to browser

Does Rails provide a way to execute code on the server after the view is rendered and after the response is sent to the browser?
I have an action in my application that performs a lot of database transactions, which results in a slow response time for the user. What I'd like is to (1) perform some computations, (2) send the results of those computations to the browser, and then (3) save the results to the database.
It sounds like you want to implement a background job processor. This allows you to put the job into a queue to be processed asynchronously and for your users to not notice a long page load.
There are many options available. I have used delayed_job with no issues. Another popular one lately, which I have not used, is Resque.
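Stripped of the framework, the suggestion looks like the sketch below: the request handler does the fast computation, "responds", and only enqueues the slow save; a worker process drains the queue later. In a real app, delayed_job or Resque plays the role of `JobQueue` (all names here are illustrative).

```ruby
# Minimal in-memory job queue standing in for delayed_job/Resque.
class JobQueue
  def initialize
    @jobs = Queue.new
  end

  def enqueue(&block)
    @jobs << block # stored, not executed, during the request
  end

  # In production a separate worker process runs the jobs.
  def drain
    @jobs.pop.call until @jobs.empty?
  end
end

# (1) compute, (3) defer the slow save, (2) return to the browser now.
def handle_request(params, queue, store)
  result = params.sum                 # fast computation
  queue.enqueue { store << result }   # slow DB work deferred
  result                              # response goes out immediately
end
```

The user's response time then only includes the computation, never the database writes.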
