How do you do exception management with delayed job? - ruby-on-rails

My application needs to parse a user-generated CSV file. Once uploaded, the application queues it in delayed_job to be processed. My question is: how do you usually handle the exceptions that might happen during the content parsing stage? Do you store all the error messages in exception objects before displaying them to the user?
Thank you.

As the job is delayed, I would like to report all the errors in the CSV file at once, so that users do not end up iterating multiple times (fixing one error at a time).
One thing you can do is store all the errors in a DB (in a suitable object). This would also enable you to analyze what kinds of errors users generally run into and help them reduce those.
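A minimal sketch of that idea using delayed_job's custom-job style; CsvImport (with a file_path and a csv_import_errors association), CsvImportError and Record are hypothetical names used only for illustration:

require 'csv'

# Collect every row-level error, then persist them in one place for the user.
class CsvImportJob < Struct.new(:csv_import_id)
  def perform
    csv_import = CsvImport.find(csv_import_id)
    failures = []

    CSV.foreach(csv_import.file_path, headers: true).with_index(2) do |row, line|
      begin
        Record.create!(row.to_hash)          # whatever per-row parsing/persisting you do
      rescue => e
        failures << { line: line, message: e.message }
      end
    end

    # Persist every error in one go so the user sees all problems in a single pass.
    failures.each { |f| csv_import.csv_import_errors.create!(f) }
  end
end

# Delayed::Job.enqueue CsvImportJob.new(csv_import.id)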

Related

JSR-352: Save chunk checkpoint after ItemReader reads items

Using a JSR-352 batch job along with Java EE, I'm trying to process items in chunks from a source, in partitions. On a retriable exception I want to be able to return to a past checkpoint, so I can get the items already read from the source.
The nature of the source is such that in a parallel environment I cannot request the same chunk of items twice. The only feasible way to get exactly the same items when reading twice is to restart the whole job.
I need to write a generic ItemReader which can manage sources of this kind (so it can be reusable). This basically means I want to find a clean design/implementation for such a reader.
To achieve the required ItemReader behavior for processing the source, what I currently do is fetch the items at the beginning of readItem() if they have not yet been fetched for the current chunk, and then iterate through them one by one. To manage retriable exceptions I'm trying to use the checkpoint properties of the ItemReader.
The problem I'm facing is that checkpoints are loaded in the open(...) method, before readItem(), and saved only after the chunk has succeeded. This creates a problem: I cannot save all the items of the chunk into a valid checkpoint before I actually have to retry the chunk after a retriable exception.
My question is: is there a way to augment the behavior of checkpoints so they are saved after the initial readItem()? Or do you know of another clean strategy that avoids additional listeners or userTransientData, which would make the reader hard to integrate into other batch job steps with the same read behavior?

Delayed Job executes with wrong data when I have a big amount of jobs

I read a lot and saw that Delayed Job doesn't actually use the serialized data; it retrieves the record using the deserialized id.
This isn't the behavior I expected when I chose that gem, but I can deal with it.
The real problem is that I use DJ to fire some alerts based on some data in an after_save callback, and sometimes the alert fires on data from further in the future than when the job was enqueued. So basically, if I save a medical result three times for different reasons and the third save finalizes it, I will fire three alerts, because DJ runs three times against the finalized result.
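To make that concrete, here is a rough sketch of the kind of callback being described; MedicalResult, finalized? and AlertService are names made up for illustration:

class MedicalResult < ActiveRecord::Base
  # Runs on every save, so three saves enqueue three delayed jobs.
  after_save :fire_alert

  def fire_alert
    # Delayed Job only stores a reference to the record in the handler, so each
    # of the three jobs reloads the *current* (finalized) record when it runs
    # and fires the alert again.
    AlertService.notify(self) if finalized?
  end
  handle_asynchronously :fire_alert
end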
Is there a way to enqueue a job in the same queue, for the same method, just once? I saw that the handler isn't exposed and handle_asynchronously doesn't accept a parameter to identify the process.
The best solution would be to work directly on the serialized data, but executing the job only once would also be acceptable.
Thank you in advance!

Suggestion for trigger that sends email if threshold is broken

This is quite a broad question, but I'll try to summarise it as best I can.
I have an MVC front end which displays/allows processing of records which are classed as outstanding. I also have a scheduled console app which runs nightly and attempts to resolve each of these records using some logic I wrote.
I have a new requirement: an email should be sent every time the total number of outstanding records exceeds a certain amount, and this amount needs to be configurable.
The table will contain every record with a flag to say whether it has been resolved or not, so I will need to count the outstanding records and then fire an email if the threshold is broken.
I initially thought about adding a SQL Server trigger on insert however I soon realised that if no more records were added for a few days but the total number stayed above the threshold because nobody resolved them, then no further email would be sent.
I need the email to send every day on a schedule independently of insert/update.
So now I'm thinking possibly a SQL Server job, or an SSIS package or even a service which runs, but I'm aware this threshold number needs to be configurable.
So what would be the quickest, simplest solution to my requirements? I'm open to any suggestion as long as it ticks all the boxes.
Given that the OP already has a console app running on a schedule, the most logical choice would be to simply add this check, along with the email-sending logic, to the console app. It will be much easier to send emails that way anyway, especially if you employ something like Postal, which lets you use MVC-style views to create your emails.
An SQL Server scheduled job seems to me to be the simplest way to go.
You can add a table to your database that holds the threshold number and read its value from there.
In many cases a GeneralParams table is a good thing to have anyway.
The other option you mentioned (a Windows service) is also configurable in many ways: you can use a GeneralParams table, the App.config file of the service (but you will have to restart the service every time you change it), or even a simple text file; anything goes. The downside is that it lives outside of your SQL Server, but the upside is that it is probably easier to send emails from.

Download data option for customers

I have a multi-tenant Rails app where each customer's data is separated with a global scope. Now I want to give customers the option to download all their own data in a single download. What is the best way to achieve this? Is it best to output everything into a CSV file?
Putting it all into one CSV file is likely to cause you headaches.
I agree with Alex to do it as a background job if you go with CSV.
I will walk you through 2 approaches (CSV and Feed) and then you can choose what works for you.
CSV
Normally there are many tables you want to export. If you put them all into one CSV file, it will be a bit of a messy file.
Instead, I would set up a nightly process per customer, per table.
These generate CSVs for each customer for each table and stage them.
Finally, for each customer I would bring those files together into a compressed archive and prep it for delivery (web download, FTP, email, etc.).
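A rough sketch of that nightly staging step, assuming a Customer model, a hypothetical list of exportable tables scoped by customer_id, and the rubyzip gem for compression:

require 'csv'
require 'fileutils'
require 'zip'   # rubyzip gem (an assumption)

EXPORT_TABLES = [Order, Invoice, Message]   # hypothetical per-customer tables

Customer.find_each do |customer|
  dir = Rails.root.join("tmp", "exports", customer.id.to_s)
  FileUtils.mkdir_p(dir)

  # Stage one CSV per table for this customer.
  csv_paths = EXPORT_TABLES.map do |klass|
    path = dir.join("#{klass.table_name}.csv")
    CSV.open(path, "w") do |csv|
      csv << klass.column_names
      klass.where(customer_id: customer.id).find_each do |record|
        csv << record.attributes.values_at(*klass.column_names)
      end
    end
    path
  end

  # Bring the per-table files together into one compressed archive for delivery.
  Zip::File.open(dir.join("export.zip").to_s, Zip::File::CREATE) do |zip|
    csv_paths.each { |p| zip.add(File.basename(p), p.to_s) }
  end
end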
The downside really is the lack of real time.
If you need real time (or if the data set is large), then you have to think about the impact this will have on your production database. It could cause serious performance degradation over time.
One option to get around this is to have read-only replica databases that you can deploy/utilize as needed.
Change Management
Instead of creating these ever-growing files every night, or on each request, you can process data as it changes.
For example, if your customers really need this data, it may be because they want to drop it into their own database. In that case I would move away from downloading CSVs or Excel files and offer an API.
When data changes come into your system, you notify interested components of the change. This way they do not have to go to the DB to get the changes. The API can have a pickup location that serves up the changed data whenever it exists.
We have used this mechanism in large scale, high volume environments with great success.
Push Notifications
Finally, there are web hooks. Basically, when the data changes, you post it to the customer's web server.
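A bare-bones sketch of that push, assuming each customer record stores a webhook_url:

require 'net/http'
require 'json'
require 'uri'

# Post a changed record to the customer's endpoint (webhook_url is an assumed column).
def push_change(customer, record)
  uri  = URI.parse(customer.webhook_url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = (uri.scheme == "https")

  request = Net::HTTP::Post.new(uri.path, "Content-Type" => "application/json")
  request.body = { table: record.class.table_name, data: record.attributes }.to_json
  http.request(request)
end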
I would suggest that if you go the CSV route, you look at the long-term read impacts. You may not need to make a change now, but you should have an item and a solution ready in your plan.
Finally, I would break the work into many small tasks rather than one long-running task.
CSV is a commonly used format for this. There is a good RailsCast on how to achieve this: http://railscasts.com/episodes/362-exporting-csv-and-excel
From my experience I can advise you to implement it as a scheduled background process, because the export can be expensive in resources and take a long time to finish. After the task is finished you can, for example, email the user a download link.
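A minimal sketch of that background approach with delayed_job and a Rails 3 style mailer; Customer, its records association, and ExportMailer are assumed names:

require 'csv'
require 'fileutils'

class CustomerExportJob < Struct.new(:customer_id)
  def perform
    customer = Customer.find(customer_id)
    path = Rails.root.join("tmp", "exports", "customer_#{customer.id}.csv")
    FileUtils.mkdir_p(path.dirname)

    CSV.open(path, "w") do |csv|
      csv << ["id", "name", "created_at"]                       # example columns only
      customer.records.find_each { |r| csv << [r.id, r.name, r.created_at] }
    end

    # Once the slow part is done, email the user a link to the finished file.
    ExportMailer.download_ready(customer, path.to_s).deliver
  end
end

# Delayed::Job.enqueue CustomerExportJob.new(customer.id)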

How to fail gracefully and get notified if screen scraping fails in ruby on rails

I am working on a Rails 3 project that relies heavily on screen scraping to collect data, mainly using Nokogiri. I'm aggregating essentially the same data everywhere, but I'm grabbing it from many different sources, and as time goes on I will be adding more and more. However, I am acutely aware that screen scraping can be notoriously unreliable.
As such I am interested in how other people have handled the problem of verifying the data and then also getting notified if it is failing.
My current plan is as follows.
I am going to have validation on my model for most of the fields. If they fail, I won't get bad data into my system, although logging this failure in a meaningful way is still a problem.
I was thinking of some kind of counter where, after so many failures from a particular source, I somehow turn it off. I'm not sure how to keep track of that; I guess the only way is to have a field on my Source model that counts failures and can be reset.
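A small sketch of that counter on the Source model (the column names and the threshold are assumptions):

class Source < ActiveRecord::Base
  MAX_FAILURES = 5   # arbitrary cutoff for this sketch

  # assumed columns: failure_count (integer, default 0), enabled (boolean)
  def record_failure!
    increment!(:failure_count)
    update_attribute(:enabled, false) if failure_count >= MAX_FAILURES
  end

  def record_success!
    update_attribute(:failure_count, 0)   # reset the counter after a good scrape
  end
end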
Logging is the 800-pound gorilla I'm not sure how to deal with. I could just write to the standard logs, but if something fails I'd like to store the entire HTML so I can figure it out. I also need to notify myself somehow so I can address the issues. I thought of maybe creating a model for all this and storing it in the database. If I did this I'd probably have to store the HTML on S3 or something. I'm running this on Heroku, so that influences what I can do.
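One possible shape for that failure record, with the HTML parked on S3; the ScrapeFailure model and the aws-sdk v1 style bucket handle are assumptions:

# assumed columns on ScrapeFailure: source_id, message, html_key
class ScrapeFailure < ActiveRecord::Base
end

def log_scrape_failure(source, error, html)
  key = "scrape_failures/#{source.id}/#{Time.now.to_i}.html"
  s3_bucket.objects[key].write(html)   # s3_bucket: an AWS::S3 bucket handle (aws-sdk v1 style)
  ScrapeFailure.create!(source_id: source.id, message: error.message, html_key: key)
end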
Set up begin and rescue blocks around every field. I was trying to figure out a nicer Ruby way to code this so I don't just end up with a page of them. Although some fields are just a straight doc.css_at("#whatever"), there are quite a number that require various formatting or calculations, so I think it makes sense to rescue those so I can log what went wrong. The other option is to let the exception bubble up and catch it when I try to create the model.
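One way to avoid a page of begin/rescue blocks is a tiny wrapper that rescues, logs, and returns nil for each field; a sketch, where scrape_log is an assumed logging helper and at_css is the standard Nokogiri lookup:

def extract(field_name)
  yield
rescue => e
  scrape_log(field_name, e)   # record the failure however you decide above
  nil
end

# Simple lookups and calculated fields go through the same wrapper:
title = extract(:title) { doc.at_css("#title").text.strip }
price = extract(:price) { doc.at_css("#price").text.gsub(/[^\d.]/, "").to_f }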
Anyway I'm sure I'm not even thinking of everything but that is why I'm trying to figure out how other people have handled this problem.
Our team does something similar to this, so here's some ideas:
We use a really high-level begin/rescue transaction to make sure we don't get into weird half-loaded states:
begin
  ActiveRecord::Base.transaction do
    # ... try to load a data source ...
  end
rescue
  # ... error handling ...
end
Email/page yourself when certain errors occur. We use exception_notifier, but if you're sitting on Heroku the Exceptional plugin also seems like a good option. I've also heard of people having success with Hoptoad.
Capturing state is VERY important for troubleshooting issues. Something that's worked quite well for us is GMail. Our loaders effectively have two phases:
capture data and send it to our gmail account
log into gmail, download latest data and parse it
The second phase is the complex one, and if it fails a developer can simply log into the Gmail account and easily inspect the failed message. This process has some limitations (per-email and per-mailbox storage limits, the two-phase pipeline, etc.), and we started doing it because we had no other option, but it has proven shockingly resilient and convenient. Keep email in mind as a cheap/easy way to store noncritical state. We didn't start out thinking of using it that way and are now really glad we do. Logging into Gmail feels better than digging through log files.
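For reference, a rough sketch of the first phase using the mail gem; the addresses and SMTP credentials are placeholders:

require 'mail'
require 'time'

Mail.defaults do
  delivery_method :smtp, address: "smtp.gmail.com", port: 587,
                         user_name: "loader.inbox@example.com", password: "secret"
end

# Phase 1: capture the raw data and park it in the mailbox for later parsing.
def stage_capture(source_name, html)
  Mail.deliver do
    from     "loader@example.com"
    to       "loader.inbox@example.com"
    subject  "#{source_name} #{Time.now.utc.iso8601}"
    add_file filename: "#{source_name}.html", content: html
  end
end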
Build a dashboard UI. We have a simple dashboard with a grid of sources by day; each box is colored either red or green based on whether the load for that source on that day succeeded. You can go one step further and set up a monitor on this UI (mon.itor.us or equivalent) that alarms if some error threshold is met.
