I have an application that creates loads of PDF documents, which sometimes take a while to generate, so I moved all the PDF creation into a Resque background job. However, some PDFs also need to be sent out by mail, and that is now a problem because I don't know how to tell the mailer to wait for the PDFs to be created.
Before I had this:
@contract.create_invoice
ContractMailer.send_invoice(@contract).deliver
Now I have this:
Resque.enqueue(InvoiceCreator, @contract)
ContractMailer.send_invoice(@contract).deliver
So ContractMailer always fails because the PDF has not been created yet. Does anyone have an idea how to solve this elegantly?
Thanks!
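One way to handle this (just a minimal sketch, assuming the contract can be looked up by id inside the job) is to deliver the mail from within the Resque job itself, so it can only run once the PDF exists:

class InvoiceCreator
  @queue = :invoices

  def self.perform(contract_id)
    contract = Contract.find(contract_id)
    contract.create_invoice
    # The PDF now exists, so it is safe to mail it out
    ContractMailer.send_invoice(contract).deliver
  end
end

# In the controller, enqueue the id rather than the record itself:
Resque.enqueue(InvoiceCreator, @contract.id)

That way the mail can never go out before the invoice has been written.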
Is it possible to access the information being written to a Rails log file without reading the log file itself? To be clear, I do not want to ship the log file as a batch process; rather, every event that is written to the log file should also be sent, via a background job, to a separate database.
I have multiple apps running in Docker containers and wish to save the log entries of each into a shared telemetry database running on the server. Currently the logs are formatted with lograge, but I have not figured out how to access this information directly and send it to a background job to be processed (as stated before, I would like direct access to the data being written to the log, and to send that via a background job).
I am aware of Rails.logger.instance_variable_get(:@logger); however, what I am looking for is the actual data being saved to the logs, so I can ship it to a database.
The reasoning behind this is that there are multiple Rails APIs running in Docker containers. I have an after_action set up to run a background job that I hoped would send just the individual log entry, but this is where I am stuck. Sizing isn't an issue, as the data stored in this database will be purged every 2 weeks. This is more of a tool for the in-house devs to track telemetry through a dashboard. I appreciate you taking the time to respond.
You would probably have to go through your app code and manually save the output from the logger into a table/field in your database inline. Theoretically, any data that ends up in your log should be accessible from within your app.
Depending on how much data you're planning on saving, this may not be the best idea, as it has the potential to grow your database extremely quickly (it's not uncommon for apps to create GBs worth of logs in a single day).
You could write a background job that opens the log files, searches for data, and saves it to your database, but the configuration required for that will depend largely on your hosting setup.
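A very rough sketch of that idea, with a hypothetical LogEntry model and a hard-coded log path; a real version would also need to track how far it had already read so lines are not imported twice:

class LogImportJob
  def self.perform(path = Rails.root.join("log", "production.log").to_s)
    # Read the log file line by line and persist each entry
    File.foreach(path) do |line|
      LogEntry.create!(raw: line.chomp)
    end
  end
end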
So I got a solution working, and in fairness it wasn't as difficult as I had thought. As I was using the lograge gem for formatting the logs, I created a custom formatter by following the guide in this link.
As I wanted the JSON format, I just copied that formatter, and at this point I was able to put in the call to a background job and also strip out some data I did not want.
module Lograge
  module Formatters
    class SomeService < Lograge::Formatters::Json
      def call(data)
        # Drop fields we don't need in the telemetry database
        data = data.delete_if do |k|
          [:format, :view, :db].include?(k)
        end
        # Faktory job to ship the data
        LogSenderJob.perform_async(data)
        # Json#call handles the actual ::JSON.dump of the remaining data
        super
      end
    end
  end
end
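The formatter then gets wired into lograge in the usual way, roughly like this in an environment file or initializer:

config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::SomeService.new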
This was just one solution to the problem, made easier by the fact that I could already get the data formatted via lograge. Another solution would be to create a custom logger and, in there, tell it to write to a database if necessary.
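For reference, that custom-logger route might look roughly like the sketch below. LogSenderJob is the same assumed job as above, and shipping from inside the logger means being careful that failures (or the job's own logging) never feed back into the logger.

class ShippingLogger < ActiveSupport::Logger
  def add(severity, message = nil, progname = nil, &block)
    message ||= block ? block.call : progname
    # Ship a copy of the raw entry; never let shipping break logging itself
    LogSenderJob.perform_async(severity: severity, message: message.to_s) rescue nil
    super(severity, message, progname)
  end
end

# e.g. in config/application.rb
config.logger = ShippingLogger.new(Rails.root.join("log", "#{Rails.env}.log"))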
I'm using paperclip 4.1.0 with Amazon S3.
I was wondering why requests were so slow and found that "copy_to_local_file" is called whenever I am updating attributes of a model with an attachment, even if it's just one attribute unrelated to the attachment (in my case, a cache_count, which means that every time someone votes an instance up, the attachment is downloaded locally!).
I understand it is used in case a rollback is required, but it seems overkill when the attribute isn't directly related to the attachment.
Am I using paperclip in the wrong way, or is it something that could be improved?
Thanks for your help
Just my 2 cents:
The attachment gets downloaded locally only after ActiveRecord::Base#save is called.
Would calling #save in a daily cron job instead help with the load?
Otherwise, either remove the call to copy_to_local_file if possible,
or edit the source of Paperclip's copy_to_local_file(style, local_dest_path) method to skip downloading the attachment.
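As a side note, if the only thing changing is an unrelated counter, another option (plain ActiveRecord rather than anything Paperclip-specific; Model, instance and cache_count are just placeholder names) is to update it in a way that skips callbacks, so Paperclip never runs at all:

# Skips validations and callbacks, so Paperclip's rollback copy is never made
instance.update_column(:cache_count, instance.cache_count + 1)

# Or, for a pure counter, without even instantiating the record
Model.increment_counter(:cache_count, instance.id)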
This was an issue with Paperclip; it's fixed on the master branch!
I generate an XLS file in the background with the help of the delayed_job gem. After this I would like to send the file to the user. Is there some way to call the send_file method outside of a controller, in the delayed_job class?
If you are backgrounding a task to delayed_job, then the user's request will no longer exist. If it does still exist, then you don't need to background the task at all (because it defeats the whole purpose of backgrounding).
My recommendation is to save the file to disk, with a corresponding database record, once delayed_job generates it. While the user waits, have Ajax occasionally ask the server whether the file is ready. When it is ready, use Ajax to make a new request for the file, display a download link, or whatever works best.
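A rough sketch of that flow, with a hypothetical Export model (holding a ready flag and a file_path) and a current_user helper:

class ExportsController < ApplicationController
  # Polled via Ajax until the background job has flagged the export as ready
  def status
    export = current_user.exports.find(params[:id])
    render json: { ready: export.ready? }
  end

  # Requested (or linked to) once status reports ready: true
  def download
    export = current_user.exports.find(params[:id])
    send_file export.file_path,
              filename: "report.xls",
              type: "application/vnd.ms-excel"
  end
end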
I am using paperclip to save photos to my users' profiles. Rails is saving the paperclip attachments multiple times every time a page loads. Any ideas on why this is happening?
It looks like the User object is updated every time the page loads. This can happen if you are recording a last_login_time for the user. Paperclip does not actually save the file every time; it just prints the trace messages during save. Even though it is annoying, it is mostly harmless.
I decided to use Amazon S3 for document storage for an app I am creating. One issue I ran into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions.
One solution is to allow a double upload: a user uploads a document to the server my Rails app lives on, I validate it and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated: most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a noticeable delay while the file goes from my server to S3. This also uses bandwidth that does not seem necessary.
The other solution I am considering is to upload the file directly to S3 with one AJAX request and, when that succeeds, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I would have to run some clean-up code in S3 if the validation fails.
Both seem equally messy.
Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
Unless there's a particular reason not to use paperclip, I'd highly recommend it. Used in conjunction with delayed_job and delayed_paperclip, the user uploads the file to your server's filesystem, where you perform whatever validation you need; a delayed job then processes it and stores it on S3. Really, really easy to set up and a better user experience.
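A minimal sketch of that setup, assuming a Document model and an s3.yml credentials file; process_in_background comes from the delayed_paperclip gem:

class Document < ActiveRecord::Base
  has_attached_file :file,
                    storage: :s3,
                    s3_credentials: "#{Rails.root}/config/s3.yml"

  validates_attachment_content_type :file, content_type: ["application/pdf"]

  # Provided by delayed_paperclip: pushes the S3 upload/processing into a
  # background job after the record has been saved and validated
  process_in_background :file
end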