Delayed job to check if a carrierwave_direct upload has been completed and attached to a model - carrierwave

I'm using carrierwave_direct and delayed_job with user uploaded images. The process is:
User drag & drops or selects a file.
File is uploaded to S3 directly using carrierwave_direct to do all the form signing etc.
User is presented with a dialog to crop their image, the key (from carrierwave_direct) is passed in through a hidden field. At this point no record exists in the database, but a file is stored on S3. Should the user now move away from the page, we will have an abandoned / unattached file on S3 that does not belong to any model.
If user completes the process, a record will be created and processing (cropping and creating several thumbnail versions) is done by a worker queue.
I had to make a delayed job that is put into the queue as soon as the file is uploaded. It will have a delay of several hours (or days). This job checks to see if the file exists, then checks through all recently created image records in the database to see if any are attached to that file. If not, it deletes the file.
Lastly, I wanted to make use of Carrierwave classes to handle all the Fog stuff, since I was using it anyway.
I could not find this anywhere, so here is my version. Leaving this here for people to come across in the future.

This is how I did it.
def new
  # STEP 1: Show upload dialog that the user can drop an image onto
  @image = current_user.portfolio.images.new.images
  # Status 201 returns an XML response
  @image.success_action_status = "201"
end

def crop
  # STEP 2: A file is now on Amazon S3, but no record exists in the database yet.
  # Make the user crop the image and enter a title.
  # Meanwhile, also set up a delayed job that will later check if this step has been completed.
  # Note: the crop view gets retrieved and inserted using ajax in images.js.coffee
  @image = Image.new(key: params[:key])
  Delayed::Job.enqueue ImageDeleteIfUnattachedJob.new(params[:key]), 0, 15.minutes.from_now.getutc
  render :partial => "images/crop.html.erb", :object => @image
end
ImageDeleteIfUnattachedJob = Struct.new(:key) do
  def perform
    # Do any of the images created in the last week match the key that was passed in?
    # If not, the user probably went through the upload process, which then either
    # went wrong or was cancelled.
    unless Image.where("created_at > ?", 1.week.ago).order('created_at DESC').any? { |image| key == image.images.path }
      # We spawn these to let Carrierwave handle the Fog stuff.
      @uploader = ImageUploader.new
      @storage = CarrierWave::Storage::Fog.new(@uploader)
      @file = CarrierWave::Storage::Fog::File.new(@uploader, @storage, key)
      if @file.exists?
        # Indeed, the file is still there. Destroy it.
        @file.delete
      else
        return true
      end
    end
  end

  def max_attempts
    3
  end
end

Related

Ruby File.exist? returns true before file is readable

I've got a Rails app where I'm trying to pass the creation of a large PDF to a background process and allow the user to see it when it's finished. It's a multi-page document combined with the combine_pdf gem and passed to delayed_job.
I have 3 actions: the first creates and saves the file, the second is called repeatedly with short delays via an asynchronous request to check if the file exists yet, and the third shows the PDF in the browser.
I'm having trouble with the second part, as it uses File.exist?('my_file.pdf'), but this is returning true before the file has finished saving. The link that is then shown to view the PDF results in an error (ActionController::MissingFile). The file actually becomes available about 10 seconds later, at which point the link works correctly.
I'm guessing the file is still being written at the point that it's checked? How can I check the file saving has completed and the file is actually available to be read?
This is (very broadly and somewhat roughly) how I do it:
First, I call a POST action on the controller that kicks off the background create process. This action creates a ServiceRequest (a model in my app) with the relevant service_request.details and a status of created. The service_request is then sent to the background process (I use RabbitMQ), and the action returns the service_request.id.
The front end starts pinging (via AJAX) the service request endpoint (something like service_requests/:id), and the ServiceRequestController's show action sends back the service_request.status (along with other stuff, including service_request.results). This loops while the service_request.status is neither completed nor failed.
Meanwhile, the background process creates the PDF. When it is done, it sets the service_request.status to completed, and it sets service_request.results to contain the data the front end needs to locate and retrieve the PDF. (I store my PDFs in an AWS bucket since I'm on Heroku.)
When the front end finally receives a service_request.status of completed it uses the service_request.results to fetch and display the PDF.
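The status loop described above can be sketched with a plain Ruby object standing in for the ActiveRecord model. The class name and the status/results accessors follow the answer; the complete! and done_polling? method names are hypothetical stand-ins for the worker and the polling check:

```ruby
class ServiceRequest
  TERMINAL = %w[completed failed].freeze

  attr_accessor :status, :results

  def initialize
    @status = "created"
    @results = nil
  end

  # Worker side: mark the PDF as ready and record where to fetch it
  def complete!(pdf_url)
    @results = { pdf_url: pdf_url }
    @status = "completed"
  end

  # Front-end side: keep polling until a terminal status comes back
  def done_polling?
    TERMINAL.include?(status)
  end
end
```

The key point is that the front end never touches the filesystem; it only ever sees the status field, so there is no race against a half-written file.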
You could write to a temp file and then rename it once the write has finished. You haven't included your code so far, but something like this could work for you:
def write_and_replace(folder_path, file_name, data)
  file_path = File.join(folder_path, "#{file_name}.pdf")
  temp_file_path = File.join(folder_path, "temp_#{file_name}.pdf")
  File.open(temp_file_path, 'wb') { |f| f.write(data) }
  # Renaming within the same directory is atomic, so File.exist?(file_path)
  # only becomes true once the file is fully written
  File.rename(temp_file_path, file_path)
end

Rails-RMagick handle single image

I'm building a rails app where a user uploads an image, then it gets sent to rmagick to be modified and then gets rendered. Since the user only handles one image, I was initially thinking that I could store it in memory instead of the database, but that seems to not be feasible. The model name is AppImage, so I then thought about displaying AppImage.last and deleting all previous AppImages right before rendering it, but I'm wondering if that would cause problems with multiple users.
Is the best solution to have each user get a user profile according to their IP, and have one AppImage per user? Should I be thinking about session hashes?
Edit: I am currently using Paperclip, but am just not sure how to structure the program.
If a user uploads an image, you could set a session variable to true, and check in your uploads controller whether the session variable is set. Depending on that, you allow the user to upload an image or not. You can set the session store to the database, and you can also define how long the session is kept.
controller:
def new
  @upload = YourUploadModel.new
end

def create
  if session[:image_uploaded]
    redirect_to root_path, :notice => "Already uploaded an image today!"
  else
    # create your upload, then flag the session so a second upload is refused
    session[:image_uploaded] = true
  end
end
config/initializers/session_store.rb:
# Use the database for sessions instead of the cookie-based default,
# which shouldn't be used to store highly confidential information
# (create the session table with "rails generate session_migration")
YourAppname::Application.config.session_store :active_record_store, {
  expire_after: 1.day
}

How to crop a temp image with Paperclip and store it in Amazon S3 in Rails 3

I'm having great trouble trying to do some out-of-the-ordinary tricks with Paperclip.
This is my situation:
My app's users have an avatar image, and my idea is to let them crop their avatars with Jcrop. My app is hosted on Heroku, so I must upload the images to Amazon S3.
I've used this popular railscast to implement the cropping functionality, but it requires processing the images twice. This is where the problems started.
I think a possible solution might be to not process the images the first time (when the user selects an image) but to do it the second time. I've implemented this code in my controller:
def change_avatar
  @user = current_user
  paperclip_parameters = params[:user][:avatar] # first-time call
  if @temp_image_object.present? || params[:avatar].present?
    if check_crop_params # second controller call
      @user.avatar = File.new(@user.tmp_avatar_path) # overrides the
      redirect_to @user, notice: t("messages.update.user.avatar") if @user.save
    else # first controller call
      @temp_path = @user.generate_temp_image(paperclip_parameters)
      @user.tmp_avatar_path = @temp_path # store the custom path to retrieve it in the second call
      render :action => 'avatar_crop' if @user.save
    end
  end
end

def check_crop_params
  !params[:user][:crop_x].blank? && !params[:user][:crop_y].blank? && !params[:user][:crop_w].blank? && !params[:user][:crop_h].blank?
end
and in my user model:
# this method copies the original image tempfile (when the user uploads the image)
# to a custom path and returns the custom path
def generate_temp_image(paperclip_parameters)
  uploaded_img_path = paperclip_parameters.tempfile.path
  temp_dir = Rails.root.join('public/tmp')
  Dir.mkdir(temp_dir) unless Dir.exist?(temp_dir)
  new_path = File.join(temp_dir, File.basename(uploaded_img_path))
  FileUtils.cp(uploaded_img_path, new_path)
  new_path
end
I also have a custom processor for Jcrop that uses the crop variables when processing the images.
When I upload an image (first controller call) the change_avatar method works well, but when I crop the image (second controller call) the image isn't cropped; Paperclip creates the image style files but ignores the cropping I did.
Any ideas? What should I do?
I had forgotten a small detail. Looking in the server logs I realized that the Paperclip process wasn't cropping the images, so I looked in my custom processor and found that it depended on the crop parameters: crop_x, crop_y, crop_w, crop_h.
For some reason these parameters didn't reach the processor, so the images were never going to be cropped. All I had to do was manually assign these params to the user's attributes, and it works!
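That manual assignment can be sketched like this. The helper name and the plain-Ruby setup are hypothetical; in the real app the model is an ActiveRecord User with attr_accessor for the crop values, and this runs in the controller before saving:

```ruby
CROP_KEYS = %i[crop_x crop_y crop_w crop_h]

# Copy the Jcrop values from the request params onto the model so the
# custom Paperclip processor can read them during reprocessing.
def assign_crop_params(user, user_params)
  CROP_KEYS.each do |key|
    user.public_send("#{key}=", user_params[key])
  end
  user
end
```

With the values present on the model, the processor's crop geometry is no longer nil and the crop actually runs.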
The best strategy is to direct-upload the original image to S3. This tutorial has an example showing how to run Paperclip's post-processing after uploading to S3. It also discusses how to avoid the Heroku H12 error in case the post-processing takes too long.
You may also check out this article. It has a demo code using paperclip and jcrop.

Multi step form with image uploader

I want to build a 3-step user registration with avatar uploading on the 2nd step, so I followed Ryan Bates's guide http://railscasts.com/episodes/217-multistep-forms . I'm using the CarrierWave gem to handle uploads, but it seems I can't store the uploaded file info in the user session (I'm getting a can't dump File error). I use the following technique in the controller:
if params[:user][:img_path]
  @uploader = FirmImgUploader.new
  @uploader.store!(params[:user][:img_path])
  session[:img] = @uploader
  params[:user].delete(:img_path)
end
It actually helps. But when I upload a forbidden file type, everything crashes on this line
@uploader.store!(params[:user][:img_path])
with this error
CarrierWave::IntegrityError in UsersController#create
You are not allowed to upload "docx" files, allowed types: ["jpg", "jpeg", "gif", "png"]
instead of a normal form validation error.
How can I solve this problem? Thanks!
Actually I solved my problem. Here's working code for multistep forms with file uploading using CarrierWave:
if params[:user][:img_path]
  @uploaded = params[:user][:img_path]
  params[:user].delete(:img_path)
end
session[:user_data].deep_merge!(params[:user]) if params[:user]
@user = User.new(session[:user_data])
if @uploaded
  # here is how validation will work
  @user.img_path = @uploaded
end
@user.current_stage = session[:register_stage]
if @user.valid?
  if @user.last_stage?
    @user.img_path = session[:img]
    @user.save
  else
    @user.next_stage
  end
  # now we can store the carrierwave object in the session
  session[:img] = @user.img_path
  session[:register_stage] = @user.current_stage
end
This may be a little late for the OP, but hopefully this helps someone. I needed to store an uploaded image in the user's session (again for a multi-step form), and I too started with Ryan's Railscast #217, but the app quickly evolved beyond that. Note that my environment was Rails 4 on Ruby 2, using Carrierwave and MiniMagick, as well as activerecord-session_store, which I'll explain below.
I believe the problem that both the OP and I had was that we were trying to add all of the POST params to the user's session, but with a file upload one of those params was an actual UploadedFile object, which is way too big for that. The approach described below is another solution to that problem.
Disclaimer: As is widely noted, it's not ideal to store complex objects in a user's session; it's better to store record identifiers or other identifying data (e.g. an image's path) and look up that data when it's needed. Two major reasons for this: keeping the session and model/database data in sync is a non-trivial task, and the default Rails session store (cookie-based) is limited to 4kb.
My Model (submission.rb):
class Submission < ActiveRecord::Base
  mount_uploader :image_original, ImageUploader
  # ...
end
Controller (submissions_controller.rb):
def create
  # If the submission POST contains an image, set it as an instance variable,
  # because we're going to remove it from the params
  if params[:submission] && params[:submission][:image_original] && !params[:submission][:image_original].is_a?(String)
    # Store the UploadedFile object as an instance variable
    @image = params[:submission][:image_original]
    # Remove the uploaded object from the submission POST params, since we
    # don't want to merge the whole object into the user's session
    params[:submission].delete(:image_original)
  end

  # Merge existing session with POST params
  session[:submission_params].deep_merge!(params[:submission]) if params[:submission]
  # Instantiate model from newly merged session/params
  @submission = Submission.new(session[:submission_params])
  # Increment the current step in the session form
  @submission.current_step = session[:submission_step]

  # ... other steps in the form

  # After deep_merge, bring back the image
  if @image
    # This adds the image back to the Carrierwave mounted uploader (which
    # re-runs any processing/versions specified in the uploader class):
    @submission.image_original = @image
    # The mounted uploader now has the image stored in the Carrierwave cache,
    # and provides us with the cache identifier, which is what we will save
    # in our session:
    session[:submission_params][:image_original] = @submission.image_original_cache
    session[:image_processed_cached] = @submission.image_original.url(:image_processed)
  end

  # ... other steps in the form

  # If we're on the last step of the form, fetch the image and save the model
  if @submission.last_step?
    # Re-populate the Carrierwave uploader's cache with the cache identifier
    # saved in the session
    @submission.image_original_cache = session[:submission_params][:image_original]
    # Save the model
    @submission.save
    # ... render/redirect_to ...
  end
end
My uploader file was mostly stock with some custom processing.
Note: to beef up sessions, I'm using activerecord-session_store, which is a gem that was extracted from the Rails core in v4 that provides a database-backed session store (thus increasing the 4kb session limit). Follow the documentation for installation instructions, but in my case it was pretty quick and painless to set it and forget it. Note for high-traffic users: the leftover session records don't seem to be purged by the gem, so if you get enough traffic this table could potentially balloon to untold numbers of rows.
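Since the leftover records aren't purged automatically, scheduling a periodic cleanup is worth considering. A minimal sketch of the selection logic, using a plain-Ruby stand-in (with the real gem you would run something like ActiveRecord::SessionStore::Session.where("updated_at < ?", cutoff).delete_all from a rake task; the class name below and the two-week cutoff are arbitrary assumptions):

```ruby
class SessionPurger
  SECONDS_PER_DAY = 24 * 3600

  def initialize(cutoff: Time.now - 14 * SECONDS_PER_DAY)
    @cutoff = cutoff
  end

  # Returns the sessions older than the cutoff; each element only needs
  # to respond to #updated_at.
  def stale(sessions)
    sessions.select { |s| s.updated_at < @cutoff }
  end
end
```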

Paperclip - delete a file from Amazon S3?

I need to be able to delete files from S3 that are stored by users, such as profile photos. Just calling @user.logo.destroy doesn't seem to do the trick - I get [paperclip] Saving attachments. in the logs and the file stays right there in the S3 bucket.
How can the file itself be removed?
These are the methods from Paperclip that can be used to remove attachments:
# Clears out the attachment. Has the same effect as previously assigning
# nil to the attachment. Does NOT save. If you wish to clear AND save,
# use #destroy.
def clear(*styles_to_clear)
  if styles_to_clear.any?
    queue_some_for_delete(*styles_to_clear)
  else
    queue_all_for_delete
    @queued_for_write = {}
    @errors = {}
  end
end

# Destroys the attachment. Has the same effect as previously assigning
# nil to the attachment *and saving*. This is permanent. If you wish to
# wipe out the existing attachment but not save, use #clear.
def destroy
  clear
  save
end
So you see, destroy only removes the attachment if no error occurs. I have tried it with my own setup against S3, so I know that destroy works.
Could the problem in your case possibly be that you have a validation that cancels the save? E.g. validates_attachment_presence or something similar?
I think one way to find out would be to try @user.logo.destroy and then check the contents of @user.errors to see if it reports any error messages.
This seems like an answer to your question, although I don't totally understand their distinction between destroy and clear (I don't know which model has_attached_file is declared on, page or image):
Rails Paperclip how to delete attachment?
