Paperclip offers nice validator methods like
validates :image, attachment_size: { in: 0..2.megabytes }
My problem is that attachment files get uploaded to S3 even though the validators add errors to the attachment's hosting object. So if the image is too big, it is still uploaded, and the ActiveRecord object gets errors on it during validation. That works, but in my situation it would be cleaner to reject uploads that are too big in the first place.
Is there a way to tap into the process and prevent a file from being uploaded to S3 under certain conditions?
Currently my implementation checks for errors and deletes the attachment afterwards if the hosting object is not valid.
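A rough sketch of that cleanup, assuming a Document model with an image attachment (both names are placeholders; Paperclip 3.x-era API):

document = Document.new(image: params[:image])
unless document.save
  # The file may already have been pushed to S3 despite the validation
  # errors, so explicitly remove the orphaned upload.
  document.image.destroy
end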
The described situation refers to a Rails 4.0 application using Ruby 2.0.
The described problem does not occur in more recent Paperclip versions (the most recent version at the time of writing: 4.2): files are no longer uploaded to S3 when validations have attached errors to the ActiveRecord object.
Related
I'm new to S3 and Shrine, and I'm working with a Shrine uploader in Ruby on Rails to upload files to Amazon S3; the uploader has been in place for a couple of years on this Rails app.
My goal is to have S3 generate a checksum when uploading files, and according to these docs for adding a "trailing checksum", the ChecksumAlgorithm needs to be used: https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
The Ruby SDK docs list checksum_algorithm as a param:
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Object.html#put-instance_method
When I add the param in the Shrine uploader (plugin :upload_options, { checksum_algorithm: 'SHA256' }) and upload the file, I get the error ArgumentError: unexpected value at params[:checksum_algorithm] from aws-sdk-core/param_validator.rb:33:in 'validate!' https://github.com/aws/aws-sdk-ruby/blob/version-3/gems/aws-sdk-core/lib/aws-sdk-core/param_validator.rb#L14.
I've tried different cases, with and without the dash, and anything else I can think of syntax-wise, but no luck.
It turns out I was using an older version of aws-sdk-s3, and updating the gem solved the problem. Thanks @Janko
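For reference, a minimal sketch of the working setup after the upgrade (the version constraint is an assumption; S3 checksum support arrived in 2022-era aws-sdk-s3 releases, and note the store: key, since Shrine's upload_options plugin scopes options per storage):

# Gemfile
gem "aws-sdk-s3", "~> 1.113"

# uploader
class FileUploader < Shrine
  # passed through to Aws::S3::Object#put as checksum_algorithm:
  plugin :upload_options, store: { checksum_algorithm: "SHA256" }
end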
I want to allow my users to download a bundle of files stored on S3 using the zipline gem. The files are already hosted in an S3 bucket, but they aren't there as part of a Paperclip or CarrierWave attachment in my app. Will I need to create records in my database to trick zipline into thinking they are Paperclip attachments, or is there a way to send the zip file without involving an attachment gem? At the moment, trying to download the files with zipline doesn't throw an error at all; it just seems to skip right over them, and nothing downloads.
See the part of the zipline README where an enumerator is used to include remote files in the ZIP. It uses absolute URLs; to generate those from your S3 objects, you will need presigned URLs (which zipline then passes on to Curb):
Aws::S3::Bucket.new(your_bucket_name).object(your_key).presigned_url(:get)
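A minimal sketch of that pattern in a controller (bucket name and keys are placeholders):

class DownloadsController < ApplicationController
  include Zipline

  def index
    bucket = Aws::S3::Bucket.new("your-bucket-name")
    # lazily map each S3 key to a [presigned_url, name_inside_zip] pair
    files = ["file1.pdf", "file2.pdf"].lazy.map do |key|
      [bucket.object(key).presigned_url(:get), key]
    end
    zipline(files, "download.zip")
  end
end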
We are looking to add a simple file uploader to our Rails 3.2 app, which is a business application (built with Rails engines). Here is what we are looking for in the file uploader:
Allow access control to who can do what. For example, sales can upload a contract and acct can view the uploaded contract.
No change to current models. The file uploader acts on its own to upload, check, store and remove files. We are thinking of building a file uploader engine and attaching it to the Rails app.
Uploaded files belong to a model. For example, an uploaded contract copy belongs to a project.
May need to upload files to a remote server.
We are evaluating whether to develop our own uploader engine or to use an upload gem such as CarrierWave or Paperclip. Can someone shed some light on Rails file uploading and its related issues?
Using a combination of cancan and paperclip is a good option.
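A rough sketch of how the two fit together (the Document model, the roles, and the role? helper are placeholders for your own domain):

# The uploaded file lives on its own model, so existing models stay untouched.
class Document < ActiveRecord::Base
  belongs_to :project
  has_attached_file :file
  validates_attachment_presence :file
  validates_attachment_size :file, less_than: 10.megabytes
end

# app/models/ability.rb -- CanCan rules for who can do what
class Ability
  include CanCan::Ability

  def initialize(user)
    can :create, Document if user.role?(:sales)
    can :read, Document if user.role?(:sales) || user.role?(:acct)
  end
end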
I'm in the process of upgrading from Ruby 1.8.7 to 1.9.3 and from Rails 2.3 to 3.2. As part of that upgrade, I'm moving from Paperclip 2.2.9 to 3.5.2. My ImageMagick version is 6.8.6. One issue I've discovered during the upgrade is that upload performance is very poor for large (~1 MB) text files. The files in question do not specifically need to be .txt files; anything in plain text format (.xml files, for example) is also affected.
For your reference, here is my Paperclip setup:
has_attached_file :attachment,
  :url => "/shared_documents/:id/:basename.:extension",
  :path => ":rails_root/user_uploaded_content/shared_documents/:id/:basename.:extension"
For simplicity, I'm omitting our validations and so on as we are simply checking file size and presence.
Watching the top processes running on my development machine, it seems that the bottleneck occurs when Paperclip is calling ImageMagick's identify command. Calling identify on a variety of files through the command line has allowed me to verify that metadata is returned almost immediately for image files but large, non-image text files take a very long time to process.
For my application, I am allowing users to upload documents in whatever format they like, so I must be able to process both images and text files efficiently. Has anyone else encountered this issue? Is there a way to selectively disable calling identify for certain file formats in Paperclip but not others? Failing that, we could live with simply not calling identify at all, if that is an option. Or perhaps there is a way to configure ImageMagick to handle large text files more gracefully?
If you're not actually post-processing the files, just tell Paperclip not to post-process them. From the Paperclip documentation, you can do this in a couple of ways. One is to supply an empty hash of styles in the model:
has_attached_file :attachment,
  styles: {},
  url: "/shared_documents/:id/:basename.:extension",
  path: ":rails_root/user_uploaded_content/shared_documents/:id/:basename.:extension"
or you may simply supply no processors:
has_attached_file :attachment,
  processors: [],
  url: "/shared_documents/:id/:basename.:extension",
  path: ":rails_root/user_uploaded_content/shared_documents/:id/:basename.:extension"
Alternatively, you could use the before_post_process callback in your model and return false to halt processing; however, Paperclip may call identify first to validate the file, which would make this option pointless in your situation:
has_attached_file :attachment,
  url: "/shared_documents/:id/:basename.:extension",
  path: ":rails_root/user_uploaded_content/shared_documents/:id/:basename.:extension"

before_post_process :skip_processing

def skip_processing
  false
end
I have an application which uploads images to an S3 bucket using Paperclip. It's been working fine for months, but suddenly my files are no longer being uploaded to the S3 bucket. Unfortunately, I've been refactoring a number of unrelated areas, and it's possible that something I changed broke the upload.
I'm using paperclip 2.3.1.
That said, there are a number of confusing aspects to this, and frankly I am at a loss. First, there are no errors in the log indicating that the upload failed. The Paperclip attachment attributes are populated in the database, and the application thinks the upload succeeded. But when I look in S3, the file is not there.
Second, I have an almost identical attachment on a different model, which uploads to the same S3 bucket successfully -- the code is almost identical, and there clearly cannot be a permissions issue.
I found references in several places suggesting that I remove the right_aws gem and keep only the aws-s3 gem, which I did, but to no avail. Moreover, I never saw the (5 for 4) error in my log either way.
Does anyone have any suggestions on how I can further diagnose this? Are there any options in paperclip to increase the verbosity of logging?
Thanks!
I had this issue as well, and the cause was that my :multipart => true key/value pair hadn't been nested correctly inside the :html key of the form_for helper.
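For comparison, the correct nesting looks like this (model and field names are placeholders):

<%= form_for @post, :html => { :multipart => true } do |f| %>
  <%= f.file_field :image %>
<% end %>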
It turns out the app was using Paperclip 2.3.4, which introduced some S3 issues.
Upgrading to 2.3.5 solved the issue for me.
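In Gemfile terms (assuming you're on Bundler), that is just pinning past the broken release:

gem 'paperclip', '~> 2.3.5'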