Paperclip upload to S3 is failing silently...help! - ruby-on-rails

I have an application which uploads images to an S3 bucket using Paperclip. It's been working fine for months, but suddenly my files are not being uploaded to the S3 bucket. Unfortunately, I've been refactoring a number of unrelated areas, and it's possible that something I changed broke the upload.
I'm using paperclip 2.3.1.
That said, there are a number of confusing aspects to this and frankly I am at a loss. First, there are no errors in the log indicating that the upload failed. The paperclip attachment attributes are populated in the database. The application thinks the upload occurred successfully. But when I look in S3, the file is not there.
Second, I have an almost identical attachment on a different model, which uploads to the same S3 bucket successfully -- the code is almost identical, and there clearly cannot be a permissions issue.
I found references in several places that suggested removing the right_aws gem and instead only having the aws_s3 gem...which I did...but to no avail. Moreover, I never saw the (5 for 4) error in my log regardless.
Does anyone have any suggestions on how I can further diagnose this? Are there any options in paperclip to increase the verbosity of logging?
Thanks!

I had this issue as well and the cause was that my :multipart => true key/value hadn't been nested correctly in the :html key of the form_for helper.
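For anyone else hitting this, the difference is roughly the following (the @post model and :image attachment are just placeholders for whatever you have). Without the multipart encoding, the file field posts only the filename string, so Paperclip quietly has nothing to upload:

What I had (form_for ignores a top-level :multipart option):
<%= form_for @post, :multipart => true do |f| %>

What it needed to be (nested inside the :html hash):
<%= form_for @post, :html => { :multipart => true } do |f| %>
  <%= f.file_field :image %>
<% end %>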

It turns out the app was using Paperclip 2.3.4 which introduced some S3 issues.
Upgrading to 2.3.5 solved the issue for me.
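If you're on Bundler, pinning and updating is all it takes (a sketch; adjust to however you manage gems):

# Gemfile
gem 'paperclip', '2.3.5'   # 2.3.4 had S3 regressions; 2.3.5 fixed the silent upload failure for me

# then, from the shell:
bundle update paperclip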

Related

Rails carrierwave and cloudinary multiple file uploads

So I'm trying to get carrierwave to work with cloudinary for multiple file uploads but it keeps giving me this error that says:
undefined method `all_versions_processors' for Array
I followed the CarrierWave documentation and added a listing_images column of type json to my Listings table.
I also set the multiple: true option on the form's file input.
And in my ListingsController I have permitted the following param:
listing_images: []
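Roughly, the relevant pieces look like this (simplified; the uploader class name is just what mine happens to be called):

# app/models/listing.rb
class Listing < ActiveRecord::Base
  mount_uploaders :listing_images, ListingImagesUploader   # plural mount_uploaders for multiple files
end

# in the form
<%= f.file_field :listing_images, multiple: true %>

# app/controllers/listings_controller.rb (strong params)
params.require(:listing).permit(listing_images: [])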
I'm pretty sure everything is configured properly but I can't figure out why this error is thrown. Any help would be greatly appreciated.
Official support for multiple uploads with CarrierWave is on the roadmap for Cloudinary's gem. In the meantime, as a workaround, you can accomplish multiple uploads a bit differently. Here's a basic sample project that demonstrates it:
https://github.com/taragano/Cloudinary_multiple_uploads

Paperclip: Validate/process attachment before upload

Paperclip offers nice validator methods like
validates :image, attachment_size: { in: 0..2.megabytes }
My problem is that attachment files get uploaded to S3 even when the validators add errors to the hosting object. So if the image is too big, it still gets uploaded, and the ActiveRecord object ends up with errors on it when validating. That works, but in my situation it would be cleaner to reject uploads that are too big before they ever reach S3.
Is there a way to tap into the process and prevent a file from being uploaded to S3 under certain conditions?
Currently my implementation checks the errors and deletes the attachment afterwards if the hosting object is not valid.
The described situation refers to a Rails 4.0 application using Ruby 2.0.
The described problem does not occur in more recent Paperclip versions (the most recent at the time of writing is 4.2): files are no longer uploaded to S3 when validations have added errors to the ActiveRecord object.
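For reference, on 4.x the validation alone keeps an oversized file off S3; a minimal sketch (model and attachment names are assumed):

class Post < ActiveRecord::Base
  has_attached_file :image
  validates_attachment :image,
    size: { in: 0..2.megabytes },
    # Paperclip 4 also requires a content_type (or file_name) validation on every attachment
    content_type: { content_type: /\Aimage\/.*\z/ }
end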

Carrierwave/fog are uploading files to S3, but the app is still trying to use the local path to access images

I know this is a broad question, and I'm biting off a little more than I can chew for a first stab at a rails app, but here I am.
I tried to add an image upload/crop to a basic status app. It was working just fine uploading the images and cropping them with carrierwave, but as soon as I started using Fog to upload to S3, I ran into issues.
The image, and its different sizes, appear to be ending up on S3 just fine, but the app is still trying to access the image as "/assets/uploads/entry/image/65/large_IMG_0035.jpg"
Locally, it just shows a broken image, but on Heroku it breaks the whole thing because
ActionView::Template::Error (uploads/entry/image/1/large_IMG_0035.jpg isn't precompiled
The Heroku error makes sense to me because the file shouldn't be there. I've combed through the app but can't find what's forcing this. I'll post any code anybody thinks will help. Thanks in advance!
Clarification:
Just to clarify, the images are uploading to S3 fine, the problem is how the app is trying to display the image_url
The app is using a local path in the asset pipeline, not the S3 path that it's actually uploading to.
I was having the same issue. In my CarrierWave initializer I was setting the host to s3.amazonaws.com, but when I removed that line altogether the URLs started working.
I hope this helps you resolve your issue, I fought this for several hours!
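For comparison, a minimal initializer that works for me looks roughly like this (bucket name and credential handling are placeholders), with no host/asset_host line so CarrierWave builds the S3 URL itself:

# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
    # no :host => 's3.amazonaws.com' entry here
  }
  config.fog_directory = 'my-bucket-name'
end

# and in the uploader class
storage :fog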
I believe this issue is related to how you are accessing your image in your view.
If you have mounted an uploader on the field avatar in the following manner:
class User < ActiveRecord::Base
mount_uploader :avatar, AvatarUploader
end
You would access it in your ERB as follows:
<%= image_tag(@user.avatar_url) %>
I would also suggest watching the following Railscast on the topic.
http://railscasts.com/episodes/253-carrierwave-file-uploads
Re-reading the issue, I bet it has to do with CarrierWave running on Heroku.
Give this a glance and see if it helps.
https://github.com/jnicklas/carrierwave/wiki/How-to%3A-Make-Carrierwave-work-on-Heroku
I am not clear on what exactly you want to achieve.
But for now I have 2 ideas:
For an asset host on a CDN, you can take a look at this:
https://devcenter.heroku.com/articles/cdn-asset-host-rails31
If you want the images to be part of a model relation, here's my rough idea:
Put the image paths in a table column.
For further information you can browse the CarrierWave GitHub site (it has many docs and tutorials).

CKEditor won't link files (backed by rails, mongoid, paperclip, s3)

I'm having issues with CKEditor. I can upload and insert pictures without issues, but when I try to do the same with files, the link to my file is set to something like javascript:void(0)/*130*/, with the number changing. This is happening on FF/Safari/Chrome.
My app runs on Rails 3.1.3, using MongoDB/Mongoid as the database/ODM, with Paperclip handling attachments and S3 hosting the assets. When I explore my bucket I can see that the files are uploaded correctly, so the problem (probably) comes from somewhere else. I'm using this gem, and neither the rc2 nor the master branch fixes it.
Thanks for your time.
Well it's been a while. I solved it by forking the gem (cf ksol/ckeditor), but the diff is too obfuscated to remember what was wrong. Hopefully the original gem is working now.

Paperclip, large file uploads, and AWS

So, I'm using Paperclip and AWS-S3, which is awesome. And it works great. Just one problem, though: I need to upload really large files. As in over 50 Megabytes. And so, nginx dies. So apparently Paperclip stores things to disk before going to S3?
I found this really cool article, but it also seems to be going to disk first, and then doing everything else in the background.
Ideally, I'd be able to upload the file in the background... I have a small amount of experience doing this with PHP, but nothing with Rails as of yet. Could anyone point me in a general direction, even?
You can bypass the server entirely and upload directly to S3, which will prevent the timeout. The same thing happens on Heroku. If you are using Rails 3, please check out my sample projects:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
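If you want to see the direct-to-S3 idea without the Flash plumbing, the gist is that the server only signs a URL and the browser uploads straight to the bucket; here is a rough sketch using the newer aws-sdk-s3 gem (not what the sample projects above use, and the bucket/key names are made up):

# server side, e.g. in a controller action
require 'aws-sdk-s3'

obj = Aws::S3::Resource.new(region: 'us-east-1')
        .bucket('my-upload-bucket')
        .object("uploads/#{SecureRandom.uuid}")

# hand this URL to the browser; the JS uploader PUTs the file to it directly,
# so the Rails/nginx stack never sees the large request body
url = obj.presigned_url(:put, expires_in: 15 * 60)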
By the way, you can do post-processing with Paperclip using something like what this blog post (that Nico wrote) describes:
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
Maybe you have to increase the timeout in the nginx config?
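If it is nginx cutting things off, these are the directives I'd look at first (the values below are just examples):

# nginx.conf, inside the http or server block
client_max_body_size 100m;    # default is only 1m, so a 50 MB upload gets rejected
client_body_timeout  300s;    # give slow uploads time to finish
send_timeout         300s;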
You might be interested in my post here:
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
It's about uploading multiple files (with progress bars, simultaneously) directly to S3 without hitting the server.
I was having a similar problem, but with Paperclip, Passenger and Apache.
Like nginx, Apache has a Timeout directive, which I increased to solve my problem.
There's also an interesting thing Passenger does when uploading large files: anything over 8k is written to /tmp/passenger, and if Apache doesn't have permission to write there you get 500 errors as well.
Here's the article.
http://tinyw.in/fwVB
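Concretely, the two changes were along these lines (the values, paths and user name are examples, not a recipe):

# httpd.conf
Timeout 600        # Apache's default is 300 seconds; large uploads need longer

# and make sure the user Apache/Passenger runs as can write to the upload buffer directory,
# e.g. from a shell: chown -R www-data:www-data /tmp/passenger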
