Adding Additional Headers to Carrierwave for Amazon s3 Encryption - ruby-on-rails

In short
Can I send additional headers through a CarrierWave and Fog connection to Amazon S3?
In depth
I recently found that Amazon supports client-side and server-side encryption of files. More info » http://docs.amazonwebservices.com/AmazonS3/latest/dev/SSEUsingRESTAPI.html
I'm currently using CarrierWave in a Rails app to upload files to Amazon S3.
For server-side encryption, Amazon asks for an x-amz-server-side-encryption: AES256 header to be added to the request.
So I'm trying to figure out how to send additional headers through with CarrierWave and Fog.
My thought was that I could use the fog_attributes config line, something like the following, but I'm not sure whether fog_attributes only covers particular, whitelisted attributes or acts as a blanket header section.
config.fog_attributes = {'x-amz-server-side-encryption' => 'AES256','Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
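For context, that line would sit in a CarrierWave initializer roughly like this (the bucket name and credential handling here are placeholders, not my real setup):
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],     # placeholder
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']  # placeholder
  }
  config.fog_directory  = 'my-bucket'                       # placeholder bucket name
  config.fog_attributes = {
    'x-amz-server-side-encryption' => 'AES256',
    'Cache-Control'                => 'max-age=315576000'
  }
end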
So I finally got my app in shape to test this, but unfortunately it didn't work.
I also found this commit in the fog repository, https://github.com/geemus/fog/commit/070e2565d3eb08d0daaa258ad340b6254a9c6ef2, which makes me think the fog_attributes method only handles a defined list of attributes.
There has got to be a way to make this work. Anyone?

I believe that should actually be correct. Note, however, that I don't believe the server-side encryption support has been released yet, so you would need to use edge fog to get this behavior. I hope to do a release soon, though, and then it should be good to go. If you find that you still can't get it working on edge, let me know and we'll see what can be done.
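For reference, pointing Bundler at the fog repository mentioned above to get edge fog would look roughly like this (a sketch, not an official instruction):
# Gemfile -- use unreleased ("edge") fog straight from the repository
gem 'fog', :git => 'https://github.com/geemus/fog.git'
Then run bundle install (or bundle update fog) and restart the app.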

I cannot speak about CarrierWave, but this works for saving files with AES256 encryption in the (currently) standard Fog distribution:
file.attributes[:encryption] = "AES256"
result = file.save
However, that does not work for copying files. What works for copying is:
fogfile.copy(bucket_archived, newfilename, {'x-amz-server-side-encryption' => 'AES256'})  # bucket_archived is the target bucket name
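For completeness, here is a rough end-to-end sketch of saving a new, encrypted file with Fog; the credentials, bucket, and key names are placeholders:
require 'fog'

# Placeholder credentials and bucket; adjust to your environment
connection = Fog::Storage.new(
  :provider              => 'AWS',
  :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)
directory = connection.directories.get('my-bucket')

# :encryption maps to the x-amz-server-side-encryption header
file = directory.files.create(
  :key        => 'uploads/report.pdf',
  :body       => File.open('report.pdf'),
  :encryption => 'AES256'
)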

Related

Cloudinary upload image with HTTPS url

I am using a Rails app to upload images (or files) to Cloudinary, with CarrierWave as the uploader. Everything works fine locally, but I see that the images come back with http:// URLs.
This does not work in production: my domain is served over HTTPS, so when I request the images over HTTP it fails.
Any help? I checked the documentation and they say they support HTTPS, which is why I am confused. I also wrote to them, but it would be nice if someone has had the same issue and knows how to solve it.
The upload response includes "secure_url", or you can just use "https://res.cloudinary.com..."
--Yakir
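For illustration, reading the HTTPS URL out of a direct upload response with the cloudinary gem looks roughly like this (the file name is a placeholder):
require 'cloudinary'

result    = Cloudinary::Uploader.upload('photo.jpg')  # placeholder local file
https_url = result['secure_url']                      # e.g. https://res.cloudinary.com/...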

Use CDN and HTTPS for Spree::Images

Is it possible to use a CDN like Amazon CloudFront with Spree? I know I can set config.action_controller.asset_host in production.rb, but this doesn't affect Spree::Image or any Spree helper functions like product_image().
Also, the /admin/image_settings/edit page has a setting for s3_protocol, which seems to have no effect, even when set to blank. I would like to be protocol-agnostic and have the URLs formed like //foo.cloudfront.com
Spree's image uploader is provided by the Paperclip gem. There's a handy guide for using CloudFront with Paperclip. Paperclip will not use asset_host.
The first step would be to get your S3 image hosting working the way you want, and then get it to work through Cloudfront.
Setting s3_protocol to '' should produce protocol-relative URLs, as shown in this pull request.
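In plain Paperclip terms (not Spree-specific; the bucket and CloudFront host are placeholders), the relevant options look roughly like this:
# Illustrative Paperclip attachment using a CloudFront alias and protocol-relative URLs
has_attached_file :attachment,
  :storage        => :s3,
  :s3_credentials => {
    :bucket            => 'my-bucket',                   # placeholder
    :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  },
  :url            => ':s3_alias_url',
  :s3_host_alias  => 'foo.cloudfront.com',               # placeholder CloudFront domain
  :s3_protocol    => ''                                  # yields //foo.cloudfront.com/... URLs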

Uploading an image to a Rails server via an Ember.js app

Like it says on the tin, I'm trying to upload an image from my Ember.js app to a Rails backend that's using Paperclip to manage file uploads. I had a look around and couldn't see any simple way to do this; does anyone know of a good solution here?
I faced something similar recently, and it turns out there are lots of complications with file uploading: does the device support it, do you want to be able to style the input that triggers the upload, and so on.
We opted for jQuery File Upload: https://github.com/blueimp/jQuery-File-Upload
The approach I took was to upload directly to S3 from the browser, set the token S3 returns as a property on a model, and then save that to the server. On the server, you then kick off a background job to pull that file in from S3 and put it where it should be.
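As a rough sketch of that last step (the model, column, bucket, and job names here are all invented for illustration):
require 'open-uri'
require 'tempfile'

class PullUploadFromS3Job
  def self.perform(photo_id)
    photo = Photo.find(photo_id)                   # hypothetical model with an s3_key column
    url   = "https://my-bucket.s3.amazonaws.com/#{photo.s3_key}"

    tmp = Tempfile.new(['upload', File.extname(photo.s3_key)])
    tmp.binmode
    tmp.write(open(url) { |remote| remote.read })  # open-uri fetches the object
    tmp.rewind

    photo.image = tmp                              # Paperclip treats the Tempfile as the upload
    photo.save!                                    # a real app would also keep the original filename
  ensure
    tmp.close! if tmp
  end
end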
I wrote a fairly simple Ember.js file upload example a few months back that shows how you can write a custom view plus a custom adapter that lets you post a multipart form back to the server. The example I did is built for Python/Django, but the concepts should apply.
https://github.com/toranb/ember-file-upload
I recently upgraded this to RC1 (like 5 minutes ago) and it appears to still work :D
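On the Rails side, the multipart POST can be handled by an ordinary controller action; a minimal sketch (the Photo model and :image attachment are invented names):
# app/controllers/photos_controller.rb
class PhotosController < ApplicationController
  def create
    photo = Photo.new(:image => params[:photo][:image])  # :image is the Paperclip attachment
    if photo.save
      render :json => photo, :status => :created
    else
      render :json => { :errors => photo.errors }, :status => :unprocessable_entity
    end
  end
end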
There is now an Ember Uploader plugin for Ember; I'm just in the process of integrating it right now.
I have a couple of kinks I'm ironing out, but it seems pretty legit, and probably needs less configuration than using jQuery File Upload.

Sync-ing files in Rails repo to S3

I am thinking of implementing a rake task that would sync certain files in my repository to S3. The catch is that I only want to update the files when they have changed in my repo, so if file A gets modified and B stays the same, only file A will be synchronized to S3 during my next app deploy.
What is a reliable way to determine that a file has been modified? I am thinking of using git to determine whether the file has changed locally. Is there any other way to do this? Does S3 provide similar functionality?
S3 does not presently support conditional PUTs, which would be the ideal solution, but you can get this behavior with two requests instead. Your sync operation would look something like:
For each file that you want on S3:
Calculate the MD5 of the local file.
Issue a HEAD request for that S3 object.
Issue a PUT request if the object's Content-MD5 differs or the object does not exist.
That said, this sounds a lot like something you'd do with assets, in which case you'd be reinventing the wheel. The Rails 3 asset pipeline addresses this problem well -- in particular, fingerprinting assets and putting the hash in the URL allows you to serve them with insanely long max-age values since they're immutable -- and the asset_sync gem can already put your assets on S3 automatically.
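If you do roll it yourself, a rough sketch of that loop with fog might look like this (the bucket and the local glob are placeholders, and note that S3's ETag only matches a plain MD5 for non-multipart uploads):
require 'digest/md5'
require 'fog'

connection = Fog::Storage.new(
  :provider              => 'AWS',
  :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)
directory = connection.directories.get('my-bucket')       # placeholder bucket

Dir.glob('files_to_sync/**/*').select { |p| File.file?(p) }.each do |path|
  key       = path                                         # or strip a prefix to taste
  local_md5 = Digest::MD5.file(path).hexdigest
  remote    = directory.files.head(key)                    # HEAD request; nil if the object is missing

  next if remote && remote.etag.to_s.delete('"') == local_md5

  directory.files.create(:key => key, :body => File.open(path))  # PUT only when changed
end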
What about deleted files? The easy way is to blast the whole directory with the latest version.

Paperclip, large file uploads, and AWS

So, I'm using Paperclip and AWS-S3, which is awesome. And it works great. Just one problem, though: I need to upload really large files. As in over 50 Megabytes. And so, nginx dies. So apparently Paperclip stores things to disk before going to S3?
I found this really cool article, but it also seems to be going to disk first, and then doing everything else in the background.
Ideally, I'd be able to upload the file in the background... I have a small amount of experience doing this with PHP, but nothing with Rails as of yet. Could anyone point me in a general direction, even?
You can bypass the server entirely and upload directly to S3, which will prevent the timeout. The same thing happens on Heroku. If you are using Rails 3, please check out my sample projects:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
By the way, you can do post-processing with Paperclip using something like what this blog post (that Nico wrote) describes:
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
Maybe you have to increase the timeout in the nginx config?
You might be interested in my post here:
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
It's about uploading multiple files (with progress bars, simultaneously) directly to S3 without hitting the server.
I was having a similar problem, but with Paperclip, Passenger, and Apache.
Like nginx, Apache has a Timeout directive, which I increased to solve my problem.
Also, there's an interesting thing Passenger does when uploading large files: anything over 8k is written to /tmp/passenger, and if Apache doesn't have permission to write there you get 500 errors as well.
Here's the article.
http://tinyw.in/fwVB
