How to generate an S3 Access policy in Ruby on Rails - ruby-on-rails

Server side is Rails.
Client side is Flash, users will upload directly to S3
I need a flexible way to generate S3 policy files, base64 encode them, and then distribute the resulting signed policy to the client.
Is there a good library/gem for this, or do I need to roll my own?
I'll be using paperclip to store the file, as per:
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
I've had a look at:
https://github.com/geemus/fog
https://github.com/jnicklas/carrierwave
https://github.com/marcel/aws-s3
These look like they'll help me get bits done, but I can't tell if they'll help me generate flexible policies.
EDIT: Going to give the "Generate an upload signature..." bit here a shot:
http://www.kiakroas.com/blog/44/
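For the "roll my own" route, generating and signing a policy needs nothing beyond the Ruby standard library. A minimal sketch of the classic S3 browser-based POST policy (signature V2) scheme; the bucket name, key prefix, expiration, and size limit are placeholder assumptions, not values from this question:

```ruby
require "base64"
require "json"
require "openssl"

# Policy document: what the client is allowed to upload (all values are examples)
policy = {
  "expiration" => "2030-01-01T00:00:00Z",
  "conditions" => [
    { "bucket" => "my-bucket" },                  # hypothetical bucket
    ["starts-with", "$key", "uploads/"],
    { "acl" => "private" },
    ["content-length-range", 0, 10 * 1024 * 1024] # max 10 MB
  ]
}

# Base64-encode the JSON policy, then sign that encoded string
# with HMAC-SHA1 using the AWS secret key
encoded_policy = Base64.strict_encode64(policy.to_json)
secret = "AWS_SECRET_ACCESS_KEY" # placeholder; keep the real one server-side
signature = Base64.strict_encode64(
  OpenSSL::HMAC.digest("sha1", secret, encoded_policy)
)
```

The `encoded_policy` and `signature` strings are what get handed to the Flash client for inclusion in its POST form fields.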

Here is a sample project for how to do this using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload

Related

How to implement AWS S3 Multipart Upload with Rails and Active Storage?

I'm using vanilla Rails Active Storage file upload with multiple:true option. The files are stored on S3. The setup is working well. However, I was thinking for very large files it would be beneficial to implement Multipart Upload for optimal speed and reliability.
I found a description of AWS S3 multipart upload here: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
I also found a Ruby specific page: https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu-ruby-sdk.html
However, I couldn't find any reference on how to implement this feature with Rails and Active Storage.
I would like to receive some direction on how best to go about implementing multipart upload without ripping out Active Storage if possible.
In case somebody is looking for an answer to this: Active Storage supports multipart upload starting from Rails 6.1. Active Storage direct upload automatically switches to multipart for large files; no settings changes are required.
You can customise the threshold for what is considered a large file. The default is 100MB, and you can change the default by adding this to your storage.yml under the amazon settings:
upload:
  multipart_threshold: <%= 250.megabytes %>
Reference: https://github.com/rails/rails/blob/master/activestorage/CHANGELOG.md
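For context, here is where that setting sits in a full config/storage.yml service entry; this is a sketch, and the region, bucket, and credential keys are the usual placeholders, not values from this question:

```yaml
# config/storage.yml
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: your-bucket
  upload:
    multipart_threshold: <%= 250.megabytes %>
```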

Is it possible to upload directly without touching my server?

Is it possible to use CarrierWave to upload directly to Amazon's S3 without using my server?
What I mean is, I don't want the images first going to my EC2 instance and then being uploaded to S3. I believe there is a way to upload directly to S3 to save my server's resources from having to process/stream the file.
I am just looking into CarrierWave; does it support nice HTML5 uploads where the user can just drag and drop the file onto the web page?
If you want to upload directly to S3 from the browser, you must do it with JavaScript.
Heroku provides a nice tutorial: https://devcenter.heroku.com/articles/direct-to-s3-image-uploads-in-rails
Once uploaded, you can pass the final S3 public URL of the image in a hidden field and download it server-side with CarrierWave for further manipulation (resizing, etc.).
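The server-side half of that can be sketched with CarrierWave's remote-URL setter, which downloads a file from a URL into the uploader. The model, uploader, and parameter names below are hypothetical, and this assumes a normal Rails + CarrierWave setup rather than being runnable on its own:

```ruby
# app/models/photo.rb -- model and uploader names are made up for illustration
class Photo < ActiveRecord::Base
  mount_uploader :image, ImageUploader
end

# In the controller, after the browser has uploaded straight to S3
# and posted the public URL back in a hidden field (here params[:s3_url]):
photo = Photo.new
photo.remote_image_url = params[:s3_url] # CarrierWave fetches the file from S3
photo.save                               # processing (resizing, ...) runs server-side
```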

How to see the speedup when using Cloudinary "direct upload" method?

I have a RoR web app that allows users to upload images, and it uses Cloudinary as cloud storage. I read their documentation and found a cool feature called "direct uploading" which reduces my server's load. To my knowledge, the idea is changing the workflow
image -> server -> Cloudinary
to
image -> Cloudinary
so my server only stores a Cloudinary URL in the database, not the image file (tell me if I'm wrong, thx).
So my question is: how do I check whether I have switched to the "direct uploading" method successfully? Open the element inspector to see the time cost of each POST and GET request? Are there better options?
I expect big gains from this change, but how can I actually see them?
Thanks form a rookie =)
# The app is deployed on heroku.
# Doesn't change to direct uploading method yet.
# This app is private, only serve for around 10 people.
You can indeed (and it is highly recommended to) bypass your server and let Cloudinary handle the upload directly. This reduces your server's work to simply storing the uploaded image's details, while the image itself goes straight into your Cloudinary account, which speeds up the upload. A quick check: in the browser's network inspector, a direct upload POSTs the file to api.cloudinary.com rather than to your own Heroku app. You can also test the sample project, which demonstrates both server-side and client-side uploads.

Carrierwave, Fog and URL Rewriting

I'm storing images in Amazon S3 using Fog and CarrierWave. It returns a URL like bucket.s3.amazonaws.com/my_image.jpg.
DNS entries have been set up so that images.mysite.com points to bucket.s3.amazonaws.com.
I want to adjust my views and APIs to use the images.mysite.com/my_image.jpg URL. CarrierWave, however, only spits out the Amazon-based one. Is there a simple way to tell CarrierWave and/or Fog to use a different host for its URLs than normal? If not, how would I modify the uploader to spit it out?
Come to find out that, as of June 6th, 2012, Amazon AWS does not support custom SSL certs, which makes this a moot point.
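For the plain-HTTP case, CarrierWave itself has a setting for this; it isn't mentioned in the thread, so treat it as a pointer to verify rather than the accepted answer. `asset_host` replaces the default host in the URLs CarrierWave generates:

```ruby
# config/initializers/carrierwave.rb -- a sketch; the CNAME is from the question
CarrierWave.configure do |config|
  config.asset_host = "http://images.mysite.com"
end
```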

Amazon S3 Multipart Upload with plupload and Rails 3

Amazon has multipart upload functionality where you can send a file in chunks and have it assembled on S3. This allows for some nice resume-like functionality when uploading to S3. From another question I got this nice link: Rails 3 & Plupload
My question is does anyone have any examples where they used the plupload chunking feature with the Amazon multipart feature? Ideally with carrierwave & fog.
I can see it doing the following:
1. Generate a unique ID for the upload with Plupload (can we hook an event when the Plupload starts?)
2. Attach an ajax request to the chunk-completed event, passing the ID
3. Have an ajax controller method on the server which uploads the chunk to S3 using the ID
4. When all chunks are complete, fire a controller action to reassemble
There is supposedly some PHP code which does some of the combining, but not with S3, and I can't stand to read PHP.
This is very similar; you should find it interesting.
Enjoy, and feel free to fork, send a pull request, and so on.
You can find simple core Java code that doesn't use the AWS library; this will help you implement it in any technology:
http://dextercoder.blogspot.in/2012/02/multipart-upload-to-amazon-s3-in-three.html
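The server-side half of the numbered steps above maps directly onto S3's multipart API: create an upload (the unique ID), upload numbered parts, then "reassemble" by completing the upload. A minimal sketch using the modern aws-sdk-s3 gem; the gem choice postdates this thread, and the region, bucket, key, and filename are assumptions, so this needs real AWS credentials to run:

```ruby
require "aws-sdk-s3" # gem install aws-sdk-s3

s3 = Aws::S3::Client.new(region: "us-east-1")
bucket = "my-bucket"            # hypothetical bucket
key    = "uploads/big-file.bin" # hypothetical destination key

# 1. Start the upload; upload_id is the "unique ID" for this upload
upload_id = s3.create_multipart_upload(bucket: bucket, key: key).upload_id

# 2./3. For each chunk (here read locally; in the question it arrives via ajax),
# upload it as a numbered part and remember the returned ETag
parts = []
File.open("big-file.bin", "rb") do |io|
  part_number = 1
  while (chunk = io.read(5 * 1024 * 1024)) # parts must be >= 5 MB, except the last
    resp = s3.upload_part(bucket: bucket, key: key, upload_id: upload_id,
                          part_number: part_number, body: chunk)
    parts << { etag: resp.etag, part_number: part_number }
    part_number += 1
  end
end

# 4. "Reassemble": complete the upload with the collected part ETags
s3.complete_multipart_upload(bucket: bucket, key: key, upload_id: upload_id,
                             multipart_upload: { parts: parts })
```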