Invalidate files in CloudFront using Ruby on Rails

I want to invalidate files in the CloudFront cache. The files are stored in Amazon S3.
My requirement is:
When I delete a post in my application, I want to delete the corresponding file from S3 and send an invalidation request to CloudFront.
Deleting the file from S3 is done, but I do not know how to send the invalidation request to CloudFront. I read about the cloudfront-invalidator gem at https://github.com/reidiculous/cloudfront-invalidator/network/members, but I could not find a concrete example of using that gem.

I got my solution using cloudfront-invalidator.
I adapted it to my requirements and am using it successfully.
I ran into problems with Rails 2.3.8 and Ruby 1.8.7, so I customized the gem, and it now runs successfully.
Here is the link
https://github.com/krishnasahoo/cloudfront-invalidator
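
For reference, here is a minimal sketch of sending the same invalidation request with the aws-sdk-cloudfront gem instead of cloudfront-invalidator. The distribution ID and invalidation path below are placeholders, and credentials are assumed to come from the usual AWS environment variables:

require 'aws-sdk-cloudfront'

client = Aws::CloudFront::Client.new(region: 'us-east-1')

client.create_invalidation(
  distribution_id: 'EDFDVBD6EXAMPLE',   # placeholder distribution ID
  invalidation_batch: {
    paths: { quantity: 1, items: ['/uploads/posts/123/photo.jpg'] },   # placeholder path
    caller_reference: Time.now.to_i.to_s   # must be unique per invalidation request
  }
)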

Related

Can I use zipline gem to download from s3 without model associations with paperclip or carrierwave

I want to allow my user to download a bundle of files that are stored on s3 using the zipline gem. The files are already hosted on an s3 server but they aren't there as part of a paperclip or carrierwave attachment in my app. Will I need to create some records in my database to sort of trick zipline into thinking they are paperclip attachments, or is there a way I can send the zip file without bothering with an attachment gem? At the moment, trying to download the files with zipline doesn't throw an error message at all. It just seems to skip right over and nothing downloads.
See the part of the zipline README where an enumerator is used to include remote files in the ZIP. It uses absolute URLs; to generate those from your S3 objects you will need presigned URLs (which zipline passes on to Curb):
Aws::S3::Bucket.new(your_bucket_name).object(your_key).presigned_url(:get)
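
As a hedged sketch of how that fits together (the bucket name, keys, and controller are hypothetical), the controller can hand zipline a lazy enumerator of [presigned URL, name-inside-zip] pairs, with no Paperclip or CarrierWave records involved:

class DownloadsController < ApplicationController
  include Zipline   # provided by the zipline gem

  def bundle
    bucket = Aws::S3::Bucket.new('your-bucket-name')   # placeholder bucket
    keys   = ['photos/1.jpg', 'photos/2.jpg']          # placeholder keys

    # Each pair is [absolute URL, filename inside the ZIP]; the presigned URL
    # lets zipline stream the object straight from S3.
    files = keys.lazy.map do |key|
      [bucket.object(key).presigned_url(:get), File.basename(key)]
    end

    zipline(files, 'bundle.zip')
  end
end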

Configuring a Heroku Rails app for CKEditor to work on Cloudfront

I'm using this gem to integrate CKEditor into my Rails app, with paperclip to handle image uploading.
I no longer want Heroku to serve my CKEditor-uploaded images, so I'm switching to CloudFront and found this tutorial.
As I understand it (from the tutorial), I can use CloudFront without an S3 bucket, since CloudFront will automatically fetch my precompiled static assets stored on Heroku. But I'm confused:
Will images uploaded via CKEditor be included in the asset pipeline, so that once precompiled they are served by CloudFront?
The tutorial says I need to change all image links to <%= image_tag('...') %> so that they point to CloudFront. But this is only possible when I hard-code the image, not when a user uploads one inside his text and it is stored in the database. Am I wrong, and how do I solve this?
Will this method (not using an S3 bucket) work for other "dynamic" images such as users' avatars, posts' cover images, etc.?
I don't want to use an S3 bucket as it would involve using asset_sync. Any help is appreciated, thanks!
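
For context, the tutorial-style setup boils down to pointing the Rails asset host at the CloudFront distribution (which uses the Heroku app as its custom origin); a minimal sketch, with a placeholder domain:

# config/environments/production.rb
config.action_controller.asset_host = 'd1234example.cloudfront.net'   # placeholder distribution domain

Only URLs generated through the asset helpers (image_tag, precompiled asset pipeline paths, etc.) go through asset_host, which is why images uploaded at runtime and referenced from database content are not covered by this setting on their own.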

Using Paperclip to direct upload files to S3

So I've got Paperclip set up with Uploadify to upload things to S3. My setup uploads files directly to S3, and when the upload is done the results are POSTed back to my web server...
All I get back is the file name and size. Am I supposed to build my own processor or a before_post_process method to "download" the file from S3 in order to process it? Or am I missing something, and Uploadify should have provided me a stream with the file in it after it finished posting to S3?
How do you go about direct uploads to S3 and then notifying your Paperclip-backed model? Do you have to pull the files back from S3 and do post-processing on them, or will Paperclip handle all of that?
Here are a couple of blog posts describing how to do it:
http://www.railstoolkit.com/posts/uploading-files-directly-to-amazon-s3-using-fancyupload
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
They use FancyUploader (which uses MooTools/Flash) to upload directly to S3, bypassing Heroku and its dreaded 30-second request timeout altogether, and then use DelayedJob to queue post-processing tasks like thumbnailing, with Paperclip doing the actual processing of the files.
If I can get this working with CarrierWave, I will post a project on GitHub to share (in a week or so, once I get time).
Update:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
I will add the post-processing example once I have time.
You can either create a processor or use the callback methods, but the file will definitely be on your server before it goes to S3.
In a callback method, for example, you can access it using something like:
self.file.to_file
Once processing and uploading are done, the file will be deleted from your server. You don't need to do anything to notify or post-process; Paperclip will handle it.
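
A minimal sketch of that callback approach, assuming a model with has_attached_file :file; the model name and logging are hypothetical, and queued_for_write is used here instead of the older to_file shown above:

class Photo < ActiveRecord::Base
  has_attached_file :file   # S3 storage options omitted for brevity

  before_post_process :log_local_copy

  private

  # At this point the uploaded file is still on the local filesystem;
  # Paperclip pushes the processed styles to S3 only after processing finishes.
  def log_local_copy
    local = file.queued_for_write[:original]
    Rails.logger.info "Processing #{local.original_filename} (#{local.size} bytes)"
    true   # returning false halts post-processing
  end
end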

Uploading to s3, using s3 servers

Does anyone have any sample code (preferably in Rails) that uploads to S3 using S3's servers?
Again, this is about uploading directly to S3, where the actual upload/streaming is also performed on Amazon's servers.
Requirements:
Plupload, jQuery
Idea:
Authorize the upload via your app (sign it server-side)
Use the signed request to upload the file to S3
Notify your app that the upload is done
Check whether S3 has received the file
I posted the code as a gist at https://gist.github.com/759939; it lacks comments and you might run into some issues due to missing methods (I had to rip it out of our codebase).
stored_file.rb contains a model for your DB. It has many of Paperclip's helper methods inlined (which we used before switching to direct uploads to S3).
I hope you can use it as a sample to get your stuff running. A rough sketch of the signing and verification steps is shown below.
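
Since the gist is incomplete, here is only a hedged sketch of the server-side pieces. The bucket name, routes, and controller are hypothetical, and it uses the aws-sdk-s3 presigned-POST API rather than the hand-rolled signing in the gist:

class UploadsController < ApplicationController
  # Step 1: sign the upload server-side before the browser talks to S3.
  def sign
    bucket = Aws::S3::Resource.new.bucket('your-bucket-name')   # placeholder bucket
    post = bucket.presigned_post(
      key: "uploads/#{SecureRandom.uuid}/${filename}",   # ${filename} is filled in by S3
      success_action_status: '201'
    )
    # Plupload submits the file directly to post.url with these form fields.
    render json: { url: post.url, fields: post.fields }
  end

  # Steps 3 and 4: the client notifies the app, which checks that S3 really has the file.
  def complete
    object = Aws::S3::Resource.new.bucket('your-bucket-name').object(params[:key])
    if object.exists?
      # create your StoredFile record here
      head :ok
    else
      head :not_found
    end
  end
end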
If you are using Rails 3, please check out my sample projects:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
To simply copy files, this is easy to use:
Smart Copy Script into S3
Amazon wrote a Ruby library for the S3 REST API. I haven't used it yet.
http://amazon.rubyforge.org/
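For reference, that library is the aws-s3 gem (AWS::S3); a minimal sketch of copying a file up with it, with placeholder credentials and bucket name:

require 'aws/s3'   # the aws-s3 gem documented at amazon.rubyforge.org

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'YOUR_ACCESS_KEY_ID',      # placeholder
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY'   # placeholder
)

# Store a local file under the given key in the given bucket.
AWS::S3::S3Object.store('photos/photo.jpg', open('photo.jpg'), 'your-bucket-name')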

Uploading & Unzipping files to S3 through Rails hosted on Heroku?

I'd like to be able to upload a zip file to my Rails application that contains a number of images. Then I'd like Rails to unzip that file and attach the images inside it to my Photo model via Paperclip, so that they are ultimately stored in my Amazon S3 account (configured through Paperclip).
I'd like to do all of this on my Rails site hosted on Heroku, which unfortunately doesn't allow local storage of any kind (as far as I'm aware) for temporarily doing the unzipping before the Paperclip parsing.
How would I do this?
I would recommend uploading directly to S3, which bypasses Heroku entirely, so you're not restricted to the 30-second request timeout they enforce (which drops your uploads once that time is hit) or the 1 GB /tmp directory limit. After the file is uploaded, you can make a POST to your Rails app with the file's name and location and then do your unzipping operation (a rough sketch of that step follows after the links below). If you'd like to use Paperclip for post-processing, I have attached a link below. If you end up going the route of uploading directly to S3, which offloads the work from your Rails server, please check out my sample projects:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
Here is the link for Paperclip post-processing, using images as an example:
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
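
A minimal sketch of the unzip-and-attach step, assuming a Photo model with a Paperclip attachment called image stored on S3; the job class, attribute name, and URL handling are hypothetical, and it uses the rubyzip gem:

require 'zip'        # rubyzip gem
require 'open-uri'
require 'tmpdir'

class ZipImportJob
  def initialize(zip_url)
    @zip_url = zip_url   # the S3 location the uploader POSTed back to the app
  end

  def perform
    Dir.mktmpdir do |dir|
      archive = File.join(dir, 'upload.zip')
      File.binwrite(archive, URI.open(@zip_url).read)   # pull the archive into tmp space

      Zip::File.open(archive) do |zip|
        zip.each do |entry|
          next unless entry.file? && entry.name =~ /\.(jpe?g|png|gif)\z/i

          path = File.join(dir, File.basename(entry.name))
          File.open(path, 'wb') { |out| out.write(entry.get_input_stream.read) }
          # Paperclip copies the file to S3 when the record is saved.
          File.open(path, 'rb') { |f| Photo.create!(image: f) }
        end
      end
    end
  end
end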
dmagkic is correct about the rails_root/tmp. I recommend something like the following:
Upload the files through Heroku to S3.
Set up a background job to zip the files (store the file names that you need to group).
Run the background job, which downloads the files from S3, zips them, sends the zip to S3, and removes the unzipped files.
That way your application will still be responsive-ish during the upload process.
If you try to upload multiple files, you COULD write to /tmp, but just make sure that all the files come across in the same POST request.
Heroku does allow writing to #{RAILS_ROOT}/tmp.
But you need to keep in mind that the file will only be there as long as the request lasts. Probably longer, but that is not guaranteed. You could try to block the request while you unzip and send to S3, but you should watch how long that takes.
It sounds to me like you need some Flash uploader that can unzip and send to S3, bypassing Heroku.
