How can I migrate attachment_fu out of the database?

I'm working on a Rails project that currently receives uploaded files using attachment_fu and stores files in the database. I'd like to move them to use the filesystem. The problem is that there are currently several thousand uploaded files in the database, and we need to migrate them out. I can't seem to find anything to help with this; it seems the only migration anyone is posting tips for is filesystem -> S3. How would I go about migrating my files out of the database?

If you are ultimately attempting to serve these static files via S3/CloudFront to reduce load on your web/app servers, one thing I'd suggest is the new Custom Origin functionality of CloudFront, which allows you to keep your source files where they are. Once it's set up, the process basically works like this:
Your app tells the browser to retrieve the file from http://your-cloudfront-host/path/to/file
The browser requests the file
If CloudFront has that file it returns it
If CloudFront does not have the file, it retrieves it from your application and caches it for future requests (I believe for up to 24 hours).
This is what I am doing for product images that are dynamically generated on the fly in an application I am currently writing.
The upside of this is that you avoid the overhead of constantly synchronizing data to S3, and if you decide to remove the whole setup you can still serve your assets directly as if nothing happened.
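If you do need to get the blobs out of the database rather than fronting them with CloudFront, a rough rake-task sketch is below. It assumes attachment_fu's standard setup: a db_files table with a data column, a DbFile model, and that you have already switched the model's has_attachment to :file_system storage so that full_filename (attachment_fu's filesystem-backend method) points at the new on-disk location. Verify these class and method names against your attachment_fu version before running anything like this.

```ruby
# One-off task: copy each attachment's blob from the db_files table to disk.
# "Attachment" and the db_file_id column follow attachment_fu's conventions,
# but treat this as a sketch, not a drop-in migration.
namespace :attachments do
  desc "Copy attachment_fu blobs from the database onto the filesystem"
  task :migrate_to_filesystem => :environment do
    Attachment.find_each do |attachment|
      db_file = DbFile.find_by_id(attachment.db_file_id)
      next if db_file.nil?  # already migrated or orphaned

      path = attachment.full_filename           # target path per the filesystem backend
      FileUtils.mkdir_p(File.dirname(path))     # attachment_fu partitions paths by id
      File.open(path, "wb") { |f| f.write(db_file.data) }
    end
  end
end
```

Run it once, spot-check a handful of files, and only then drop the db_files table.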

Related

How do I use AWS to update my iOS App without an official app update?

So I have an app that is pretty simple. It has data stored in .htm files which can either be displayed in a UIWebView or parsed to return a string of the results (the app does both). The data changes weekly, and obviously it takes longer than that just to get through an update submission review. I want the user to be able to click a button and check for updates on a server and then download/replace the .htm files as needed. The app can take it from there. Which Amazon Web Services service is best for checking for new files and downloading them? If you think there is an easier service than AWS, then I'm definitely open to other ideas.
You could store your files in S3. Set up your app so it downloads the file from S3, then compares it with the one you've got locally (you could hash them both and compare the hashes).
If it's changed, then you can use the new one.
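A minimal sketch of that hash-and-compare check, written in Ruby for brevity (the paths and URL are made up; the same logic translates directly to whatever runs on the device):

```ruby
require "digest"
require "open-uri"

LOCAL_PATH = "data/weekly.htm"                                # hypothetical local copy
REMOTE_URL = "https://my-bucket.s3.amazonaws.com/weekly.htm"  # hypothetical S3 object

remote_data = URI.open(REMOTE_URL, &:read)                    # fetch the S3 copy

if Digest::SHA256.hexdigest(remote_data) != Digest::SHA256.file(LOCAL_PATH).hexdigest
  File.binwrite(LOCAL_PATH, remote_data)                      # changed: keep the new one
end
```

In practice you can often skip the download entirely: issue a HEAD request and compare S3's ETag header (the MD5 of the object, for non-multipart uploads) against a hash you stored locally.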

profile pictures - how to store [duplicate]

On my website, users can upload a profile image.
I want to know the best way to store those images.
My thought is simply a dedicated directory, with each image named after the user_id.
Is that a good solution, or is there a smarter one?
You have two options: store the images yourself, or use an external source (Gravatar).
If you're going to store the images, do you want them to be publicly available or are they private? If they are publicly available, you can store them in your public folder.
You can use something like carrierwave to handle the uploading, versioning and storing of the images.
For public stuff, I'll store the file in the public directory under the uploader/model name/field name/id location. This is more for organizational purposes on my part.
Check out http://railscasts.com/episodes/253-carrierwave-file-uploads for a good tutorial.
For private images, I'll set the store directory to something outside of the public folder and create a download action within the controller that sends the file. This way, the user cannot get the file except through the controller action. With authorization (CanCan) I can allow or disallow a user to access the download action for that particular file (hence making it somewhat secure). If you are going to be using a production server like Apache or Nginx, make sure that you set the appropriate handlers for sending the file (i.e. x_sendfile).
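A minimal sketch of that download action (the controller, model, and uploader names here are illustrative, not prescribed by CarrierWave):

```ruby
class AttachmentsController < ApplicationController
  load_and_authorize_resource   # CanCan: rejects users not permitted to see this record

  def download
    # The uploader's store_dir lives outside public/, so this action is the only way in.
    send_file @attachment.file.path,            # `file` is the mounted CarrierWave uploader
              filename:    @attachment.file_identifier,
              disposition: "attachment"
  end
end
```

With config.action_dispatch.x_sendfile_header set ("X-Sendfile" for Apache, "X-Accel-Redirect" for Nginx), Rails delegates the actual byte-pushing to the web server instead of streaming the file itself.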
It's very common to store images in a directory for small applications. However, there are a few things to take into consideration here:
Do you anticipate a lot of users? If you have a million users, storing everyone's photos locally will take up a lot of storage on the machines running your application.
Are you deploying on Heroku? Many RoR apps are, and Heroku will destroy any files you store locally whenever your app is moved to a different dyno (and you generally have no way of predicting when this will happen). You can read about the ephemeral filesystem here: https://devcenter.heroku.com/articles/dynos#isolation-and-security
In general I would advise against storing all your images locally, because rewriting the code as you scale will become painful. I recommend you upload to an Amazon S3 bucket and download the images as you need them (caching them for when your user is logged in). This is helpful because you may have to deal with image processing (for example resizing uploaded images, or creating thumbnail versions), and it's easier to do that with background processes that have persistent access to the files. I've used the 'aws' gem and S3 libraries for this, and they're really easy to use; you can read more about them here: http://amazon.rubyforge.org/
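For illustration, the aws-s3 gem linked above works roughly like this (the bucket name and key scheme are made up):

```ruby
require "aws/s3"

AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV["AWS_ACCESS_KEY_ID"],
  :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"]
)

# Store an uploaded avatar under a key derived from the user id
AWS::S3::S3Object.store("avatars/#{user.id}.png", file.read, "my-avatars-bucket")

# Later: fetch the bytes back, or hand the browser a time-limited URL
data = AWS::S3::S3Object.value("avatars/#{user.id}.png", "my-avatars-bucket")
url  = AWS::S3::S3Object.url_for("avatars/#{user.id}.png", "my-avatars-bucket",
                                 :expires_in => 300)
```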
However, if you intend for this to be a small app and are not deploying on Heroku, just saving files to a local directory is a lot easier and relatively pain-free.

Rails, Heroku, S3, and static resources

I am working on a Rails web application, running on a Heroku stack, that looks after some documents attached to a Rails database object. That is, suppose we have an object called product_i of class/table Product/products, and product_i_prospectus.pdf is the associated product prospectus, where each product has a single prospectus.
Since I am working on Heroku, and thus do not have root access, I plan to use Amazon S3 to store the static resource associated with product_i. So far, so good.
Now suppose that product_i_attributes.txt is also a file I want to upload, and indeed I want to actually fill out information in the product_i object (i.e. the row in the table corresponding to product_i), based on information in the file product_i_attributes.txt.
In a sentence: I want to create, or alter, database objects, based on the content of static text files uploaded to my S3 bucket.
Strictly speaking, I don't actually need to be able to access the files once they are in the bucket; I just need to create some records out of a text file.
I have done something similar with CSV files. I would not try to process the file directly at upload time, as it can be resource-intensive.
My solution was to upload the file to S3 and then call a background job (delayed_job, resque, etc.) that processed the CSV after the upload. You could then have the job delete the file from S3 once it finishes, if you no longer need the file after processing.
For Heroku this will require that you add a worker (if you don't already have one) to process the background jobs that will process the text files.
Take a look at the aws-sdk-for-ruby gem. This will allow you to access your S3 bucket.
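To make that concrete, here is a rough sketch of the job using delayed_job and the aws-sdk gem's v1 API (the bucket name, key, and key=value parsing are invented for illustration):

```ruby
# Processes an uploaded attributes file out of S3, then removes it.
class ProcessAttributesFile < Struct.new(:key)
  def perform
    s3     = AWS::S3.new    # credentials come from AWS.config or the environment
    object = s3.buckets["my-uploads-bucket"].objects[key]

    object.read.each_line do |line|
      name, value = line.chomp.split("=", 2)   # assumes simple key=value lines
      # ... find or build the Product row and assign the attribute ...
    end

    object.delete    # done with the source file, per the suggestion above
  end
end

# Enqueue right after the upload completes:
Delayed::Job.enqueue(ProcessAttributesFile.new("uploads/product_i_attributes.txt"))
```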

Managing upload of images to create custom PDFs on Heroku - right tools

I'm designing an app which allows users to upload images (max 500k per image, roughly 20 images) from their hard drive to the site, so as to be able to make some custom boardgames (e.g. snakes and ladders) in PDF format. These will be created instantly with Prawn and then made available for immediate download.
Neither the uploaded images nor the created PDFs need to be saved on my app's side permanently. The moment the user downloads the PDF, they are no longer needed.
Heroku doesn't support saving files to the filesystem (it does allow writing to the tmp directory, but says you shouldn't rely on it, which rules it out for me). I'm wondering what tools/services I should be looking into to get round this. I've looked into Paperclip, and I'm wondering if it's right for this type of job.
Paperclip is on the right track, but the key insight is you need to use the S3 storage backend (Paperclip uses the FS by default which as you've noticed is no good on Heroku). It's pretty handy; instead of flushing writes out to the file system, it uses the AWS::S3 gem to upload them to S3. You can read more about it in the rdoc here: http://github.com/thoughtbot/paperclip/blob/master/lib/paperclip/storage/s3.rb
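A minimal sketch of that configuration (the model, bucket, and path layout are examples):

```ruby
class Game < ActiveRecord::Base
  has_attached_file :pdf,
    :storage        => :s3,
    :s3_credentials => { :access_key_id     => ENV["AWS_ACCESS_KEY_ID"],
                         :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"] },
    :bucket         => "my-boardgame-uploads",
    :path           => ":class/:attachment/:id/:style/:filename"
end
```

With that in place, writes that would have hit public/system/ go to S3 instead, and `game.pdf.url` returns the S3 URL you can hand to the user.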
Here's how the flow would work:
I'd let your users upload their multiple source images. Here's an article on allowing multiple attachments to one model with paperclip: http://www.cordinc.com/blog/2009/04/multiple-attachments-with-vali.html.
Then when you're ready to generate the PDF (probably in a background job, right?), what you do is download all the source images to somewhere in tmp/ (make sure the directory is based on your model id or something, so that if two people do this at once the files don't get stepped on). Once you've got all the images downloaded, you can generate your PDF. I know this is using the file system, but as long as you do all your filesystem interactions in one request or job cycle, it will work; your files will still be there. I use this method in a couple of production web apps. You can't count on tmp/ being there between requests, but within one it's reliably there.
Storing your generated PDF on S3 with paperclip makes sense too, since then you can just hand your users the S3 URL. If you want you can make something to clear the files off every so often if you don't want to pay the S3 costs, but they should be trivial.
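Sketched as a delayed_job-style job (Game, images, and pdf are hypothetical model/attachment names under the setup above):

```ruby
require "open-uri"   # to pull the S3-hosted source images down

class GeneratePdfJob < Struct.new(:game_id)
  def perform
    game = Game.find(game_id)
    dir  = Rails.root.join("tmp", "pdfs", game.id.to_s)  # per-record dir: concurrent jobs don't collide
    FileUtils.mkdir_p(dir)

    # Download every source image into tmp/ within this one job cycle
    paths = game.images.map do |img|
      path = dir.join(File.basename(img.attachment.path))
      File.binwrite(path, URI.open(img.attachment.url, &:read))
      path.to_s
    end

    pdf_path = dir.join("board.pdf").to_s
    Prawn::Document.generate(pdf_path) do |doc|
      paths.each_with_index do |p, i|
        doc.start_new_page unless i.zero?
        doc.image p, :fit => [500, 700]
      end
    end

    game.update_attribute(:pdf, File.open(pdf_path))  # Paperclip's S3 backend pushes it up
  ensure
    FileUtils.rm_rf(dir) if dir   # tmp/ is scratch space only; clean up inside the job
  end
end
```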
Paperclip sounds like an ideal candidate. It will save images in RAILS_ROOT/public/system/, which is both persistent and reasonably private (it shouldn't be enumerable on shared hosting).
You can configure it to produce thumbnails of your images if you wish.
And it can remove the images it manages when the associated model is destroyed, i.e. after your user downloads their PDF and you delete the record from the database.
Prawn might not be appropriate, depending on the complexity of the PDFs you need to generate. If you have $$$, go for PrinceXML and the princely gem. I've had some success with wkhtmltopdf, which generates PDFs from a Webkit render of HTML/CSS - but it doesn't support any of the advanced page manipulation stuff that Prince does.

How can I prevent double file uploading with Amazon S3?

I decided to use Amazon S3 for document storage for an app I am creating. One issue I've run into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions.
One solution is to allow a double upload. A user uploads a document to the server my Rails app lives on; I validate it and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated: most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a noticeable delay while the file travels from my server to S3. This also introduces bandwidth use that seems unnecessary.
The other solution I am considering is to upload the file directly to S3 with one AJAX request and, when that succeeds, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I would have to run some clean-up code in S3 if the validation fails.
Both seem equally messy.
Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
Unless there's a particular reason not to use Paperclip, I'd highly recommend it. Used in conjunction with delayed_job and delayed_paperclip, the user uploads the file to your server's filesystem, where you perform whatever validation you need; a delayed job then processes the file and stores it on S3. Really, really easy to set up, and a better user experience.
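A sketch of that combination (model and attachment names are examples; delayed_paperclip's process_in_background is the one piece of real API assumed here):

```ruby
class Document < ActiveRecord::Base
  has_attached_file :file,
    :storage        => :s3,
    :s3_credentials => { :access_key_id     => ENV["AWS_ACCESS_KEY_ID"],
                         :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"] },
    :bucket         => "my-documents-bucket"

  # Validation runs while the file is still on your own filesystem
  validates_attachment_content_type :file, :content_type => "application/pdf"

  # delayed_paperclip defers the heavy lifting to a background worker,
  # giving the flow described above
  process_in_background :file
end
```

The user sees a single normal upload with an accurate progress bar, and the S3 transfer happens out of band.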
