I have an existing Rails RefineryCMS application, which has been running for quite some time. It has a lot of image and document uploads, which have always been uploaded to the local filesystem.
But we are moving to Heroku, and this will be a problem, since Heroku doesn't persist these files.
So we need to get all the existing images and documents exported to an Amazon S3 bucket.
How could we achieve this?
Would it be as plain and simple as just copying the existing files from the current production environment over to the S3 bucket?
Kind regards
The plain solution was just to copy the existing folders generated by Dragonfly's file uploads in "app/public" over to Amazon S3.
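If you want to script that copy, here is a minimal sketch using the aws-sdk-s3 gem; the region, bucket name, and the public/system upload root are all assumptions, so adjust them to wherever Dragonfly writes your files.

require "aws-sdk-s3"
require "find"

s3   = Aws::S3::Client.new(region: "us-east-1") # assumed region
root = File.expand_path("public/system")        # assumed Dragonfly upload root

Find.find(root) do |path|
  next unless File.file?(path)
  key = path.sub("#{root}/", "") # keep the folder structure as the object key
  File.open(path, "rb") do |file|
    s3.put_object(bucket: "my-refinery-bucket", key: key, body: file) # assumed bucket
  end
end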
I'm brand new to AWS products, Ruby on Rails, web development, and coding of any type. For my first project after a quick (and dirty) bootcamp, I'm trying to build a Ruby on Rails website that stores images and allows the user to download them as a zip file. I used the RubyZip gem to accomplish this in my EC2 dev environment, but I have deployed to Elastic Beanstalk with S3 file storage, and the RubyZip gem seems unable to handle this structure, since it has no traditional directory targets to zip.
My question is: what's the best setup to achieve this functionality in EB? Disregarding the Ruby constraint, zipping an S3 directory seems tricky. Should I move to EFS or another storage system? I plan to erase the folders regularly and limit them to ~100 photos, so long-term and large-size storage are not a concern. Thanks very much!
Edit: I am attached to Ruby (only language I know), but not RubyZip, AWS, or much of anything else if they are not the best approach for this task.
I think you're on the right path as far as using S3 as a solution. The problem you're facing is that S3 is not like a folder on your local system; instead, you're hitting the S3 API to interact with the files (upload, edit, delete, etc.). This is a problem you'll encounter with every AWS-based storage solution.
I think the solution, in your case, is to fetch all of the photos and download them to a temporary folder on your local system. Then you can zip them locally using Ruby. After it's zipped, upload it back to S3.
Edit: By "locally" I mean on the server where the Ruby application is running (not client-side).
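A minimal sketch of that flow, assuming the aws-sdk-s3 and rubyzip gems; the region, bucket name, and prefix are made up:

require "aws-sdk-s3"
require "zip"
require "tmpdir"

s3     = Aws::S3::Client.new(region: "us-east-1") # assumed region
bucket = "my-photos-bucket"                       # assumed bucket
prefix = "albums/42/"                             # assumed folder of ~100 photos

Dir.mktmpdir do |dir|
  zip_path = File.join(dir, "photos.zip")
  Zip::File.open(zip_path, Zip::File::CREATE) do |zip|
    # list_objects_v2 returns up to 1,000 keys per call, plenty for ~100 photos
    s3.list_objects_v2(bucket: bucket, prefix: prefix).contents.each do |object|
      next if object.key.end_with?("/") # skip the folder placeholder itself
      local = File.join(dir, File.basename(object.key))
      s3.get_object(bucket: bucket, key: object.key, response_target: local)
      zip.add(File.basename(object.key), local)
    end
  end
  # upload the finished archive back to S3 so the user can download it
  File.open(zip_path, "rb") do |file|
    s3.put_object(bucket: bucket, key: "#{prefix}photos.zip", body: file)
  end
end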
I have a Heroku-hosted app that uses Paperclip to store User photos on Amazon S3.
I want to move some (not all) files to a new bucket based on some internal logic (the app is multi-tenant and I'm separating AWS file storage and my Postgres DB into separate tenants/schemas)
I have two options I'm considering:
Option 1 - Use the AWS CLI to move files directly between buckets
This option is AWS-native, but it has the drawback of having to worry about an entire folder structure for each file. Moving a file involves moving all the various styles of the file (original, medium size, thumbnail, etc.), so it's not as straightforward as copying one file over.
It also copies everything over to the new bucket with the exact same folder/id structure, which I'd like to avoid, since the User's corresponding DB info (e.g. id) will change when I migrate it over in the Postgres DB.
Option 2 - Use Paperclip to pull down each file locally and re-upload it
This is an attractive option because it lets Paperclip handle all the work.
However, Paperclip uses the bucket name to construct the URL of the file. I need it to pull from one bucket and push to another. Is there a way to set the bucket name individually for each transaction?
Paperclip uses the bucket name to construct the URL of a remote file, but the names of those directories and files don't depend on the bucket name. If your files or directories contain the bucket name, then you are doing it wrong and you should start by fixing that.
Do the following:
1. Sync your public/system directory from the old bucket with the aws s3 sync OLD_BUCKET_URL public/system command.
2. Perform the changes to the directories and files locally with a Ruby script using Paperclip (see the sketch below).
3. Sync (upload) your public/system directory to the new bucket with the aws s3 sync public/system NEW_BUCKET_URL command.
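For step 2, a minimal sketch, assuming your Paperclip :path keeps each record's files under public/system/photos/:id and that you already know which old ids map to which new ids (both assumptions; adjust to your actual layout):

require "fileutils"

ID_MAP = { 12 => 341, 57 => 342 } # hypothetical old-id => new-id pairs

ID_MAP.each do |old_id, new_id|
  old_dir = "public/system/photos/#{old_id}"
  new_dir = "public/system/photos/#{new_id}"
  FileUtils.mv(old_dir, new_dir) if File.directory?(old_dir) # rename in place
end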
I have a rake task that creates a CSV file. Right now I am storing this file in my /tmp folder (I am using Heroku). This CSV file is not associated with any model; it's just some data I pull from several APIs, combined with some data from my models.
I would like to download this file from Heroku, but that doesn't seem possible. So my question is: which gem should I look at to upload that file to Amazon S3? I have seen gems like Paperclip, but those seem to be associated with a model, and that is not my case. I just want to upload the CSV file that I will have in /tmp to my Amazon S3 bucket.
Thanks
You can use the aws-s3 gem:

require 'aws/s3'
AWS::S3::Base.establish_connection!(access_key_id: ENV['AWS_ACCESS_KEY_ID'], secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
AWS::S3::S3Object.store('filename_in_s3.txt', open("source_file.tmp"), 'bucket_name')

You should define the exact path of your tmp file, for example:

open("#{Rails.root}/tmp/source_file.tmp")
CarrierWave can interface your Ruby application with S3 directly via the Fog library. Rather than operating at the model level, CarrierWave uses an Uploader class where you can cull data from across your APIs and data sources, which is precisely what you're trying to accomplish.
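A minimal sketch of that setup, assuming the carrierwave and fog-aws gems; the bucket name and file path are placeholders:

require "carrierwave"
require "fog/aws"

CarrierWave.configure do |config|
  config.fog_provider    = "fog/aws"
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
  }
  config.fog_directory = "my-csv-bucket" # assumed bucket name
end

# An uploader used standalone, with no model attached
class ReportUploader < CarrierWave::Uploader::Base
  storage :fog
end

ReportUploader.new.store!(File.open("#{Rails.root}/tmp/report.csv"))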
I have a RoR website where users can upload photos. I use the Paperclip gem to upload the photos and store them on the server as files. I am planning to move to Amazon S3 for storing the photos, and I need to move all my existing photos from the server to Amazon S3. Can someone tell me the best way of moving the photos? Thanks!
You'll want to log into your AWS Console and create a bucket structure to hold your images. Neither S3 nor Paperclip has any tools in the way of bulk migration from the file system to S3; you'll need to use the s3cmd tool for that. In particular, you're interested in the s3cmd sync command, something along the lines of:
s3cmd sync ./public/system/images/ s3://imagesbucket
If you have any image URLs hard-coded into your database (a la markdown/template code), this might be a little tricky. One option would be to manually update your URLs to point to the new bucket. Alternatively, you can use rack-rewrite.
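A minimal rack-rewrite sketch for that redirect; the path prefix and bucket URL are assumptions based on the sync command above, which puts the contents of public/system/images at the bucket root:

# config/application.rb (rack-rewrite must be in your Gemfile)
config.middleware.insert_before(Rack::Runtime, Rack::Rewrite) do
  # 301 any old local image path to the corresponding key in the S3 bucket
  r301 %r{^/system/images/(.*)}, 'https://imagesbucket.s3.amazonaws.com/$1'
end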
You can easily do this by creating a bucket on Amazon S3 that has the same folder structure as your public directory on your Rails app.
So say for instance, you create a new bucket on Amazon S3 called MyBucket and it has a folder in it called images. You'd just move all of your images within your Rails app's images folder over to that new bucket's images folder.
Then you can set up your app to use an asset host, like this answer describes: is it good to use S3 for Rails "public/images" and is there an easy way to do it?
If you are using image_tag or other asset tag helpers (javascripts, stylesheets, etc.), it will use that asset_host in production environments and properly generate the URL to your S3 bucket.
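The asset host itself is a one-line setting; the bucket URL below is an assumption matching the MyBucket example above:

# config/environments/production.rb
config.action_controller.asset_host = "https://MyBucket.s3.amazonaws.com"
# image_tag("logo.png") now points at https://MyBucket.s3.amazonaws.com/images/logo.png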
I found this script, which takes care of moving the images to an Amazon S3 bucket using a rake task.
https://gist.github.com/924617
Let's say I'm using attachment_fu to attach profile pics to user profiles in a system, with Amazon S3 used as the actual file storage. When users upload new profile pics, I'd like to replace the attached file with the new one. I can do this within my database (i.e. the file metadata) easily, but attachment_fu doesn't seem to provide methods for deleting the files from S3.
Am I missing something, or am I approaching this the wrong way?
Many thanks!