I am developing an admin panel for a legacy database which has images stored on S3. Each record holds a full URL pointing to the resource on S3, which is publicly readable. Paperclip stores this across several fields in the database. I need to do these steps:
1. Upload the file to my server.
2. Generate a UUID.
3. Upload the file to S3 with the correct permissions, named with the UUID.
4. Save the complete URL in a single db column.
My only requirement is uploading a single image file and storing it in a single field as explained. I don't care which lib I use.
My questions are:
1. Can I configure Paperclip to change its default behaviour to do this?
2. If I use the AWS SDK myself, do I have to use some kind of threads? I can't find a decent example of uploading an image from a model the same way it is done in Paperclip.
3. How is the image received in the model when uploading using Paperclip and Active Admin?
These may be basic questions, but I am a RoR newbie! Guidance and help is much appreciated.
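For the AWS-SDK route, no threads are strictly needed for a single small file (the upload just blocks the request; a background job is only an optimization). A minimal sketch of the four steps, assuming the `aws-sdk-s3` gem and env var names like `S3_BUCKET_NAME` (both assumptions, not from the question):

```ruby
require "securerandom"
# require "aws-sdk-s3"   # gem dependency, assumed

# Build the S3 object key from a UUID, keeping the original extension.
def uuid_key_for(filename)
  ext = File.extname(filename)
  "#{SecureRandom.uuid}#{ext}"
end

# Sketch of the controller/service side (not run against a live bucket):
# def upload_to_s3(uploaded_file, record)
#   key = uuid_key_for(uploaded_file.original_filename)
#   s3  = Aws::S3::Resource.new(region: ENV["AWS_REGION"])
#   obj = s3.bucket(ENV["S3_BUCKET_NAME"]).object(key)
#   obj.upload_file(uploaded_file.tempfile.path, acl: "public-read")
#   record.update!(image_url: obj.public_url)  # single db column
# end
```

`uploaded_file` here is the `ActionDispatch::Http::UploadedFile` Rails puts into `params` for a multipart form field.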
Instead of using Paperclip I used CarrierWave and Fog, which give you the ability to override the name, storage location, etc. The uploader below is attached to the model with mount_uploader:
class PronunciationUploader < CarrierWave::Uploader::Base
  storage :fog

  def initialize(*)
    super
    self.fog_directory = ENV['S3_BUCKET_NAME']
  end

  # directory inside the bucket where files end up
  def store_dir
    'pronunciations'
  end

  # local directory used to cache uploads before they reach S3
  def cache_dir
    "#{Rails.root}/tmp/uploads/pronunciations"
  end

  # allowed file extensions
  def extension_white_list
    %w(mp3 m4a)
  end
end
I'm working on a project that uses a Rails API for the backend and has a separate front end calling that API. This is the first project where I've had to store images on the API, and after days of research on best practices I'm now more confused than I was going in.

Basically, I use Paperclip and S3 to handle uploading images onto the API, but I've hit a roadblock now that I'm trying to call the API to retrieve those images. I'm very new to handling images this way: should I somehow generate a URL and store that in the database to use for retrieving and displaying the images on the front end, or is there a way to take the multiple parts of the image that Paperclip creates and generate the image from those?

The API successfully returns the rest of the data, like my object names and bios; I just can't figure out the proper way to store the images so I can easily retrieve them.
If you are using Paperclip to upload images to Amazon S3, then the object the attachment is associated with has a method named after whatever you set the attachment name to when you generated your Paperclip migration.
For example, if you generated the migration (see https://github.com/thoughtbot/paperclip#migrations) using

rails generate paperclip user avatar

then instances of User get a method called .avatar, and that in turn gives you a method called .url that you can call on the avatar.
i.e.

User.first.avatar.url

will give you the URL of the location where the image is stored on S3. (Note that avatar is an instance method, so call it on a User record, not on the class itself.)
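So on the API side you don't need to store anything extra: just serialize `avatar.url` alongside the other fields in your JSON response. A sketch, where the attribute names `name` and `bio` are assumptions for illustration:

```ruby
# Build the JSON payload the front end consumes. `user` is any object
# responding to #name, #bio and #avatar (with #url), as a Paperclip-backed
# ActiveRecord model would.
def user_payload(user)
  {
    name:       user.name,
    bio:        user.bio,
    avatar_url: user.avatar.url  # the S3 URL Paperclip builds for the attachment
  }
end

# In a Rails controller this becomes:
#   render json: user_payload(User.find(params[:id]))
```

The front end then uses `avatar_url` directly as the `src` of an image tag.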
I've just discovered that Heroku doesn't have long-term file storage, so I need to move to using S3 or similar. There are a lot of new bits and pieces to get my head around, so: have I understood how direct upload to S3 using carrierwave-direct, followed by processing with delayed_job, should work with my Rails app?
What I think should happen if I code this correctly is the following:
I sign up to an S3 account, set-up my bucket(s) and get the authentication details etc that I will need to program in (suitably hidden from my users)
I make sure my bucket's cross-domain (CORS) rules don't prevent my uploads (and later downloads)
I use CarrierWave & CarrierWave-direct (or similar) to create my uploads to avoid loading up my app during uploads
S3 will create random access ('filename') information so I don't need to worry about multiple users uploading files with the same name and the files getting overwritten; if I care about the original names I can use metadata to store them.
carrierwave-direct redirects the user's browser to an 'upload completed' URL after the upload, from where I can either create the delayed_job or pop up the 'sorry, it went wrong' notification.
At this point the user knows that the job will be attempted and they move on to other stuff.
My delayed_job task accesses the file using the S3 APIs and can delete the input file when completed.
delayed_job completes and notifies the user in the usual way e.g. an e-mail.
Is that it or am I missing something? Thanks.
You have a good understanding of the process you need. To throw one more layer of complexity at you: you should wrap all of it in Rails' newer ActiveJob. ActiveJob simply facilitates background processing inside Rails via the processor of your choosing (in your case DelayedJob). Then, you can create jobs via a Rails generator:
bin/rails g job process_this_thing
ActiveJob offers a "Rails way" of handling jobs, but it also allows you to switch processors with less hassle.
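A minimal sketch of what that generator scaffolds. To make the snippet run outside a Rails app, a tiny stand-in for `ApplicationJob` (normally generated in app/jobs/application_job.rb) is defined first; the job body itself is an assumption:

```ruby
# Stand-in for the base class Rails generates, so this sketch runs on its own.
class ApplicationJob
  def self.queue_as(name)
    @queue_name = name
  end
end

# What `bin/rails g job process_this_thing` scaffolds, with a body filled in:
class ProcessThisThingJob < ApplicationJob
  queue_as :default

  def perform(record_id)
    # In the real app: load the record, process its uploaded versions here.
    "processed #{record_id}"
  end
end

# To route jobs through DelayedJob, config/application.rb would contain:
#   config.active_job.queue_adapter = :delayed_job
# and you enqueue with:
#   ProcessThisThingJob.perform_later(record.id)
```

Switching from DelayedJob to, say, Sidekiq later is then a one-line adapter change rather than a rewrite of every job call site.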
So, you create a CarrierWave uploader (see the CarrierWave docs) and attach that uploader to a model. For carrierwave-direct you need to move the file field out of your model's form and into its own form (use the form URL method provided by carrierwave-direct).
You can choose to upload the file, then save the record. Or, save the record and then process the file. The set-up process is significantly different depending on which you choose.
CarrierWave and carrierwave-direct know where to save the file based on the Fog credentials you put in the CarrierWave initializer and on the store_dir path, if set, in the uploader.
CarrierWave provides the uploader, where you define versions, etc. carrierwave-direct facilitates uploading directly to your S3 bucket and processing versions in the background. ActiveJob, via DelayedJob, provides the background processing. Fog is the link between CarrierWave and your S3 bucket.
You should add a boolean flag to your model that is set to true when carrierwave-direct uploads your image and then set to false when the job finishes processing the versions. That way, instead of a broken link (while the job is running and not yet complete), your view will show something like 'this thing is still processing...'.
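A sketch of that flag. The model/column names (`things`, `processing`, `image_url`) and the placeholder image are assumptions; the migration line is commented because it runs inside the Rails app:

```ruby
# Migration adding the flag, defaulting to "still processing":
#   add_column :things, :processing, :boolean, default: true, null: false
# The background job then ends with: thing.update!(processing: false)

# View-side decision, written as a plain helper so it is testable anywhere:
def image_source_for(thing)
  if thing.processing
    "placeholder.png"   # shown while the background job is still running
  else
    thing.image_url     # the real, now-valid S3 URL
  end
end
```

In the view this becomes something like `image_tag image_source_for(@thing)`.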
RailsCast is the perfect resource for completing this task. Check this out: https://www.youtube.com/watch?v=5MJ55_bu_jM
I'm new to Ruby on Rails development and I would like to know the best way to save a picture/image from the controller of my web page. I tried something like this:
@fin = File.open(params[:photos], "rb")
@img = @fin.read
I think you can see my reasoning. In the end I want to be able to save my picture into my database.
I would recommend that you use a gem like carrierwave: https://github.com/carrierwaveuploader/carrierwave
You really should not save a picture into a database. Instead, store the image in some other datastore and put a pointer to it in your database. CarrierWave makes this very easy and has different adapters to store the images on your local filesystem, S3, SFTP, or NFS.
Thoughtbot's Paperclip is another good alternative: https://github.com/thoughtbot/paperclip
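A toy illustration of the "store a pointer, not the bytes" advice, with a temp directory standing in for S3/NFS and a hash standing in for the db row (both stand-ins are assumptions for the demo):

```ruby
require "securerandom"
require "tmpdir"

# Write the file to a datastore and keep only its path in the "record".
# CarrierWave automates exactly this pattern for you.
def store_image(bytes, datastore_dir)
  path = File.join(datastore_dir, "#{SecureRandom.uuid}.jpg")
  File.binwrite(path, bytes)
  { image_path: path }  # only the pointer ends up in the db row
end
```

With CarrierWave you would instead declare `mount_uploader :photo, PhotoUploader` on the model and assign `params[:photos]` to `record.photo`; the gem writes the file to the configured store and saves the path for you on `record.save`.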
I have a Rails 4.1.1 app with file upload through Paperclip to Amazon S3. I'd like to do some processing on my file when it's uploaded, and I'd like to perform this processing before the file is actually sent to S3 so that everything happens faster; otherwise I'd have to upload the file, then download it, then process it.
So, how can I create a file, somewhere in my tmp/ folder for processing, from the form submitted by the user?
Any help would be appreciated, I could find no reference on the web for such a need.
Thanks in advance
Images are uploaded to your application before being stored in S3. This allows your models to perform validations and other processing before being sent to S3.
So I would go with a custom Paperclip::Processor, or with Paperclip callbacks like before_post_process (usually used for validation).
I think these two articles are very enlightening.
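As for "creating a file in tmp/ from the form": Rails has already written the upload to a tempfile by the time your controller (or a before_post_process callback) runs, so staging it for local processing is a plain file copy. A sketch, where the param name is an assumption:

```ruby
require "fileutils"

# `uploaded` is the object Rails puts in params for a multipart file field
# (it responds to #original_filename and #tempfile). Copy it into tmp_dir,
# process the copy, then let Paperclip push the processed file to S3.
def stage_for_processing(uploaded, tmp_dir)
  FileUtils.mkdir_p(tmp_dir)
  dest = File.join(tmp_dir, File.basename(uploaded.original_filename))
  FileUtils.cp(uploaded.tempfile.path, dest)
  dest
end

# In a controller (assumed param name):
#   stage_for_processing(params[:upload], Rails.root.join("tmp", "uploads"))
```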
Is there any existing solution for selecting the store method on the fly in CarrierWave?
I wish to select local storage or Fog (two providers) on a per-record (user) basis.
Do I have to rewrite my models as inherited models, or is there already another solution?
Thanks for your thoughts.
You can access your model from the uploader using the model method.
To specify the storage backend, you use

storage :fog

So you could write your own macro class method which calls storage :fog or storage :file based on a preference. It should be possible, but I don't know what you want to achieve exactly. Providing more code (the User code and the conditions the storage depends on) would be helpful.
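One commonly used pattern (verify it against your CarrierWave version) is to override the instance-level `storage` method, since the uploader can reach its record via `model`. `use_fog?` is an assumed preference flag on the user:

```ruby
# Sketch only; depends on CarrierWave internals:
#
#   class AvatarUploader < CarrierWave::Uploader::Base
#     def storage
#       @storage ||= if model.use_fog?
#                      CarrierWave::Storage::Fog.new(self)
#                    else
#                      CarrierWave::Storage::File.new(self)
#                    end
#     end
#   end

# The decision itself is plain Ruby and testable outside Rails:
def storage_name_for(record)
  record.use_fog? ? :fog : :file
end
```

The alternative the question hints at also works: two uploader subclasses (one `storage :fog`, one `storage :file`) and choosing which to mount, at the cost of restructuring the models.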