Hi, I am a Ruby on Rails developer working on an application where users can upload a file and share a URL to download it.
I am using CarrierWave for file uploads and storing the files locally in my application (not in AWS).
I want to give users access to an uploaded file only for a certain period of time (for example, 1 hour).
How can I achieve this without creating any cron jobs or a before_filter method?
I found that AWS S3 has a built-in feature for this (presigned URLs), but I am not storing the files in S3; they live in a local application folder. Please guide me on how I can achieve this.
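One way to do this without cron jobs or a before_filter is to sign the download URL with an embedded expiry and check it inside the action itself; expired links simply stop working, so nothing has to clean them up. A minimal sketch using ActiveSupport::MessageVerifier (the DownloadsController, Upload model, and route are assumptions; adjust the secret lookup to your Rails version):

    # config/routes.rb (assumed): get "/downloads/:id" => "downloads#show", as: :download
    class DownloadsController < ApplicationController
      VERIFIER = ActiveSupport::MessageVerifier.new(
        Rails.application.secrets.secret_key_base  # Rails 4.1+; adjust for older versions
      )

      # Build a link valid for one hour, e.g. in a view or mailer:
      #   download_url(upload, token: DownloadsController.token_for(upload))
      def self.token_for(upload, expires_in: 1.hour)
        VERIFIER.generate(id: upload.id, expires_at: expires_in.from_now.to_i)
      end

      def show
        data = VERIFIER.verify(params[:token])  # raises if the token was tampered with
        expired = Time.current.to_i > data[:expires_at] || data[:id].to_s != params[:id]
        raise ActiveSupport::MessageVerifier::InvalidSignature if expired

        upload = Upload.find(data[:id])
        send_file upload.file.path              # CarrierWave's locally stored file
      rescue ActiveSupport::MessageVerifier::InvalidSignature
        head :forbidden
      end
    end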
I am trying to create an application that crawls a website providing free financial data in .xlsx format. They upload files once a month, and not always on the same day.
Is it possible to download any new files from a specific URL and dump them into my S3 bucket before reading them into a database? I have read up on creating a worker with Sidekiq and expect it to play a crucial part in the process.
Can anybody perhaps give some advice or point me to a tutorial that can help?
Yes, you can, and you don't even need Sidekiq.
Take a look at the AWS SDK for Ruby, and do the following:
Write a Ruby script that downloads the .xlsx files and uploads them to S3. Be sure the script starts with #!/usr/bin/env ruby, and give it execute permission.
Add this script to your crontab and make it run every day.
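A minimal sketch of such a script, assuming the aws-sdk-s3 gem (the source URL and bucket name are placeholders):

    #!/usr/bin/env ruby
    require "open-uri"
    require "aws-sdk-s3"   # gem install aws-sdk-s3

    SOURCE_URL = "https://example.com/reports/latest.xlsx"  # hypothetical
    BUCKET     = "my-financial-data"                        # hypothetical

    s3  = Aws::S3::Resource.new(region: ENV.fetch("AWS_REGION", "us-east-1"))
    key = "reports/#{Time.now.strftime('%Y-%m')}.xlsx"      # one object per month
    obj = s3.bucket(BUCKET).object(key)

    # Skip the upload if this month's file is already in the bucket,
    # so a daily cron run is harmless even though the files change monthly.
    unless obj.exists?
      URI.open(SOURCE_URL) { |remote| obj.put(body: remote.read) }
    end

A crontab entry such as 0 6 * * * /path/to/fetch_xlsx.rb would then run it every morning.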
We are looking to add a simple file uploader to our Rails 3.2 app, which is a business application (built with Rails engines). Here is what we are looking for in the file uploader:
Allow access control over who can do what. For example, sales can upload a contract and accounting can view the uploaded contract.
No change to current models. The file uploader acts on its own for file uploading, checking, storing, and removing. We are thinking of building a file uploader engine and attaching the engine to the Rails app.
Uploaded files belong to a model. For example, an uploaded contract copy belongs to a project.
We may need to upload files to a remote server.
We are evaluating whether to develop our own uploader engine or to use an upload gem such as CarrierWave or Paperclip. Can someone shed some light on Rails file uploading and its related issues?
Using a combination of cancan and paperclip is a good option.
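A minimal sketch of how the two fit together (the model, column, and role names are assumptions):

    # app/models/contract.rb -- Paperclip handles storage and validation
    class Contract < ActiveRecord::Base
      belongs_to :project
      has_attached_file :document
      validates_attachment_content_type :document, content_type: "application/pdf"
    end

    # app/models/ability.rb -- CanCan rules: sales uploads, accounting views
    class Ability
      include CanCan::Ability

      def initialize(user)
        can :create, Contract if user.role == "sales"
        can :read,   Contract if %w[sales acct].include?(user.role)
      end
    end

Keeping the attachment on its own Contract model (rather than touching existing models) also fits the engine approach described above.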
I am using Rails 4.0.0 and Ruby 2.0.0, and I have an app deployed to Heroku.
I want to give my users the ability to upload an mp3 file. After the upload completes, they need access to the public URL of that mp3 file. Right now, I can upload recordings myself into my public directory and then access them at a public URL.
I need to replicate that ability for my users. Any thoughts on the best way to do that?
Heroku generally doesn't want you to let people upload files onto the Heroku filesystem via your website; you need to use a third-party file storage service. Most people use Amazon S3, and there are loads of detailed tutorials on how to use it with Heroku (including on the Heroku site). Google for "heroku amazon s3" and you'll find plenty of helpful material, e.g.
https://devcenter.heroku.com/articles/s3
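With CarrierWave, for example, the S3 setup is a small initializer. A minimal sketch assuming the carrierwave and fog-aws gems, with credentials kept in Heroku config vars:

    # config/initializers/carrierwave.rb
    CarrierWave.configure do |config|
      config.fog_provider = "fog/aws"
      config.fog_credentials = {
        provider:              "AWS",
        aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
        region:                ENV["AWS_REGION"]
      }
      config.fog_directory = ENV["S3_BUCKET"]  # bucket name
      config.fog_public    = true              # uploaded files get a public URL
    end

    # app/uploaders/recording_uploader.rb
    class RecordingUploader < CarrierWave::Uploader::Base
      storage :fog
    end

With the uploader mounted (mount_uploader :recording, RecordingUploader), record.recording.url returns the public S3 URL your users need.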
I'm designing a small application using Rails, and I need to upload files. I searched and found Paperclip and CarrierWave for that. I've also seen that these gems let you specify, in the model, where to store the uploaded file.
My question is: using Paperclip, is it possible to determine at runtime where the user wants the file to go, and to save the file to the corresponding host?
Regards
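Paperclip can at least vary the storage path per record via a custom interpolation; a minimal sketch (the Document model and its destination column are assumptions):

    # config/initializers/paperclip.rb -- expose a per-record value to Paperclip paths
    Paperclip.interpolates(:destination) do |attachment, style|
      attachment.instance.destination   # e.g. a folder name the user picked at upload time
    end

    # app/models/document.rb
    class Document < ActiveRecord::Base
      has_attached_file :file,
        path: ":rails_root/uploads/:destination/:id/:style/:filename",
        url:  "/uploads/:destination/:id/:style/:filename"
    end

Switching the storage backend itself (local filesystem vs. a remote host) per record is harder, since the storage is normally fixed in the attachment definition.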
I'd like to be able to upload a zip file to my Rails application that contains a number of images. Then I'd like Rails to unzip that file and attach the images inside to my Photo model via Paperclip, so that they are ultimately stored in my Amazon S3 account (configured through Paperclip).
I'd like to do all of this on my Rails site hosted on Heroku, which unfortunately doesn't allow local storage of any kind (as far as I'm aware) to temporarily do the unzipping before the Paperclip parsing.
How would I do this?
I would recommend uploading directly to S3, which bypasses Heroku entirely, so you're not restricted by the 30-second request timeout they enforce (which drops your upload once it's hit) or the 1 GB /tmp directory limit. After the file is uploaded, you can make a POST to your Rails app with the file's name and location and then do your unzipping operation. If you'd like to use Paperclip for post-processing, I have attached a link below. If you end up going the route of uploading directly to S3, which offloads the work from your Rails server, please check out my sample projects:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
Here is the link for Paperclip post-processing, using images as an example:
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
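The unzipping step itself can stay small; a minimal sketch assuming the rubyzip gem and a Photo model with a Paperclip attachment named image (the zip's URL is whatever your S3 uploader reported back):

    require "zip"        # rubyzip gem
    require "open-uri"
    require "tmpdir"

    def import_zip(zip_url)
      Dir.mktmpdir do |dir|                       # temp space only for this request/job
        zip_path = File.join(dir, "upload.zip")
        File.binwrite(zip_path, URI.open(zip_url).read)

        Zip::File.open(zip_path) do |archive|
          archive.each do |entry|
            next unless entry.name =~ /\.(jpe?g|png|gif)\z/i
            target = File.join(dir, File.basename(entry.name))
            entry.extract(target)
            Photo.create!(image: File.open(target))  # Paperclip pushes it to S3
          end
        end
      end
    end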
dmagkic is correct about RAILS_ROOT/tmp. I recommend something like the following:
Upload the files through Heroku to S3.
Set up a background job to zip the files (store the file names that you need to group).
Run the background job: it downloads the files from S3, zips them, sends the zip back to S3, and removes the unzipped files (see the sketch below).
That way your application will stay reasonably responsive during the upload process.
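A minimal sketch of such a job, assuming Sidekiq, rubyzip, and the aws-sdk-s3 gem (all names are placeholders):

    require "zip"
    require "tmpdir"
    require "aws-sdk-s3"

    class ZipFilesJob
      include Sidekiq::Worker

      def perform(file_keys, archive_key)
        bucket = Aws::S3::Resource.new(region: ENV["AWS_REGION"]).bucket(ENV["S3_BUCKET"])

        Dir.mktmpdir do |dir|
          zip_path = File.join(dir, "bundle.zip")

          Zip::File.open(zip_path, Zip::File::CREATE) do |archive|
            file_keys.each do |key|
              local = File.join(dir, File.basename(key))
              bucket.object(key).get(response_target: local)  # download from S3
              archive.add(File.basename(key), local)
            end
          end

          bucket.object(archive_key).upload_file(zip_path)    # send the zip back
          file_keys.each { |key| bucket.object(key).delete }  # remove the originals
        end
      end
    end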
If you try to upload multiple files, you could write to /tmp, but just make sure that all the files come across in the same POST request.
Heroku does allow writing to #{RAILS_ROOT}/tmp.
But keep in mind that the file will only be there for as long as the request lasts. Probably longer, but that is not guaranteed. You could try to block the request while you unzip and send to S3, but you should watch how long it takes.
It sounds to me like you need a Flash uploader that can unzip and send to S3 directly, without going through Heroku.