I have a few questions about implementing file downloads. I'm creating an app where I use attachment_fu with Amazon S3 to upload files. Things are working pretty well so far on the upload side; now it's time to start on downloads. Here is what I need: a logged-in user searches and browses for images and should be able to add files to a download basket (let's say it's a download shopping cart). Finally, the user should be able to download these file(s) from S3, probably as a zipped file.
Is there any plugin/gem where I can use for this?
The downside of giving the customer a zip file of all the files is that you'll need to first pull all of the files from S3 back onto your server, then zip them.
You can certainly do that if you want, but it will take a bit of time, so you would not want to do it synchronously as part of the browser request. Instead, do it as a background job using delayed_job or similar.
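A rough sketch of that kind of background job (ZipJob is a class you would write yourself; the names and method body here are only placeholders):

class ZipJob < Struct.new(:user_id, :file_ids)
  def perform
    # pull each file down from S3, add it to an archive,
    # then email or otherwise notify the user (see the options below)
  end
end

Delayed::Job.enqueue ZipJob.new(current_user.id, selected_file_ids)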
To do the actual zipping, use Zlib::GzipWriter (see http://ruby-doc.org/core/classes/Zlib/GzipWriter.html); it is part of standard Ruby.
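For example, a minimal sketch of compressing one downloaded file (the paths are placeholders):

require 'zlib'

Zlib::GzipWriter.open('/tmp/photo.jpg.gz') do |gz|
  gz.orig_name = 'photo.jpg'
  gz.mtime     = File.mtime('/tmp/photo.jpg')
  gz.write File.binread('/tmp/photo.jpg')
end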
You could then:
email the user the actual zip file as an attachment
email the user the link to the zip file on your server
or upload the zip file to s3, then email a link to the zip file on s3
Remember to create a clean up task/job to remove the old zip files from your system...
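Something like this rake task would do it (the directory and the one-day cutoff are just examples):

# lib/tasks/zip_cleanup.rake
namespace :zips do
  desc "Delete generated archives older than a day"
  task :cleanup => :environment do
    Dir.glob(Rails.root.join("tmp", "zips", "*").to_s).each do |path|
      File.delete(path) if File.file?(path) && File.mtime(path) < 1.day.ago
    end
  end
end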
An alternative is not to zip the files together; instead, give the user one or more links to download the files separately.
S3 enables you to create a URL to an S3 file that can be used for a set period of time. (The file would be private on S3, so a straight link to it won't work.) Here's how to create one using attachment_fu and the aws-s3 gem:
# I added this as a method to my model for the files stored in S3
def authenticated_s3_url
  # return a publicly usable url
  connect_to_aws # a local method which connects/re-connects to s3
  S3Object.url_for(full_filename,
                   bucket_name,
                   :expires_in => 60 * 60) # 1 hour
end
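Then a controller action can simply redirect the browser straight to that temporary S3 URL, for example (the Image model name here is only a placeholder for whatever your attachment_fu model is called):

def download
  @image = Image.find(params[:id])
  redirect_to @image.authenticated_s3_url
end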
I have audio files located on a private GCS bucket. I want to serve these audio files for users to listen to.
I can't use Active Storage for this, as these files are created/deleted outside of my Rails application.
I could download the files using the google-cloud-storage gem; it would cover authentication and the file download. But if I understand correctly, I can only serve files from the public directory? So would I need to download them to Rails.public_path?
Furthermore, I really don't want to manage these files after downloading them - caching, deleting them after some time, etc.
What would be the best way to achieve this?
The best option in my opinion would be to use the google-cloud-storage gem,
since both Google::Cloud::Storage::Bucket and Google::Cloud::Storage::File have the #signed_url method. This way you can find the relevant file(s) you need, create a temporary URL, and send the URL to the client, which will be in charge of downloading the file directly.
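A rough sketch of what that could look like with the google-cloud-storage gem (the project, bucket and object names here are placeholders):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new(project_id:  "my-project",
                                     credentials: "path/to/keyfile.json")
bucket = storage.bucket "my-audio-bucket"
file   = bucket.file "tracks/example.mp3"

# temporary URL the client can use directly for the next 10 minutes
url = file.signed_url expires: 600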
If you don't want the client to download the file directly from Google Cloud, you can download the file from GCS yourself and use #send_data or #send_file in the controller.
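For that second approach, something along these lines should work (`bucket` is looked up as in the snippet above; the parameter name is just an example):

# in a controller action
def audio
  file = bucket.file params[:key]
  io   = file.download   # with no path given, this returns a StringIO
  io.rewind
  send_data io.read,
            filename:    File.basename(file.name),
            type:        file.content_type,
            disposition: "inline"
end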
For my Rails application, I download a bunch of files from a remote URL to my application. I would like to directly upload them to Amazon S3, without needing a form to do the upload, since I will temporarily cache the file I downloaded on the EC2 instance.
I would also like to retain the links to the files I uploaded so I can download them later.
I am essentially reposting the files I downloaded.
I looked around, but most of the solutions seem to involve form-based uploads to S3 by a user.
Is there a direct upload solution?
You can upload directly to S3 using the AWS SDK for Ruby. The easiest way is:
require 'aws-sdk'

s3 = Aws::S3::Resource.new(region: 'us-west-2')
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/source/file')
Or you can find a couple other options here.
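If you also want to keep links to the uploaded files so you can download them later, as the question mentions, the same object can give you those (a quick sketch, with an assumed one-hour expiry):

obj.public_url                              # permanent URL, if the object is publicly readable
obj.presigned_url(:get, expires_in: 3600)   # temporary URL for a private object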
You can simply use EvaporateJS to achieve this. You can also send an AJAX request to save each file's name to the database after it is uploaded. Though the JavaScript exposes a few details, your bucket is not vulnerable to attack, since the S3 service provides bucket policies.
Just change <AllowedOrigin>*</AllowedOrigin> to <AllowedOrigin>specificwebsite.com</AllowedOrigin> in production.
I have a service set up where, when the user registers, they are able to download a file to their device. The file is dynamically generated from some local information from our database, such as custom field information (username, email, web URL, etc.), and from account-specific assets stored on S3 (avatar, icons, background art).
I'm not sure of the best way to handle these S3 files as part of the generation process.
Using Ruby's Tempfile class generates a file with a unique filename that doesn't match what we are expecting. Using Ruby's File class generates the files we want, but it also litters the filesystem with a bunch of files, and I worry it won't handle concurrent requests for the same assets properly. We're also using Heroku, and they tend to frown on that, from what I've read.
What's a best practice/recommended way to handle dynamically generating files based on a mix of local and remote assets and then presenting it to the user?
I'm writing a Rails application that serves files stored on a remote server to the end user.
In my case the files are stored on S3, but the user requests the file via the Rails application (hiding the actual URL). If the file were on my server's local filesystem, I could use the Apache header X-Sendfile to free up the Ruby process for other requests while Apache took over the task of sending the file to the client. But in my case, where the file is not on the local filesystem but on S3, it seems that I'm forced to download it temporarily inside Rails before sending it to the client.
Isn't there a way for Apache to serve a "remote" file to the client that is not actually on the server itself? I don't mind if Apache has to download the file for this to work, as long as I don't have to tie up the Ruby process while it's going on.
Any suggestions?
Thomas, I have similar requirements/issues and I think I can answer your problem. First (and I'm not 100% sure you care about this part), hiding the S3 URL is quite easy, as Amazon allows you to point CNAMEs to your bucket and use a custom URL instead of the Amazon URL. To do that, you need to point your DNS to the correct Amazon URL. When I set mine up it was similar to this: files.domain.com points to files.domain.com.s3.amazonaws.com. Then you need to create the bucket with the name of your custom URL (files.domain.com in this example). How to call that URL will be different depending on which gem you use, but a word of warning: the attachment_fu plugin I was using was incorrectly sending me to files.domain.com/files.domain.com/name_of_file.... I couldn't find the setting to fix it, so a simple .sub method on the S3 portion of the plugin fixed it.
On to your other questions, to execute some rails code (like recording the hit in the db) before downloading you can simply do this:
def download
  file = File.find(...)
  # code to record 'hit' to database
  redirect_to S3Object.url_for(file.filename,
                               bucket,
                               :expires_in => 3.hours)
end
That code will still cause the file to be served by S3, but still gives you the ability to run some Ruby first. (Of course the above code won't work as is; you will need to point it to the correct file and bucket, and my Amazon keys are saved in a config file. The above also uses the syntax for the AWS::S3 gem - http://amazon.rubyforge.org/.)
Second, the Content-Disposition: attachment issue is a bit more tricky. Hopefully your situation is a bit simpler than mine and the following solution will work. Assuming the object 'file' (in this example) is the correct S3 object, you can set the disposition to attachment with:
file.content_disposition = "attachment"
file.save
The above code can be executed after the file already exists on the S3 server (unlike some other headers and permissions), which is nice, and it can also be set when you upload the file (the syntax depends on your plugin). I'm still trying to find a way to tell S3 to send it as an attachment only when requested (not every time); if you find that, please let me know your solution. I need to be able to sometimes download a file and other times embed it (an image, for example) in HTML. I'm not using the above-mentioned redirect, but fortunately it seems that if you embed a file that has the content-disposition: attachment header (in an HTML image tag, for example), the browser still displays the image normally (though I haven't thoroughly tested that across enough browsers to send it into the wild).
Hope that helps! Good luck.
I am currently developing a Rails application that tries to copy/move videos from one bucket to another in S3. However, I keep getting a 502 proxy error in my Rails application. In the Mongrel log it says "failed to allocate memory." Once this error occurs the application dies and we must restart it.
It seems like your code is reading the entire resource into memory, and that runs your application out of memory. A naïve way to do this (and from your description, you're doing something like this already) would be to download the file and upload it again: download it to a local file rather than into memory. However, Amazon's engineers have thought ahead and provide APIs that can deal with this specific case as well.
If you're using something like the RightAWS gem, you can use its S3Interface like so:
# With s3 being an S3 object acquired via S3Interface.new
# Copies key1 from bucket b1 to key1_copy in bucket b2:
s3.copy('b1', 'key1', 'b2', 'key1_copy')
And if you're using the naked S3 HTTP interface, see amazon's object copy docs for a solution that uses only HTTP to copy one object from one bucket to another.
Try to stream files instead of loading the whole file into memory and then working with it.
For example, if you're using the aws-s3 gem, do not use:
data = open(file)
S3Object.store file_name, data, BUCKET
Use the following instead:
S3Object.store file_name, open(file), BUCKET
I'm not sure how exactly to "stream-download" the file, though.
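If memory serves, the same gem's S3Object.stream can cover that case, yielding the object in chunks rather than loading it all at once; a rough sketch (local_path, file_name and BUCKET are placeholders):

# write the S3 object to a local file chunk by chunk
File.open(local_path, 'wb') do |file|
  S3Object.stream(file_name, BUCKET) do |chunk|
    file.write chunk
  end
end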
boto works well. See this thread. Using boto, you copy the objects straight from one bucket to another, rather than downloading them to the local machine and then uploading them to another bucket.
You can copy bucket to bucket directly using the fog gem.
s3 = Fog::Storage.new(your_aws_credentials)
s3.copy_object('source-bucket', 'source/path', 'dest-bucket', 'dest/path')