rails controller download from aws s3 - ruby-on-rails

I am trying to build a really easy way for my users to download audio content from AWS via my website. Here is the flow:
I give the user a download link. Ex: www.mysite.com/foobar
User clicks on the link.
In my Rails controller, I create an expiring AWS S3 URL and automatically start downloading the audio content from that URL.
The user's browser should ask whether or not to save the file. If the user accepts, I want a callback to my Rails app to log that the user actually downloaded the file.
So, from a user's perspective, I want the process to be as simple as going to a url I determine, and accepting to download the file when prompted.
In the background, I want to keep the AWS S3 URL hidden from the user, and I want the flexibility to write callback logic after the user accepts the download.
What is the recommended way to achieve this?

The best way to solve this is to create an S3 URL with a very short (10 minute?) lifetime and return a redirect to it. This does expose the S3 URL to the user, but it isn't a vulnerability.
If you want to hide the S3 URL, you will need to proxy the download through your servers, which is expensive and consumes a worker process for long periods of time. I do not recommend this, but it is the only way to conceal the S3 resource.
Additionally, if triggering a download vs. a view is important, you need to set the Content-Disposition header to trigger an attachment download:
Content-Disposition: attachment; filename="fname.ext"
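For illustration, a minimal Rails sketch of this approach using the aws-sdk-s3 gem; the bucket name, key lookup, and logging hook are assumptions, not part of the original answer:

# app/controllers/downloads_controller.rb
class DownloadsController < ApplicationController
  def show
    object = Aws::S3::Resource.new(region: "us-east-1")
                              .bucket("my-audio-bucket") # placeholder bucket
                              .object(params[:key])

    # Short-lived URL; response_content_disposition sets the
    # Content-Disposition header on S3's response so the browser
    # downloads the file instead of playing it inline.
    url = object.presigned_url(
      :get,
      expires_in: 600, # 10 minutes
      response_content_disposition: 'attachment; filename="fname.ext"'
    )

    # Log the download request here, then hand the transfer off to S3.
    redirect_to url, allow_other_host: true # allow_other_host is Rails 7+
  end
end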

Related

How to upload files and handle processing and validations - a very general overview?

The problem at hand
I have a Rails app.
Users will be uploading files, anywhere between 1 file and 3,000 files. Sometimes they are zip files, and sometimes they are not. I do not want to hold up the server with these file uploads, so I am looking for a solution to this problem.
The zipped files will have to be unzipped.
I then want to check whether the user has previously uploaded the same files. I.e., if the user already uploaded the same file(s) one week ago, then this is a problem: (i) either we don't allow that particular file to be uploaded, or (ii) we ask the user: are you sure you want to upload the same file again?
Then I want to store the keys/links to the files within the appropriate models/records on the back end.
I was wondering what the best workflow for handling the above could be, i.e., a very general overview. In other words, could AWS Lambda, Google Cloud, etc. be best employed to handle the above problem? How would we use the Shrine gem to best handle this situation? Would it make sense to use AWS Lambda rather than background jobs?
My preferences are to use the Shrine gem for uploading.
My Ideas:
On the client side, the user drags and drops the files they want to upload.
All the files are then uploaded (whether zipped or otherwise) to a temporary bucket location via the Shrine gem.
If zip files are uploaded, then perhaps an AWS Lambda function must be triggered to unzip them. If that's the case, then at the end of the day the keys for these files must somehow be returned to the client, to handle validation issues. But then how would the AWS Lambda function be able to return its result to the original client side where the request originated? Or rather, should the AWS Lambda function be invoked from the client side, passing in the IDs of the unzipped blobs?
Then we need to run some validations: we want to handle the situation where there are duplicate files. We will need to check with our Rails backend as to whether those files have already been uploaded.
After those validation issues are handled, the user submits the form, and all the keys are stored within the appropriate records.
These ideas are by no means prescriptive. I am seeking some very general advice on the best way of doing all this. I am by no means constrained to AWS: I could use Google or Azure just as easily. Any guidance on the above would be much appreciated.
Specific questions:
How would the AWS Lambda function get triggered?
How would we be able to return the keys of the uploaded files back to the client?
What do I mean by general overview?
Here are some examples of general overviews:
(1) Uploading & Unzipping files to S3 through Rails hosted on Heroku?
(2) https://www.quora.com/How-do-I-extract-large-zip-files-in-AWS-Lambda
Any pointers in the right direction would be much appreciated.
Cheers!
This isn't a really difficult problem to solve if you are willing to change the process flow a little bit.
On the client side, the user drags and drops the files they want to upload.
When the user requests the upload operation to begin, you can make an HTTP GET request to an API Gateway endpoint backed by a Lambda. The Lambda can query for previous files uploaded by the client and send back a result set showing what files already exist. You then filter those out and send only what is considered new from the client to the server. This saves the user time waiting for the upload to happen, and it saves you time on the S3/Lambda side by not having to store or process duplicates. This isn't a substitute for server-side validation, though; you'll still want that. For legitimate clients, this will save you and them a lot of bandwidth and storage.
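A rough sketch of what that pre-check Lambda could look like in Ruby; the table name, key schema, and query-string parameters are all assumptions:

require "json"
require "aws-sdk-dynamodb"

DDB = Aws::DynamoDB::Client.new

# Invoked via API Gateway: GET /uploads/check?user_id=...&checksums=a,b,c
def handler(event:, context:)
  params    = event["queryStringParameters"] || {}
  user_id   = params["user_id"]
  checksums = params["checksums"].to_s.split(",")

  # Keep only the checksums we have already seen for this user.
  existing = checksums.select do |sum|
    DDB.get_item(
      table_name: "uploads", # placeholder table
      key: { "user_id" => user_id, "checksum" => sum }
    ).item
  end

  { statusCode: 200, body: JSON.generate(existing: existing) }
end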
All the files are then uploaded (whether zipped or otherwise) to a temporary bucket location via the Shrine gem.
This works. As they enter the temp bucket, use a Lambda with an S3 event to process the files: unzip them, push any needed metadata into DynamoDB, and delete the files from the temp bucket. In the temp bucket, I would place the files into a folder that is unique per request and user. I would take the user/client ID and a UUID of some kind and make that your folder name, such as Johnathon+3b5339b8-c8db-4d5c-b678-406fcf073f4f, or encode this value into a Base64 string and make that your folder name. Store this in DynamoDB with each file uploaded into your permanent bucket, with the hash key being the user/client ID, the sort key being the full folder path plus file name, and an extra attribute of IsProcessed. The IsProcessed attribute will be updated by the Lambda that processes the files and moves them to their permanent S3 bucket. If there are errors, you can put the error in this field; if processing succeeds, you record that here as well.
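A hedged Ruby sketch of that S3-event Lambda; it assumes the rubyzip gem is packaged with the function, and the bucket and table names are placeholders:

require "aws-sdk-s3"
require "aws-sdk-dynamodb"
require "zip" # rubyzip, bundled into the deployment package

S3  = Aws::S3::Client.new
DDB = Aws::DynamoDB::Client.new

def handler(event:, context:)
  event["Records"].each do |record|
    bucket = record.dig("s3", "bucket", "name")
    key    = record.dig("s3", "object", "key") # e.g. "Johnathon+<uuid>/archive.zip"
    folder = key.split("/").first

    # Unzip straight from the temp bucket into the permanent bucket.
    zip_io = S3.get_object(bucket: bucket, key: key).body
    Zip::File.open_buffer(zip_io) do |zip|
      zip.each do |entry|
        next if entry.directory?
        dest_key = "#{folder}/#{entry.name}"
        S3.put_object(bucket: "permanent-bucket", key: dest_key,
                      body: entry.get_input_stream.read)
        DDB.put_item(table_name: "uploads",
                     item: { "user_id" => folder, "path" => dest_key,
                             "IsProcessed" => true })
      end
    end

    S3.delete_object(bucket: bucket, key: key) # clean up the temp object
  end
end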
the keys for these files must somehow be returned to the client, to handle validation issues. But then how would the AWS Lambda function be able to return its result to the original client side where the request originated? Or rather, should the AWS Lambda function be invoked from the client side, passing in the IDs of the unzipped blobs?
The original API request to push the files to the temp S3 bucket can return the folder name johnathon+3b5339b8-c8db-4d5c-b678-406fcf073f4f to the client. So let's say you made an HTTP POST to /jobs. You would return 201 Created with an HTTP Location header of /jobs/johnathon+3b5339b8-c8db-4d5c-b678-406fcf073f4f. Your client can then start polling /jobs/johnathon+3b5339b8-c8db-4d5c-b678-406fcf073f4f for the status of the process.
Your response from /jobs/johnathon+3b5339b8-c8db-4d5c-b678-406fcf073f4f can return the DynamoDB records: all records whose hash key matches the folder name. Your client side can look at the objects in the result set and check the IsProcessed attribute to see if everything worked out OK or if there were issues.
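Sketched as a Rails controller, the /jobs flow above might look like this; the route names, table name, and attribute names are assumptions:

class JobsController < ApplicationController
  DDB = Aws::DynamoDB::Client.new

  # POST /jobs
  def create
    job_id = "#{current_user.id}+#{SecureRandom.uuid}"
    # ...kick off the upload into the temp bucket under this folder...
    head :created, location: job_path(job_id)
  end

  # GET /jobs/:id -- the client polls this for processing status.
  def show
    records = DDB.query(
      table_name: "uploads",
      key_condition_expression: "user_id = :id",
      expression_attribute_values: { ":id" => params[:id] }
    ).items
    render json: records # client inspects IsProcessed on each item
  end
end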
Then we need to run some validations: we want to handle the situation where there are duplicate files. We will need to check with our Rails backend as to whether those files have already been uploaded.
Handle this with the Lambda that is executed by the temporary bucket. Grab the files from the temp bucket folder, handle your business logic and back-end queries then push them to their final permanent bucket.
After those validation issues are handled, the user submits the form, and all the keys are stored within the appropriate records.
All of this would happen asynchronously, starting when the user submits the form. The client side needs to be able to handle this by making HTTP GET requests to the endpoint mentioned above, checking for the status of the process. This gives you some more flexibility, as you can also publish SNS messages on failures, such as sending an email to the clients if they upload 3,000 files and you need to spend 30 minutes processing them.

How to test an S3 Presigned GET url to see if it has expired, and regenerate it if it has, using Ruby on Rails/Coffeescript

I am building a document management system in Ruby on Rails that uses Amazon S3 for storage. I am using the carrierwave and carrierwave-aws gems for uploading/downloading the files.
I have it working to where I can generate a presigned URL that expires after a certain amount of time (the sooner the better... maybe 10 seconds to 1 minute), but the problem I'm having is that if someone loads the page and doesn't click the "Download" button right away, the link expires and they get directed to an ugly XML error.
What I'm trying to figure out is either:
How can I generate the presigned url on the fly when the download button is clicked (I'm thinking with Coffeescript) OR
Go ahead and generate the presigned url on page load, but when download is clicked, somehow check to see if it returns an error and if so, get a new presigned url and redirect to it at that time. (Again, thinking Coffeescript, if possible)
In the past I have taken the following entirely server-side approach to a similar problem (not using CarrierWave, but that shouldn't be relevant).
Have the download button link to a controller/action in your app, passing the id of the document. Your controller action then generates a presigned URL and then redirects the user to that URL.
In theory you would only need a very short lifetime on such a presigned URL, although if the files are large enough that a user might want to pause and resume transfers, then a longer lifetime might be advisable.
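A minimal sketch of that server-side approach with carrierwave-aws; the model, route, and uploader names are assumptions. With aws_authenticated_url_expiration set to a short value, calling #url mints a freshly signed URL at click time:

# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.aws_authenticated_url_expiration = 60 # seconds
end

# app/controllers/documents_controller.rb
class DocumentsController < ApplicationController
  # The Download button links here instead of to a pre-generated URL.
  def download
    document = current_user.documents.find(params[:id])
    redirect_to document.file.url, allow_other_host: true # Rails 7+ option
  end
end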

Rails, given an AWS S3 URL, how to use a controller to send_data / file to the requestor

Given an Amazon S3 URL, or any URL that is a direct link to a file: in my controller, I want to send the user the file, whatever it is, without redirecting.
Is this possible?
If I understand your question correctly, I don't think that's possible from your end. That's why many sites say "right click to save" or something along those lines. Some sites even have links to videos that say "click to download", but when I click the link they start streaming. These are due to MY settings (i.e. the settings on the user's client). You can't control that.
If what you're trying to do is HIDE the location of a file...
Send files back to the user - Usually, static files can be retrieved by using the direct URL and circumventing your Rails application. In some situations, however, it can be useful to hide the true location of files, particularly if you're sending something of value (e-books, for example). It may be essential to only send files to logged in users too. send_file makes it possible. It sends files in 4096 byte chunks, so even large files can be sent without slowing the system down.
From an old blog post
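A hedged sketch of that technique applied to the S3 case: fetch the object server-side and stream it with send_data, so the true location never reaches the browser. The bucket, model, and login filter are assumptions:

class FilesController < ApplicationController
  before_action :require_login # only send files to logged-in users

  def show
    document = Document.find(params[:id])
    data = Aws::S3::Client.new
                          .get_object(bucket: "my-bucket", key: document.s3_key)
                          .body.read

    # The file passes through Rails, hiding the S3 URL from the user.
    send_data data,
              filename: document.filename,
              type: "application/octet-stream",
              disposition: "attachment"
  end
end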

Can we find out when a Paperclip download is complete?

I have an application where I need to know when a user's Rails/Paperclip file download is complete. My app is set up to interact with Amazon S3 and I need to run a javascript function when the user has received the completed file.
How can I do this?
Tracking whether or not the download completes is hard, especially in JavaScript. There are a few blurred lines in your question which make me think it's not possible.
First, send_file passes a special header telling the webserver what to send. See the send_file docs. Rails doesn't actually send the file at all; it sets this header, which tells the webserver to send the file, then returns immediately and moves on to serve another request. To be able to track whether the download completes, you'd have to occupy your Rails application process sending the file and block until the user downloads it, instead of leaving that to the webserver (which is what it's designed to do). This is super inefficient.
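For reference, the header hand-off described above is configured like this in a Rails app; which header name to use depends on the webserver in front of Rails:

# config/environments/production.rb
config.action_dispatch.x_sendfile_header = "X-Sendfile"         # Apache (mod_xsendfile)
# config.action_dispatch.x_sendfile_header = "X-Accel-Redirect" # nginx

With this set, send_file writes the header and returns immediately; the webserver then streams the file itself.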
Next, how can you still be on a page to execute a JavaScript function if you are downloading a file? Your user clicks the file download link and is taken to wherever the file is, whether that be a send_file from Rails or a redirect to S3 or whatever; they are no longer on the page they came from. If you are thinking about the way Chrome or Firefox works, where the download goes into a download manager and the user stays on the page, there's no more interaction with the server from the old page! If you want that page to be notified of download completion, then you'd need a periodic check or long poll to the server to see if the download is done.
I think you'd be better served by redirecting to the S3 file and setting a session variable to redirect the user to where you want them to go after the download is complete so that the next time they visit any page they are back in your planned flow.
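One possible shape for that redirect-plus-session suggestion; the session key, route helper, and presigned_s3_url method are illustrative only, not from the original answer:

# app/controllers/downloads_controller.rb
def download
  # Remember where the user should land next, then hand off to S3.
  session[:post_download_path] = next_step_path # hypothetical route helper
  redirect_to @document.presigned_s3_url, allow_other_host: true
end

# Run as a before_action on subsequent requests to resume the planned flow.
def resume_planned_flow
  if (path = session.delete(:post_download_path))
    redirect_to path
  end
end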
Hope this helps!

Does Amazon S3's HTTP Uploads feature support web-hook style callbacks?

When uploading files to Amazon S3 using the browser http upload feature, I know I can specify a success_action_redirect field/value that will tell my browser where to go when the upload is done.
I'm wondering: is it possible to ask Amazon to make a web hook style POST request to my web server whenever a file gets uploaded?
Basically, I want a way of being notified whenever a client uploads a new file, so that my server can process the upload. I'd like to do this without relying on the client to make the request to my server to tell me the file has been uploaded (never trust the client, right?).
They just recently announced AWS Lambda, which lets you run code in response to events, with S3 uploads being one of the supported events.
Amazon can publish a notification to SNS or SQS when an object has been created in your specified S3 bucket.
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
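Wiring that up from Ruby with aws-sdk-s3 might look like the following; the bucket name and topic ARN are placeholders:

require "aws-sdk-s3"

# Tell S3 to publish to an SNS topic whenever an object is created.
Aws::S3::Client.new.put_bucket_notification_configuration(
  bucket: "my-upload-bucket",
  notification_configuration: {
    topic_configurations: [{
      topic_arn: "arn:aws:sns:us-east-1:123456789012:uploads-topic",
      events: ["s3:ObjectCreated:*"]
    }]
  }
)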
There is no support from Amazon for this as yet, but we can get around it with other tools like s3cmd, which let us write cron jobs to notify us of any change in the keys on S3. So if a new key is created (noticed via its timestamp), we could have the job send a GET request to our server endpoint listening for updates from S3, with the associated metadata.
We could use GET or POST here, as the data would be very minimal, I think; a POST with form data should do.
