Rails using send_file to send multiple files - ruby-on-rails

I'm currently trying to send multiple files outside of my application using Rails' send_file method. The code loops through all of the files, but only sends the last one in the directory. Here is my code:
Dir.foreach(@dir) do |entry|
  if entry != "." && entry != ".." && entry != ".DS_Store" && entry != ".title"
    send_file(@dir + entry, :disposition => 'inline')
    logger.info("File: " + @dir + entry)
  end
end
Any help is appreciated!

send_file tells the controller that it should respond to the browser's request by sending a file. -- As opposed to rendering a view, sending JSON, etc.
In common usage, you send exactly one response in HTTP. (I'm omitting discussion of long-polling and other esoteric types of responses. I'm also omitting HTTP multipart responses which are not generally supported at this time.)
Since you can only send one file, make it count! The one file can be a zip of a number of files, but then the user will need to unzip them.
An alternative is to show multiple download links on the web page, inviting the user to download one after another to accomplish the multiple downloads.
As an example UX (User Experience): Send an email to yourself with multiple attachments. Then use GMail and see how they present the multiple files for you to download.
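For the multiple-links route, a rough sketch (assuming a controller where @dir is set, e.g. in a before_filter; the controller and action names here are illustrative, not the asker's actual code):
class DownloadsController < ApplicationController
  SKIP = [".", "..", ".DS_Store", ".title"]

  # index: list one download link per file in @dir
  def index
    @entries = Dir.entries(@dir).reject { |e| SKIP.include?(e) }
    # the view renders link_to(entry, download_path(:entry => entry)) for each entry
  end

  # show: send exactly one file per request
  def show
    entry = File.basename(params[:entry])   # strip any path components
    send_file(File.join(@dir, entry), :disposition => 'attachment')
  end
end
Each link then triggers its own request, and each request sends exactly one file.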

You can only send a single file in a single request; if you want to send multiple files you need to zip them up or otherwise bundle them.
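If you go the zip route, a minimal sketch using the rubyzip gem (the paths and the action name are illustrative):
require 'zip'   # rubyzip gem

def download_all
  skip = [".", "..", ".DS_Store", ".title"]
  zip_path = Rails.root.join('tmp', 'files.zip').to_s
  File.delete(zip_path) if File.exist?(zip_path)   # rubyzip won't overwrite existing entries
  Zip::File.open(zip_path, Zip::File::CREATE) do |zipfile|
    Dir.entries(@dir).reject { |e| skip.include?(e) }.each do |entry|
      zipfile.add(entry, File.join(@dir, entry))
    end
  end
  send_file(zip_path, :type => 'application/zip', :disposition => 'attachment', :filename => 'files.zip')
end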

Related

How to zip and save JSON data received from an API with Rails

So, I'm creating an app that works like a bot: it calls an API from time to time, receives a response in a JSON-like format, and saves it like this:
finalResult = RestClient.get( apiUrl, headers = apiHeaders )
jsonData = JSON.parse(ActiveSupport::Gzip.decompress(finalResult))
time = Time.now
File.write("public/#{time}.json", jsonData)
I'm using ActiveSupport to decompress this Gzip-compressed data, since it's a lot of data and otherwise it takes forever to get it. Then I take the time the data was received and use it to name the file so that I can keep track of it.
What I need to do now is compress this .json file, if possible into a .zip file (it can be .rar, .7z, .tz, whatever), before I upload it to my storage so it takes less space. Is there any way to do something similar to File.write but save it as a zipped JSON file? I already checked libraries like zlib and ruby-zip, but they only let me zip files that already exist, so I can't save it as a zipped .json directly; I'd need to take the .json file and then zip it. But how could I do that when the name of the file is a Time.now and always changes?
I'd appreciate any help, thanks in advance :)
EDIT¹
Giving some more details that may help you to help me:
I created a controller and model to handle this, since I'll be using ActiveStorage. They are a ResponsesController and a Response model, and the only thing the Response model has is has_one_attached :json_file. I intend to use Heroku to handle the cron job of calling the API, and I'll upload the .json files (or .zip files) to AWS storage.
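One way to sidestep the "file must already exist" issue, sketched with the rubyzip gem, is to write the JSON straight into a zip entry, reusing the question's jsonData and a timestamp (the resulting .zip could then be attached to json_file):
require 'zip'   # rubyzip gem

time = Time.now.to_i
Zip::File.open("public/#{time}.zip", Zip::File::CREATE) do |zipfile|
  # write the JSON directly into an entry inside the archive; no temporary .json file needed
  zipfile.get_output_stream("#{time}.json") do |f|
    f.write(jsonData.to_json)
  end
end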

How to validate a file as image on the server before uploading to S3?

The flow is:
The user selects an image on the client.
Only filename, content-type and size are sent to the server. (E.g. "file.png", "image/png", "123123")
The response contains the fields and policy for uploading directly to S3. (E.g. "key": "xxx", "acl": ...)
The problem is that if I change the extension of "file.pdf" to "file.png" and then upload it, the data sent to the server before the upload to S3 is:
"file.png"
"image/png"
The server says "ok" and returns the S3 fields for the upload.
But the content type sent is not the real content type. How can I validate this on the server?
Thanks!
Example:
Testing the Redactorjs server-side code (https://github.com/dybskiy/redactor-js/blob/master/demo/scripts/image_upload.php), it checks the file content type. But when I try to upload a fake image (test here: http://imperavi.com/redactor/), it does not allow the fake image. Just like I want!
But how is that possible? Look at the request params: (it is sent as image/jpeg, which should be valid)
When I was dealing with this question at work I found a solution using Mechanize.
Say you have an image url, url = "http://my.image.com"
Then you can use img = Mechanize.new.get(url)[:body]
The way to test whether img is really an image is by issuing the following test:
img.is_a?(Mechanize::Image)
If the image is not legitimate, this will return false.
There may be a way to load the image from a file instead of a URL (I am not sure), but I recommend looking at the Mechanize docs to check.
With older browsers there's nothing you can do, since there is no way for you to access the file contents or any metadata beyond its name.
With the HTML5 File API you can do better. For example,
document.getElementById("uploadInput").files[0].type
Returns the mime type of the first file. I don't believe that the method used to perform this identification is mandated by the standard.
If this is insufficient then you could read the file locally with the FileReader APIs and do whatever tests you require. This could be as simple as checking for the magic bytes present at the start of various file formats, up to fully validating that the file conforms to the relevant specification. MDN has a great article that shows how to use various bits of these APIs.
Ultimately none of this would stop a malicious attempt.
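Server-side, once the bytes are actually available to your app (for example after the S3 upload has completed and you fetch the object, or if the upload goes through your server after all), a simple magic-byte check is possible. A rough Ruby sketch, not tied to any particular library:
# Map a few well-known signatures to content types; extend as needed.
MAGIC_BYTES = {
  "\x89PNG\r\n\x1a\n".b => "image/png",
  "\xFF\xD8\xFF".b      => "image/jpeg",
  "GIF87a".b            => "image/gif",
  "GIF89a".b            => "image/gif"
}.freeze

def detected_image_type(path)
  header = File.binread(path, 8)
  MAGIC_BYTES.each { |magic, type| return type if header && header.start_with?(magic) }
  nil   # unknown signature, or not an image at all
end

# A renamed PDF that claims to be "image/png" returns nil here,
# so the claimed content type can be rejected.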

How can I use wildcards in a web address to download a file where the filename periodically changes

I am downloading files from a web server into my iOS iPad application.
My problem is that the hardcoded URL addresses are now subject to change.
How can I use wildcards in my URL address to compensate for the changed address?
E.g. this is the current URL address:
http://www.testserver/modules/public/sheets/HZ_TECAPET__black_gb_DE_201301.pdf
The 201301 changes, so how can I code the URL address using a wildcard?
E.g. http://www.testserver/modules/public/sheets/HZ_TECAPET__black_gb_DE_??????.pdf
The first part of the address remains static; it's just the numbers at the end that are subject to change.
Thanks
That's a bit harder, then. But you can do it on the server side: write a simple bash script that runs on the server, lists all files in the directory, and saves the results in a text file, which you can access at http://example.com/files.txt
Something like:
# "$sheets" is assumed to hold the path to the directory with the PDFs
for file in "$sheets"/*
do
  echo "$file" >> files.txt
done
EDIT:
Aha, so there actually is a pattern. Then you can try to download each of the possible URLs and check whether the HTTP status code is 200 (OK) or 404 (Not Found).
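For illustration (sketched in Ruby; the same HEAD-and-check loop can be written with the iOS networking APIs), probing the possible suffixes might look like this:
require 'net/http'

base = "http://www.testserver/modules/public/sheets/HZ_TECAPET__black_gb_DE_"

# Try plausible YYYYMM suffixes and keep the first URL that answers 200.
candidates = (2013..2014).flat_map { |y| (1..12).map { |m| format("%d%02d", y, m) } }

found = candidates.find do |suffix|
  uri = URI("#{base}#{suffix}.pdf")
  Net::HTTP.start(uri.host, uri.port) { |http| http.head(uri.path).code == "200" }
end

puts found ? "#{base}#{found}.pdf" : "no matching file found"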

Transmit File After Redirecting to a new Action

In my project, I have a page that creates guest users for a number of chosen products.
Basically you tell the application how many users to create for each product you choose.
When you click 'Save' I generate the guests, save their username + password (before encryption) in a csv file that I transmit at the end of the process, and finally redirect the user to the index page.
The problem happens at the file transmission and the consequent redirection to a new action:
Private Sub DownloadCsv(ByVal csv As List(Of String), ByVal filename As String)
    Dim sb As New StringBuilder()
    For Each Str As String In csv
        sb.AppendLine(Str)
    Next
    ' Clear any previous response
    Response.Clear()
    ' Indicate we're returning a CSV file
    Response.ContentType = "text/csv"
    ' Provide filename for download
    Response.AddHeader("content-disposition", "attachment;filename=" & filename & ".csv")
    ' Write actual CSV data
    Response.BinaryWrite(Encoding.ASCII.GetBytes(sb.ToString()))
End Sub
And in my action, I call this method DownloadCsv and then I redirect to the Index like so:
Return (RedirectToAction("GroupIndex"))
What's happening is that the page transmits the file to the browser and never performs the redirect that follows.
I read a bit on the matter and found out that once you add a header to the Response object and write a file, it automatically closes the Response. If I'm wrong, please correct me.
How can I transmit this file after the user has been redirected to the Index, or redirect to the Index after the user either saves or cancels the file download?
I have come across this problem before and it's not easy to solve, at least not in a cross-browser way. In the onclick of your Save button, you can set a JavaScript timeout which, after a couple of seconds, does a client-side redirect using window.location = "...url...";. However, I'm pretty sure that didn't work in one particular browser (probably IE).
Otherwise, I would rethink your workflow to avoid having a download and redirect in the same action. Provide a "Save" button, followed by a "Back to Index" button, for example. In my case, I put my download action in a jQuery overlay "dialog". When the file had started downloading, it was just a case of closing the dialog.

How to change filename prompt text browser Save As dialog?

In my web page (rendered by Rails), I'd like to let the user right-click on a photo to bring up the browser's Save As dialog, to let the user save the photo to their hard drive.
However, the photos on my server have unusual filenames (long hex names) with no file extension. The filename prompt in the Save As dialog has this ugly filename. If the user hits save, they'll end up with a poorly-named file, with no file extension.
The web page is aware of the photo's real file name (the name that came off the camera, for example). Is there a way for me to programmatically override the Save As dialog's filename prompt with a filename of my choosing?
I'm aware of the Content-Disposition header, and that via this header a filename can be specified. However, I think that in order to make use of this header, I need to load/render the entire file to the browser. If the asset to be made available for download is a movie, that loading of the file could time out the browser... like, if it's a 100 MB video.
Thoughts?
-A
I think I understand the problem here because I encountered (and resolved) at least part of it myself not too long ago.
I have some large mp3's and I link to them on my website
A few problems
I needed to set my content-disposition header to attachment in order to prevent files from automatically streaming whenever a user clicked the download button
my files are on a remote server
my files are large (100MB)
large files can tie up rails controllers if not handled properly
Now, Michael Koziarski advises in this article that the best way to keep your Rails processes free when serving large files is to create a download action in your controller and then do something like this (note the use of :x_sendfile => true):
def download
  send_file '/path/to/podcast.mp3', :type => 'application/octet-stream', :disposition => 'attachment', :filename => 'something.mp3', :x_sendfile => true
end
:x_sendfile tells Apache to let the file through without tying up a Rails controller process. The rest of the code sets the filename and the content-disposition header.
Great, but I'm on Heroku, like everyone else nowadays, so I can't use x_sendfile.
I found that I couldn't modify the nginx configuration file either, as it's locked down by Heroku, so it was not possible to get x-accel-redirect (the nginx equivalent of x-sendfile) working.
So I decided to add a Perl script (see below) to the cgi-bin on our asset host; the script sets the content-disposition to attachment and gives our file a name too.
Instead of doing a restful download like this:
link_to "download", download_podcast_path(#podcast.mp3)
we just link to the mp3, making sure that we go in through the cgi-bin so that the Perl script gets called on every mp3 that leaves the server:
# I'm using haml
%a{:href=>"http://afmpodcast.com/cgi-bin/download.cgi?ID=#{@podcast.mp3}"}
  download
The result is that my Rails controller is no longer called into action when someone downloads a file.
I found the Perl script here and chopped it up a bit to work for me:
#!/usr/local/bin/perl -wT
use CGI ':standard';
use CGI::Carp qw(fatalsToBrowser);
my $files_location;
my $ID;
my @fileholder;
$files_location = "../";
$ID = param('ID');
open(DLFILE, "<$files_location/$ID") || Error('open', 'file');
@fileholder = <DLFILE>;
close (DLFILE) || Error ('close', 'file');
print "Content-Type:application/x-download\n";
print "Content-Disposition:attachment;filename=$ID\n\n";
print @fileholder;
My code is on GitHub, but you'll likely have all sorts of problems using it on your machine, as I make heavy use of ENV variables that I store in bashrc, and I have no documentation or tests ^hides^
You could do some smart server-side URL rewriting, for example rewriting foo.mpeg to yourveryuglyfilenamewithoutextension.
Set the Content-Disposition header to "attachment; filename=..." and that's fine. "attachment" explicitly means the file is not to be rendered in the browser; the filename override works nonetheless (or possibly especially in that case).
Based on your comments, you have a few problems.
You want to set the filename using your Rails app.
The file is on a remote host and your Rails app is acting as a middleman.
The file might be big, so you want the file to be sent out to the browser as you receive it instead of queuing the whole thing.
Streaming only with Rails is tricky for a few reasons.
You would need an HTTP client that lets you access the message body as you receive data instead of blocking until you have everything. Net::HTTP is not that client. I'm not sure what library would be better suited.
Once you have a more event-driven way to get your file in pieces, you can pass a proc to the render:
render :text => proc { |response, output| ... }
output can be used like an IO object. Some servers may buffer before sending anyway, though, so that's something to look out for.
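As a rough sketch of what that might look like (the chunk-yielding helper and remote_url are hypothetical; they stand in for whatever event-driven client is chosen):
render :text => proc { |response, output|
  # fetch_remote_file_in_chunks is a hypothetical helper that yields data as it arrives
  fetch_remote_file_in_chunks(remote_url) do |chunk|
    output.write(chunk)
  end
}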
It would be easier not to handle the byte-shuffling in Rails.
If your webserver or the proxy in front of your webserver supports the X-REPROXY-URL HTTP header, your application can set that header and your webserver or proxy will stream the file.
Perlbal is the only proxy server I know of that supports that header out of the box.
An Apache2 module is also available.
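In that setup the Rails action only sets the header; a sketch, assuming Perlbal (or the Apache module) is actually configured in front of the app and remote_file_url holds the upstream location:
def download
  response.headers['X-REPROXY-URL'] = remote_file_url   # illustrative; the proxy fetches and streams this URL
  head :ok
end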
