I have a function that creates a backup of the app and downloads it as a zip file, containing the data in the database, images, files, etc. For this I create a big temp file (using the Tempfile class) that I send to the browser with send_file, but when I delete it after the send_file call the download fails and its memory is not released.
send_file(zip_data.path, type: 'application/zip', filename: "#{model_name}.zip")
zip_data.unlink
Service class: http://pastebin.com/MskjP8d7
Controller method: http://pastebin.com/CV9Wr27h
It happens because by the time the unlink method is executed, the request hasn't been completely served yet and the server hasn't actually sent the file. send_file is actually handled by the web server.
You have two options: drop the unlink call and let the Ruby garbage collector clean up the Tempfile once it goes out of scope, or replace send_file with send_data and send the binary contents of your zip file from within the controller.
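For the send_data route, a minimal sketch could look like this (BackupService and model_name are illustrative stand-ins for the service class and variable in the linked code):

def download_backup
  zip_data = BackupService.new.generate_zip   # hypothetical call that returns a Tempfile
  contents = zip_data.read                    # load the whole archive into memory
  zip_data.close
  zip_data.unlink                             # safe now: the bytes are already in `contents`
  send_data contents, type: 'application/zip', filename: "#{model_name}.zip"
end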
I have a Rails application at e.g. example.com. I am using a cloud storage provider for all kinds of files (videos, images, ...).
Now I would like to make them available for download without exposing the URL of the actual storage location.
So I was thinking of a kind of proxy: a simple controller action which could look like this:
require 'open-uri'   # needed so open() can fetch a remote URL

data = open(params[:file])
filename = "#{RAILS_ROOT}/tmp/my_temp_file"
File.open(filename, 'wb') do |f|   # 'wb' creates the file; 'r+' fails if it doesn't exist
  f.write data.read
end
send_file filename, ...options...
(code taken from a link).
The point is that I would have to download the file first.
So I was wondering if it would be possible to stream the file right away without downloading from the cloud storage first.
best
philip
I was working on this exact issue a while ago and came to the conclusion that this is not possible without downloading the file to your server and then passing it on to the client, as you say.
I'd recommend generating a signed, expiring download link that you insert into a hidden iframe whenever a user clicks a download link on your page. In this way they will get the experience of downloading from your page, without the file making an unnecessary roundtrip to your server.
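If the files live on S3, for example, the controller side of that approach could be sketched like this, assuming the aws-sdk-s3 gem; the bucket name and params[:key] are illustrative:

require 'aws-sdk-s3'

class DownloadsController < ApplicationController
  def show
    object = Aws::S3::Object.new(bucket_name: 'my-bucket',
                                 key: params[:key],
                                 client: Aws::S3::Client.new)
    # Signed GET URL that stops working after five minutes.
    @signed_url = object.presigned_url(:get, expires_in: 300)
  end
end

The view then drops @signed_url into a hidden iframe, so the browser fetches the file directly from the storage provider instead of through your app.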
I am using Cloud Foundry. I upload a file and save it; my routine returns the path and filename:
/var/vcap/data/dea/apps/Dwarfquery-0-99065f0be8880d91916257931ed91162/app/tmp/region1-legends10-11-2012-20:53.xml
However, the scheduled Resque job which tries to read it using File.open raises the following error:
Errno::ENOENT
Error
No such file or directory - /var/vcap/data/dea/apps/Dwarfquery-0-99065f0be8880d91916257931ed91162/app/tmp/region1-legends10-11-2012-20:53.xml
This is the path returned by the upload server. I have added require 'open-uri' at the top of my job class.
The line that is failing is:
File.open(fpath, 'r+') do |f|
where fpath is the file path returning the error.
I'm not proficient with Ruby at all, but just to clarify:
Are the uploading code and the Resque routine part of the same "app" (in the Cloud Foundry sense)?
Are you trying to read the file soon after it has been uploaded, or long after (in particular, after your app has, or could have, been restarted)?
This is important because:
Each "app" has its own temporary folder and obviously one app can't access another app's filesystem. This also holds if you deployed your app with multiple "instances". Each instance is a separate process that has its own filesystem.
Local filesystem storage is ephemeral and is wiped clean every time the app restarts.
If you need to access binary data between apps, you will want to use some kind of shared storage (e.g. Mongo's GridFS) to have it persisted and visible to both apps.
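For example, a minimal sketch of that idea with the mongo gem's GridFS API (the connection string, filenames and paths are illustrative):

require 'mongo'

client = Mongo::Client.new('mongodb://localhost:27017/shared_files')
fs = client.database.fs

# In the web app: store the uploaded file and keep the returned id
# (for example by passing it to the Resque job as an argument).
file_id = File.open('tmp/region1-legends.xml', 'rb') do |io|
  fs.upload_from_stream('region1-legends.xml', io)
end

# In the Resque worker: fetch the file by id instead of reading
# another instance's local disk.
File.open('/tmp/region1-legends.xml', 'wb') do |io|
  fs.download_to_stream(file_id, io)
end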
I'm trying to create PDFs that can be stored on an external server.
I do this:
File.new("temp.pdf", "w").close
File.open("temp.pdf", "wb") do |f|
  f.write(bytes)
end
File.open("temp.pdf", "r") do |f|
  # upload `f` to server
end
File.delete("temp.pdf")
then upload them to the server.
On my local machine this works fine, but I recently tried running it on another machine and got a permissions error in the log.
Is there a way to write the bytes to something file-like without ever touching the hard disk?
Why don't you just upload the bytes to the server?
You may have to go a little lower-level than normal, but check, for instance, the UploadIO class of the multipart-post gem.
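For instance, a sketch along those lines, wrapping the in-memory bytes in a StringIO so no temporary file is needed (the upload URL and form field name are illustrative; `bytes` is the PDF content from the question):

require 'net/http'
require 'net/http/post/multipart'   # provided by the multipart-post gem
require 'stringio'

url = URI.parse('https://example.com/uploads')   # hypothetical endpoint
io = StringIO.new(bytes)

request = Net::HTTP::Post::Multipart.new(
  url.path,
  'file' => UploadIO.new(io, 'application/pdf', 'temp.pdf')
)

Net::HTTP.start(url.host, url.port, :use_ssl => true) do |http|
  http.request(request)
end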
I realize I have to write to a file and delete it, since UploadIO takes in an open file.
So I created a new file, wrote the content to it, passed it to UploadIO via File.open, and then deleted the file after sending it.
We are currently working on a Rails application hosted on Heroku. We are trying to generate a PDF and push it to the user to download.
We are using Prawn to handle the PDF generation.
Our code for generating the PDF is currently:
Prawn::Document.generate(@name[0] + ".pdf") do
Followed by all of our code to generate the document. Unfortunately, this saves the document to disk, which is not possible (to the best of my knowledge) for applications hosted on Heroku.
We then push it to the user using
send_file "#{Rails.root}/"+#name[0]+ ".pdf", :type =>
'application/pdf',:filename => #name[0]+ ".pdf"
Is there any way using Prawn to directly push the download of the document to the user without saving the document to disk first? If not, are there any other gems for generating PDFs that don't require saving the file to the disk prior to sending the file?
Though this was answered long ago, I'll post for others who may want to do this.
You can also call render with no file name in current Prawn v0.13.2. A string will be returned, which can be sent to the client with send_data. The pattern is:
pdf = Prawn::Document.new
# ... calls to build the pdf
send_data pdf.render,
          type: 'application/pdf',
          filename: 'download_filename.pdf',
          disposition: :inline
This will display the PDF in the browser. If you instead want the user to download it, omit disposition: :inline.
Of course, you only want to do this if the document is reasonably short or your system is not heavily used, because it will consume RAM until the user's download is complete.
On Aspen/Bamboo, you can save the file to disk in the tmp/ directory in your application directory (possibly Rails.root.join("tmp")) or any subdirectory.
On Cedar, you can save the file to disk anywhere in your application directory, but you should still choose a subdirectory of your application's tmp/ directory.
In either case, saved files are ephemeral. They are not shared between two running instances of your application; they are not kept between restarts; etc. Do not rely on saving the file in one request and then being able to access it in a second request.
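For example, a sketch of that pattern on Cedar, generating the PDF into tmp/ and sending it within the same request (@name comes from the question; the document body is a placeholder):

path = Rails.root.join("tmp", @name[0] + ".pdf").to_s

Prawn::Document.generate(path) do
  text "placeholder content"
end

send_file path, :type => 'application/pdf', :filename => @name[0] + ".pdf"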
I have a controller action that is meant to send a file to the user for download from my bucket in S3.
Here's the controller code:
send_file @project.file.url, :type => @project.file_content_type
Here's the error:
Cannot read file http://s3.amazonaws.com/bucket/projects/1/project.xlsx?2011
When I go to the URL directly, I get a download of the file! What's going on?
Is it ok if you just redirect the user to the file on S3?
redirect_to @project.file.url
The issue is that send_file expects a path to a local file, which the webserver then serves directly from disk. The file on S3 is only accessible over HTTP, so your webserver can't serve it. To use send_file you'd have to download the file first and then serve it, I think.
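If the request really does need to go through your app (for example to check permissions first), a sketch of that download-then-serve approach, using open-uri and send_data (the action name is illustrative; @project comes from the question):

require 'open-uri'

def download
  # Fetch the file from S3 over HTTP, then stream the bytes to the client.
  file = open(@project.file.url)
  send_data file.read,
            :type => @project.file_content_type,
            :filename => File.basename(URI.parse(@project.file.url).path)
end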