Using Prawn on Heroku - ruby-on-rails

We are currently working on a Rails application hosted on Heroku. We are trying to generate a PDF and push it to the user to download.
We are using Prawn to handle the PDF generation.
Our code for generating the PDF is currently:
Prawn::Document.generate(@name[0] + ".pdf") do
Followed by all of our code to generate the document. Unfortunately, this saves the document to disk, which (to the best of my knowledge) is not possible for applications hosted on Heroku.
We then push it to the user using
send_file "#{Rails.root}/"+#name[0]+ ".pdf", :type =>
'application/pdf',:filename => #name[0]+ ".pdf"
Is there any way using Prawn to directly push the download of the document to the user without saving the document to disk first? If not, are there any other gems for generating PDFs that don't require saving the file to the disk prior to sending the file?

Though this was answered long ago, I'll post for others who may want to do this.
As of Prawn v0.13.2, you can also call render with no file name; it returns the PDF as a string, which can be sent to the client with send_data. The pattern is:
pdf = Prawn::Document.new
# ... calls to build the pdf
send_data pdf.render,
          type: 'application/pdf',
          filename: 'download_filename.pdf',
          disposition: :inline
This will display the PDF in the browser. If you instead want the user to download it, omit the disposition: :inline option.
Bear in mind that the rendered PDF is held in memory until the user's download completes, so only do this if the document is reasonably small or your system is not heavily used.

On Aspen/Bamboo, you can save the file to disk in the tmp/ directory in your application directory (possibly Rails.root.join("tmp")) or any subdirectory.
On Cedar, you can save the file to disk anywhere in your application directory, but you should still choose a subdirectory of your application's tmp/ directory.
In either case, saved files are ephemeral. They are not shared between two running instances of your application; they are not kept between restarts; etc. Do not rely on saving the file in one request and then being able to access it in a second request.
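A minimal sketch of the tmp-directory approach, assuming a controller action named download; the @name variable is carried over from the question's code:

def download
  # Build the PDF inside the app's tmp/ directory (ephemeral on Heroku)
  path = Rails.root.join("tmp", "#{@name[0]}.pdf").to_s

  Prawn::Document.generate(path) do
    # ... calls to build the pdf
  end

  send_file path, :type => 'application/pdf', :filename => "#{@name[0]}.pdf"
end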

Related

Ruby on Rails app local directory usage for resourcing

I want to use local files stored on my server (not in Rails' public directory or in the project directory) just like the public folder.
To explain more briefly: I have a /home/server/img folder, and my Rails project's root is somewhere else. I want to be able to use the contents of this img folder in my Rails app, just like a resource file.
As an example:
<img src="/home/server/img">
I am using nginx to serve my app, and the img folder is a mounted Samba share.
Thanks.
I found the solution to the question I asked earlier. What I was looking for was a way to use local files under my web server in my Rails application. The main idea behind this is that I didn't want to fill my web server's storage space with user content.
I basically created a mounted share on my server, then used Ruby's File.open method to access the mounted directory's contents.
The following function reads data from any directory on my UNIX-based server and sends the file to the client machine over a basic HTTP response. I placed these lines in a controller action and added a GET route for it under config/routes.rb:
data_file = '/path/to/file'
# Read the whole file in binary mode; File.binread also closes the handle for us
# (:stream and :buffer_size are send_file options and are ignored by send_data)
data = File.binread(File.expand_path(data_file))
send_data data, filename: "imageName.png", type: "image/png", disposition: 'inline'
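For completeness, a minimal sketch of the matching route; the controller and action names here are illustrative rather than taken from the original post:

# config/routes.rb
get 'images/:name', to: 'images#show_file'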
Here is another source explaining an answer to a similar question.

Memory doesn't release

I have a function that creates a backup of the app and downloads it as a zip file, with the data in the database, images, files, etc. For this I create a big temp file (using the Tempfile class) that I send to the browser with send_file, but when I delete it after the send_file call, the download fails and the memory is not released.
send_file(zip_data.path, type: 'application/zip', filename: "#{model_name}.zip")
zip_data.unlink
Service class: http://pastebin.com/MskjP8d7
Controller method: http://pastebin.com/CV9Wr27h
It happens because, by the time the unlink method is executed, the request hasn't been completely served yet and the server hasn't actually sent the file; send_file is actually handled by the web server.
You could either drop the unlink call (Ruby's garbage collector will clean up Tempfiles once they go out of scope), or replace send_file with send_data and send the binary contents of your zip file from within the controller.
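A minimal sketch of the send_data variant, assuming zip_data is the Tempfile built by the service class; the file contents are read into memory before the response is sent, so the Tempfile can be removed immediately:

# Read the archive up front, then clean up the Tempfile right away
send_data File.binread(zip_data.path),
          type: 'application/zip',
          filename: "#{model_name}.zip"
zip_data.close
zip_data.unlink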

rails as proxy for remote file download

I have a Rails application on e.g. example.com. I am using a cloud storage provider for all kinds of files (videos, images, ...).
Now I would like to make them available for download without exposing the URL of the actual storage location.
So I was thinking of a kind of proxy: a simple controller action which could look like this:
require 'open-uri'

data = open(params[:file])          # fetch the remote file
filename = "#{RAILS_ROOT}/tmp/my_temp_file"
File.open(filename, 'wb') do |f|    # 'wb' creates the file; 'r+' fails if it doesn't exist
  f.write data.read
end
send_file filename, ...options...
(Code taken from a link.)
The point being that I would have to download the file to my server first.
So I was wondering if it would be possible to stream the file to the user right away, without first downloading it from the cloud storage.
best
philip
I was working on this exact issue a while ago and came to the conclusion that this would not be possible without having to download the file to your server and then pass it on to the client as you say.
I'd recommend generating a signed, expiring download link that you insert into a hidden iframe whenever a user clicks a download link on your page. In this way they will get the experience of downloading from your page, without the file making an unnecessary roundtrip to your server.
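A minimal sketch of the signed-URL approach using the aws-s3 gem; the bucket and key names are illustrative, and the same idea works with any S3 client that can sign URLs:

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)

# Signed URL that expires after 10 minutes; hand this to the hidden iframe
url = AWS::S3::S3Object.url_for('videos/example.mp4', 'my-bucket',
                                :expires_in => 10 * 60)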

How do you create a file without touching the hard disk?

I'm trying to create PDFs that can be stored on an external server.
I do this:
File.new("temp.pdf", "w").close
File.open("temp.pdf", "wb") do |f|
f.write(bytes)
end
File.open("temp.pdf", "r") do |f|
# upload `f` to server
end
File.delete("temp.pdf")
then upload them to the server.
On my local machine this works fine, but when I recently tried running it on another machine, I got a permissions error in the log.
Is there a way to:
Write bytes to a file.
Never touch the hard disk.
Why don't you just upload the bytes to the server?
You may have to go a little lower-level than normal, but check, for instance, the UploadIO class of the multipart-post gem.
I realized I had to write to a file and delete it afterwards, since UploadIO takes an open file.
So I created a new file, wrote the content to it, passed it to UploadIO via File.open, and then deleted the file after sending it.
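Note that UploadIO accepts any IO-like object, not only a File, so a StringIO can avoid the disk entirely. A minimal sketch, assuming the multipart-post gem and a hypothetical upload endpoint:

require 'stringio'
require 'net/http'
require 'net/http/post/multipart'

io = StringIO.new(bytes)                       # `bytes` is the rendered PDF content
upload = UploadIO.new(io, 'application/pdf', 'temp.pdf')

url = URI.parse('http://example.com/upload')   # hypothetical endpoint
req = Net::HTTP::Post::Multipart.new(url.path, 'file' => upload)
res = Net::HTTP.start(url.host, url.port) { |http| http.request(req) }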

Ruby on rails: Image downloads with Authentication/Authorization/Time outs

I have a few questions about implementing file downloads. I'm creating an app where I use attachment_fu with Amazon S3 to upload files. Things are working pretty well so far on the uploading side. Now it's time to start on the file downloads. Here is what I need: a logged-in user searches and browses for images, and they should be able to add the files to a download basket (let's say it's a download shopping cart). Finally, the user should be able to download these file(s) from S3, probably as a zipped file.
Is there any plugin/gem where I can use for this?
The downside of giving the customer a zip file of all the files is that you'll need to first pull all of the files from S3 back onto your server, then zip them.
You can certainly do that if you want, but it will take a bit of time, so you would not want to do it synchronously as part of the browser request. Instead, run it as a background job using delayed_job or similar.
To do the actual zipping you can use Zlib::GzipWriter (see http://ruby-doc.org/core/classes/Zlib/GzipWriter.html; it is part of standard Ruby), though note it produces gzip (.gz) output rather than a true .zip archive; for real .zip files, the rubyzip gem is a common choice. A short sketch of the gzip step follows the list below.
You could then:
email the user the actual zip file as an attachment
email the user the link to the zip file on your server
or upload the zip file to S3, then email a link to the zip file on S3
Remember to create a cleanup task/job to remove the old zip files from your system...
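A minimal sketch of the compression step, assuming the files have already been pulled from S3 into paths listed in local_paths; Zlib::GzipWriter compresses a single stream, so each file is gzipped individually here, and bundling them into one archive would need tar or the rubyzip gem:

require 'zlib'

local_paths.each do |path|
  Zlib::GzipWriter.open("#{path}.gz") do |gz|
    gz.write File.binread(path)
  end
end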
An alternative is not to zip the files together at all; instead, give the user one or more links to download the files separately.
S3 lets you create a URL to an S3 file that is valid for a set period of time. (The file is private on S3, so a straight link to it won't work.) Here's how to create one using attachment_fu and the aws-s3 gem:
# I added this as a method to my model for the files stored in S3
def authenticated_s3_url
  # return a publicly usable url
  connect_to_aws # a local method which connects/re-connects to S3
  S3Object.url_for(full_filename,
                   bucket_name,
                   :expires_in => 60 * 60) # 1 hour
end
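A view can then link straight to the signed URL; for example (the model and instance names here are illustrative):

<%= link_to 'Download', @image.authenticated_s3_url %>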
