Download zip file from Rails 4 to AngularJS

I'm trying to download a zip file, sent from a Rails 4 application, to the front-end.
The zip file construction works correctly; I can unzip the file and read its contents:

filename = "cvs_job_#{params[:job_id]}.zip"
archive_path = "#{Rails.root}/tmp/#{filename}"

File.delete(archive_path) if File.exist?(archive_path)

Zip::File.open(archive_path, Zip::File::CREATE) do |zipfile|
  params[:user_ids].each do |user_id|
    user = User.find(user_id)
    zipfile.add("#{user.last_name}_#{user.first_name}.pdf", user.cv_file.path) unless user.cv_file.nil?
  end
end

send_file(archive_path, :type => 'application/zip', :disposition => 'attachment')
But how am I supposed to handle the response back in the promise?

$http(req).success(function(success) {
  console.log(success);
});

In the Chrome console I can see the zip file's binary content, something like:
"...8f�~��/g6�I�-v��=� ..."
I have tried many solutions, but none of them work. I thought I would be able to send the file and download it from my front-end.
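One common way to handle this on the AngularJS side is to request the response as raw binary and hand it to the browser as a Blob; below is a minimal sketch, where the request URL, jobId and filename are assumptions rather than part of the original code:

var req = {
  method: 'GET',
  url: '/cvs/download',            // hypothetical endpoint
  params: { job_id: jobId },
  responseType: 'arraybuffer'      // ask for raw binary instead of a decoded string
};

$http(req).then(function(response) {
  var blob = new Blob([response.data], { type: 'application/zip' });
  var url = window.URL.createObjectURL(blob);
  var link = document.createElement('a');
  link.href = url;
  link.download = 'cvs_job_' + jobId + '.zip';  // hypothetical filename
  document.body.appendChild(link);
  link.click();                                 // trigger the browser download
  document.body.removeChild(link);
  window.URL.revokeObjectURL(url);
});

Without responseType: 'arraybuffer', AngularJS decodes the body as text, which is exactly the garbled string shown above.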

Related

Rails download file direct from S3 with content-disposition = attachment?

This is my controller:

def download
  data = open(@attachment.file.url).read
  @attachment.clicks = @attachment.clicks.to_i + 1
  @attachment.save
  send_data data, :type => @attachment.content_type, :filename => @attachment.name
end
For example:
@attachment.file.url = "http://my_bucket.cloudfront.net/uploads/attachment/file/50/huge_file.pptx"
I did this, but if @attachment is a huge file (e.g. 300 MB), my server crashes.
How can I allow users to download the file in the browser directly from my AWS server?
Also, a second question: would you suggest serving the download from S3 (where the files are stored) or through CloudFront?
If you are using the CarrierWave gem, you can try this to track the number of clicks:

def download
  @attachment.clicks = @attachment.clicks.to_i + 1
  @attachment.save
  redirect_to @attachment.file.url(query: { "response-content-disposition" => "attachment;" })
end
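If you also want to control the downloaded filename, the same S3 response-content-disposition parameter can carry one; here is a minimal variation of the redirect above (still assuming the question's @attachment model and its name attribute):

redirect_to @attachment.file.url(query: { "response-content-disposition" => "attachment; filename=\"#{@attachment.name}\"" })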
References:
Rails carrierwave S3 get url with Content-Disposition header

Issue with Zip file exports in Ruby / Rails

I am having an issue with my rails action that takes binary (blob) files from my database and packages them up into a nice zip file and then finally sends it out for download. The problem occurs when I try to unzip the file, it says something like "Unable to expand; Error 1 - Operation not permitted." I believe that means that the file is corrupted but I don't know what I am doing wrong. I've included my code below, any help would be greatly appreciated. Thanks!
require 'zip/zip'
require 'zip/zipfilesystem'

def export
  @layers = Layer.where('group_id > 1')
  temp = Tempfile.new("layers-zip-export")
  Zip::ZipOutputStream.open(temp.path) do |zipfile|
    @layers.each do |layer|
      zipfile.put_next_entry(layer.name)
      file = Tempfile.new("temp-" + layer.id.to_s)
      file.binmode
      file << layer.file
      file.rewind
      zipfile.write IO.binread(file.path)
      file.close
      file.unlink
    end
  end
  send_file temp.path, :type => 'application/zip', :filename => "layer-export.zip"
  temp.close
end
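This is not the confirmed fix, but one hedged variation worth trying with the same rubyzip 0.9.x API: write each blob straight into the zip stream and skip the intermediate Tempfile, which takes the binary-mode and temp-file cleanup steps out of the picture:

def export
  @layers = Layer.where('group_id > 1')
  temp = Tempfile.new("layers-zip-export")
  Zip::ZipOutputStream.open(temp.path) do |zipfile|
    @layers.each do |layer|
      zipfile.put_next_entry(layer.name)
      zipfile.write layer.file  # layer.file is the binary blob from the database
    end
  end
  send_file temp.path, :type => 'application/zip', :filename => "layer-export.zip"
end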

Rails 4, asset pipeline causes user downloadable files to be downloaded twice

I have a folder in my app directory named "uploads" where users can upload files and download files. I don't want the uploads folder to be in the public directory because I want to control download authorization.
In my controller, I have:
send_file Rails.root.join('app', 'uploads', filename), :type => 'application/zip', :disposition => 'inline', :x_sendfile => true

This actually works fine. The problem is that on the production server, after I run rake assets:precompile and have an assets directory, the file downloads twice. The first time the file downloads, the browser acts as if nothing is going on (no loading spinner), but I can see data being transferred in the Chrome developer tools Network tab. Then, after the file has been downloaded, a prompt comes up asking the user if he/she wants to download the file.
Removing the assets folder from the public directory gets rid of this problem, but I want to use the asset pipeline. I also tried changing the asset pipeline requires from require_tree to require_directory.
Does anyone know how to get send_file working properly with the asset pipeline?
Thanks.
For anyone having this problem, I solved it. Pass
'data-no-turbolink' => true
into the link_to helper to stop Turbolinks from messing with the download.
https://github.com/rails/turbolinks/issues/182
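For example, a minimal sketch of such a link (the path helper and filename are assumptions):

<%= link_to 'Download', download_path(filename: 'report.zip'), 'data-no-turbolink' => true %>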
But if you are using a form with Turbolinks enabled instead of a link_to, or even with a link_to, you can do it like this. Inside your controller action, put:
def download
  respond_to do |format|
    format.html do
      data = "Hello World!"
      filename = "Your_filename.docx"
      send_data(data, type: 'application/docx', filename: filename)
    end
    format.js { render js: "window.location.href = '#{controller_download_path(params)}';" }
  end
end
Replace controller_download_path with the path to your download action, and add routes for both POST and GET on the same path (the route name can only be declared once):

post '/download' => 'your_controller#download', as: :controller_download
get '/download' => 'your_controller#download'
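For context, a rough sketch of a remote form that pairs with this controller (the markup and labels are assumptions, not part of the original answer): the remote submission hits the format.js branch, the rendered JavaScript redirects the browser to the GET route, and that plain GET request is what finally receives the send_data response.

<%= form_tag controller_download_path, method: :post, remote: true do %>
  <%= submit_tag 'Download' %>
<% end %>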

rails send_file and send_data sends out zero byte files

I'm trying to send a PDF back to the user, but I'm having serious problems getting send_file and send_data to work. I create the PDF file as follows:

tmp = Tempfile.new('filled')
new_tmp_path = PDFPrint.fill_form_using_pdftk(template_path, tmp.path)
send_file(new_tmp_path, :filename => 'filled.pdf')

The browser prompts for a download, but the downloaded filled.pdf file has zero bytes.
I have verified that new_tmp_path does contain a valid PDF (good, filled content).
I have also tried this:

File.open(new_tmp_path, 'r') do |f|
  send_data(f.read, :filename => "filled.pdf")
end

But this gives me the same zero-byte download, while the file on the server (new_tmp_path) has the correct content.
Regards,
Try sending a simple file to see if it works:

send_file '/path/to.jpeg', :type => 'image/jpeg', :disposition => 'inline'

Read this thread; I think it has everything you need.
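If the simple file downloads correctly, a hedged variation on the send_data attempt is to read the generated file in binary mode, which rules out text-mode and encoding issues (new_tmp_path as in the question):

send_data File.binread(new_tmp_path),
          :filename => 'filled.pdf',
          :type => 'application/pdf',
          :disposition => 'attachment'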

Zip up all Paperclip attachments stored on S3

Paperclip is a great upload plugin for Rails. Storing uploads on the local filesystem or on Amazon S3 seems to work well. I'd just as soon store files on localhost, but using S3 is required for this app, as it will be hosted on Heroku.
How would I go about getting all of my uploads/attachments from S3 into a single zipped download?
Getting a zip of files from the local filesystem seems straightforward; it's getting the files from S3 that has me puzzled. I think it may have something to do with the way rubyzip handles files referenced by URL. I've tried various approaches but can't seem to avoid errors.
format.zip {
  registrations_with_attachments = Registration.find_by_sql('SELECT * FROM registrations WHERE abstract_file_name NOT LIKE ""')
  headers['Cache-Control'] = 'no-cache'
  tmp_filename = "#{RAILS_ROOT}/tmp/tmp_zip_" <<
                 Time.now.to_f.to_s <<
                 ".zip"
  # rubyzip gem version 0.9.1
  # rdoc http://rubyzip.sourceforge.net/
  Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
    # get all of the attachments
    # attempt to get files stored on S3
    # FAIL
    registrations_with_attachments.each { |e| zip.add("abstracts/#{e.abstract.original_filename}", e.abstract.url(:original, false)) }
    # => No such file or directory - http://s3.amazonaws.com/bucket/original/abstract.txt
    # Should note that these files in the S3 bucket are publicly accessible. No ACL.

    # works with local storage. Thanks to Henrik Nyh
    # registrations_with_attachments.each { |e| zip.add("abstracts/#{e.abstract.original_filename}", e.abstract.path(:original)) }
  end
  send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => tmp_filename.to_s)
  File.delete tmp_filename
}
You almost certainly want to use e.abstract.to_file.path instead of e.abstract.url(...).
See:
Paperclip::Storage::S3::to_file (should return a TempFile)
TempFile::path
UPDATE
From the changelog:
New in 3.0.1:
API CHANGE: #to_file has been removed. Use the #copy_to_local_file method instead.
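Under Paperclip 3.0.1 and later, where #to_file is gone, a hedged sketch of the replacement inside the loop from the question would look like this (the local temp path is an assumption, reusing the question's RAILS_ROOT/tmp convention):

registrations_with_attachments.each do |e|
  local_path = "#{RAILS_ROOT}/tmp/#{e.abstract_file_name}"
  e.abstract.copy_to_local_file(:original, local_path)  # download the original from S3 to a local path
  zip.add("abstracts/#{e.abstract.original_filename}", local_path)
end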
@vlard's solution is OK. However, I've run into some issues with to_file: it creates a tempfile, and the garbage collector sometimes deletes that file before it has been added to the zip, so I was getting random Errno::ENOENT: No such file or directory errors.
So I'm using the following code now (I've kept the variable names from the initial question for consistency):
format.zip {
  registrations_with_attachments = Registration.find_by_sql('SELECT * FROM registrations WHERE abstract_file_name NOT LIKE ""')
  headers['Cache-Control'] = 'no-cache'
  # Please note that using the nanoseconds option in strftime reduces the risk of collisions
  # when 2 or more users initiate the download at the same time.
  tmp_filename = "#{RAILS_ROOT}/tmp/tmp_zip_" <<
                 Time.now.strftime('%Y-%m-%d-%H%M%S-%N').to_s <<
                 ".zip"
  # rubyzip gem version 0.9.4
  zip = Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE)
  zip.close
  registrations_with_attachments.each { |e|
    file_to_add = e.file.to_file
    zip = Zip::ZipFile.open(tmp_filename)
    zip.add("abstracts/#{e.abstract.original_filename}", file_to_add.path)
    zip.close
    puts "added #{file_to_add.path} to #{tmp_filename}" # force the garbage collector to keep file_to_add around until after it has been added to the zip
  }
  send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => tmp_filename.to_s)
  File.delete tmp_filename
}
