I am trying to set up a download link for a file management system built on Rails 3 using the paperclip-cloudfiles gem. The send_file method works great when hosting files locally, but I need to use the Rackspace Cloud Files system. I've tried setting the response headers, and it seems to initiate the download, but the file is empty when it finishes.
Here is my download function:
@file = UserFile.find(params[:id])
response.headers['Content-Type'] = "#{@file.attachment_content_type}"
response.headers['Content-Disposition'] = "attachment; filename=\"#{@file.attachment_file_name}\""
response.headers['Content-Length'] = "#{@file.attachment_file_size}"
response.headers['Content-Description'] = 'File Transfer'
response.headers['Location'] = "#{@file.attachment.url(:original, false)}"
render :nothing => true
Am I doing this right?
I've also tried using just the ruby-cloudfiles library from Rackspace to download the object, but no luck there either.
Use "send_data" method.
It works for me.
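A minimal sketch of that approach, assuming the Cloud Files URL returned by Paperclip is readable by open-uri (the action name and the in-memory read are illustrative only, not the gem's prescribed usage):

require 'open-uri'

def download
  @file = UserFile.find(params[:id])
  # Fetch the object body from Cloud Files; note this buffers the
  # whole file in memory, so it suits smaller files best.
  data = open(@file.attachment.url(:original, false)).read
  send_data data,
            :filename    => @file.attachment_file_name,
            :type        => @file.attachment_content_type,
            :disposition => 'attachment'
end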
I can't get ActiveStorage images to work in production. I want to use a resized image (variant) within the body of the PDF I'm generating.
= image_tag(@post.image.variant(resize_to_limit: [150, 100]))
It worked in development, but in production the PDF generation hangs indefinitely unless I take that line out.
I've tried things like @post.image.variant(resize_to_limit: [150, 100]).processed.url and setting Rails.application.default_url_options = { host: "example.com" }.
Ironically, when I restart Passenger it sends the PDF to the browser and it actually looks fine; the image is included.
This is similar:
= wicked_pdf_image_tag(@post.image.variant(resize_to_limit: [150, 100]).processed.url)
Rails 7.0.3, Ruby 3.1.2, wicked_pdf 2.6.3
Thanks to @Unixmonkey I added passenger_min_instances 3; to my server block in the Nginx config, and it worked initially: wkhtmltopdf fetches the image over HTTP from the same app, so a lone Passenger process busy rendering the PDF can never serve that sub-request, and spare processes break the deadlock. But it would still hang Passenger under load, and since I didn't have the RAM to throw at increasing that number, I came up with a different solution based on reading the images from file.
= image_tag(active_storage_to_base64_image(@post.image.variant(resize_to_limit: [150, 100])))
Then I created a helper in application_helper.rb:
def active_storage_to_base64_image(image)
  require "base64"
  file = File.open(ActiveStorage::Blob.service.path_for(image.processed.key))
  base64 = Base64.encode64(file.read).gsub(/\s+/, '')
  file.close
  "data:image/png;base64,#{Rack::Utils.escape(base64)}"
end
I've hard-coded it for PNG files, as that's all I needed, and it only works with Disk storage. I'd welcome improvements.
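One such improvement, as an untested sketch: download the processed variant's bytes through whatever storage service is configured (not just Disk) and take the content type from the blob instead of hard-coding PNG. Variant#download delegating to the service is the assumption here.

require "base64"

def active_storage_to_base64_image(image)
  variant = image.processed
  # strict_encode64 emits no newlines, so no gsub is needed
  encoded = Base64.strict_encode64(variant.download)
  "data:#{variant.blob.content_type};base64,#{encoded}"
end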
I have this create action to extract data from .doc and .docx files using the docx gem and the msworddoc-extractor gem:
if @subject.save
  if @subject.odoc.present?
    @odoc_url = @subject.odoc.url
    if File.extname(URI.parse(@odoc_url).path) == ".docx"
      @subject.homework = ""
      doc = Docx::Document.open(@odoc_url)
      doc.paragraphs.each do |p|
        @subject.homework = @subject.homework + p.to_html
      end
    else
      MSWordDoc::Extractor.load(@odoc_url) do |doc|
        @subject.homework = doc.whole_contents
      end
    end
    @subject.save
  end
end
Now, .doc files work fine. My problem is with doc = Docx::Document.open(@odoc_url): when I run the code on my local machine it works, but when I push to production I get the error Zip::Error: File s3.amazonaws.com/~~~ not found. I'm not sure how to load the file so it's accessible to the docx gem.
So I finally got it using open-uri, without having to download the file manually: open fetches the URL to a Tempfile, and its path gives the docx gem a local file to read.
doc = Docx::Document.open(open(@odoc_url).path)
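One caveat worth noting: for bodies under about 10 KB, open-uri returns a StringIO rather than a Tempfile, and StringIO has no path. A well-known workaround (a hack that relies on open-uri internals, so treat it as an assumption) forces buffering to disk:

require 'open-uri'

# Force open-uri to always spill to a Tempfile instead of a StringIO
OpenURI::Buffer.send(:remove_const, 'StringMax')
OpenURI::Buffer.const_set('StringMax', 0)

doc = Docx::Document.open(open(@odoc_url).path)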
Firstly, I am aware that there are quite a few questions on SO similar to this one. I have read most, if not all, of them over the past week, but I still can't make this work.
I am developing a Ruby on Rails app that allows users to upload mp3 files to Amazon S3. The upload itself works perfectly, but a progress bar would greatly improve user experience on the website.
I am using the aws-sdk gem, the official one from Amazon. I have looked everywhere in its documentation for callbacks during the upload process, but I couldn't find anything.
The files are uploaded one at a time directly to S3, so they don't need to be loaded into memory. No multiple-file upload is necessary either.
I figured I may need to use jQuery to make this work, and I am fine with that.
I found this that looked very promising: https://github.com/blueimp/jQuery-File-Upload
And I even tried following the example here: https://github.com/ncri/s3_uploader_example
But I just could not make it work for me.
The documentation for aws-sdk also BRIEFLY describes streaming uploads with a block:
obj.write do |buffer, bytes|
# writing fewer than the requested number of bytes to the buffer
# will cause write to stop yielding to the block
end
But this is barely helpful. How does one "write to the buffer"? I tried a few intuitive options, which always resulted in timeouts. And how would I even update the browser based on the buffering?
Is there a better or simpler solution to this?
Thank you in advance.
I would appreciate any help on this subject.
The "buffer" object yielded when passing a block to #write is an instance of StringIO. You can write to the buffer using #write or #<<. Here is an example that uses the block form to upload a file.
file = File.open('/path/to/file', 'r')
obj = s3.buckets['my-bucket'].objects['object-key']
obj.write(:content_length => file.size) do |buffer, bytes|
  buffer.write(file.read(bytes))
  # you could do some interesting things here to track progress
end
file.close
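Building on that, a sketch of the progress tracking hinted at in the comment (same file and obj as above; the byte counting is the only addition):

written = 0
obj.write(:content_length => file.size) do |buffer, bytes|
  chunk = file.read(bytes) or break
  buffer.write(chunk)
  written += chunk.bytesize
  puts format("%.1f%% uploaded", 100.0 * written / file.size)
end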
After reading the source code of the AWS gem, I adapted (mostly copied) the multipart upload method to yield the current progress based on how many chunks have been uploaded:
s3 = AWS::S3.new.buckets['your_bucket']

file = File.open(filepath, 'r', encoding: 'BINARY')
file_to_upload = "#{s3_dir}/#{filename}"
upload_progress = 0

opts = {
  content_type: mime_type,
  cache_control: 'max-age=31536000',
  estimated_content_length: file.size,
}

part_size = self.compute_part_size(opts)
parts_number = (file.size.to_f / part_size).ceil.to_i
obj = s3.objects[file_to_upload]

begin
  obj.multipart_upload(opts) do |upload|
    until file.eof? do
      break if (abort_upload = upload.aborted?)
      upload.add_part(file.read(part_size))
      upload_progress += 1.0 / parts_number
      # Yields the Float progress and the String filepath from the
      # current file that's being uploaded
      yield(upload_progress, upload) if block_given?
    end
  end
end
The compute_part_size method is defined here and I've modified it to this:
def compute_part_size options
  max_parts = 10000
  min_size  = 5242880 # 5 MB
  estimated_size = options[:estimated_content_length]
  [(estimated_size.to_f / max_parts).ceil, min_size].max.to_i
end
This code was tested on Ruby 2.0.0p0
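Since the snippet relies on yield, it has to live inside a method; assuming a hypothetical wrapper named upload_with_progress around the code above, it could be driven like this:

upload_with_progress(filepath, mime_type, s3_dir, filename) do |progress, upload|
  # e.g. persist the percentage for the browser to poll, or push it over a socket
  puts format("%.1f%% uploaded", progress * 100)
end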
I'm having some problems reading a file from S3. I want to be able to load the ID3 tags remotely, but using open-uri doesn't work; it gives me the following error:
ruby-1.8.7-p302 > c=TagLib2::File.new(open(URI.parse("http://recordtemple.com.s3.amazonaws.com/music/745/original/The%20Stranger.mp3?1292096514")))
TypeError: can't convert Tempfile into String
from (irb):8:in `initialize'
from (irb):8:in `new'
from (irb):8
However, if I download the same file and put it on my desktop (i.e. no need for open-uri), it works just fine.
c=TagLib2::File.new("/Users/momofwombie/Desktop/blah.mp3")
Is there something else I should be doing to read a remote file?
UPDATE: I just found this link, which may explain a little bit, but surely there must be some way to do this...
Read header data from files on remote server
You might want to check out AWS::S3, a Ruby library for Amazon's Simple Storage Service.
Do an AWS::S3::S3Object.find for the file, then use about to retrieve the metadata.
This solution assumes you have the AWS credentials and permission to access the S3 bucket that contains the files in question.
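A short sketch with the classic aws-s3 gem (credentials, bucket, and key below are placeholders; about returns header-style metadata without downloading the body):

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'YOUR_ACCESS_KEY',
  :secret_access_key => 'YOUR_SECRET_KEY'
)

about = AWS::S3::S3Object.about('music/745/original/The Stranger.mp3', 'recordtemple')
puts about['content-length']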
TagLib2::File.new doesn't take a file handle, which is what you are passing to it when you use open without read.
Add read and you'll get the contents of the URL, but TagLib2::File doesn't know what to do with that either, so you are forced to read the contents of the URL and save them to a file.
I also noticed you are unnecessarily complicating your use of OpenURI. You don't have to parse the URL with URI before passing it to open; just pass the URL string.
require 'open-uri'
fname = File.basename($0) << '.' << $$.to_s
File.open(fname, 'wb') do |fo|
  fo.print open("http://recordtemple.com.s3.amazonaws.com/music/745/original/The%20Stranger.mp3?1292096514").read
end
c = TagLib2::File.new(fname)
# do more processing...
File.delete(fname)
I don't have TagLib2 installed, but I ran the rest of the code and the MP3 file downloaded to my disk and is playable. The File.delete cleans up afterwards, which should leave you in the state you want to be in.
This solution isn't going to work much longer: Paperclip > 3.0.0 has removed to_file. I'm using S3 and Heroku. What I ended up doing was copying the file to a temporary location and parsing it from there. Here is my code:
dest = Tempfile.new(upload.spreadsheet_file_name)
dest.binmode
upload.spreadsheet.copy_to_local_file(:default_style, dest.path)
file_loc = dest.path
...
CSV.foreach(file_loc, :headers => true, :skip_blanks => true) do |row|
  # process each row
end
This seems to work instead of open-URI:
Mp3Info.open(mp3.to_file.path) do |mp3info|
puts mp3info.tag.artist
end
Paperclip has a to_file method that downloads the file from S3.
Hi all,
I am trying to download a large file in Rails using the send_data function, but I'm getting a "failed to allocate memory" error. When I try to download in chunks, I only get a file of a single chunk's size. Below is my code:
File.open(@containerformat.location, "rb") { |f| @data = f.read(8888) }
ext = File.extname(@containerformat.streamName)
if ext == ''
  extension = File.extname(@containerformat.location)
  send_data(@data, :filename => @containerformat.name + extension,
                   :disposition => 'attachment')
else
  send_data(@data, :filename => @containerformat.streamName,
                   :disposition => 'attachment')
end
I think I am not able to make the loop work.
You are reading the whole file into memory! (And in your chunked version, @data holds only the first 8888 bytes, and send_data can send just one response body per request, so a read loop won't help.)
Use send_file, which streams the file through a memory-friendly buffer.
I would also suggest using :x_sendfile here; the file can then be served directly by the front server (Apache, Nginx, lighttpd) if the proper module is available and configured. This gives very efficient downloads and prevents slow clients from blocking your Rails instances.
Read about the "X-Sendfile" header: http://tn123.ath.cx/mod_xsendfile/
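For the code above, a minimal sketch of the send_file version (note that :x_sendfile is a send_file option in Rails 2.x, while Rails 3 moved it to config.action_dispatch.x_sendfile_header):

send_file @containerformat.location,
          :filename    => @containerformat.streamName,
          :disposition => 'attachment',
          :x_sendfile  => true  # let the front server serve the bytes if configured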