How to use the docx gem with Amazon S3? - ruby-on-rails

I have this create action to extract data from .doc and .docx files using the docx gem and the msworddoc-extractor gem:
if @subject.save
  if @subject.odoc.present?
    @odoc_url = @subject.odoc.url
    if File.extname(URI.parse(@odoc_url).path) == ".docx"
      @subject.homework = ""
      doc = Docx::Document.open(@odoc_url)
      doc.paragraphs.each do |p|
        @subject.homework = @subject.homework + p.to_html
      end
    else
      MSWordDoc::Extractor.load(@odoc_url) do |doc|
        @subject.homework = doc.whole_contents
      end
    end
    @subject.save
  end
end
Now, .doc files work fine. My problem is with doc = Docx::Document.open(@odoc_url): when I run the code on my local machine it works fine, but when I push to production I get an error Zip::Error: File s3.amazonaws.com/~~~ not found. I'm not really sure how to load the file so that it is accessible to the docx gem.

So I finally got it working, without having to download the file manually, by using open-uri:
doc = Docx::Document.open(open(@odoc_url).path)
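The open call from open-uri stores the remote S3 object in a tempfile, and .path hands that local path to the docx gem. Note that open-uri returns a StringIO instead of a Tempfile for small responses, so a slightly more defensive variant is to copy the body into your own tempfile first. A minimal sketch, assuming the same @odoc_url as above:

require "open-uri"
require "tempfile"

# Copy the S3 object into a local tempfile so there is always a real path,
# regardless of whether open-uri buffered the response in memory or on disk.
Tempfile.create(["odoc", ".docx"]) do |local|
  local.binmode
  local.write(open(@odoc_url).read)
  local.rewind

  doc = Docx::Document.open(local.path)
  html = doc.paragraphs.map(&:to_html).join
end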

Related

How to send the zip file in Rails via Grape API

I have a set of files that are present in S3, and I have to zip them all and send the zipped file to the front end (ReactJS).
I am able to create a folder in the project's tmp directory and zip the files there. Unfortunately, when I try to expand the downloaded archive I get an "Unable to expand" error.
Here is the code:
data = Zip::File.open(zip_file_name, ::Zip::File::CREATE) do |zipfile|
  files.each do |file|
    zipfile.add(file, file_path)
  end
end

content_type "application/octet-stream"
header['Content-Disposition'] = "attachment; filename=abcd.zip"
env['api.format'] = :binary
File.open(zip_file_name, 'rb').read
Is there a way to solve the problem? Thanks
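One possible direction (a sketch only, not a verified fix) is to build the archive in memory with rubyzip's Zip::OutputStream.write_buffer and return the resulting string, so the endpoint never serves a half-written tmp file. The files hash below (archive entry name => local path of an already-downloaded S3 object) is an assumed shape:

require "zip"

# Build the zip entirely in memory; write_buffer yields an output stream
# and returns a StringIO holding the finished archive.
buffer = Zip::OutputStream.write_buffer do |zos|
  files.each do |entry_name, local_path|
    zos.put_next_entry(entry_name)
    zos.write(File.binread(local_path))
  end
end

content_type "application/octet-stream"
header['Content-Disposition'] = "attachment; filename=abcd.zip"
env['api.format'] = :binary
buffer.string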

How to convert word file to PDF in ROR

I am using the Libreconv gem to convert a Word file to PDF, but it's not working with S3:
bucket = Aws::S3::Bucket.new('bucket-name')
object = bucket.object file.attachment.blob.key
path = object.presigned_url(:get)
Libreconv.convert(path, "public/test.pdf")
If I try to convert this path to PDF using Libreconv, it gives me a "filename too long" error. I have written this code inside an ActiveJob, so kindly provide a solution that works with ActiveJob.
Can someone please suggest how I can convert a Word file to PDF?
Here path is https://domain.s3.amazonaws.com/Bf5qPUP3znZGCHCcTWHcR5Nn?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIZ6RZ7J425ORVUYQ%2F20181206%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20181206T051240Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=b89c47a324b2aa423bf64dfb343e3b3c90dce9b54fa9fe1bc4efa9c248e912f9
and the error I am getting is:
Error: source file could not be loaded
*** Errno::ENAMETOOLONG Exception: File name too long @ rb_sysopen - /tmp/Bf5qPUP3znZGCHCcTWHcR5Nn?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIZ6RZ7J425ORVUYQ%2F20181206%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20181206T051240Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=b89c47a324b2aa423bf64dfb343e3b3c90dce9b54fa9fe1bc4efa9c248e912f9.pd
It seems that your PDF filename is built from the full presigned URL, including all the query params needed to fetch the docx from S3.
I suppose it happens in this line:
target_tmp_file = "#{target_path}/#{File.basename(@source, ".*")}.#{File.basename(@convert_to, ":*")}"
@source is https://domain.s3.amazonaws.com/Bf5qPUP3znZGCHCcTWHcR5Nn?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIZ6RZ7J425ORVUYQ%2F20181206%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20181206T051240Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=b89c47a324b2aa423bf64dfb343e3b3c90dce9b54fa9fe1bc4efa9c248e912f9 and
> File.basename(@source, ".*")
=> "Bf5qPUP3znZGCHCcTWHcR5Nn?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIZ6RZ7J425ORVUYQ%2F20181206%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20181206T051240Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=b89c47a324b2aa423bf64dfb343e3b3c90dce9b54fa9fe1bc4efa9c248e912f9"
As a result, the Libreconv gem tries to create a tmp file with this very long name, which exceeds the filesystem's filename limit - that's why the error is raised.
Possible solution: split the process into two separate steps, fetching the file and then converting it. Something like:
require "open-uri"
bucket = Aws::S3::Bucket.new('bucket-name')
object = bucket.object file.attachment.blob.key
path = object.presigned_url(:get)
doc_file = open(path)
begin
Libreconv.convert(doc_file.path, "public/test.pdf")
ensure
doc_file.delete
end
Following is the answer using the combine_pdf gem:
tape = Tape.new(file)
result = tape.preview

tempfile = Tempfile.new(['foo', '.pdf'])
File.open(tempfile, 'wb') do |f|
  f.write result
end

path = tempfile.path
combine_pdf(path)
and to load the file from S3 I have used:
object = @bucket.object object_key
path = object.presigned_url(:get)
response = Net::HTTP.get_response(URI.parse(path)).body
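If Libreconv still has to run on that downloaded body, one sketch (assuming response holds the raw Word document contents, as in the snippet above) is to write it to a short-named tempfile first, which also sidesteps the long-filename problem:

require "tempfile"

# Write the downloaded document to a tempfile with a short name, then let
# Libreconv read it from disk.
Tempfile.create(["source", ".docx"]) do |source|
  source.binmode
  source.write(response)
  source.flush

  Libreconv.convert(source.path, "public/test.pdf")
end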

Upload Wicked generated PDF to AWS S3 in Rails 5

I am generating invoices as PDFs and want to upload them directly to S3.
I am using Wicked-PDF and the official AWS SDK.
gem 'wicked_pdf'
gem 'aws-sdk-s3', '~> 1'
Now I create the PDF:
pdf = render_to_string pdf: "some_file_name", template: "invoices/download", encoding: "UTF-8"
And want to upload it:
s3 = Aws::S3::Resource.new(region: ENV['AWS_REGION'])
obj = s3.bucket('bucket-development').object('Filename')
obj.upload_file(pdf)
The error I get:
ArgumentError: string contains null byte
If I store the PDF first to a defined path and use the save_path it works:
save_path = Rails.root.join('public', 'filename.pdf')
File.open(save_path, 'wb') do |file|
  file << pdf
end
But I would like to upload the temporary PDF directly to S3 without saving it to my public folder first.
The upload_file method from the AWS S3 SDK works with files on disk - see the method's description.
For uploading an object from memory, you should use the put method instead - see the method's description under the second way of uploading on this page.
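A minimal sketch, reusing the bucket and object names from the question:

s3 = Aws::S3::Resource.new(region: ENV['AWS_REGION'])
obj = s3.bucket('bucket-development').object('Filename')

# `pdf` is the string returned by render_to_string; put uploads it straight
# from memory without touching the filesystem.
obj.put(body: pdf, content_type: 'application/pdf')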

Rails: CarrierWave and Heroku without S3

I am using Rails, CarrierWave and Heroku, but right now I don't have an S3 account, so I used this configuration:
How to: Make Carrierwave work on Heroku
It worked very well for files uploaded by the user, but it didn't work for files uploaded through seeds.
I am using this syntax:
book.cover = File.open(File.join(Rails.root, 'photo.jpg'))
book.save!
Try doing this instead:
file = File.open(File.join(Rails.root, 'photo.jpg'))
book.cover = file
file.close
book.save!

Rails send_file alternative with Rackspace Cloudfiles

I am trying to set up a download link for a file management system built on Rails 3 using the paperclip-cloudfiles gem. The send_file method works great when hosting files locally, but I need to use the Rackspace Cloud Files system. I've tried setting the response headers and it seems to initialize the download, but the file is empty when it finishes.
Here is my download function:
@file = UserFile.find(params[:id])
response.headers['Content-type'] = "#{@file.attachment_content_type}"
response.headers['Content-Disposition'] = "attachment;filename=\"#{@file.attachment_file_name}\""
response.headers['Content-Length'] = "#{@file.attachment_file_size}"
response.headers['Content-Description'] = 'File Transfer'
response.headers['Location'] = "#{@file.attachment.url(:original, false)}"
render :nothing => true
Am I doing this right?
I've also tried using just the ruby-cloudfiles library from Rackspace to download the object, but no luck there either.
Use "send_data" method.
It works for me.
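A minimal sketch of that idea, assuming the Cloud Files object body can be fetched over HTTP from the Paperclip attachment URL used in the question:

require "open-uri"

def download
  @file = UserFile.find(params[:id])

  # Fetch the object body from Cloud Files, then stream it to the browser.
  data = open(@file.attachment.url(:original, false)).read

  send_data data,
            :filename    => @file.attachment_file_name,
            :type        => @file.attachment_content_type,
            :disposition => 'attachment'
end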
