Use ActiveStorage Image in wicked_pdf - ruby-on-rails

I can't get ActiveStorage images to work in production. I want to use a resized image (variant) within the body of the PDF I'm generating.
= image_tag(@post.image.variant(resize_to_limit: [150, 100]))
It works in development, but in production generating the PDF hangs indefinitely unless I take that line out.
I've tried things like @post.image.variant(resize_to_limit: [150, 100]).processed.url and setting Rails.application.default_url_options = { host: "example.com" }
Ironically, when I restart Passenger it sends the PDF to the browser and it actually looks fine; the image is included.
This variant behaves the same way:
= wicked_pdf_image_tag(@post.image.variant(resize_to_limit: [150, 100]).processed.url)
Rails 7.0.3, Ruby 3.1.2, wicked_pdf 2.6.3

Thanks to @Unixmonkey I added passenger_min_instances 3; to my server block in the Nginx config. That worked initially (presumably the hang happens because wkhtmltopdf requests the image URL back from the same Passenger instance that is busy rendering the PDF, so a single process deadlocks), but it would still hang Passenger under load. Since I didn't have the RAM to throw at increasing that number, I came up with a different solution based on reading images from file.
= image_tag(active_storage_to_base64_image(@post.image.variant(resize_to_limit: [150, 100])))
Then I created a helper in application_helper.rb:
def active_storage_to_base64_image(image)
  require "base64"
  # Read the processed variant straight off the Disk service's storage path
  file = File.open(ActiveStorage::Blob.service.path_for(image.processed.key))
  base64 = Base64.encode64(file.read).gsub(/\s+/, '')
  file.close
  "data:image/png;base64,#{Rack::Utils.escape(base64)}"
end
I've hard-coded it for PNG files, as that's all I needed, and it only works with the Disk storage service. I'd welcome improvements.
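One possible improvement (an untested sketch; it assumes the variant responds to download and that the blob's content_type matches the variant's actual format): go through ActiveStorage's download API instead of a disk path, which should work with any storage service.
def active_storage_to_base64_image(image)
  require "base64"
  processed = image.processed
  # download streams the bytes through whichever service is configured
  # (Disk, S3, ...), so no path_for call is needed
  data = Base64.strict_encode64(processed.download)
  "data:#{processed.blob.content_type};base64,#{data}"
end
strict_encode64 also avoids the whitespace-stripping gsub, and taking the content type from the blob removes the hard-coded PNG assumption.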

Related

Paperclip Nginx 504 Gateway Time-out

I have a Rails 4 application that allows users to upload videos using the jQuery Dropzone plugin and the paperclip gem. Each uploaded video is encoded into multiple formats and uploaded to Amazon S3 in the background using the delayed_paperclip, av-transcoder and sidekiq gems.
All works fine with most videos, but with larger ones (around 1.1 GB), once the upload reaches what seems like the end of the Dropzone progress bar, it returns an Nginx 504 Gateway Time-out.
As far as the server goes, the Rails app runs on Nginx + Passenger on a couple of servers that sit behind a load balancer (also Nginx). I do not have timeouts set in the upstream section of the load balancer, client_max_body_size is set to 2000M (both on the load balancer and the servers), and I've tried setting passenger_pool_idle_time to a large value (600) as well as send_timeout (600s); nothing made any difference.
Note: when making those changes, I applied them on both servers as well as on the load balancer, and always restarted Nginx afterwards.
I've also read several answers regarding similar problems (like this one and this one) but still can't figure this out; Google wasn't much more helpful either.
Some extra notes for those unfamiliar with the whole paperclip/delayed_paperclip process: the file is uploaded to the server, at which point the operation is done as far as the user is concerned; in the background, the post-processing of the videos (encoding/uploading to S3) is pushed to Redis as a job, and Sidekiq processes it whenever it has time/resources.
What could be causing this issue? How can I debug and solve it?
UPDATE
Thanks to Sergey's answer I was able to solve the issue. Since I was restricted to a specific version of Paperclip, I couldn't update to the newest version that has the fix, so I'll leave here what I ended up doing.
In the engine I use to handle the uploads, I added the following code to the engine_name.rb file to override the Paperclip methods that needed fixing:
Paperclip::AbstractAdapter.class_eval do
  def copy_to_tempfile(src)
    link_or_copy_file(src.path, destination.path)
    destination
  end

  def link_or_copy_file(src, dest)
    Paperclip.log("Trying to link #{src} to #{dest}")
    FileUtils.ln(src, dest, force: true) # overwrite existing
    @destination.close
    @destination.open.binmode
  rescue Errno::EXDEV, Errno::EPERM, Errno::ENOENT => e
    Paperclip.log("Link failed with #{e.message}; copying link #{src} to #{dest}")
    FileUtils.cp(src, dest)
  end
end
Paperclip::AttachmentAdapter.class_eval do
  def copy_to_tempfile(source)
    if source.staged?
      link_or_copy_file(source.staged_path(@style), destination.path)
    else
      source.copy_to_local_file(@style, destination.path)
    end
    destination
  end
end
Paperclip::Storage::Filesystem.class_eval do
  def flush_writes #:nodoc:
    @queued_for_write.each do |style_name, file|
      FileUtils.mkdir_p(File.dirname(path(style_name)))
      begin
        move_file(file.path, path(style_name))
      rescue SystemCallError
        File.open(path(style_name), "wb") do |new_file|
          while chunk = file.read(16 * 1024)
            new_file.write(chunk)
          end
        end
      end
      unless @options[:override_file_permissions] == false
        resolved_chmod = (@options[:override_file_permissions] &~ 0111) || (0666 &~ File.umask)
        FileUtils.chmod(resolved_chmod, path(style_name))
      end
      file.rewind
    end
    after_flush_writes # allows attachment to clean up temp files
    @queued_for_write = {}
  end

  private

  def move_file(src, dest)
    # Support hardlinked files
    if File.identical?(src, dest)
      File.unlink(src)
    else
      FileUtils.mv(src, dest)
    end
  end
end
I faced a similar issue a while ago; maybe my experience will help.
We had an m3.medium instance on Amazon with 4 GB of memory.
Users could upload large video files, and we hit 504 errors when uploading files larger than 400 MB.
Monitoring and logging the upload process showed that Paperclip creates four files per attachment, so all the instance's resources were being spent on file-system work.
There is a description of this problem here:
https://github.com/thoughtbot/paperclip/issues/1642
The proposed solution is to use links instead of copies when possible. You can see the appropriate code changes here:
https://github.com/arnonhongklay/paperclip/commit/cd80661df18d7cd112944bfe26d90cb87c928aad
However, two days ago Paperclip was updated to version 5.2.0, which implements a similar solution.
It now creates only one file per attachment, so the file system is no longer overloaded, and after updating to 5.2.0 we stopped receiving 504 errors.
Conclusion:
Use the monkey patch from the link above if you're restricted to an older Paperclip version for some reason.
Otherwise, update Paperclip to version 5.2.0. That should help.
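If you take the upgrade path, it's just a version bump in the Gemfile (the constraint below is illustrative), followed by bundle update paperclip:
# Gemfile: pull in the Paperclip release that links files instead of copying them
gem 'paperclip', '~> 5.2'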

Tracking Upload Progress of File to S3 Using Ruby aws-sdk

Firstly, I am aware that there are quite a few questions similar to this one on SO. I have read most, if not all, of them over the past week, but I still can't make this work for me.
I am developing a Ruby on Rails app that allows users to upload mp3 files to Amazon S3. The upload itself works perfectly, but a progress bar would greatly improve the user experience on the website.
I am using the aws-sdk gem, which is the official one from Amazon. I have looked everywhere in its documentation for callbacks during the upload process, but I couldn't find anything.
The files are uploaded one at a time directly to S3, so there is no need to load them into memory. No multiple-file upload is necessary either.
I figured that I may need to use jQuery to make this work, and I am fine with that.
I found this that looked very promising: https://github.com/blueimp/jQuery-File-Upload
And I even tried following the example here: https://github.com/ncri/s3_uploader_example
But I just could not make it work for me.
The documentation for aws-sdk also BRIEFLY describes streaming uploads with a block:
obj.write do |buffer, bytes|
  # writing fewer than the requested number of bytes to the buffer
  # will cause write to stop yielding to the block
end
But this is barely helpful. How does one "write to the buffer"? I tried a few intuitive options that would always result in timeouts. And how would I even update the browser based on the buffering?
Is there a better or simpler solution to this?
Thank you in advance.
I would appreciate any help on this subject.
The "buffer" object yielded when passing a block to #write is an instance of StringIO. You can write to the buffer using #write or #<<. Here is an example that uses the block form to upload a file.
file = File.open('/path/to/file', 'r')
obj = s3.buckets['my-bucket'].objects['object-key']
obj.write(:content_length => file.size) do |buffer, bytes|
  buffer.write(file.read(bytes))
  # you could do some interesting things here to track progress
end
file.close
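Building on that comment, here is one way progress tracking could look in the block form (a sketch against the same v1 aws-sdk API; bytes_sent and on_progress are names I've made up for illustration):
# Sketch: count the bytes handed to the buffer and report a percentage.
# on_progress is a hypothetical callback of your own, e.g. something
# that pushes the value to the browser.
file = File.open('/path/to/file', 'rb')
bytes_sent = 0
on_progress = ->(pct) { puts format('%.1f%% uploaded', pct) }

obj.write(:content_length => file.size) do |buffer, bytes|
  chunk = file.read(bytes)
  if chunk
    buffer.write(chunk)
    bytes_sent += chunk.bytesize
    on_progress.call(100.0 * bytes_sent / file.size)
  end
end
file.close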
After reading the source code of the AWS gem, I adapted (or mostly copied) the multipart upload method to yield the current progress based on how many chunks have been uploaded:
s3 = AWS::S3.new.buckets['your_bucket']
file = File.open(filepath, 'r', encoding: 'BINARY')
file_to_upload = "#{s3_dir}/#{filename}"
upload_progress = 0
opts = {
  content_type: mime_type,
  cache_control: 'max-age=31536000',
  estimated_content_length: file.size,
}
part_size = self.compute_part_size(opts)
parts_number = (file.size.to_f / part_size).ceil.to_i
obj = s3.objects[file_to_upload]
begin
  obj.multipart_upload(opts) do |upload|
    until file.eof? do
      break if (abort_upload = upload.aborted?)
      upload.add_part(file.read(part_size))
      upload_progress += 1.0 / parts_number
      # Yields the Float progress and the String filepath from the
      # current file that's being uploaded
      yield(upload_progress, upload) if block_given?
    end
  end
end
The compute_part_size method is defined here, and I modified it to this:
def compute_part_size options
  max_parts = 10000
  min_size = 5242880 # 5 MB
  estimated_size = options[:estimated_content_length]
  [(estimated_size.to_f / max_parts).ceil, min_size].max.to_i
end
This code was tested on Ruby 2.0.0p0
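To actually consume the yielded progress, you could wrap the snippet above in a method and pass a block. A hedged usage sketch (upload_with_progress and its arguments are names I've invented, standing in for wherever the snippet lives):
# Assumes the multipart snippet above is wrapped like so:
#
#   def upload_with_progress(filepath, s3_dir, filename, mime_type)
#     # ... snippet above, which calls:
#     # yield(upload_progress, upload) if block_given?
#   end
#
upload_with_progress('/tmp/song.mp3', 'uploads', 'song.mp3', 'audio/mpeg') do |progress, _upload|
  puts format('%d%% uploaded', (progress * 100).round)
end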

Converting PDF to PNG with transparent background

We have a Ruby on Rails application that needs to convert a PDF into a PNG with a transparent background. We're using rmagick 2.13.1. On our development machines the following code works exactly how we want it:
pages = Magick::Image.from_blob(book.to_pdf.render){ self.density = 300 }
page = pages[0]
image_file = Tempfile.new(['preview_image', '.png'])
image_file.binmode
image_file.write( page.to_blob { |opt| opt.format = "PNG" } )
We then save the image_file and all is peachy. When we deployed to a review server on Heroku, though, the generated image had a white background. It turns out that Heroku's Cedar stack uses ImageMagick 6.5.7-8 2010-12-02, whereas we're using ImageMagick 6.7.5-7 2012-05-08 on our development machines.
I've scoured the net for older posts that might apply to the older version, to try to figure out how to generate the transparent PNGs. It's surely supported, but so far I haven't been able to find the right combination of settings.
To verify that the PDF generation wasn't the problem, I downloaded a PDF generated on Heroku and successfully converted it, using the above code (slightly modified to read the file in instead of generating it), to a transparent PNG.
Some of the things I've tried in various combinations are:
page.matte = true
page.format = "PNG32"
page.background_color = "none"
page.transparent_color = "white"
page.transparent("white")
So, the question is: is this possible? If so, which settings do I need to set on the image before writing it out?
I'm also investigating including a compiled binary of a more up-to-date ImageMagick on Heroku.
Any help is appreciated.
This should no longer be an issue, as Heroku has ImageMagick versions 6.7-6.9 on their various stacks.
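For anyone stuck on an older build, the setting that usually matters is forcing the background to none at read time, before the PDF is rasterized. A sketch along the lines of the original code (not verified against the old 6.5.x build):
# Sketch: rasterize the PDF onto a transparent background, then
# write a 32-bit RGBA PNG. Only the background_color line differs
# from the question's code.
pages = Magick::Image.from_blob(book.to_pdf.render) do
  self.density = 300
  self.background_color = 'none'
end
page = pages[0]
image_file = Tempfile.new(['preview_image', '.png'])
image_file.binmode
image_file.write(page.to_blob { |opt| opt.format = 'PNG32' })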

upload an RMagick-generated file from Heroku to Amazon S3

I am creating a Rails app, hosted on Heroku, that allows the user to generate animated GIFs on the fly based on an original JPG that's hosted somewhere on the web (think of it as a crop-resize app). I tried Paperclip but, AFAIK, it does not handle dynamically generated files. I am using the aws-sdk gem, and this is a code snippet from my controller:
im = Magick::Image.read(@animation.url).first
fr1 = im.crop(@animation.x1, @animation.y1, @animation.width, @animation.height, true)
str1 = fr1.to_blob
fr2 = im.crop(@animation.x2, @animation.y2, @animation.width, @animation.height, true)
str2 = fr2.to_blob
list = Magick::ImageList.new
list.from_blob(str1)
list.from_blob(str2)
list.delay = @animation.delay
list.iterations = 0
That is the basic creation of a two-frame animation. RMagick can generate a GIF on my development computer with this line:
list.write("#{Rails.public_path}/images/" + @animation.filename)
I tried uploading the list structure to S3:
# upload to Amazon S3
s3 = AWS::S3.new
bucket = s3.buckets['mybucket']
obj = bucket.objects[@animation.filename]
obj.write(:single_request => true, :content_type => 'image/gif', :data => list)
But RMagick::ImageList doesn't have a size method that I can use to specify the content length. I tried "precompiling" the GIF into another RMagick::Image:
anim = Magick::Image.new(@animation.width, @animation.height)
anim.format = "GIF"
list.write(anim)
But Rails crashes with a segmentation fault:
/path/to/my_controller.rb:103: [BUG] Segmentation fault ruby 1.8.7 (2010-01-10 patchlevel 249) [universal-darwin11.0]
Abort trap: 6
Line 103 corresponds to list.write(anim).
So right now I have no idea how to do this and would appreciate any help I receive.
As per @mga's request in his answer to his original question, a non-filesystem-based approach is pretty simple:
rm_image = Magick::Image.from_blob(params[:image][:datafile].read)[0]
# [0] because from_blob returns an array
# the blob, presumably, can have multiple images data in it
a_thumbnail = rm_image.resize_to_fit(150, 150)
# just as an example of doing *something* with it before writing
s3_bucket.objects['my_thumbnail.jpg'].write(a_thumbnail.to_blob, {:acl=>:public_read})
Voilà! Reading an uploaded file, manipulating it with RMagick, and writing it to S3 without ever touching the filesystem.
Since this project is hosted on Heroku, I cannot rely on the filesystem, which is why I was trying to do everything in code. I found that Heroku does have a temporary writable folder: http://devcenter.heroku.com/articles/read-only-filesystem
This works just fine in my case since I don't need the file after this request.
The resulting code:
im = Magick::Image.read(@animation.url).first
fr1 = im.crop(@animation.x1, @animation.y1, @animation.width, @animation.height, true)
fr2 = im.crop(@animation.x2, @animation.y2, @animation.width, @animation.height, true)
list = Magick::ImageList.new
list << fr1
list << fr2
list.delay = @animation.delay
list.iterations = 0
# gotta packet the file
list.write("#{Rails.root}/tmp/#{@animation.filename}.gif")
# upload to Amazon S3
s3 = AWS::S3.new
bucket = s3.buckets['mybucket']
obj = bucket.objects[@animation.filename]
obj.write(:file => "#{Rails.root}/tmp/#{@animation.filename}.gif")
It would be interesting to know if a non-filesystem-writing solution is possible.
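One possibility I haven't verified: ImageList#to_blob accepts the same options block as write, and for a multi-frame format like GIF it should encode every frame, which would skip the tmp file entirely:
# Untested sketch: build the animated GIF in memory by forcing the
# format inside the to_blob block, then hand the raw bytes to S3.
gif_data = list.to_blob { self.format = 'GIF' }
obj = bucket.objects[@animation.filename]
obj.write(gif_data, :content_type => 'image/gif')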
I am updating this answer for AWS SDK Version 2, where it should be:
rm_image = Magick::Image.from_blob(params[:image][:datafile].read)[0]
# [0] because from_blob returns an array
# the blob, presumably, can have multiple images data in it
a_thumbnail = rm_image.resize_to_fit(150, 150)
# just as an example of doing *something* with it before writing
s3 = Aws::S3::Resource.new
bucket = s3.bucket('mybucket')
obj = bucket.object('filename')
obj.put(body: a_thumbnail.to_blob)
I think there are a few things going on here. First, the documentation for RMagick is sub-par, and it's easy to get sidetracked. The code you're using to generate the GIF can be a little simpler. I cooked up a very contrived example here:
#!/usr/bin/env ruby
require 'rubygems'
require 'RMagick'
# read in source file
im = Magick::Image.read('foo.jpg').first
# make two slightly different frames
fr1 = im.crop(0, 100, 300, 300, true)
fr2 = im.crop(0, 200, 300, 300, true)
# create an ImageList
list = Magick::ImageList.new
# add our images to it
list << fr1
list << fr2
# set some basic values
list.delay = 100
list.iterations = 0
# write out an animated gif to the filesystem
list.write("foo.gif")
This code works -- it reads in a jpg I have locally and writes out a two-frame animation. Obviously I've hardcoded some values here, but there's no reason this shouldn't work for you, although I am running Ruby 1.9.2 and probably a different version of RMagick; still, this is basic code.
The second issue is totally unrelated: is it possible to upload an image generated in IM to S3 without actually hitting the filesystem? Basically, will this ever work:
obj.write(:single_request => true, :content_type => 'image/gif', :data => list)
I'm not sure whether it is. I experimented with calling list.to_blob, but it only output the first frame, and as a JPG, although I didn't spend much time on it. You might be able to fool list.write into outputting somewhere, but rather than going down that road, I would personally just output the file unless that is impossible for some reason.

Where do the temp files go when using MiniMagick in a Ruby on Rails app?

I'm using MiniMagick to perform some image resizing on images uploaded through a multi-part form. I need to generate a few different types of images from the originally uploaded file. Here's the code that's performing the image processing:
# Generates a thumbnail image
mm = MiniMagick::Image.open(Rails.root.join('public', 'uploads', new_url))
mm.resize(thumbnail_dimensions.join("x"))
mm.write(Rails.root.join('public', 'uploads', "t_" + new_url))
# Generates cropped version
mm_copy = MiniMagick::Image.open(Rails.root.join('public', 'uploads', new_url))
mm_copy.crop('200x200')
mm_copy.write(Rails.root.join('public', 'uploads', "c_" + new_url))
new_url is the path to the image in the public folder. The thumbnail routine works perfectly, but when the app goes on to process the cropped version, things start breaking, and I can't for the life of me figure out why. I receive the following error from this code:
No such file or directory - /tmp/mini_magick20110627-10055-2dimyl-0.jpg
I read some posts about possible race conditions with the garbage collector in Rails, but I wasn't able to resolve the issue. I tried this from the console as well: I can create MiniMagick instances, but I get the same "No such file" error there. At this point I have no idea where to go, so I'm hoping someone here has some helpful suggestions. Thanks for your help!
Details:
OS: Ubuntu (Lucid Lynx)
Rails Version: 3.0.7
Ruby Version: 1.8.7
MiniMagick Version: 3.3
Did you install ImageMagick?
If not, try:
sudo apt-get install imagemagick
and then restart your WEBrick server.
It's probably the race condition mentioned here:
https://ar-code.lighthouseapp.com/projects/35/tickets/6-race-condition-with-temp_file
Here's one fix:
http://rubyforge.org/tracker/index.php?func=detail&aid=9417&group_id=1358&atid=5365
Alternatively, and probably more easily, you could try this:
# Generates a thumbnail image
mm = MiniMagick::Image.open(Rails.root.join('public', 'uploads', new_url))
mm_copy = mm.clone # clone the opened image, instead of re-opening it
mm.resize(thumbnail_dimensions.join("x"))
mm.write(Rails.root.join('public', 'uploads', "t_" + new_url))
# Generates cropped version
mm_copy.crop('200x200')
mm_copy.write(Rails.root.join('public', 'uploads', "c_" + new_url))
