CarrierWave - Multiple images upload too slow when adding or removing images

I'm following this article in the CarrierWave wiki, https://github.com/carrierwaveuploader/carrierwave/wiki/How-to%3A-Add-more-files-and-remove-single-file-when-using-default-multiple-file-uploads-feature, to implement adding and removing images for a model in my system using the CarrierWave multiple file uploads feature.
The main code in that article is:
def add_more_images(new_images)
  images = @gallery.images   # current uploaders
  images += new_images       # append the new files
  @gallery.images = images   # re-assign the whole array
end

def remove_image_at_index(index)
  remain_images = @gallery.images                # copy the array
  deleted_image = remain_images.delete_at(index) # delete the target image
  deleted_image.try(:remove!)                    # delete image from S3
  @gallery.images = remain_images                # re-assign back
end
It works. However, it is far too slow. I have looked at the log and the overall processing times are as follows:
Upload 1 image: it takes 5000ms for example
Add 1 more image: it takes 8500ms (2 images)
Add 1 more image: it takes 12000ms (3 images)
Remove 1 image: it takes 8400ms (back to 2 images)
I have tested the author's sample app for this solution on my local machine, and it was very slow too.
It seems like CarrierWave re-uploads and re-processes all images even when we only add or remove one image. I think this is because we are re-assigning a new array of images back to @gallery, so it treats the old images as new ones.
There is also a related issue here: https://github.com/carrierwaveuploader/carrierwave/issues/1704#issuecomment-259106600
Does anyone have any better solution for adding and removing images using CarrierWave multiple upload feature?
Thanks.

When you call model.images = remain_images, CarrierWave re-uploads all of the images, so the more images you have stored in a column, the longer it takes.
See: mount.rb#L300, mounter.rb#L40
I had this problem before; the following is my code:
changed = false
new_images = self.logo_images.clone
4.times do |t|
  next if !(image = params[:"logo_image#{t + 1}"])
  new_images[t] = image   # replace only the slots that received a new upload
  changed = true
end
self.logo_images = new_images if changed
...
self.save if changed
And this is the hack (it works fine with carrierwave 1.0.0 and carrierwave-aws 1.1.0):
changed = false
mounter = self.send(:_mounter, :logo_images)   # CarrierWave's internal mounter for the column
4.times do |t|
  next if !(image = params[:"logo_image#{t + 1}"])
  uploader = mounter.blank_uploader
  uploader.cache!(image)            # cache only the newly uploaded file
  mounter.uploaders[t] = uploader   # swap it into the existing uploader array
  changed = true
end
# make sure every uploader has a filename before saving
mounter.uploaders.each { |s| s.send(:original_filename=, s.file.filename) if !s.filename } if changed
...
self.save if changed
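For the original add/remove use case, the same internal-mounter trick could be adapted roughly as follows. This is an untested sketch: @gallery and the images column come from the question, while _mounter, blank_uploader and uploaders are CarrierWave 1.x internals, so behaviour may vary between versions.
def add_more_images(new_images)
  mounter = @gallery.send(:_mounter, :images)
  new_images.each do |file|
    uploader = mounter.blank_uploader
    uploader.cache!(file)          # only the newly added file is cached/uploaded
    mounter.uploaders << uploader
  end
  @gallery.save
end

def remove_image_at_index(index)
  mounter = @gallery.send(:_mounter, :images)
  removed = mounter.uploaders.delete_at(index)
  removed.try(:remove!)            # delete just that file from S3
  @gallery.save
end
The idea is the same as above: touch only the uploaders that actually change instead of re-assigning the whole array, so the untouched images are never re-uploaded.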

Related

Process `auto_orient` with Cloudinary upload

I had a regular file upload that I now changed to using Cloudinary.
Upon upload I did the following to prevent orientation glitches when uploading images from a mobile device (see "exif image rotation issue using carrierwave and rmagick to upload to s3" for details):
process :rotate
process :store_dimensions

def rotate
  manipulate! do |image|
    image.tap(&:auto_orient)
  end
end

def store_dimensions
  # This does not work with cloudinary #18
  if file && model
    model.width, model.height = ::MiniMagick::Image.open(file.file)[:dimensions]
  end
end
Neither rotation nor storing the dimensions works since I switched to Cloudinary.
Cloudinary has an official tutorial that shows how to do this, but it simply does not work; other people seem to have the same issue, and neither of the provided options worked for me.
I was able to get it working using a variation of the first option:
after_save :update_dimensions

def update_dimensions
  if self.image != nil && self.image.metadata.present?
    width = self.image.metadata["width"]
    height = self.image.metadata["height"]
    self.update_column(:width, width)
    self.update_column(:height, height)
  end
end
Important: since we're inside an after_save callback here, it's crucial to use update_column so that we don't trigger another callback and end up in an infinite loop.
Fix for the provided solution:
self.image.present? returned false, but self.image != nil returned true.
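If existing records also need their dimensions filled in, a console one-liner along these lines might work (a sketch; Photo is a placeholder model name and update_dimensions is the callback defined above):
# Hypothetical backfill for records created before the callback existed.
# "Photo" is a placeholder model name; update_dimensions is defined above.
Photo.where(width: nil).find_each { |photo| photo.send(:update_dimensions) }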

Watermarking with MiniMagick

Versions:
Ruby 2.2.3
Rails 4.2.4
mini_magick: 4.2.10
Carrierwave 0.10.0
Description
I am trying to create a watermarker for a small gallery that uses CarrierWave as the uploader.
I want the watermark to be sized relative to the current image, so I am trying to use an .svg file with different opacities and a transparent background.
I am using a watermarker based on "Carrierwave add a watermark to processed images":
require 'mini_magick'

class Watermarker
  def initialize(original_path, watermark_path)
    @original_path = original_path.to_s
    @watermark_path = watermark_path.to_s
  end

  def watermark!(options = {})
    options[:gravity] ||= 'SouthEast'
    image = MiniMagick::Image.open(@original_path)
    watermark = MiniMagick::Image.open(@watermark_path)
    result = image.composite(watermark, 'png') do |c|
      c.gravity options[:gravity]
    end
    result.write @original_path
  end
end
And I am calling this as a process from my uploader.
My problems:
I cannot get the watermarker to composite the picture with a transparent background. I played around with:
https://github.com/minimagick/minimagick#composite
http://www.imagemagick.org/script/composite.php
but made no progress.
I cannot adjust the size of the overlay image properly. There are a lot of settings for the geometry command, but I'm stuck.
Any ideas and help would be great.
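For reference, one way to size the overlay relative to the base image while keeping its transparency might look like the sketch below. It assumes the SVG has been rasterized to a PNG with an alpha channel; the 25% scale, file names and offsets are placeholders.
require 'mini_magick'

# Sketch: scale a transparent PNG watermark to roughly 25% of the base
# image's width, then composite it into the bottom-right corner.
image     = MiniMagick::Image.open('photo.jpg')
watermark = MiniMagick::Image.open('watermark.png')   # PNG with alpha channel

watermark.resize "#{(image.width * 0.25).to_i}x"      # width only, keeps aspect ratio

result = image.composite(watermark) do |c|
  c.compose  'Over'        # respects the watermark's transparency
  c.gravity  'SouthEast'
  c.geometry '+10+10'      # offset from the corner
end
result.write 'photo_watermarked.jpg'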

Upload an RMagick-generated file from Heroku to Amazon S3

I am creating a Rails app, hosted on Heroku, that allows the user to generate animated GIFs on the fly based on an original JPG hosted somewhere on the web (think of it as a crop-and-resize app). I tried Paperclip but, AFAIK, it does not handle dynamically generated files. I am using the aws-sdk gem, and this is a code snippet from my controller:
im = Magick::Image.read(@animation.url).first
fr1 = im.crop(@animation.x1, @animation.y1, @animation.width, @animation.height, true)
str1 = fr1.to_blob
fr2 = im.crop(@animation.x2, @animation.y2, @animation.width, @animation.height, true)
str2 = fr2.to_blob
list = Magick::ImageList.new
list.from_blob(str1)
list.from_blob(str2)
list.delay = @animation.delay
list.iterations = 0
That is the basic creation of a two-frame animation. RMagick can generate a GIF on my development machine with this line:
list.write("#{Rails.public_path}/images/" + @animation.filename)
I tried uploading the list structure to S3:
# upload to Amazon S3
s3 = AWS::S3.new
bucket = s3.buckets['mybucket']
obj = bucket.objects[#animation.filename]
obj.write(:single_request => true, :content_type => 'image/gif', :data => list)
But I don't have a size method on Magick::ImageList that I can use to specify that. I tried "precompiling" the GIF into another Magick::Image:
anim = Magick::Image.new(@animation.width, @animation.height)
anim.format = "GIF"
list.write(anim)
But Rails crashes with a segmentation fault:
/path/to/my_controller.rb:103: [BUG] Segmentation fault ruby 1.8.7 (2010-01-10 patchlevel 249) [universal-darwin11.0]
Abort trap: 6
Line 103 corresponds to list.write(anim).
So right now I have no idea how to do this and would appreciate any help I receive.
As per @mga's request in his answer to the original question...
A non-filesystem-based approach is pretty simple:
rm_image = Magick::Image.from_blob(params[:image][:datafile].read)[0]
# [0] because from_blob returns an array
# the blob, presumably, can have multiple images data in it
a_thumbnail = rm_image.resize_to_fit(150, 150)
# just as an example of doing *something* with it before writing
s3_bucket.objects['my_thumbnail.jpg'].write(a_thumbnail.to_blob, {:acl=>:public_read})
Voila! Reading an uploaded file, manipulating it with RMagick, and writing it to S3 without ever touching the filesystem.
Since this project is hosted on Heroku I cannot use the filesystem, which is why I was trying to do everything in code. I found that Heroku does have a temporarily writable folder: http://devcenter.heroku.com/articles/read-only-filesystem
This works just fine in my case since I don't need the file after this request.
The resulting code:
im = Magick::Image.read(@animation.url).first
fr1 = im.crop(@animation.x1, @animation.y1, @animation.width, @animation.height, true)
fr2 = im.crop(@animation.x2, @animation.y2, @animation.width, @animation.height, true)
list = Magick::ImageList.new
list << fr1
list << fr2
list.delay = @animation.delay
list.iterations = 0
# gotta pack the file
list.write("#{Rails.root}/tmp/#{@animation.filename}.gif")

# upload to Amazon S3
s3 = AWS::S3.new
bucket = s3.buckets['mybucket']
obj = bucket.objects[@animation.filename]
obj.write(:file => "#{Rails.root}/tmp/#{@animation.filename}.gif")
It would be interesting to know if a non-filesystem-writing solution is possible.
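A slightly tidier variant of the same tmp-folder approach is to let Ruby's Tempfile pick the temporary path and handle clean-up; a sketch using the same aws-sdk (v1) calls as above:
require 'tempfile'

# Sketch: Tempfile manages the temporary path; list and @animation are as above.
tmp = Tempfile.new([@animation.filename, '.gif'])
begin
  list.write(tmp.path)                       # RMagick writes the animated GIF
  s3 = AWS::S3.new
  obj = s3.buckets['mybucket'].objects[@animation.filename]
  obj.write(:file => tmp.path, :content_type => 'image/gif')
ensure
  tmp.close
  tmp.unlink                                 # remove the temporary file
end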
I am updating this answer for AWS SDK version 2, which should be:
rm_image = Magick::Image.from_blob(params[:image][:datafile].read)[0]
# [0] because from_blob returns an array
# the blob, presumably, can have multiple images data in it
a_thumbnail = rm_image.resize_to_fit(150, 150)
# just as an example of doing *something* with it before writing
s3 = Aws::S3::Resource.new
bucket = s3.bucket('mybucket')
obj = bucket.object('filename')
obj.put(body: a_thumbnail.to_blob)
I think there are a few things going on here. First, the documentation for RMagick is sub-par, and it's easy to get side-tracked. The code you're using to generate the GIF can be a little simpler. I cooked up a very contrived example here:
#!/usr/bin/env ruby
require 'rubygems'
require 'RMagick'
# read in source file
im = Magick::Image.read('foo.jpg').first
# make two slightly different frames
fr1 = im.crop(0, 100, 300, 300, true)
fr2 = im.crop(0, 200, 300, 300, true)
# create an ImageList
list = Magick::ImageList.new
# add our images to it
list << fr1
list << fr2
# set some basic values
list.delay = 100
list.iterations = 0
# write out an animated gif to the filesystem
list.write("foo.gif")
This code works -- it reads in a JPG I have locally and writes out a two-frame animation. Obviously I've hardcoded some values here, but there's no reason this shouldn't work for you, although I am running Ruby 1.9.2 and probably a different version of RMagick; this is basic code.
The second issue is totally unrelated -- is it possible to upload an image generated in IM to S3 without actually hitting the filesystem? Basically, will this ever work:
obj.write(:single_request => true, :content_type => 'image/gif', :data => list)
I'm not sure if it is or not. I experimented with calling list.to_blob, but it only output the first frame, and as a JPG, although I didn't spend much time on it. You might be able to fool list.write into outputting somewhere else, but rather than going down that road, I would personally just output the file unless that is impossible for some reason.

Rails: Carrierwave recreate versions does not change old images

My Rails app uses CarrierWave to manage image uploads. I have a watermarked version of the images on my site. Previously I was overlaying an image on them, like so:
def watermark
  manipulate! do |img|
    logo = Magick::Image.read("#{Rails.root}/public/images/plc-watermark.png").first
    img = img.composite(logo, Magick::SouthEastGravity, Magick::OverCompositeOp)
  end
end
Now I'm overlaying text, like so:
def watermark
  manipulate! do |img|
    text = Magick::Draw.new
    text.gravity = Magick::CenterGravity
    text.pointsize = 12
    text.font = "#{Rails.root}/public/fonts/hn300.ttf"
    text.stroke = 'none'
    text.annotate(img, 0, 0, 0, 0, "Photo © #{model.user.full_name}\nHosted by Placeology.ws\nPlease log in to remove this watermark")
    img
  end
end
Now, this works for new images, but when I call recreate_versions! the old photos are not replaced. How can I get this new watermark to replace the old one?
For what it's worth I'm using Fog with Amazon S3 for storage in both development and production.
This might not be quite the same issue, but for googleability:
We have a random hash in the filename similar to what is described in this discussion thread.
When regenerating images, it would generate new images, using a new hash, but it wouldn't update the filename stored in the database, so it would attempt to display images with the old names.
This reproduces the problem:
bundle exec rails runner "Foo.find(123).images.each { |img| uploader = img.image; puts %{before: #{img.image.inspect}}; uploader.recreate_versions!; puts %{after: #{img.reload.image.inspect}} }; p Foo.find(123).images"
It gives output like
before: /uploads/foo_123_6a34e47ef5.JPG
after: /uploads/foo_123_d9a346292d.JPG
[#<Image id: 456, foo_id: 123, image: "foo_123_6a34e47ef5.JPG">]
But adding an img.save! after recreating versions fixes it:
bundle exec rails runner "Foo.find(123).images.each { |img| uploader = img.image; puts %{before: #{img.image.inspect}}; uploader.recreate_versions!; img.save!; puts %{after: #{img.reload.image.inspect}} }; p Foo.find(123).images"
With output:
before: /uploads/foo_123_6a34e47ef5.JPG
after: /uploads/foo_123_d9a346292d.JPG
[#<Image id: 456, foo_id: 123, image: "foo_123_d9a346292d.JPG">]
Edit:
Actually, the above worked with files on disk, but not with fog. To make things easy for myself, I ended up just recreating the images and removing the old ones:
Image.all.each do |old|
  new_image = Image.new(foo_id: old.foo_id, image: old.image)
  new_image.save!
  old.destroy
end
You need to call image.cache_stored_file! before calling recreate_versions!
It's weird because the method itself calls that if the file is cached, but for some reason it wasn't working.
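Putting the two fixes together, a reprocessing loop might look roughly like this (a sketch using the Image model and image column from the examples above; adjust for your own names):
# Sketch: rebuild versions for existing records, combining both fixes above
# (cache_stored_file! before recreate_versions!, save! afterwards).
Image.find_each do |record|
  next unless record.image?         # skip records with no file attached
  uploader = record.image
  uploader.cache_stored_file!       # pull the stored file into the cache first
  uploader.recreate_versions!       # rebuild the watermark (and other) versions
  record.save!                      # persist the (possibly new) filename
end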

Fast way to get remote image dimensions

I'm using the imagesize gem to check the sizes of remote images and then only push images that are big enough into an array.
require 'open-uri'
require 'image_size'
require 'nokogiri'

data = Nokogiri::HTML(open(url))
images = []
forcenocache = Time.now.to_i # no cache, because the jQuery load event doesn't fire for cached images

data.css("img").each do |image|
  image_path = URI.join(site, URI.encode(image[:src]))
  open(image_path, "rb") do |fh|
    image_size = ImageSize.new(fh.read).get_size()
    unless image_size[0] < 200 || image_size[1] < 100
      image_element = "<img src=\"#{image_path}?#{forcenocache}\">"
      images.push(image_element)
    end
  end
end
I tried using JS on the front end to check image dimensions, but there seems to be a browser limit on how many images can be loaded at once.
Doing it with imagesize is much slower than using JS. Are there any better and faster ways to do this?
I think this gem does what you want: https://github.com/sdsykes/fastimage
"FastImage finds the size or type of an image given its uri by fetching as little as needed."
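Applied to the loop above, the size check might look like this sketch; FastImage.size fetches only enough of each remote image to read its dimensions and returns nil when it cannot:
require 'fastimage'
require 'nokogiri'
require 'open-uri'

# Sketch: same filtering as above, but only a few bytes of each image are fetched.
data = Nokogiri::HTML(open(url))
forcenocache = Time.now.to_i
images = []

data.css("img").each do |image|
  image_path = URI.join(site, URI.encode(image[:src])).to_s
  width, height = FastImage.size(image_path)   # nil if the size can't be determined
  next if width.nil? || width < 200 || height < 100
  images << "<img src=\"#{image_path}?#{forcenocache}\">"
end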
