I have a scenario where I need to resize thousands of images. I am using MiniMagick to do this.
image = MiniMagick::Image.read(<blob>)
image.resize "100x100"
Image.create(:img => image.to_blob)
But the above code takes too long to process a large number of images, since it creates a tmp image file for each image it processes.
Is there a way to resize the image without creating the tmp file? I am also open to suggestions on other libraries that can speed up the processing.
Try using the convert command, available from ImageMagick, directly on the image:
`convert source.jpg -resize 120x120 thumbnail.jpg`
Hope this helps you :)
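Since the question is open to other libraries: Pillow (PIL) can do the whole resize in memory, never touching disk. A minimal sketch under that assumption (the `resize_blob` helper name and the 100x100 target are mine, taken from the question's example):

```python
from io import BytesIO

from PIL import Image

def resize_blob(blob, size=(100, 100)):
    """Resize an image blob entirely in memory -- no temp files."""
    im = Image.open(BytesIO(blob))
    im.thumbnail(size)  # like -resize 100x100: fits within size, keeps aspect ratio
    out = BytesIO()
    im.save(out, format=im.format or "PNG")
    return out.getvalue()
```

The returned bytes can then be handed straight to `Image.create` (or whatever persists the blob) without a round-trip through the filesystem.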
I want to overlay multiple PNG images of different sizes on a transparent canvas using ImageMagick. First I create a transparent canvas of some fixed size, say like
convert -size 1500x1000 canvas:transparent PNG32:canvas.png
Then I loop over my images in order to add each image to the canvas
convert canvas.png nthimage.png -gravity Center -geometry xResxYres+xcoord+ycoord -composite canvas.png
This works fine, but I may overlay as many as 10 pictures and I do this for thousands of n-tuples of images, so a faster solution would be appreciated. So my question: Can I also do this in one step instead of creating the canvas first and then adding a single image at a time?
Edit: I use ImageMagick 7.0.11-13 on macOS 10.15.7. I run ImageMagick from within a python script, so a file containing a list of input files can be generated if needed. For concreteness, say my input files are file_1.png up to file_n.png with sizes A1xB1 up to AnxBn and should be placed at coordinates +X1+Y1 up to +Xn+Yn with respect to the center of the canvas and the output file is output.png and should have size 1500x1000.
I really wouldn't recommend shelling out subprocesses from Python to call ImageMagick thousands of times. You'll just end up paying too much process-creation overhead per image, which is pointless when you're already running Python, which can do the image processing "in house".
I would suggest you use PIL, or OpenCV directly from Python, and as your Mac is certainly multi-core, I would suggest you use multi-processing too since the task of doing thousands of images is trivially parallelisable.
As you haven't really given any indication of what your tuples actually look like, nor how to determine the output filename, I can only point you to methods 7 & 8 in this answer.
Your processing function for each image will want to create a new transparent image then open and paste other images with:
from PIL import Image
canvas = Image.new('RGBA', SOMETHING)
for overlay in overlays:
    im = Image.open(overlay)
    canvas.paste(im, (SOMEWHERE))
canvas.save(something)
Documentation here.
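Putting the pieces together with multiprocessing, here is a sketch with one worker per output image. The job-tuple layout, `CANVAS_SIZE`, and the centre-relative coordinate handling are assumptions based on the question, not a definitive implementation:

```python
from multiprocessing import Pool

from PIL import Image

CANVAS_SIZE = (1500, 1000)  # from the question

def compose(job):
    # job = (output_path, [(overlay_path, (x, y)), ...]); the (x, y)
    # offsets are measured from the canvas centre, as in the question.
    output_path, overlays = job
    canvas = Image.new("RGBA", CANVAS_SIZE, (0, 0, 0, 0))
    cx, cy = CANVAS_SIZE[0] // 2, CANVAS_SIZE[1] // 2
    for path, (x, y) in overlays:
        with Image.open(path) as im:
            im = im.convert("RGBA")
            # top-left corner so the overlay is centred at (cx + x, cy + y)
            box = (cx + x - im.width // 2, cy + y - im.height // 2)
            canvas.paste(im, box, im)  # third argument: use the alpha channel as mask
    canvas.save(output_path)

def compose_all(jobs, workers=None):
    # one process per CPU core by default
    with Pool(workers) as pool:
        pool.map(compose, jobs)
```

Call `compose_all` from under an `if __name__ == "__main__":` guard (required for multiprocessing on macOS, where the default start method is spawn).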
I'm trying to edit a PDF file with 100 pages, all of them images I need to export as PNG, setting their image mode to grayscale, and also setting their resolution, width and height.
How can I write a Scheme (or Python) script that performs these actions so that I can apply them with Gimp in batch mode?
I've searched the internet but didn't find simply stated instructions.
ImageMagick's convert will do all this in one call in a command prompt:
convert -density 200 -colorspace Gray input.pdf -geometry 1000 output.png
will produce 1000px-wide grayscale PNGs (output-0.png to output-(N-1).png) using a 200 DPI rendering of the PDF.
You can also use Gimp scripting, but you'll have a lot more to learn, and AFAIK the API for the PDF loader only loads at 100 DPI.
A slightly more manual method could be to:
Load (manually) the image in Gimp (you can specify the DPI in that case). This loads all the pages as layers.
Image>Mode>Grayscale to convert the image to grayscale.
Image>Scale image to set the size of all the pages
Save the individual layers to PNG (there are scripts for this, for instance this one)
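If you end up scripting the ImageMagick route anyway, a small Python wrapper around the one-liner above keeps a batch loop readable. This is a sketch (the function names are mine), and it assumes `convert` plus a PDF delegate such as Ghostscript are installed:

```python
import subprocess

def gray_png_command(pdf_path, out_prefix, dpi=200, width=1000):
    """Build the convert invocation from the answer as an argv list."""
    return ["convert", "-density", str(dpi), "-colorspace", "Gray",
            pdf_path, "-geometry", str(width), f"{out_prefix}.png"]

def pdf_to_gray_pngs(pdf_path, out_prefix, **kwargs):
    # raises CalledProcessError if convert fails (e.g. missing PDF delegate)
    subprocess.run(gray_png_command(pdf_path, out_prefix, **kwargs), check=True)
```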
I'm using Paperclip to crop an image in rails.
I use these convert options:
"-quality #{attachment.quality} \
-crop #{attachment.width}x#{attachment.height}+#{attachment.x}+#{attachment.y}"
If I crop and save the image as a JPEG with 65% quality the image comes out awful and still has quite a large image size.
However if I use Image Bucket Pro and do the exact same thing, the JPEG comes out looking much better and with a smaller file size.
What can I do to Paperclip (ImageMagick / Rmagick) to improve the image quality and reduce the file size without having such a drastic drop in quality?
Also: I have tried putting a slight Gaussian blur on the image and stripping its EXIF data. However, this has a negligible effect on the file size.
I don't know what's going on within Paperclip itself when you alter the quality percentage, but if you're looking for a great way to reduce image file size while maintaining quality, I'd recommend looking into these gems:
https://github.com/toy/image_optim
https://github.com/grosser/smusher
Since you're using Paperclip, you can also use this gem to manage it in an automated fashion without the need to use the command line (it uses image_optim under the hood):
https://github.com/janfoeh/paperclip-optimizer
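To get a feel for the quality/size tradeoff outside the Ruby stack, a quick Pillow experiment (the helper name is mine) shows how much `optimize` and progressive encoding shave off at a fixed quality setting; different tools ship different defaults for these, which is one reason two encoders at "65%" can produce very different files:

```python
from io import BytesIO

from PIL import Image

def jpeg_bytes(im, **save_opts):
    """Encode to JPEG in memory and return the resulting byte count."""
    buf = BytesIO()
    im.convert("RGB").save(buf, "JPEG", **save_opts)
    return buf.tell()
```

For example, compare `jpeg_bytes(im, quality=65)` against `jpeg_bytes(im, quality=65, optimize=True, progressive=True)` on one of your photos.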
I am automating conversion of source PNG images to JPEGs of a predefined dimension. For most of the images, I don't need to provide the sampling factor and am happy with the output quality and file size. However, a few of the files get heavily distorted with artifacts. For such files, I currently provide the option '-sampling-factor 1x1' manually to get the desired output JPEG, though with a bigger file size.
Is there a way to identify beforehand which PNG source files need the sampling-factor option for conversion? That would help me handle them in the script.
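One hedged heuristic: round-trip each image through 4:2:0 chroma subsampling and measure the damage, reserving the expensive `-sampling-factor 1x1` (i.e. 4:4:4) treatment for images that degrade badly, typically those with sharp edges between saturated colours. A sketch with Pillow (the function name and threshold are mine, not an established rule):

```python
from io import BytesIO

from PIL import Image, ImageChops

def needs_full_chroma(im, threshold=40):
    """Re-encode with 4:2:0 chroma subsampling; a large per-channel
    error suggests the file needs -sampling-factor 1x1 (4:4:4)."""
    src = im.convert("RGB")
    buf = BytesIO()
    src.save(buf, "JPEG", quality=90, subsampling=2)  # 2 -> 4:2:0
    buf.seek(0)
    diff = ImageChops.difference(src, Image.open(buf).convert("RGB"))
    # getextrema() gives a (min, max) pair per channel
    return max(hi for lo, hi in diff.getextrema()) > threshold
```

The threshold would need tuning against the files you already know are problematic.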
I'm using ImageMagick's convert to resize some .png files, the problem is that convert seems to be writing some extra info to the .png:
tEXt date:create 2012-11-26T19:50:31-08:00
The problem is that if the source image didn't change, a new scaled down image is produced that is identical to the old one, but it has this extra time/date info and it's causing git to think all the files have changed.
Is there a way to keep convert from writing out this additional meta info, so subsequent resizing won't show the files as changed if the source image didn't change?
You're looking for the -strip parameter, e.g.:
convert infile.png -resize 100x100 -strip outfile.png
I found the solution to this problem was adding:
+set date:create +set date:modify
The -strip option was not removing the embedded data, but this does.
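As a git-side sanity check, you can confirm that two PNGs differ only in metadata chunks by comparing their decoded pixels. A sketch with Pillow (the helper name is mine):

```python
from PIL import Image

def pixels_equal(path_a, path_b):
    """True when two images decode to identical pixel data, ignoring
    ancillary chunks such as date:create / date:modify text entries."""
    with Image.open(path_a) as a, Image.open(path_b) as b:
        return (a.size == b.size and a.mode == b.mode
                and a.tobytes() == b.tobytes())
```

If this returns True while the files' bytes differ, only metadata changed and the -strip / +set fix applies.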