Using ImageMagick convert, a batch operation takes much longer than GIMP. How can I execute it per file in batch mode? - imagemagick

The following command can be used to batch convert 200+ image files.
However, convert / ImageMagick first creates temporary files for all the images and only then applies whatever operation you gave it, e.g. rotation.
convert '*.jpg' -set filename:fn '%[basename]' -units PixelsPerInch -rotate -90 -density 300 -quality 95 -resize 28% '%[filename:fn].jpg'
This means it may consume a lot of memory and temporary disk space, and it takes too long. It has now been running for more than 10 minutes and is still not finished.
In comparison, GIMP in batch mode operates per file (rotate, finish, next, rotate, finish, next, etc.) and takes much less time (2-3 minutes).
I think GIMP uses ImageMagick's convert.
How can I run convert (in batch mode) in a Linux terminal so that it operates PER FILE rather than on ALL FILES at once?

You can also use GNU Parallel to loop over your files, e.g.:
parallel \
convert {} -resize 28% -rotate -90 -quality 95 copy_{} \
::: *.jpg
(you don't need -density, and it's faster to shrink before rotating)
That'll run the convert commands in parallel. By default it'll use as many processes as you have cores. The {} is substituted for a filename when launching a command. You should get a nice speedup.
I tried a quick benchmark with a 10,000 x 10,000 pixel jpeg:
$ for i in {1..200}; do cp ~/wtc.jpg $i.jpg; done
$ /usr/bin/time -f %M:%e parallel convert {} -resize 28% -rotate -90 -quality 95 copy_{} ::: *.jpg
962788:31.87
So 200 files were rotated and resized in 32 s, and the conversion needed around 1 GB of memory at peak.
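If the roughly 1 GB peak is a concern, GNU Parallel's -j option lets you cap the number of simultaneous jobs and trade some speed for memory. A minimal sketch, reusing the same convert arguments as above:
# run at most 2 convert processes at a time, which lowers peak memory roughly in proportion
parallel -j2 convert {} -resize 28% -rotate -90 -quality 95 copy_{} ::: *.jpg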

By experimenting and looking at other examples,
I found that this ImageMagick command operates per file rather than on all files at the same time:
for pic in *.jpg; do convert -units PixelsPerInch -rotate -90 -density 300 -quality 95 -resize 28% "$pic" "$pic";done
Note: if you don't want to overwrite the original photo, change the last "$pic" to a different output name, e.g. "${pic%.jpg}_copy.jpg", as in the sketch below.
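For completeness, here is a minimal sketch of that variant, assuming all inputs end in .jpg: it keeps the originals and writes rotated, resized copies alongside them.
for pic in *.jpg; do
  # strip the .jpg suffix and append _copy.jpg so the original file is left untouched
  convert -units PixelsPerInch -rotate -90 -density 300 -quality 95 -resize 28% "$pic" "${pic%.jpg}_copy.jpg"
done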

Related

ImageMagick's Stream can't read TIFF64?

I am trying to extract a subregion of a large BigTIFF image (TIFF64). If the images are not too big, I can just convert src.tif dst.jpg. If the images are really big, though, convert doesn't work. I was trying to use stream to extract the region of interest without loading the complete image into memory. However, the result is a 0-byte file. I uploaded one of my BigTIFFs here:
https://mfr.osf.io/render?url=https://osf.io/kgeqs/?action=download%26mode=render
This one is small enough to work with convert, and it produces the 0 byte image with stream:
stream -map rgb -storage-type char '20-07-2017_RecognizedCode-10685.tif[1000x1000+10000+10000]' 1k-crop.dat
Is there a way of getting stream to work? Is this a come-back of this old bug in stream with TIFF64? http://imagemagick.org/discourse-server/viewtopic.php?t=22046
I am using ImageMagick 6.9.2-4 Q16 x86_64 2016-03-17
I can't download your image to do any tests, but you could consider using vips which is very fast and frugal with memory, especially for large images - which I presume yours are, else you would probably not use BigTIFF.
So, if we make a large 10,000 x 10,000 TIF with ImageMagick for testing:
convert -size 10000x10000 gradient:cyan-magenta -compress lzw test.tif
and I show a smaller JPEG version here:
You could extract the top-left corner with vips like this, and also show the maximum memory usage (with --vips-leak):
vips crop test.tif a.jpg 0 0 100 100 --vips-leak
Output
memory: high-water mark 5.76 MB
And you could extract the bottom-right corner like this:
vips crop test.tif a.jpg 9000 9000 1000 1000 --vips-leak
Output
memory: high-water mark 517.01 MB
Using ImageMagick, that same operation requires 1.2GB of RAM:
/usr/bin/time -l convert test.tif -crop 1000x1000+9000+9000 a.jpg
2.46 real 2.00 user 0.45 sys
1216008192 maximum resident set size
0 average shared memory size
0 average unshared data size
0 average unshared stack size
298598 page reclaims
I agree with Mark's excellent answer, but just wanted to also say that the TIFF format you use can make a big difference.
Regular strip TIFFs don't really support random access, but tiled TIFFs do. For example, here's a 10k x 10k pixel strip TIFF:
$ vips copy wtc.jpg wtc.tif
$ time vips crop wtc.tif x.tif 8000 8000 100 100 --vips-leak
real 0m0.323s
user 0m0.083s
sys 0m0.185s
memory: high-water mark 230.80 MB
Here the TIFF reader has to scan almost the whole image to get to the bit it needs, causing relatively high memory use.
If you try again with a tiled image:
$ vips copy wtc.jpg wtc.tif[tile]
$ time vips crop wtc.tif x.tif 8000 8000 100 100 --vips-leak
real 0m0.032s
user 0m0.017s
sys 0m0.014s
memory: high-water mark 254.39 KB
Now it can just seek and read out the part it needs.
You may not have control over the details of the image format, of course, but if you do, you'll find that for this kind of operation tiled images are dramatically faster and need much less memory.
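If you are writing the TIFFs yourself with vips, the tiffsave tile options also let you choose the tile size explicitly; a sketch, where the 256x256 tile size is just an arbitrary choice:
# write a tiled TIFF with 256x256 tiles instead of strips
vips tiffsave wtc.jpg wtc.tif --tile --tile-width 256 --tile-height 256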

ImageMagick - alpha channel extract, different results (darker) on 6.7 vs 6.9

I'm working in a cross-platform environment where the results of the same, simple alpha channel extraction operation are different between ImageMagick 6.7.7-10 and ImageMagick 6.9.3-7.
The command to extract the alpha channel is:
convert image.png -alpha extract alpha.png
Or equivalently: convert image.png -channel A -separate alpha.png
Here is the original, the 6.7 output, and the 6.9 output:
Testing the original in Gimp, in the middle of the top dark bar, I can see that the original alpha value was 80% or 204:
The 6.9 output has a grayscale value of 204, while the 6.7 output has a grayscale value of 154.
Now the question: I believe the 6.9 version is correct, but we like the visual result provided by the 6.7 version. Can we understand how the 6.7 version was working (maybe some different formula / luminance / color space?) and get the same result from the 6.9 version? Maybe apply some curve to the 6.9 output? Or some switch to make it use a different formula / color space? (Do color spaces even apply to PNGs?)
Post processing the 6.9 output with this simple gamma adjustment gives a very close approximation of the 6.7 output:
convert /tmp/alpha_channel_6.9.png -gamma 0.4472 /tmp/alpha_gamma_0.4472.png
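As a sanity check on where 0.4472 comes from: ImageMagick's -gamma g maps a normalized value u to u^(1/g), so the 204 (i.e. 0.8) measured above should land near the 154 seen in the 6.7 output. A quick illustrative calculation with awk:
# 255 * 0.8^(1/0.4472) -- prints 155, close to the observed 154
awk 'BEGIN { printf "%.0f\n", 255 * (204/255)^(1/0.4472) }'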
Here's a gist of our solution in a shell script to detect 6.7 and apply the gamma adjustment selectively.
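Since the gist itself isn't reproduced here, the following is only a hypothetical sketch of that idea, not the actual script:
#!/bin/sh
# usage: ./extract_alpha.sh input.png output.png (hypothetical helper)
# On 6.7 a plain extract already gives the look we want; on newer
# versions, darken the result with -gamma 0.4472 to approximate 6.7.
if convert -version | grep -q 'ImageMagick 6\.7\.'; then
  convert "$1" -alpha extract "$2"
else
  convert "$1" -alpha extract -gamma 0.4472 "$2"
fi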
Note: Compared to the -fx answer, the gamma adjustment runs faster and is more accurate, judging by the smaller statistical error (MAE of 190 vs 408) found with:
compare -verbose -metric MAE /tmp/alpha_channel_6.7.png /tmp/alpha_curved.png null: 2>&1
compare -verbose -metric MAE /tmp/alpha_channel_6.7.png /tmp/alpha_gamma_0.4472.png null: 2>&1
But I'm going to leave the -fx answer in place, because I like the process of finding curves it describes.
Incidentally, this command lightens 6.7 output to look like 6.9 output:
convert /tmp/alpha_channel_6.7.png -gamma 2.22 /tmp/alpha_gamma_to_look_like_6.9.png
But with such a big gamma boost, the results are pretty ugly with color banding in the dark areas:
Deprecated: The -gamma answer is faster and provides better results, but I'll leave the below info, as it could be useful for other problems needing a "curves" solution.
Ok, I was able to post-process my 6.9 alpha channel output with a curves function so that it very closely matches the 6.7 alpha channel output.
However, if someone has a more concise switch, let me know!
Long story short, here's the post processing step:
It uses convert's -fx filter to apply curves to make 6.9 alpha channel output look like 6.7:
convert /tmp/alpha_channel_6.9.png -fx "-0.456633863737214*u^4 + 1.33965176221586*u^3 + -0.0837635742856742*u^2 + 0.199687083827961*u +0.00105015974839925" /tmp/alpha_curved_to_look_like_6.7.png
One could figure out the inverse function, to make 6.7 look like 6.9, given enough motivation.
Note to my future self, here's waaay too many details about how to derive this function:
Ok, so there's a page on ImageMagick's website about achieving a "curves" effect. The fun part is, it uses gnuplot to fit a polynomial to the curves function that you'd normally see in Gimp or Photoshop.
So I had the idea that I could create a test image (note, it's white with alpha, so not easy to see), run it through 6.7 alpha extract and 6.9 alpha extract, and visually compare them (on separate layers) in Gimp:
Then poke around in the curves tool on the 6.9 layer to make it look exactly like the 6.7 image:
Ok, so I found the curve I want. Now luckily, in Gimp, if you hover over the curve plot, it tells you the coordinates of the cursor, so I can find the coordinates of my curve to fit with gnuplot (as described in the link above. Note, I had to convert from 0-255 to 0.0-1.0.)
Cut super gory details, see this screencap of the general idea.
Note that I updated the ImageMagick code to fit a 4th degree polynomial, as it gave a better fit than their 3rd degree for me:
( echo 'f(x) = a*x**4 + b*x**3 + c*x**2 + d*x + e'; echo 'fit f(x) "fx_control.txt" via a, b, c, d, e'; echo 'print a,"*x^4 + ",b,"*x^3 + ",c,"*x^2 + ",d,"*x +",e'; ) | gnuplot 2>&1 | tail -1 > fx_funct.txt
This output:
-0.456633863737214*x^4 + 1.33965176221586*x^3 + -0.0837635742856742*x^2 + 0.199687083827961*x +0.00105015974839925
Ok, I used the above to generate a function of X, and plotted it using desmos.com, then screencapped that plot (in red) to overlay and compare it to the gimp curves. Looks pretty close to me:
So finally, switch the x's to u's, and plug it into ImageMagick, and voila, my 6.9 output looks like my 6.7 output once again:
convert /tmp/alpha_channel_6.9.png -fx "-0.456633863737214*u^4 + 1.33965176221586*u^3 + -0.0837635742856742*u^2 + 0.199687083827961*u +0.00105015974839925" /tmp/alpha_curved_to_look_like_6.7.png

C++ TIFF (raw) to JPEG : Faster than ImageMagick?

I need to convert many TIFF images to JPEG per second. Currently I'm using libmagick++ (Q16). I'm in the process of compiling ImageMagick Q8, as I read that it may improve performance (especially because I'm only working with 8-bit images).
CImg also looks like a good option and GraphicsMagick claims to be faster than ImageMagick. I haven't tested either of those yet, but I was wondering if there are any other alternatives that could be faster than using ImageMagick Q8?
I'm looking for a Linux only solution.
UPDATE with GraphicsMagick & ImageMagick Q8
Base comparison (see comment to Mark): 0.2 secs with ImageMagick Q16
I successfully compiled GraphicsMagick with Q8, but in the end it seems about 30% slower than ImageMagick (0.3 secs).
After compiling ImageMagick with Q8, there was a gain of about 25% (0.15 secs). Nice :)
UPDATE with VIPS
Thanks to Mark's post, I gave VIPS a try, using the 7.38 version found in the Ubuntu Trusty repositories:
time vips copy input.tiff output.jpg[Q=95]
real 0m0.105s
user 0m0.130s
sys 0m0.038s
Very nice :)
I also tried with 7.42 (from ppa:dhor/myway) but it seems slightly slower:
real 0m0.134s
user 0m0.168s
sys 0m0.039s
I will try to compile VIPS from source and see if I can beat that time. Well done Mark!
UPDATE: with VIPS 8.0
Compiled from source, vips-8.0 gets practically the same performance as 7.38:
real 0m0.100s
user 0m0.137s
sys 0m0.031s
Configure command:
./configure CC=c99 CFLAGS=-O2 --without-magick --without-OpenEXR --without-openslide --without-matio --without-cfitsio --without-libwebp --without-pangoft2 --without-zip --without-png --without-python
I have a few thoughts...
Thought 1
If your input images are 15MB and, for argument's sake, your output images are 1MB, you are already using 80MB/s of disk bandwidth to process 5 images a second - which is already around 50% of what a sensible disk might sustain. I would do a little experiment with using a RAMdisk to see if that might help, or an SSD if you have one.
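For the RAMdisk experiment, a tmpfs mount is probably the quickest thing to try on Linux; a sketch, where the size and mount point are arbitrary:
# create a 2 GB RAM-backed filesystem and work from there
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
cp input*.tif /mnt/ramdisk && cd /mnt/ramdisk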
Thought 2
Try experimenting with using VIPS from the command line to convert your images. I benchmarked it like this:
# Create dummy input image with ImageMagick
convert -size 3288x1152! xc:gray +noise gaussian -depth 8 input.tif
# Check it out
ls -lrt
-rw-r--r--@ 1 mark staff 11372808 28 May 11:36 input.tif
identify input.tif
input.tif TIFF 3288x1152 3288x1152+0+0 8-bit sRGB 11.37MB 0.000u 0:00.000
Convert to JPEG with ImageMagick
time convert input.tif output.jpg
real 0m0.409s
user 0m0.330s
sys 0m0.046s
Convert to JPEG with VIPS
time vips copy input.tif output.jpg
real 0m0.218s
user 0m0.169s
sys 0m0.036s
Mmm, seems a good bit faster. YMMV of course.
Thought 3
Depending on the result of your test on disk speed, if your disk is not the limiting factor, consider using GNU Parallel to process more than one image at a time if you have a quad core CPU. It is pretty simple to use and I have always had excellent results with it.
For example, here I sequentially process 32 TIFF images created as above:
time for i in {0..31} ; do convert input-$i.tif output-$i.jpg; done
real 0m11.565s
user 0m10.571s
sys 0m0.862s
Now, I do exactly the same with GNU Parallel, doing 16 in parallel at a time
time parallel -j16 convert {} {.}.jpg ::: *tif
real 0m2.458s
user 0m15.773s
sys 0m1.734s
So, that's now 13 images per second, rather than 2.7 per second.

Resize huge jpeg using no memory

I need to resize huge (up to 30000x30000) JPEG files using almost no RAM; speed doesn't matter. Is there any way to do so? I tried different libraries (nativejpg and others) but they use all free RAM and crash with errors like "Out of memory" or "Not enough storage is available to process this command". I even tried the command-line utility ImageMagick, but it also uses gigabytes of memory.
I would suggest you have a look at vips. It is documented here.
I can create a 10000x10000 image of noise like this with ImageMagick
convert -size 10000x10000! xc:gray50 +noise poisson image.jpg
and check it is the correct size like this:
identify image.jpg
image.jpg JPEG 10000x10000 10000x10000+0+0 8-bit sRGB 154.9MB 0.000u
I can now use vips to resize the 10000x10000 image down to 2500x2500 like this
time vipsthumbnail image.jpg -s 2500 -o small.jpg --vips-leak
memory: high-water mark 20.48 MB
real 0m1.974s
user 0m2.158s
sys 0m0.096s
Note the memory usage peaked at just 20MB
Check the result like this with ImageMagick
identify small.jpg
small.jpg JPEG 2500x2500 2500x2500+0+0 8-bit sRGB 1.33MB 0.000u 0:00.000
Have a look at the Technical Note too, regarding performance and memory usage - here.
You can also call it from C as well as the command line.
You can do this with ImageMagick if you turn on libjpeg shrink-on-load. Try:
$ identify big.jpg
big.jpg JPEG 30000x30000 30000x30000+0+0 8-bit sRGB 128MB 0.000u 0:00.000
$ time convert -define jpeg:size=2500x2500 big.jpg -resize 2500x2500 small.jpg
real 0m3.169s
user 0m2.999s
sys 0m0.159s
peak mem: 170MB
How this works: libjpeg has a great shrink-on-load feature. When you open an image, you can ask the library to downsample by x2, x4 or x8 during the loading process -- the library then just decodes part of each DCT block.
However, this feature must be enabled when the image is opened; you can't set it later. So convert needs a hint that when it opens big.jpg, it only needs an image of at least 2500x2500 (your target size). Now all -resize is doing is shrinking a roughly 3750x3750 pixel image down to 2500x2500, a pretty easy operation. You'll only need about 1/64th of the CPU and memory.
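If you have libjpeg's djpeg tool installed, you can see the same shrink-on-load feature directly (the filenames here are just placeholders):
# decode at 1/8 scale straight from the DCT blocks: ~3750x3750 for a 30000x30000 input
djpeg -scale 1/8 -outfile big_eighth.ppm big.jpg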
As Mark Setchell said above, vipsthumbnail is even faster:
$ time vipsthumbnail big.jpg -s 2500 -o small.jpg --vips-leak
memory: high-water mark 29.93 MB
real 0m2.362s
user 0m2.873s
sys 0m0.082s
Though the speedup is not very dramatic, since both systems are really just resizing roughly 3750 pixels across down to 2500.
If you try tif instead, you do see a large difference, since there's no shrink-on-load trick you can use:
$ identify 360mp.tif
360mp.tif TIFF 18000x18000 18000x18000+0+0 8-bit sRGB 972MB 0.000u 0:00.000
$ time convert 360mp.tif -resize 2500 x.tif
peak mem: 2.8 GB
real 0m8.397s
user 0m25.508s
sys 0m1.648s
$ time vipsthumbnail 360mp.tif -o x.tif -s 2500 --vips-leak
memory: high-water mark 122.08 MB
real 0m2.583s
user 0m9.012s
sys 0m0.308s
Now vipsthumbnail is about 4x faster and needs only 1/20th of the memory.
With built-in Delphi JPEG support you can load a large JPEG image resampled to a smaller size at load time, without excessive use of RAM.
The TJPEGImage Scale property can take the following values: jsFullSize, jsHalf, jsQuarter, jsEighth.
procedure ScaleJpg(const Source, Dest: string);
var
  SourceImg, DestImg: TJPEGImage;
  Bmp: TBitmap;
begin
  Bmp := TBitmap.Create;
  try
    SourceImg := TJPEGImage.Create;
    try
      // Decode at 1/8 size; the JPEG is downsampled while loading
      SourceImg.Scale := jsEighth;
      SourceImg.LoadFromFile(Source);
      Bmp.Width := SourceImg.Width;
      Bmp.Height := SourceImg.Height;
      Bmp.Canvas.Draw(0, 0, SourceImg);
    finally
      SourceImg.Free;
    end;
    DestImg := TJPEGImage.Create;
    try
      // Re-encode the reduced bitmap as a JPEG
      DestImg.Assign(Bmp);
      DestImg.SaveToFile(Dest);
    finally
      DestImg.Free;
    end;
  finally
    Bmp.Free;
  end;
end;
Once you have roughly rescaled the image to a size that can be comfortably processed in memory, you can apply ordinary scaling algorithms to get the exact image size you want.

How to batch convert from one image format to another

I would like to use ImageMagick to convert all TIFF files in a directory to PNG. Is it possible to do it through the convert command without a bash or cmd script?
If you have lots of PNG files to convert, and are using a modern, multi-core CPU, you may find you get much better performance using GNU Parallel, like this:
parallel convert {} {.}.tiff ::: *.png
which will convert all PNG files into TIF files using all your available CPU cores.
I benchmarked 1,000 PNG files, each 1000x1000 pixels and it took 4 minutes with mogrify and just 52 seconds using the command above.
GNU Parallel Documentation
mogrify -format tiff *.png
thanks to http://www.ofzenandcomputing.com/batch-convert-image-formats-imagemagick/
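The question asked for TIFF to PNG; the same pattern works in that direction, and mogrify's -path option writes the results into a separate directory so the originals are untouched, e.g.:
mkdir -p png
mogrify -format png -path png *.tif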
