ImageMagick's Stream can't read TIFF64?

I am trying to extract a subregion of a large BigTIFF image (TIFF64). If the images are not too big, I can just run convert src.tif dst.jpg. If the images are really big, though, convert doesn't work. I was trying to use stream to extract the region of interest without loading the complete image in memory. However, the result is a 0-byte file. I uploaded one of my BigTIFFs here:
https://mfr.osf.io/render?url=https://osf.io/kgeqs/?action=download%26mode=render
This one is small enough to work with convert, but it still produces the 0-byte output with stream:
stream -map rgb -storage-type char '20-07-2017_RecognizedCode-10685.tif[1000x1000+10000+10000]' 1k-crop.dat
Is there a way of getting stream to work? Is this a comeback of this old bug in stream with TIFF64? http://imagemagick.org/discourse-server/viewtopic.php?t=22046
I am using ImageMagick 6.9.2-4 Q16 x86_64 2016-03-17

I can't download your image to do any tests, but you could consider using vips which is very fast and frugal with memory, especially for large images - which I presume yours are, else you would probably not use BigTIFF.
So, if we make a large 10,000 x 10,000 TIF with ImageMagick for testing:
convert -size 10000x10000 gradient:cyan-magenta -compress lzw test.tif
You could extract the top-left corner with vips like this, and also show the maximum memory usage (with --vips-leak):
vips crop test.tif a.jpg 0 0 100 100 --vips-leak
Output
memory: high-water mark 5.76 MB
And you could extract the bottom-right corner like this:
vips crop test.tif a.jpg 9000 9000 1000 1000 --vips-leak
Output
memory: high-water mark 517.01 MB
Using ImageMagick, that same operation requires 1.2GB of RAM:
/usr/bin/time -l convert test.tif -crop 1000x1000+9000+9000 a.jpg
2.46 real 2.00 user 0.45 sys
1216008192 maximum resident set size
0 average shared memory size
0 average unshared data size
0 average unshared stack size
298598 page reclaims
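If you would rather do the crop from a script than from the shell, here is a minimal sketch using the pyvips Python binding (assuming it is installed; the file names are the test files from above):

import pyvips

# Open without decoding the whole image; sequential access keeps memory low
# for strip-based TIFFs read top to bottom.
image = pyvips.Image.new_from_file("test.tif", access="sequential")

# crop(left, top, width, height) -- the same bottom-right region as above
image.crop(9000, 9000, 1000, 1000).write_to_file("a.jpg")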

I agree with Mark's excellent answer, but just wanted to also say that the TIFF format you use can make a big difference.
Regular strip TIFFs don't really support random access, but tiled TIFFs do. For example, here's a 10k x 10k pixel strip TIFF:
$ vips copy wtc.jpg wtc.tif
$ time vips crop wtc.tif x.tif 8000 8000 100 100 --vips-leak
real 0m0.323s
user 0m0.083s
sys 0m0.185s
memory: high-water mark 230.80 MB
Here the TIFF reader has to scan almost the whole image to get to the bit it needs, causing relatively high memory use.
If you try again with a tiled image:
$ vips copy wtc.jpg wtc.tif[tile]
$ time vips crop wtc.tif x.tif 8000 8000 100 100 --vips-leak
real 0m0.032s
user 0m0.017s
sys 0m0.014s
memory: high-water mark 254.39 KB
Now it can just seek and read out the part it needs.
You may not have control over the details of the image format, of course, but if you do, you'll find that for this kind of operation tiled images are dramatically faster and need much less memory.
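If you do control how the TIFFs are written, the tiled layout can also be requested from a script; a minimal sketch with pyvips (the 256-pixel tile size is just an illustrative choice):

import pyvips

# Write a tiled TIFF: pixels are stored in 256x256 blocks instead of strips,
# so a later crop only has to touch the tiles it overlaps.
src = pyvips.Image.new_from_file("wtc.jpg")
src.write_to_file("wtc.tif", tile=True, tile_width=256, tile_height=256)

# Random-access read (the default), then pull out just the region we need.
tiled = pyvips.Image.new_from_file("wtc.tif")
tiled.crop(8000, 8000, 100, 100).write_to_file("x.tif")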

Related

Using ImageMagick, convert batch operation takes too much time versus GIMP. How to execute per file in batch mode?
The following command can be executed to batch convert 200+ image files.
However, convert / ImageMagick creates temporary files for all of the images and only then applies whatever processing you gave it, e.g. rotation.
convert '*.jpg' -set filename:fn '%[basename]' -units PixelsPerInch -rotate -90 -density 300 -quality 95 -resize 28% '%[filename:fn].jpg'
This means it may consume a lot of memory and temporary disk space, and it takes too much time. It has now been running for more than 10 minutes and has not finished yet.
In comparison, GIMP in batch mode, which operates per file (rotate, finish, next; rotate, finish, next; and so on), takes much less time (2-3 minutes).
I think GIMP uses ImageMagick's convert.
How can I run convert (in batch mode) in a Linux terminal and make it operate PER FILE rather than on ALL FILES at once?
You can also use parallel to loop over your files, e.g.:
parallel \
convert {} -resize 28% -rotate -90 -quality 95 copy_{} \
::: *.jpg
(you don't need -density, and it's faster to shrink before rotating)
That'll run the convert commands in parallel. By default it'll use as many processes as you have cores. The {} is substituted for a filename when launching a command. You should get a nice speedup.
I tried a quick benchmark with a 10,000 x 10,000 pixel jpeg:
$ for i in {1..200}; do cp ~/wtc.jpg $i.jpg; done
$ /usr/bin/time -f %M:%e parallel convert {} -resize 28% -rotate -90 -quality 95 copy_{} ::: *.jpg
962788:31.87
So 200 files were rotated and resized in 32 seconds, and the conversion needed around 1 GB of memory at peak.
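If you would rather keep everything in one script, here is a rough Python equivalent of the parallel approach above, running one convert process per file across all cores (the copy_ output naming mirrors the example; the rest is an illustrative sketch):

import glob
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

def shrink_and_rotate(path):
    # Same operators as the parallel example: shrink first, then rotate.
    subprocess.run(
        ["convert", path, "-resize", "28%", "-rotate", "-90",
         "-quality", "95", "copy_" + path],
        check=True,
    )

# At most one convert process per CPU core at a time.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    list(pool.map(shrink_and_rotate, glob.glob("*.jpg")))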
By experimenting and looking at other examples, I found that this ImageMagick command operates per file and not on all files at the same time:
for pic in *.jpg; do convert -units PixelsPerInch -rotate -90 -density 300 -quality 95 -resize 28% "$pic" "$pic";done
Note: if you don't want to replace the original photo, change the last "$pic", e.g. "$pic" "${pic%.jpg}_copy.jpg"

WebP Image size reduce using ImageMagick MagickGetImageBlob

I am facing a strange issue: I read the blob of a WebP image with MagickReadImageBlob and, on the next line, fetch the same image back as a blob using MagickGetImageBlob. The final blob is strangely smaller than the original. Can anyone explain this behaviour?
I am using ImageMagick 6.9.8-10 Q16 x86_64 on Ubuntu 16.04.
So, can anyone explain this behaviour?
The MagickReadImageBlob decodes an image-file buffer into a raster of authenticated pixels.
The MagickGetImageBlob encodes the raster back into an image-file buffer.
The WebP format can be either lossy or lossless, and the encoder can apply different compression techniques. It is quite possible that the encoding routine simply found a different way to store the raster than the original file used. Your build of ImageMagick has a quantum depth of 16 (Q16), so decoding/scaling WebP's 24-bit color + 8-bit alpha up to Q16 might also influence the re-encoding. Try setting MagickSetImageDepth(wand, 8) to see if that helps.
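To see this for yourself, here is a minimal sketch of the same round trip using the Python wand bindings instead of the C MagickWand calls (input.webp is a placeholder file name):

from wand.image import Image

with open("input.webp", "rb") as f:
    original = f.read()

# Decode the blob into a raster, then encode the raster back to WebP.
with Image(blob=original) as img:
    img.depth = 8                      # sidestep Q16 re-quantisation effects
    reencoded = img.make_blob("webp")

# Both buffers are valid WebP files, but their sizes will usually differ.
print(len(original), len(reencoded))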

C++ TIFF (raw) to JPEG : Faster than ImageMagick?

I need to convert many TIFF images to JPEG per second. Currently I'm using libmagick++ (Q16). I'm in the process of compiling ImageMagick Q8, as I read that it may improve performance (especially because I'm only working with 8-bit images).
CImg also looks like a good option, and GraphicsMagick claims to be faster than ImageMagick. I haven't tested either of those yet, but I was wondering if there are any other alternatives that could be faster than ImageMagick Q8?
I'm looking for a Linux only solution.
UPDATE with GraphicsMagick & ImageMagick Q8
Base comparison (see comment to Mark): 0.2 secs with ImageMagick Q16
I successfully compiled GraphicsMagick with Q8, but in the end it turned out to be about 30% slower than ImageMagick (0.3 secs).
After compiling ImageMagick with Q8, there was a gain of about 25% (0.15 secs). Nice :)
UPDATE with VIPS
Thanks to Mark's post, I gave VIPS a try, using the 7.38 version found in the Ubuntu Trusty repositories:
time vips copy input.tiff output.jpg[Q=95]
real 0m0.105s
user 0m0.130s
sys 0m0.038s
Very nice :)
I also tried with 7.42 (from ppa:dhor/myway) but it seems slightly slower:
real 0m0.134s
user 0m0.168s
sys 0m0.039s
I will try to compile VIPS from source and see if I can beat that time. Well done Mark!
UPDATE: with VIPS 8.0
Compiled from source, vips-8.0 gets practically the same performance as 7.38:
real 0m0.100s
user 0m0.137s
sys 0m0.031s
Configure command:
./configure CC=c99 CFLAGS=-O2 --without-magick --without-OpenEXR --without-openslide --without-matio --without-cfitsio --without-libwebp --without-pangoft2 --without-zip --without-png --without-python
I have a few thoughts...
Thought 1
If your input images are 15MB and, for argument's sake, your output images are 1MB, you are already using 80MB/s of disk bandwidth to process 5 images a second - which is already around 50% of what a sensible disk might sustain. I would do a little experiment with using a RAMdisk to see if that might help, or an SSD if you have one.
Thought 2
Try experimenting with using VIPS from the command line to convert your images. I benchmarked it like this:
# Create dummy input image with ImageMagick
convert -size 3288x1152! xc:gray +noise gaussian -depth 8 input.tif
# Check it out
ls -lrt
-rw-r--r--@ 1 mark staff 11372808 28 May 11:36 input.tif
identify input.tif
input.tif TIFF 3288x1152 3288x1152+0+0 8-bit sRGB 11.37MB 0.000u 0:00.000
Convert to JPEG with ImageMagick
time convert input.tif output.jpg
real 0m0.409s
user 0m0.330s
sys 0m0.046s
Convert to JPEG with VIPS
time vips copy input.tif output.jpg
real 0m0.218s
user 0m0.169s
sys 0m0.036s
Mmm, seems a good bit faster. YMMV of course.
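Since you are ultimately calling this from code rather than the shell, the same conversion is available through the libvips bindings; here is a minimal sketch with pyvips (an illustration only; libvips also has C and C++ APIs if you want to stay in C++):

import pyvips

# Stream the TIFF straight into the JPEG encoder; sequential access means
# the whole decoded image never has to sit in memory at once.
image = pyvips.Image.new_from_file("input.tif", access="sequential")
image.write_to_file("output.jpg", Q=95)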
Thought 3
Depending on the result of your test on disk speed, if your disk is not the limiting factor, consider using GNU Parallel to process more than one image at a time if you have a quad core CPU. It is pretty simple to use and I have always had excellent results with it.
For example, here I sequentially process 32 TIFF images created as above:
time for i in {0..31} ; do convert input-$i.tif output-$i.jpg; done
real 0m11.565s
user 0m10.571s
sys 0m0.862s
Now, I do exactly the same with GNU Parallel, doing 16 in parallel at a time
time parallel -j16 convert {} {.}.jpg ::: *tif
real 0m2.458s
user 0m15.773s
sys 0m1.734s
So, that's now 13 images per second, rather than 2.7 per second.

Resize huge jpeg using no memory

I need to resize huge (up to 30000x30000) JPEG files using as little RAM as possible; speed doesn't matter. Is there any way to do so? I tried different libraries (nativejpg and others) but they use all free RAM and crash with errors like "Out of memory" or "Not enough storage is available to process this command". I even tried the ImageMagick command-line utility, but it also uses gigabytes of memory.
I would suggest you have a look at vips. It is documented here.
I can create a 10000x10000 image of noise like this with ImageMagick
convert -size 10000x10000! xc:gray50 +noise poisson image.jpg
and check it is the correct size like this:
identify image.jpg
image.jpg JPEG 10000x10000 10000x10000+0+0 8-bit sRGB 154.9MB 0.000u
I can now use vips to resize the 10000x10000 image down to 2500x2500 like this
time vipsthumbnail image.jpg -s 2500 -o small.jpg --vips-leak
memory: high-water mark 20.48 MB
real 0m1.974s
user 0m2.158s
sys 0m0.096s
Note the memory usage peaked at just 20MB
Check the result like this with ImageMagick
identify small.jpg
small.jpg JPEG 2500x2500 2500x2500+0+0 8-bit sRGB 1.33MB 0.000u 0:00.000
Have a look at the Technical Note too, regarding performance and memory usage - here.
You can also call it from C as well as the command line.
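The same operation is available from Python too; a minimal sketch with pyvips (assuming a libvips 8.x build):

import pyvips

# thumbnail() uses shrink-on-load where the format supports it (JPEG does),
# so the full 10000x10000 image is never fully decoded.
thumb = pyvips.Image.thumbnail("image.jpg", 2500)
thumb.write_to_file("small.jpg")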
You can do this with ImageMagick if you turn on libjpeg shrink-on-load. Try:
$ identify big.jpg
big.jpg JPEG 30000x30000 30000x30000+0+0 8-bit sRGB 128MB 0.000u 0:00.000
$ time convert -define jpeg:size=2500x2500 big.jpg -resize 2500x2500 small.jpg
real 0m3.169s
user 0m2.999s
sys 0m0.159s
peak mem: 170MB
How this works: libjpeg has a great shrink-on-load feature. When you open an image, you can ask the library to downsample by x2, x4 or x8 during the loading process -- the library then just decodes part of each DCT block.
However, this feature must be enabled when the image is opened; you can't set it later. So convert needs a hint that, when it opens big.jpg, it only needs an image of at least 2500x2500 pixels (your target size). Now all -resize has to do is shrink a 3800x3800 pixel image down to 2500x2500, a pretty easy operation. You'll only need about 1/64th of the CPU and memory.
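For what it's worth, the same shrink-on-load hint can be given explicitly from a script; a sketch with pyvips, where shrink=8 corresponds to the x8 DCT-block downsample described above:

import pyvips

# Ask libjpeg to downsample by 8 while decoding: a 30000x30000 JPEG comes in
# at roughly 3750x3750, which is then an easy resize down to 2500 across.
image = pyvips.Image.jpegload("big.jpg", shrink=8)
image.resize(2500 / image.width).write_to_file("small.jpg")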
As @mark-setchell said above, vipsthumbnail is even faster:
$ time vipsthumbnail big.jpg -s 2500 -o small.jpg --vips-leak
memory: high-water mark 29.93 MB
real 0m2.362s
user 0m2.873s
sys 0m0.082s
Though the speedup is not very dramatic, since both systems are really just resizing 3800 -> 2500.
If you try tif instead, you do see a large difference, since there's no shrink-on-load trick you can use:
$ identify 360mp.tif
360mp.tif TIFF 18000x18000 18000x18000+0+0 8-bit sRGB 972MB 0.000u 0:00.000
$ time convert 360mp.tif -resize 2500 x.tif
peak mem: 2.8 GB
real 0m8.397s
user 0m25.508s
sys 0m1.648s
$ time vipsthumbnail 360mp.tif -o x.tif -s 2500 --vips-leak
memory: high-water mark 122.08 MB
real 0m2.583s
user 0m9.012s
sys 0m0.308s
Now vipsthumbnail is about 4x faster and needs only 1/20th of the memory.
With the built-in Delphi JPEG support you can load a large JPEG image resampled to a smaller size while loading, without excessive use of RAM.
The TJPEGImage Scale property can have the following values: jsFullSize, jsHalf, jsQuarter, jsEighth.
procedure ScaleJpg(const Source, Dest: string);
var
  SourceImg, DestImg: TJPEGImage;
  Bmp: TBitmap;
begin
  Bmp := TBitmap.Create;
  try
    SourceImg := TJPEGImage.Create;
    try
      // Decode the JPEG at one eighth of its full size while loading
      SourceImg.Scale := jsEighth;
      SourceImg.LoadFromFile(Source);
      Bmp.Width := SourceImg.Width;
      Bmp.Height := SourceImg.Height;
      Bmp.Canvas.Draw(0, 0, SourceImg);
    finally
      SourceImg.Free;
    end;
    DestImg := TJPEGImage.Create;
    try
      DestImg.Assign(Bmp);
      DestImg.SaveToFile(Dest);
    finally
      DestImg.Free;
    end;
  finally
    Bmp.Free;
  end;
end;
Once you have roughly rescaled the image to a size that can be comfortably processed in memory, you can apply ordinary scaling algorithms to get the actual size you want.

How to batch convert from one image format to another

I would like to use ImageMagick to convert all TIFF files in a directory to PNG. Is it possible to do this with the convert command, without a bash or cmd script?
If you have lots of PNG files to convert, and are using a modern, multi-core CPU, you may find you get much better performance using GNU Parallel, like this:
parallel convert {} {.}.tiff ::: *.png
which will convert all PNG files into TIFF files using all your available CPU cores.
I benchmarked 1,000 PNG files, each 1000x1000 pixels and it took 4 minutes with mogrify and just 52 seconds using the command above.
GNU Parallel Documentation
mogrify -format tiff *.png
thanks to http://www.ofzenandcomputing.com/batch-convert-image-formats-imagemagick/
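If you prefer to drive the conversion from a script rather than mogrify, here is a minimal sketch with the Python wand bindings for ImageMagick, going in the direction the question asks for, TIFF to PNG (the glob pattern is an assumption):

import glob
import os
from wand.image import Image

for path in glob.glob("*.tif"):
    out = os.path.splitext(path)[0] + ".png"
    # One file at a time, so only a single decoded image is in memory.
    with Image(filename=path) as img:
        img.save(filename=out)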
