ImageMagick: What does Q8 vs Q16 actually mean?

Under Windows, I need to choose between Q8 and Q16. I know that Q8 means 8 bits-per-pixel component (e.g. 8-bit red, 8-bit green, etc.), whereas Q16 means 16 bits-per-pixel component. I also know that Q16 uses twice as much memory as Q8, so I must choose carefully.
What is a 16 bits-per-pixel component? Does JPEG support 16 bits-per-pixel components? Are the pictures taken with a smartphone camera 8 or 16 bits-per-pixel component?
I just need to load JPEG images, crop/resize them, and save. I also need to save the pictures in two variants: one with the ICC color profile included, and another without any ICC profile (sRGB).

What is a 16 bits-per-pixel component?
Each "channel" (e.g. Red, Green, Blue) can have a value between 0x0000 (no color), and 0xFFFF (full color). This allows greater depth of color, and more precision calculations.
For example. A "RED" pixel displayed with QuantumDepth of 8...
$ convert -size 1x1 xc:red -depth 8 rgb:- | hexdump
0000000 ff 00 00
0000003
Same for a depth of 16...
$ convert -size 1x1 xc:red -depth 16 rgb:- | hexdump
0000000 ff ff 00 00 00 00
0000006
And for a depth of 32? You guessed it.
$ convert -size 1x1 xc:red -depth 32 rgb:- | hexdump
0000000 ff ff ff ff 00 00 00 00 00 00 00 00
000000c
All in all, more memory is allocated to represent a color value. It gets a little more complex with HDRI imaging.
Does JPEG support 16 bits-per-pixel components? Are the pictures we take with a smartphone camera 8 or 16 bits-per-pixel component?
I believe JPEGs are 8-bit, but I could be wrong here. I do know that most photographers keep all the RAW files from the device, because JPEG doesn't retain all the detail captured by the camera sensor. Here's a great write-up with examples.
I just need to load jpg images, crop/resize them and save. I also need to save the pictures in 2 different variants: one with the icc color profile management included and another without any icc profile (sRGB)
ImageMagick was designed to be the "Swiss Army knife" of encoders and decoders (plus a large number of image-processing features). When reading a file, it decodes the format into what it calls "authentic pixels", which are managed internally. The size of this internal storage is set at compile time (the --with-quantum-depth configure option), and for convenience the pre-built binaries are offered as Q8, Q16, and Q32, plus additional HDRI support.
If you're focused on quality, Q16 is a safe option. Q8 will be much faster, but limiting at times.
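For the exact workflow in the question (load a JPEG, crop/resize, then save one copy with the ICC profile and one stripped copy), here is a minimal MagickWand sketch. The file names and crop geometry are hypothetical, and the signatures follow ImageMagick 7 (in ImageMagick 6 the header is <wand/MagickWand.h> and MagickResizeImage takes an extra trailing blur argument):

#include <MagickWand/MagickWand.h>

int main(void)
{
    MagickWandGenesis();
    MagickWand *wand = NewMagickWand();

    MagickReadImage(wand, "input.jpg");               /* hypothetical input file */
    MagickCropImage(wand, 800, 600, 100, 50);         /* 800x600 region at (100,50) */
    MagickResizeImage(wand, 400, 300, LanczosFilter); /* IM 7 signature */

    /* Variant 1: keep whatever ICC profile the source carried. */
    MagickWriteImage(wand, "with-profile.jpg");

    /* Variant 2: strip profiles and metadata, then write again. */
    MagickStripImage(wand);
    MagickWriteImage(wand, "no-profile.jpg");

    DestroyMagickWand(wand);
    MagickWandTerminus();
    return 0;
}

Note that stripping leaves the pixel values untouched; if the source profile is not sRGB, you would first convert with MagickProfileImage() and an sRGB ICC file before stripping.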

You can also find an answer here (it's for the .NET package, but the same applies): https://github.com/dlemstra/Magick.NET/tree/main/docs#q8-q16-or-q16-hdri
Q8, Q16 or Q16-HDRI?
Versions with Q8 in the name are 8 bits-per-pixel component (e.g. 8-bit red, 8-bit green, etc.), whereas Q16 versions are 16 bits-per-pixel component. A Q16 version permits you to read or write 16-bit images without losing precision, but requires twice as many resources as the Q8 version. The Q16-HDRI version uses twice the amount of memory as the Q16. It is more precise because it uses floating point (32 bits-per-pixel component) and it allows out-of-bound pixels (less than 0 and more than 65535). The Q8 version is the recommended version. If you need to read/write images with better quality, you should use the Q16 version instead.

Related

WebP image size reduces when using ImageMagick MagickGetImageBlob

I am facing a strange issue: I read a WebP image's blob through MagickReadImageBlob, and on the very next line I fetch the same blob back using MagickGetImageBlob. The final blob size is strangely smaller. So, can anyone explain this behaviour?
I am using Version: ImageMagick 6.9.8-10 Q16 x86_64 on Ubuntu 16.04.
So, can anyone explain this behaviour?
MagickReadImageBlob decodes an image-file buffer into a raster of authentic pixels.
MagickGetImageBlob encodes the raster back into an image-file buffer.
The WebP format can be either lossy or lossless, and can apply different compression techniques during encoding. It is entirely possible that the encoding routine simply found a more compact way to store the raster than the original file used. Your version of ImageMagick has a quantum depth of 16 (Q16), so the decoding/scaling of WebP's 24-bit color + 8-bit alpha up to Q16 might also influence some encoding variations. Try setting MagickSetImageDepth(wand, 8) to see if that helps.
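As a rough sketch of that experiment with the MagickWand C API (error handling omitted; assumes MagickWandGenesis() has already been called):

#include <MagickWand/MagickWand.h>

/* Decode a WebP buffer, pin the depth to 8, and re-encode it, so the
   Q16 round trip doesn't introduce depth-related encoding variation.
   The caller frees the result with MagickRelinquishMemory(). */
unsigned char *reencode(const void *in, const size_t in_len, size_t *out_len)
{
    MagickWand *wand = NewMagickWand();
    MagickReadImageBlob(wand, in, in_len);   /* decode into the pixel raster */
    MagickSetImageDepth(wand, 8);            /* WebP is 8 bits per channel   */
    unsigned char *out = MagickGetImageBlob(wand, out_len); /* re-encode */
    DestroyMagickWand(wand);
    return out;
}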

ImageMagick's Stream can't read TIFF64?

I am trying to extract a subregion of a large BigTIFF image (TIFF64). If the images are not too big, I can just convert src.tif dst.jpg. If the images are really big, though, convert doesn't work. I was trying to use stream to extract the region of interest without loading the complete image into memory. However, the result is a 0-byte file. I uploaded one of my BigTIFFs here:
https://mfr.osf.io/render?url=https://osf.io/kgeqs/?action=download%26mode=render
This one is small enough to work with convert, yet it still produces the 0-byte image with stream:
stream -map rgb -storage-type char '20-07-2017_RecognizedCode-10685.tif[1000x1000+10000+10000]' 1k-crop.dat
Is there a way of getting stream to work? Is this a comeback of this old bug in stream with TIFF64? http://imagemagick.org/discourse-server/viewtopic.php?t=22046
I am using ImageMagick 6.9.2-4 Q16 x86_64 2016-03-17
I can't download your image to do any tests, but you could consider using vips, which is very fast and frugal with memory, especially for large images - which I presume yours are, else you would probably not use BigTIFF.
So, if we make a large 10,000 x 10,000 TIF with ImageMagick for testing:
convert -size 10000x10000 gradient:cyan-magenta -compress lzw test.tif
You could extract the top-left corner with vips like this, and also show the maximum memory usage (with --vips-leak):
vips crop test.tif a.jpg 0 0 100 100 --vips-leak
Output
memory: high-water mark 5.76 MB
And you could extract the bottom-right corner like this:
vips crop test.tif a.jpg 9000 9000 1000 1000 --vips-leak
Output
memory: high-water mark 517.01 MB
Using ImageMagick, that same operation requires 1.2GB of RAM:
/usr/bin/time -l convert test.tif -crop 1000x1000+9000+9000 a.jpg
2.46 real 2.00 user 0.45 sys
1216008192 maximum resident set size
0 average shared memory size
0 average unshared data size
0 average unshared stack size
298598 page reclaims
I agree with Mark's excellent answer, but just wanted to also say that the TIFF format you use can make a big difference.
Regular strip TIFFs don't really support random access, but tiled TIFFs do. For example, here's a 10k x 10k pixel strip TIFF:
$ vips copy wtc.jpg wtc.tif
$ time vips crop wtc.tif x.tif 8000 8000 100 100 --vips-leak
real 0m0.323s
user 0m0.083s
sys 0m0.185s
memory: high-water mark 230.80 MB
Here the TIFF reader has to scan almost the whole image to get to the bit it needs, causing relatively high memory use.
If you try again with a tiled image:
$ vips copy wtc.jpg wtc.tif[tile]
$ time vips crop wtc.tif x.tif 8000 8000 100 100 --vips-leak
real 0m0.032s
user 0m0.017s
sys 0m0.014s
memory: high-water mark 254.39 KB
Now it can just seek and read out the part it needs.
You may not have control over the details of the image format, of course, but if you do, you'll find that for this kind of operation tiled images are dramatically faster and need much less memory.
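If you are not sure which kind you have, libtiff can tell you directly whether a file is tiled or stripped; a small C check (compile and link with -ltiff):

#include <stdio.h>
#include <tiffio.h>

int main(int argc, char **argv)
{
    TIFF *tif = TIFFOpen(argv[1], "r");
    if (tif == NULL) return 1;
    printf("%s is %s\n", argv[1], TIFFIsTiled(tif) ? "tiled" : "stripped");
    TIFFClose(tif);
    return 0;
}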

YUV422 Packed format scaling

I am writing a scaling algorithm for YUV422 packed-format images (without any intermediate conversion to RGB or grayscale). As documented on MSDN, the 4:2:2 format has 2 luma bytes for each chroma byte. My test bench procures images from the iSight camera using OpenCV APIs, converts them to YUV (CV_BGR2YUV), and then resizes them. The questions I have are:
I am posting sample data (from the OpenCV Mat's raw data pointer) below, straight from a memory dump. How do I identify, just by looking at the data, which bytes are the Y components and which are the U/V components?
15 8B 7A 17 8A 7A 18 8A 7B 17 89 7A 19 89 79 19
Is this bilinear interpolation algorithm correct? Let's say my box is:
TOP ROW: Y00, U00, Y01, V00, Y02, U01, Y03, V01,
BOTTOM ROW: Y10, U10, Y11, V10, Y12, U11, Y13, V11,
Result is interpolation of: (Y00, Y01, Y10, Y11), (U00, U01, U10, U11), (Y02, Y03, Y12, Y13), (U00, U01, U10, U11).
That forms my first two YUYV pixels of 32 bits.
Any references to principles of performing bilinear interpolation on YUYV images would be very helpful! Thanks in advance.
[EDIT]: Please note that this question is somewhat different from the post linked below, which does not discuss the effects of additive operations on the YUV images; it just discards pixels to downsize: Resize (downsize) YUV420sp image
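For what it's worth, here is a C sketch of the 2x2 averaging described above, producing one output YUYV macropixel (4 bytes) from the two 8-byte rows listed in the question. For an exact 2:1 downsize, bilinear interpolation reduces to this box average; note that the fourth average should use the V samples (the list in the question repeats the U group, presumably a typo):

#include <stdint.h>

/* 'top' and 'bot' each point at 8 packed bytes:
   Y00 U00 Y01 V00 Y02 U01 Y03 V01 (top row; likewise for the bottom row). */
static void yuyv_halve_block(const uint8_t *top, const uint8_t *bot,
                             uint8_t *out)
{
    out[0] = (top[0] + top[2] + bot[0] + bot[2] + 2) / 4; /* Y00,Y01,Y10,Y11 */
    out[1] = (top[1] + top[5] + bot[1] + bot[5] + 2) / 4; /* U00,U01,U10,U11 */
    out[2] = (top[4] + top[6] + bot[4] + bot[6] + 2) / 4; /* Y02,Y03,Y12,Y13 */
    out[3] = (top[3] + top[7] + bot[3] + bot[7] + 2) / 4; /* V00,V01,V10,V11 */
}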

C++ TIFF (raw) to JPEG: Faster than ImageMagick?

I need to convert many TIFF images to JPEG per second. Currently I'm using libmagick++ (Q16). I'm in the process of compiling ImageMagick Q8, as I read that it may improve performance (especially since I'm only working with 8-bit images).
CImg also looks like a good option, and GraphicsMagick claims to be faster than ImageMagick. I haven't tested either of those yet, but I was wondering if there are any other alternatives that could be faster than ImageMagick Q8.
I'm looking for a Linux only solution.
UPDATE with GraphicsMagick & ImageMagick Q8
Base comparison (see comment to Mark): 0.2 secs with ImageMagick Q16
I successfully compiled GraphicsMagick with Q8, but in the end it came out about 30% slower than ImageMagick (0.3 secs).
After compiling ImageMagick with Q8, there was a gain of about 25% (0.15 secs). Nice :)
UPDATE with VIPS
Thanks to Mark's post, I gave VIPS a try, using the 7.38 version found in the Ubuntu Trusty repositories:
time vips copy input.tiff output.jpg[Q=95]
real 0m0.105s
user 0m0.130s
sys 0m0.038s
Very nice :)
I also tried the 7.42 version (from ppa:dhor/myway), but it seems slightly slower:
real 0m0.134s
user 0m0.168s
sys 0m0.039s
I will try to compile VIPS from source and see if I can beat that time. Well done Mark!
UPDATE: with VIPS 8.0
Compiled from source, vips-8.0 gets practically the same performance as 7.38:
real 0m0.100s
user 0m0.137s
sys 0m0.031s
Configure command:
./configure CC=c99 CFLAGS=-O2 --without-magick --without-OpenEXR --without-openslide --without-matio --without-cfitsio --without-libwebp --without-pangoft2 --without-zip --without-png --without-python
I have a few thoughts...
Thought 1
If your input images are 15MB and, for argument's sake, your output images are 1MB, you are already using 80MB/s of disk bandwidth to process 5 images a second - which is already around 50% of what a sensible disk might sustain. I would run a little experiment with a RAMdisk to see if that might help, or an SSD if you have one.
Thought 2
Try experimenting with VIPS from the command line to convert your images. I benchmarked it like this:
# Create dummy input image with ImageMagick
convert -size 3288x1152! xc:gray +noise gaussian -depth 8 input.tif
# Check it out
ls -lrt
-rw-r--r--@ 1 mark staff 11372808 28 May 11:36 input.tif
identify input.tif
input.tif TIFF 3288x1152 3288x1152+0+0 8-bit sRGB 11.37MB 0.000u 0:00.000
Convert to JPEG with ImageMagick
time convert input.tif output.jpg
real 0m0.409s
user 0m0.330s
sys 0m0.046s
Convert to JPEG with VIPS
time vips copy input.tif output.jpg
real 0m0.218s
user 0m0.169s
sys 0m0.036s
Mmm, seems a good bit faster. YMMV of course.
Thought 3
Depending on the result of your disk-speed test, if the disk is not the limiting factor, consider using GNU Parallel to process more than one image at a time if you have a quad-core CPU. It is pretty simple to use, and I have always had excellent results with it.
For example, here I sequentially process 32 TIFF images created as above:
time for i in {0..31} ; do convert input-$i.tif output-$i.jpg; done
real 0m11.565s
user 0m10.571s
sys 0m0.862s
Now I do exactly the same with GNU Parallel, processing 16 images at a time:
time parallel -j16 convert {} {.}.jpg ::: *tif
real 0m2.458s
user 0m15.773s
sys 0m1.734s
So, that's now 13 images per second, rather than 2.7 per second.

OpenCV 2.4.5 unable to load TIFF image file properly in Windows

I have a few TIFF images that, with imread or cvLoadImage, give null data on Windows, whereas the same files are processed fine on an Ubuntu installation.
Running ImageMagick's identify returns:
$identify 60018969.tif
60018969.tif[0] TIFF 1696x2192 1696x2192+0+0 8-bit Grayscale DirectClass 516KB 0.000u 0:00.040
60018969.tif[1] TIFF 1696x2192 1696x2192+0+0 8-bit Grayscale DirectClass 516KB 0.000u 0:00.030
60018969.tif[2] TIFF 1696x2376 1696x2376+0+0 8-bit Grayscale DirectClass 516KB 0.000u 0:00.019
60018969.tif[3] TIFF 1696x2376 1696x2376+0+0 8-bit Grayscale DirectClass 516KB 0.000u 0:00.019
identify: 60018969.tif: wrong data type 4 for "JpegProc"; tag ignored. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/706.
After a lot of Google searching: OpenCV can read uncompressed TIFF images, but compressed TIFFs need libtiff. I tried re-installing/re-configuring OpenCV many times but could not find a way to load compressed TIFF images.
The same image is processed fine by an Ubuntu installation of OpenCV 2.4.9. Which codec do I need to build into OpenCV's libtiff on Windows, and how?
Please help.
EDIT: In the meantime, I tried reading the file using the libtiff library directly, and it failed with the error "deprecated and troublesome old-style jpeg compression mode, please convert to new-style jpeg compression". I made sure that libtiff had the following codecs installed:
Support for external codecs:
ZLIB support: yes
Pixar log-format algorithm: yes
JPEG support: yes
Old JPEG support: yes
JPEG 8/12 bit dual mode: no
ISO JBIG support: yes
LZMA2 support: no
C++ support: yes
OpenGL support: no
The example file as asked
You should be aware that "old-style JPEG" TIFFs are an extension that was glommed onto TIFF by Microsoft (IIRC) without consulting anyone, and rather poorly at that. Within all TIFF compression schemes there are "interesting" cases that are non-spec-compliant and more or less hacked into various codecs for compatibility, but old-style JPEG is the king of all kings in that regard, and the crap that I've seen (I work for a company that makes a TIFF codec) would turn your hair white. I know for a fact that we can read at least a half-dozen classes of old-style JPEG TIFFs that libtiff cannot.
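Since ImageMagick's identify could at least parse the file above (with a warning), one workaround worth trying is to let ImageMagick re-encode the TIFF so that OpenCV/libtiff can read the result. A MagickWand sketch, assuming the file name from the question (untested against old-style JPEG specifically):

#include <MagickWand/MagickWand.h>

int main(void)
{
    MagickWandGenesis();
    MagickWand *wand = NewMagickWand();

    if (MagickReadImage(wand, "60018969.tif") == MagickFalse)
        return 1;

    /* Rewrite every page with Zip (deflate) compression. */
    MagickResetIterator(wand);
    while (MagickNextImage(wand) != MagickFalse)
        MagickSetImageCompression(wand, ZipCompression);
    MagickWriteImages(wand, "60018969-new.tif", MagickTrue);

    DestroyMagickWand(wand);
    MagickWandTerminus();
    return 0;
}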
