Finding the Hue for a pixel in an image - image-processing

I am trying to convert my rgb image to hsv. I am able to find the value and saturation but got into problem while dealing with hue. I searched for the formula for finding the hue value and got one here.
How do you get the hue of a #xxxxxx colour?
But here also the accepted answer has discussed only 3 options.
R is maximum
G is maximum
B is maximum
(So this is not a duplicate question)
But what about other cases such as
R >= G > B or
B >= G > R or
G >= B > R etc.
Clearly, in these cases there is no single value that is the maximum. So, to clear up my doubt, I searched Google and found the following page:
https://en.wikipedia.org/wiki/Hue
Here a table is given for finding the hue value, and 6 possible cases are listed. My questions are:
What are the values 2, 4 and 6 in the formula given in the table, and how are they calculated?
Why are only 6 cases possible (as shown in the table)? What about
G > B >= R or
G >= B >= R or
B >= R > G or
B >= R >= G etc.
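For what it's worth, the constants 2 and 4 in the Wikipedia table are sector offsets: hue is measured in sixths of the color circle (60 degrees each), and the offset selects the sector belonging to whichever channel is the maximum. A minimal Python sketch of that piecewise formula (assuming R, G, B in [0, 1]):

```python
def hue(r, g, b):
    """Hue in degrees [0, 360) from RGB in [0, 1], per the Wikipedia table."""
    mx, mn = max(r, g, b), min(r, g, b)
    c = mx - mn                      # chroma
    if c == 0:
        return 0.0                   # hue is undefined for greys; 0 by convention
    if mx == r:
        hp = ((g - b) / c) % 6       # red sector: offset 0
    elif mx == g:
        hp = (b - r) / c + 2         # green sector: offset 2 (i.e. 120 degrees)
    else:
        hp = (r - g) / c + 4         # blue sector: offset 4 (i.e. 240 degrees)
    return 60.0 * hp
```

Only the maximum channel selects the branch, which is why three formulas cover every ordering: cases like G > B >= R and G >= B >= R both fall under "G is maximum".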

You could let ImageMagick (installed on most Linux distros and available for OSX and Windows) tell you the answer by creating a single pixel RGB image and converting to HSL colorspace:
convert xc:"#ffffff" -colorspace hsl -format "%[pixel:p{0,0}]" info:
hsl(0%,0%,100%)
or
convert xc:"rgb(127,34,56)" -colorspace hsl -depth 8 txt:
# ImageMagick pixel enumeration: 1,1,255,hsl
0,0: (96.0784%,57.6471%,31.7647%) #F59351 hsl(96.0784%,57.6471%,31.7647%)

Related

Duplicate photoshop level command in Imagemagick

All,
How can I apply levels (10, 245, 0.95) to a picture? When these levels are applied in Photoshop and then the same operation is done in ImageMagick, I get a completely different result. What am I missing?
This question was also asked on the ImageMagick forum and answered by Fred (http://www.imagemagick.org/discourse-server/viewtopic.php?f=1&t=26023). This is his answer:
IM -level values for min and max depend upon the Q level of your IM compile. Check convert -version. I assume it will say Q16, which means that IM is expecting values in the range 0 to 65535. So you need to convert your values from the 0-255 range to the 0-65535 range, or, what I usually do, convert the values to percent and use that:
10/255 = 0.03921568627451 (x100 to get percent)
245/255 = 0.96078431372549 (x100 to get percent)
so try
-level 3.921%,96.08%,0.95
The gamma value (0.95) does not need any conversion.
See:
http://www.imagemagick.org/script/command-line-options.php#level
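Fred's 8-bit-to-percent conversion is simple enough to script; a small sketch (the helper name is mine):

```python
def level_to_percent(value, maxval=255):
    """Convert an 8-bit level value to the percent form that -level accepts."""
    return value / maxval * 100

# Build the -level argument from Photoshop-style values; gamma passes through unchanged.
low, high, gamma = 10, 245, 0.95
arg = "-level %.3f%%,%.3f%%,%s" % (level_to_percent(low), level_to_percent(high), gamma)
```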

ImageMagick's composite on HSL (not HSB nor HSV)

What I want to do is replace Photoshop's HSL-based blend modes (color/hue/saturation/luminosity) by writing a CUI tool.
Better if I can do it via RMagick.
ImageMagick can manage HSL colorspace, but ImageMagick's composite operators Colorize/Hue/Saturation/Luminize are hard-coded to be based on HSB colorspace.
Is there any workaround without writing pixel-by-pixel processing code?
Thanks.
I tried the separate-and-combine approach.
Then a story began.
ImageMagick-6.6.9-7 has a bug in its rgb<->hsl calculation.
Ubuntu 12.04 LTS's package repository provides exactly that version... grrrr
(ImageMagick itself was fixed at r4431 and is good at >= 6.6.9-9.)
Then I sit down and do the math, to obtain a simple -fx expression.
colorize_hsl.fx:
ul = u.lightness; vl = v.lightness;
bias = (ul < .5 ? ul : 1 - ul)/(vl < .5 ? vl : 1 - vl);
(v - vl)*bias + ul
That is an rgb-based formula to set new lightness and preserve its hue and saturation.
To get luminize_hsl, exchange u and v.
The temporary vars (ul, vl and bias) are common to all channels,
but the -fx engine may evaluate them 3 times, once per channel.
It's not enough...
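The -fx expression above can be transcribed per pixel in Python (my transcription; channels assumed in [0, 1], and vl equal to 0 or 1 would need a guard against division by zero):

```python
def colorize_hsl(u_rgb, v_rgb):
    """Take hue/saturation from v and lightness from u via the rgb-based formula."""
    ul = (max(u_rgb) + min(u_rgb)) / 2   # HSL lightness of u
    vl = (max(v_rgb) + min(v_rgb)) / 2   # HSL lightness of v
    # Scale v's spread around its lightness so the result lands at u's lightness.
    bias = (ul if ul < .5 else 1 - ul) / (vl if vl < .5 else 1 - vl)
    return tuple((c - vl) * bias + ul for c in v_rgb)
```

Swapping u and v gives luminize_hsl, as the answer notes.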

Chop image into tiles using VIPS command-line

I have a large Tiff image that I want to chop into 512x512 tiles and write to disk.
In the past I've used ImageMagick like so:
convert -crop 512x512 +repage image_in.tif image_out_%d.tif
But recently this hasn't been working, processes running out of memory, etc.
Is there a similar command in VIPS? I know there's a CLI but I can't find an example or useful explanation in the documentation, and I'm still trying to figure out the nip2 GUI thing. Any help appreciated. :)
libvips has an operator which can do this for you very quickly. Try:
$ vips dzsave wtc.tif outdir --depth one --tile-size 512 --overlap 0 --suffix .tif
That's the DeepZoom writer making a depth 1 pyramid of tif tiles. Look in outdir_files/0 for the output tiles. There's a chapter in the docs talking about how to use dzsave.
It's a lot quicker than IM for me:
$ time convert -crop 512x512 +repage huge.tif x/image_out_%d.tif
real 0m5.623s
user 0m2.060s
sys 0m2.148s
$ time vips dzsave huge.tif x --depth one --tile-size 512 --overlap 0 --suffix .tif
real 0m1.643s
user 0m1.668s
sys 0m1.000s
Where huge.tif is a 10,000 by 10,000 pixel uncompressed RGB image. Plus it'll process any size image in only a small amount of memory.
I am running into the same issue. It seems that VIPS does not have a built-in command like the one from imagemagick above, but you can do this with some scripting (Python code snippet; tile_size, tiles_per_row, zoom and base_image_name are assumed to be defined):
import os

for x in xrange(0, tiles_per_row):  # use range() on Python 3
    xoffset = x * tile_size
    for y in xrange(0, tiles_per_row):
        yoffset = y * tile_size
        filename = "%d_%d_%d.png" % (zoom, x, y)
        command = "vips im_extract_area %s %s %d %d %d %d" % (base_image_name, filename, xoffset, yoffset, tile_size, tile_size)
        os.system(command)
However you won't get the same speed as with imagemagick cropping...

ImageMagick. What is the correct way to dice an image into sub-tiles

What is the correct way to dice an image into N x N sub-tile images?
Thanks,
Doug
Thanks,
Actually I futzed a bit and came up with the correct imagemagick incantations.
Here's the tcsh version.
Dice an image into a 4 x 4 grid (resultant images numbered sequentially). The numbering is interpreted as: col + row * nrows:
$ convert -crop 25%x25% image.png tile-prefix.png
Often it is desirable to remap the sequential numbering to row x column. For example if you are using CATiledLayer in an iOS app and will need to ingest the correct tiles for a given scale. Here's how:
set i = 0
while ( $i < $number_of_tiles )
    set r = `expr $i \/ 4`
    set c = `expr $i \% 4`
    cp tile-prefix-$i.png tile-prefix-${r}x${c}.png
    @ i++
end
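The same sequential-to-row-x-column remap can be sketched in Python (assuming the same 4-wide grid and tile-prefix naming; the helper names are mine):

```python
import shutil

def tile_name(i, prefix="tile-prefix", grid=4):
    """Row-x-column filename for the sequentially numbered tile i."""
    r, c = divmod(i, grid)           # row = i // grid, col = i % grid
    return "%s-%dx%d.png" % (prefix, r, c)

def remap_tiles(number_of_tiles, prefix="tile-prefix"):
    """Copy each sequentially numbered tile to its row-x-column name."""
    for i in range(number_of_tiles):
        shutil.copy("%s-%d.png" % (prefix, i), tile_name(i, prefix))
```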

Converting RGB to grayscale/intensity

When converting from RGB to grayscale, it is said that specific weights to channels R, G, and B ought to be applied. These weights are: 0.2989, 0.5870, 0.1140.
It is said that the reason for this is different human perception/sensibility towards these three colors. Sometimes it is also said these are the values used to compute NTSC signal.
However, I didn't find a good reference for this on the web. What is the source of these values?
See also these previous questions: here and here.
The specific numbers in the question are from CCIR 601 (see Wikipedia article).
If you convert RGB -> grayscale with slightly different numbers / different methods, you won't see much difference at all on a normal computer screen under normal lighting conditions -- try it.
Here are some more links on color in general:
Wikipedia: Luma
Bruce Lindbloom's outstanding web site
Chapter 4 on Color in Colin Ware, "Information Visualization", ISBN 1-55860-819-2; this long link to Ware on books.google.com may or may not work
cambridgeincolor: excellent, well-written "tutorials on how to acquire, interpret and process digital photographs using a visually-oriented approach that emphasizes concept over procedure"
Should you run into "linear" vs "nonlinear" RGB, here's part of an old note to myself on this. To repeat: in practice you won't see much difference.
### RGB -> ^gamma -> Y -> L*
In color science, the common RGB values, as in html rgb( 10%, 20%, 30% ),
are called "nonlinear" or
Gamma corrected.
"Linear" values are defined as
Rlin = R^gamma, Glin = G^gamma, Blin = B^gamma
where gamma is 2.2 for many PCs.
The usual R G B are sometimes written as R' G' B' (R' = Rlin ^ (1/gamma))
(purists tongue-click) but here I'll drop the '.
Brightness on a CRT display is proportional to RGBlin = RGB ^ gamma,
so 50% gray on a CRT is quite dark: .5 ^ 2.2 = 22% of maximum brightness.
(LCD displays are more complex;
furthermore, some graphics cards compensate for gamma.)
To get the measure of lightness called L* from RGB,
first divide R G B by 255, and compute
Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma
This is Y in XYZ color space; it is a measure of color "luminance".
(The real formulas are not exactly x^gamma, but close;
stick with x^gamma for a first pass.)
Finally,
L* = 116 * Y^(1/3) - 16
"... aspires to perceptual uniformity [and] closely matches human perception of lightness." --
Wikipedia Lab color space
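The RGB -> Y -> L* pipeline above, as a Python sketch (using the simplified x^gamma approximation the note suggests for a first pass; the exact CIE formulas add a linear segment near black):

```python
def rgb_to_lstar(r8, g8, b8, gamma=2.2):
    """Approximate CIE L* (0..100) from 8-bit nonlinear RGB values."""
    r, g, b = (v / 255.0 for v in (r8, g8, b8))
    # Linearize each channel, then take the luminance-weighted sum to get Y.
    y = 0.2126 * r**gamma + 0.7152 * g**gamma + 0.0722 * b**gamma
    return 116 * y ** (1.0 / 3.0) - 16
```

For 50% grey (128, 128, 128) this gives roughly L* = 54, illustrating the nonuniformity: half the RGB code range is well above half of perceptual lightness.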
I found this publication referenced in an answer to a previous similar question. It is very helpful, and the page has several sample images:
Perceptual Evaluation of Color-to-Grayscale Image Conversions by Martin Čadík, Computer Graphics Forum, Vol 27, 2008
The publication explores several other methods to generate grayscale images with different outcomes:
CIE Y
Color2Gray
Decolorize
Smith08
Rasche05
Bala04
Neumann07
Interestingly, it concludes that there is no universally best conversion method, as each performed better or worse than others depending on input.
Here's some code in C to convert RGB to grayscale.
The weighting commonly cited for rgb-to-grayscale conversion is approximately 0.3R + 0.59G + 0.11B.
These weights aren't absolutely critical, so you can play with them.
I have made them 0.25R + 0.5G + 0.25B. It produces a slightly darker image.
NOTE: The following code assumes xRGB 32bit pixel format
unsigned int *pntrBWImage = (unsigned int*)..data pointer..; // assumes 4*width*height bytes, i.e. 4 bytes (32 bits) per pixel
unsigned char *I_Out = ..output buffer..;                    // width*height bytes, one grey byte per pixel
unsigned int fourBytes;
unsigned char r, g, b;
for (int index = 0; index < width * height; index++)
{
    fourBytes = pntrBWImage[index];      // caches 4 bytes at a time
    r = (unsigned char)(fourBytes >> 16);
    g = (unsigned char)(fourBytes >> 8);
    b = (unsigned char)fourBytes;
    I_Out[index] = (r >> 2) + (g >> 1) + (b >> 2); // 0.25R + 0.5G + 0.25B; runs in 0.00065s on my pc, slightly darker results
    //I_Out[index] = ((unsigned int)(r + g + b)) / 3; // pure average; runs in 0.0011s on my pc
}
Check out the Color FAQ for information on this. These values come from the standardization of RGB values that we use in our displays. Actually, according to the Color FAQ, the values you are using are outdated, as they are the values used for the original NTSC standard and not modern monitors.
What is the source of these values?
The "source" of the coefficients posted are the NTSC specifications which can be seen in Rec601 and Characteristics of Television.
The "ultimate source" are the CIE circa 1931 experiments on human color perception. The spectral response of human vision is not uniform. Experiments led to weighting of tristimulus values based on perception. Our L, M, and S cones1 are sensitive to the light wavelengths we identify as "Red", "Green", and "Blue" (respectively), which is where the tristimulus primary colors are derived.2
The linear light3 spectral weightings for sRGB (and Rec709) are:
Rlin * 0.2126 + Glin * 0.7152 + Blin * 0.0722 = Y
These are specific to the sRGB and Rec709 colorspaces, which are intended to represent computer monitors (sRGB) or HDTV monitors (Rec709), and are detailed in the ITU documents for Rec709 and also BT.2380-2 (10/2018)
FOOTNOTES
(1) Cones are the color detecting cells of the eye's retina.
(2) However, the chosen tristimulus wavelengths are NOT at the "peak" of each cone type - instead, tristimulus values are chosen such that they stimulate one particular cone type substantially more than another, i.e. separation of stimulus.
(3) You need to linearize your sRGB values before applying the coefficients. I discuss this in another answer here.
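The linearization that footnote 3 refers to can be sketched as follows, using the standard sRGB transfer function together with the Rec709 coefficients given above:

```python
def srgb_to_linear(c):
    """Invert the sRGB transfer function; c is a nonlinear channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """Relative luminance Y from sRGB-encoded r, g, b in [0, 1]."""
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
```

Applying the coefficients directly to the nonlinear values (as many quick grayscale conversions do) yields luma rather than luminance; for display purposes the difference is often tolerable, which is the point made earlier in the thread.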
Starting a list to enumerate how different software packages do it. Here is a good CVPR paper to read as well.
FreeImage
#define LUMA_REC709(r, g, b) (0.2126F * r + 0.7152F * g + 0.0722F * b)
#define GREY(r, g, b) (BYTE)(LUMA_REC709(r, g, b) + 0.5F)
OpenCV
nVidia Performance Primitives
Intel Performance Primitives
Matlab
nGray = 0.299F * R + 0.587F * G + 0.114F * B;
These values vary from person to person, especially for people who are colorblind.
Is all this really necessary? Human perception and CRT vs. LCD behavior will vary, but the R, G, B intensities do not. Why not L = (R + G + B)/3 and set the new RGB to L, L, L?
