How to mask green pixels?

I need to mask the green pixels in an image. I have an example that masks red pixels.
Here is the example:
Image<Hsv, Byte> hsv = image.Convert<Hsv, Byte>();
Image<Gray, Byte>[] channels = hsv.Split();
//channels[0] is the mask for hue less than 20 or larger than 160
CvInvoke.cvInRangeS(channels[0], new MCvScalar(20), new MCvScalar(160), channels[0]);
channels[0]._Not();
but I can't understand where these parameters were taken from:
new MCvScalar(20), new MCvScalar(160)
Any idea which parameters I have to take to mask the green pixels?
Thank you in advance.

The code masks pixels with Hue outside the range 20 - 160 (or rather, it masks pixels inside the range and then inverts the mask).
First, understand HSV (Hue, Saturation, Value): http://en.wikipedia.org/wiki/HSL_and_HSV
The actual Hue is in degrees and goes from 0 to 360 around the usual color wheel.
Then see the OpenCV documentation on the 8-bit HSV format: Hue is first calculated in the 0 - 360 range, then divided by 2 to fit into an 8-bit integer.
This means that in the original example the masked pixels have actual Hue under 40 or above 320 degrees. Apparently that's 0 degrees plus / minus 40.
For a similar range of greens you'd want 120 +/- 40, i.e. from 80 to 160 degrees. Finally, converting that to the 8-bit representation gives 40 to 80.
The actual code will differ from your sample though: for red they had to mask 20-160 and then invert the mask. For green, masking from 40 to 80 is enough (i.e. you'll have to omit the channels[0]._Not(); part).
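For illustration, here is a minimal sketch of the green mask using OpenCV's Java bindings (your Emgu CV sample would stay the same apart from using 40 and 80 and dropping the inversion); the image variable and its BGR format are assumptions:
// imports assumed: org.opencv.core.*, org.opencv.imgproc.Imgproc, java.util.*
Mat hsv = new Mat();
Imgproc.cvtColor(image, hsv, Imgproc.COLOR_BGR2HSV); // image is assumed to be a BGR Mat
List<Mat> channels = new ArrayList<>();
Core.split(hsv, channels);
Mat greenMask = new Mat();
// keep 8-bit hue 40-80, i.e. actual hue 80-160 degrees: green
Core.inRange(channels.get(0), new Scalar(40), new Scalar(80), greenMask);
In practice you'd usually also bound S and V (e.g. lower limits around 50) so that near-grey pixels don't count as green.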

Related

Number of pixels in RGB image

Can anyone tell me how many pixels are present in an RGB image? Is it height * width or height * width * channels?
I want to calculate the bits per pixel (bpp) of an image, so I need this information.
The number of pixels is simply:
height × width
It's independent of whether the color of each pixel is composed from a single channel or from several channels.
If your image has three channels, e.g. a separate one for red, green and blue, each using an 8-bit value per pixel, then you have to add them to get the bits per pixel (bpp) value. In this example, it would be:
bpp = 3 × 8 bit = 24 bit
But it does not affect the number of pixels.
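As a quick worked example (hypothetical 640 × 480 image, three 8-bit channels):
int width = 640, height = 480;           // hypothetical image dimensions
int channels = 3, bitsPerChannel = 8;    // e.g. 8-bit R, G and B
long pixels = (long) width * height;     // 307200 pixels, independent of channels
int bpp = channels * bitsPerChannel;     // 24 bits per pixel
long totalBits = pixels * bpp;           // total image data size in bits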

iOS Metal. Why does simply changing colorPixelFormat result in brighter imagery?

In Metal on iOS the default colorPixelFormat is bgra8Unorm. When I change format to rgba16Float all imagery brightens. Why?
An example (screenshots omitted):
Artwork.
MTKView with format bgra8Unorm: texture-mapped quad, texture created with SRGB=false.
MTKView with format rgba16Float: texture-mapped quad, texture created with SRGB=false.
Why is everything brighter with rgba16Float? My understanding is that SRGB=false implies that no gamma correction is done when importing the artwork; the assumption is that the artwork has no gamma applied.
What is going on here?
If your artwork has a gamma (it does per the first image you uploaded), you have to convert it to a linear gamma if you want to use it in a linear space.
What is happening here is that you are displaying gamma-encoded values of the image in a linear workspace, without using color management or a transform to convert those values.
BUT: Reading some of your comments, is the texture not an image but an .svg?? Did you convert your color values to linear space?
Here's the thing: RGB values are meaningless numbers unless you define how those RGB values relate to a given space.
#00FF00 in sRGB is a different color than #00FF00 in Adobe98 for instance. In your case you are going linear, but what primaries? Still using sRGB primaries? P3 Primaries? I'm not seeing a real hue shift, so I assume you are using sRGB primaries and a linear transfer curve for the second example.
THAT SAID, the RGB value of the top middle kid's green shirt is #8DB54F; normalized to 0-1, that's 0.553, 0.710, 0.310. These numbers by themselves don't say whether they are gamma encoded or not.
THE RELATIONSHIP BETWEEN sRGB, Y, and Light:
For the purposes of this discussion, we will assume the SIMPLE sRGB gamma of 1/2.2 and not the piecewise version. Same for L*
In sRGB, when #8DB54F is displayed on an sRGB monitor with an sRGB gamma curve, the luminance (Y) is 39.
This can be found by
(0.553^2.2)*0.2126 + (0.710^2.2)*0.7152 + (0.310^2.2)*0.0722
or 0.057 + 0.33 + 0.0061 = 0.39 and 0.39 * 100 = 39 (Y)
But if color management is told the values are linear, then the gamma correction is discarded, and (more or less):
0.553*0.2126 + 0.710*0.7152 + 0.310*0.0722
or 0.1175 + 0.5078 + 0.0223 = 0.65 and 0.65 * 100 = 65 (Y)
(Assuming the same coefficients are used.)
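As a sketch, here are the two interpretations side by side, in plain Java for illustration, using the simplified 2.2 curve; the results land near the 39 and 65 quoted above:
double r = 0.553, g = 0.710, b = 0.310; // #8DB54F normalized to 0-1
// interpreted as gamma-encoded sRGB: decode to linear first, then apply the luminance weights
double yAsSrgb = Math.pow(r, 2.2) * 0.2126 + Math.pow(g, 2.2) * 0.7152 + Math.pow(b, 2.2) * 0.0722;
// (mis)interpreted as already-linear values: apply the weights directly
double yAsLinear = r * 0.2126 + g * 0.7152 + b * 0.0722;
System.out.printf("Y as sRGB: %.0f, Y as linear: %.0f%n", yAsSrgb * 100, yAsLinear * 100);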
Luminance (Y) is linear, like light. But human perception is not, and neither are sRGB values.
Y is the linear luminance from CIE XYZ; while it is spectrally weighted based on the eye's response to different wavelengths, it is NOT uniform in terms of lightness. On a scale of 0-100, 18.4 is perceived as the middle.
L* is the perceptual lightness from CIELAB (L* a* b*); a simplified form of its curve is:
L* = Y^0.42
On a scale of 0-100, L* 50 is the "perceived middle" value. So that green shirt at Y 39 is L* 69 when interpreted and displayed as sRGB, while Y 65 is about L* 84 (those numbers are based on the math; the color picker on my MacBook shows the same values, screenshot omitted).
sRGB is a gamma-encoded signal, designed to make the best use of the limited bit depth of 8 bits per channel. The effective gamma curve is similar to human perception, so that more bits are used to define darker areas, as human perception is more sensitive to luminance changes in dark regions. As noted above, it is a simplified curve of:
sRGB_Video = Linear_Video^0.455 (And to be noted, the MONITOR adds an exponent of about 1.1)
So if 0% is black and 100% is white, then middle gray, the point most humans will say is in between 0% and 100% is:
Y 18.4% = L* 50% = sRGB 46.7%
That is, an sRGB hex value of #777777 will display a luminance of 18.4 Y, and is equivalent to a perceived lightness of 50 L*. Middle Grey.
BUT WAIT, THERE'S MORE
So here is what is happening: you are telling MTKView that you are sending it image data that contains linear values, but you are actually sending it sRGB values, which are lighter due to the applied gamma correction. Color management then takes what it thinks are linear values and transforms them to the needed values for the output display.
Color management needs to know what the values mean, what colorspace they relate to. When you set SRGB=false then you are telling it that you are sending it linear values, not gamma encoded values.
BUT you are clearly sending gamma-encoded values into a linear space without transforming/decoding the values to linear. Linearization won't happen unless you explicitly do it.
SOLUTION
Linearize the image data OR set the flag SRGB=true
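A minimal sketch of the linearization step, in plain Java for illustration (not Metal-specific); the simple 2.2 power matches the simplified curve used in this discussion, and the piecewise version is the exact sRGB decode:
// simplified decode, matching the 2.2 gamma used above
static double srgbToLinearSimple(double v) { return Math.pow(v, 2.2); }
// exact piecewise sRGB decode, for comparison
static double srgbToLinear(double v) {
    return (v <= 0.04045) ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
}
Apply that per channel (not to alpha) before handing the data to the linear rgba16Float pipeline, and the brightening goes away.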
Please let me know if you have further questions. But also, you may wish to see the Poynton Gamma FAQ or also the Color FAQ for clarification.
Also, for your grey: A linear value of 0.216 is equivalent to an sRGB (0-1) value of 0.500

What are the ranges of red in HSV?

I want to detect red objects in an image, so I convert the RGB image to HSV. In order to know the range of the red color I used the color picker on this site:
https://alloyui.com/examples/color-picker/hsv
I found out that H (Hue) falls between 0 and 10 as a lower limit and between 340 and 359 as an upper limit. I also found out that the maximum value of S (Saturation) and V (Value) is 100. But the problem is that some people say the red ranges are H 0 to 10 as the lower limit and 160 to 180 as the upper limit:
https://solarianprogrammer.com/2015/05/08/detect-red-circles-image-using-opencv/
OpenCV better detection of red color?
They also said that the maximum S and V is 255. This is the color I got when I tried to find the upper limit of red (screenshot omitted).
There are different definitions of HSV, so the values your particular conversion function gives are the ones you should use. Measuring them is the best way to know for sure.
In principle H is an angle, so it goes from 0 to 360, with red centered around 0 (and understanding that 360 == 0). But some implementations will divide that by 2 to fit it in 8 bits. Others scale to the full 0-255 range for the 8 bits.
The same is true for S and V. Sometimes they're values between 0 and 100, sometimes they go up to 255.
To measure, create an image with pure red pixels (RGB value 255,0,0) and convert it. That will give you the center of the red hue (H) and the maximum saturation (S). Then make an image that shades from orange to purple; these colors are near red, so you should then see the range of H. Finally, make a pure white image (255,255,255). This will have the maximum intensity (V).
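For instance, if your conversion function is OpenCV's, measuring pure red would look like this sketch in the Java bindings:
Mat red = new Mat(1, 1, CvType.CV_8UC3, new Scalar(0, 0, 255)); // OpenCV uses BGR order, so this is pure red
Mat hsv = new Mat();
Imgproc.cvtColor(red, hsv, Imgproc.COLOR_BGR2HSV);
double[] p = hsv.get(0, 0);
System.out.printf("H=%.0f S=%.0f V=%.0f%n", p[0], p[1], p[2]); // OpenCV reports H=0, S=255, V=255, with H in 0-179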

OpenCV - Saturated pixels

Just like the title of this topic, how can I determine in OpenCV if a particular pixel of an image (either grayscale or color) is saturated (for instance, excessively bright)?
Thank you in advance.
By definition, saturated pixels are those with an intensity (i.e. either the grayscale value or one of the color components) equal to 255. If you prefer, you can also use a threshold smaller than 255, such as 240 or any other value.
Unfortunately, using only the image, you cannot easily distinguish pixels which are much too bright from pixels which are just a little too bright.
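For example, with OpenCV's Java bindings and the relaxed threshold of 240, a simple binary threshold gives the mask of saturated pixels (for a color image you would threshold each channel separately and OR the masks); the file name is a placeholder:
Mat gray = Imgcodecs.imread("input.jpg", Imgcodecs.IMREAD_GRAYSCALE); // hypothetical file name
Mat saturated = new Mat();
// pixels with intensity >= 240 become 255 in the mask, everything else becomes 0
Imgproc.threshold(gray, saturated, 239, 255, Imgproc.THRESH_BINARY);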

Image processing techniques to make white tape on the floor stand out with OpenCV

I have the following image:
And I'd like to obtain a thresholded image where only the tape is white and the whole background is black. So far I've tried this:
Mat image = Highgui.imread("C:/bezier/0.JPG");
Mat byn = new Mat();
Imgproc.cvtColor(image, byn, Imgproc.COLOR_BGR2GRAY); // convert to grayscale
Mat thresh = new Mat();
// apply filters: slight blur, Otsu threshold, then erode to clean up
Imgproc.blur(byn, byn, new Size(2, 2));
Imgproc.threshold(byn, thresh, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Imgproc.erode(thresh, thresh, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4)));
But I obtain this image, which is far from what I want:
The tape will always be the same color (white) and width (about 2 cm). Any idea? Thanks.
Let's see what you know:
The tape has a lower contrast
The tape is lighter than the background
If you know the scale of the picture, you can run adaptive thresholds on two levels. Let's say that the width of the tape is 100 pixels:
Reject a pixel that has brightness outside of +/- x from the average brightness in the 50x50 (maybe smaller, but not larger) window surrounding it AND
Reject a pixel that has brightness smaller than y + the average brightness in the 100x100(maybe larger, but not smaller) window surrounding it.
You should also experiment a bit, trying both mean and median as definitions of "average" for each threshold.
From there on you should have a much better-defined image, and you can remove all but the largest contour (presumably the trail).
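A rough sketch of that two-level test with OpenCV's Java bindings, reusing the grayscale byn from the question; the window sizes assume the ~100 px tape width, and x and y are tuning constants you'd have to find experimentally:
int x = 12, y = 8;                               // hypothetical thresholds, tune on your image
Mat avg50 = new Mat(), avg100 = new Mat();
Imgproc.blur(byn, avg50, new Size(51, 51));      // local mean over the "50x50" window
Imgproc.blur(byn, avg100, new Size(101, 101));   // local mean over the "100x100" window
Mat diff = new Mat(), keep1 = new Mat(), keep2 = new Mat(), mask = new Mat();
Core.absdiff(byn, avg50, diff);
Imgproc.threshold(diff, keep1, x, 255, Imgproc.THRESH_BINARY_INV); // keep |I - avg| <= x
Core.subtract(byn, avg100, diff);                // saturates at 0 for pixels darker than the mean
Imgproc.threshold(diff, keep2, y, 255, Imgproc.THRESH_BINARY);     // keep I > avg + y
Core.bitwise_and(keep1, keep2, mask);            // pixels passing both tests
For the median variant of "average", swap the box filter for Imgproc.medianBlur (which takes an int kernel size instead of a Size).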
I think you are not taking advantage of the fact that the tape is white (and the floor is in a shade of brown).
Rather than converting to grayscale with cvtColor(src, dst, Imgproc.COLOR_BGR2GRAY), try using a custom operation that penalizes saturation... maybe something like converting to HSV and letting G = V * (1 - S).
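One way to realize that V * (1 - S) conversion in the Java bindings (a sketch; image is the BGR Mat loaded in the question):
Mat hsvImg = new Mat();
Imgproc.cvtColor(image, hsvImg, Imgproc.COLOR_BGR2HSV);
List<Mat> ch = new ArrayList<>();
Core.split(hsvImg, ch);                                   // ch: H, S, V as 8-bit channels
Mat s = new Mat(), v = new Mat(), custom = new Mat(), gray8 = new Mat();
ch.get(1).convertTo(s, CvType.CV_32F, 1.0 / 255);         // S scaled to 0-1
ch.get(2).convertTo(v, CvType.CV_32F);                    // V kept in 0-255
Core.subtract(Mat.ones(s.size(), CvType.CV_32F), s, s);   // 1 - S
Core.multiply(v, s, custom);                              // G = V * (1 - S)
custom.convertTo(gray8, CvType.CV_8U);                    // back to 8-bit for thresholding
The white tape keeps a high value here, while the brown floor is pulled down by its saturation.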
