Comparing histograms with white color excluded (OpenCV)

Is there a way to compare histograms while excluding a color, for example white, so that white does not affect the comparison?

White pixels have saturation S = 0, so it is easy to keep them from being counted while building the histogram. Do the following:
Convert your image from BGR to HSV.
Split the HSV image into its three individual channels, i.e. H, S and V.
Check each pixel of the S channel; if its value is 0 (meaning S = 0), the pixel is white.
If the pixel is white, do not include its hue in the histogram; otherwise, put its hue value into the corresponding bin (the normal procedure for building a histogram).
Summary: you just need to find the white pixels by checking their saturation value, which is S = 0, and skip them.
PS: Have a look at this link to understand the HSV model.
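A minimal Python sketch of this approach, using the mask argument of cv2.calcHist instead of an explicit pixel loop (the file name and bin count below are placeholders, not taken from the question):

import cv2
import numpy as np

img = cv2.imread("input.jpg")                       # BGR image; the path is a placeholder
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Mask out pixels with S == 0 (white/grey): only saturated pixels are counted.
mask = (s > 0).astype(np.uint8) * 255

# Hue histogram over 180 bins, ignoring the masked-out pixels.
hist = cv2.calcHist([hsv], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)

Two histograms built this way can then be compared with cv2.compareHist.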

Related

Blurring image with RGB values without convolving it with a kernel

I'm using an app for face redaction that doesn't give access to its source code; it only lets me pass pixel values for the red, green and blue channels, and it then creates a matrix with those same RGB values for every pixel of the ROI. For example, if I give Red = 32, Blue = 123 and Green = 233, it assigns these RGB values to every pixel of the ROI and draws a colored patch over the face.
So I was wondering: is there a general combination of RGB values for a pixel that distorts it and makes it look blurred? I can also set the opacity value in the app.
Thanks.

How to determine if color at a certain pixel is "white"?

Given an image, how do I go about determining whether a certain pixel is "white"? Based on Wikipedia, I understand that if the RGB values are (255,255,255) the pixel is considered white, and that a lower but similar set of values, e.g. (200,200,200), would be a "darker shade of white", i.e. gray.
Should I just set a threshold of, say, 80% for each channel and mark a pixel as gray/white if its RGB values pass that condition? Are there any papers I can read for help?
Regards,
Haziq
The solution is to convert your color space from RGB to HSV. Here is a sample algorithm thread. Finally, apply a threshold on the Value (lightness) channel to keep only the bright regions.
If you simply threshold all channels at, say, 200, you are allowing Red to differ from Green and Green to differ from Blue, which means you are allowing colour into your image: all sorts of non-grey colours would still be permitted.
You need to ensure not only that Red, Green and Blue are above 200, but also that they are (roughly) equal. That way you only permit the range from grey up to white.
In the HSL model, you need the Lightness to be above, say, 80%, but also the Saturation to be zero to ensure white/grey.
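A short Python sketch of that check, assuming the 80% lightness figure from the question and a small saturation tolerance (both thresholds are illustrative, not prescribed by the answer):

import cv2
import numpy as np

img = cv2.imread("input.jpg")                  # BGR image; the path is a placeholder
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)     # OpenCV orders the channels H, L, S
h, l, s = cv2.split(hls)

# "White-ish" pixels: high Lightness (>= 80% of 255) and near-zero Saturation.
white_mask = (l >= 0.8 * 255) & (s <= 10)

def is_white(y, x):
    # True if the pixel at (row y, column x) is considered white/grey.
    return bool(white_mask[y, x])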

Convert image to grayscale with custom luminosity formula

I have images containing gray gradations and one other color. I'm trying to convert the image to grayscale with OpenCV, and I also want the colored pixels in the source image to become rather light in the output grayscale image, independently of the color itself.
The common luminosity formula is something like 0.299R + 0.587G + 0.114B according to the OpenCV docs, so it gives very different luminosities to different colors.
I suppose the solution is to set some custom weights in the luminosity formula.
Is it possible in OpenCV? Or maybe there is a better way to perform such selective desaturation?
I use Python, but it doesn't matter.
This is the perfect case for the transform() function. You can treat grayscale conversion as applying a 1x3 matrix to each pixel of the input image. The elements of this matrix are the coefficients for the blue, green and red components, respectively, since OpenCV images are BGR by default.
import cv2
import numpy as np

im = cv2.imread(image_path)
coefficients = [1, 0, 0]  # gives the blue channel all the weight
# for the standard gray conversion, coefficients = [0.114, 0.587, 0.299]
m = np.array(coefficients).reshape((1, 3))
blue = cv2.transform(im, m)
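One practical note: if you swap in your own weights, keeping their sum at or below 1 keeps the result within the 8-bit range; with an 8-bit input, cv2.transform saturates anything above 255.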
If you have a custom formula:
Load the source,
Mat src = imread(fileName, 1);
Create the gray image,
Mat gray(src.size(), CV_8UC1, Scalar(0));
Now, in a loop over rows y and columns x, access each BGR pixel of the source like
Vec3b bgrPixel = src.at<cv::Vec3b>(y, x); // the BGR vector of type cv::Vec3b, indexed in row, column order
where bgrPixel[0] is Blue, bgrPixel[1] is Green and bgrPixel[2] is Red.
Calculate the new gray pixel value using your custom equation.
Finally, set the pixel value in the gray image,
gray.at<uchar>(y, x) = customIntensityValue; // also row, column order

Why do we convert from RGB to HSV

I have an image and I want to detect a blue rectangle in it. My teacher told me to:
convert it to the HSV color model
define a threshold to turn it into a binary image containing only the color we want to detect
So why do we do that? Why don't we threshold the RGB image directly?
Thanks for any answer.
You can find the answer to your question here.
The basic summary is that HSV is better for color-based object detection:
OpenCV usually captures images and videos in 8-bit, unsigned integer, BGR format. In other words, a captured image can be considered as three matrices, BLUE, GREEN and RED, with integer values ranging from 0 to 255.
(The source illustrates how a BGR image is formed with a figure in which each small box represents one pixel; in real images these pixels are so small that the human eye cannot differentiate them.)
One might think that the BGR color space is more suitable for color-based segmentation, but the HSV color space is actually the most suitable color space for color-based image segmentation. So, in that application, the color space of the original video image is converted from BGR to HSV.
The HSV color space consists of 3 matrices: 'hue', 'saturation' and 'value'. In OpenCV, the value ranges for 'hue', 'saturation' and 'value' are respectively 0-179, 0-255 and 0-255. 'Hue' represents the color, 'saturation' represents the amount to which that color is mixed with white, and 'value' represents the amount to which that color is mixed with black.
According to http://en.wikipedia.org/wiki/HSL_and_HSV#Use_in_image_analysis :
Because the R, G, and B components of an object’s color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant.
Also some good info here
The HSV color space abstracts color (hue) by separating it from saturation and pseudo-illumination. This makes it practical for real-world applications such as the one you have provided.
R, G and B in RGB are all correlated with the color luminance (what we loosely call intensity), i.e., we cannot separate color information from luminance. HSV, or Hue Saturation Value, is used to separate image luminance from color information. This makes things easier when we are working on, or need, the luminance of the image/frame. HSV is also used in situations where color description plays an integral role.
Cheers
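As a concrete illustration of the HSV thresholding described above, here is a rough Python sketch; the blue hue range and the OpenCV 4.x findContours return signature are assumptions and will usually need tuning:

import cv2
import numpy as np

img = cv2.imread("input.jpg")                    # the path is a placeholder
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Approximate "blue" range in OpenCV's HSV (H runs 0-179); tune for your lighting.
lower_blue = np.array([100, 100, 50])
upper_blue = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)  # binary image: blue pixels become 255

# Outline the blue regions with their bounding rectangles (OpenCV 4.x return values).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)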

Compare two components by their colors

I want to compare two components by their fill colors, to decide whether they are equal or not.
I use the following algorithm: I average the RGB values as follows,
double avg1 = (comp[0].Red + comp[0].Blue + comp[0].Green) / 3;
double avg2 = (comp[1].Red + comp[1].Blue + comp[1].Green) / 3;
then compare them as follows,
double ratio = avg1 / avg2;
if (ratio > 0.8 && ratio < 1.2) {} // then they are supposed to be equal
but this way isn't accurate at all.
After searching, I found that the best way is to convert the image to HSL space and compare there,
but I can't figure out how to compare two colors.
In other words, after converting the image into HSL space, what do I do?
Help please!
Modification to the question for more clarification:
By component I mean a sequence of points, so in the averaging step I actually visit all the points, summing the RGB average of each pixel, and then average over the total number of points.
Convert to HSL and use the difference in H (hue) to group colors.
So if your question is "after converting the image into HSL space what can i do ?!" then here goes:
convert the RGB image you've loaded to HSL using cvCvtColor() with the CV_RGB2HLS flag (the HSL image should be 3-channel, naturally)
make three single-channel images (of same size) for the H, L, S channels to be separated into
cvSplit( hls, h, l, s, 0 ) to separate the HSL image into channels
Now the H image will be just like any single-channel grayscale image. So after extracting the components (do this by thresholding the RGB image; sometimes the hue-channel image looks weird :P), simply compare the hue values at the coordinates that belong to each component.
Hope this helps.
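In the modern Python API, the same steps look roughly like this sketch; the mean-hue comparison and the tolerance of 10 hue units are illustrative choices, not taken from the answer:

import cv2
import numpy as np

img = cv2.imread("input.jpg")               # BGR image; the path is a placeholder
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)  # modern equivalent of cvCvtColor with the HLS flag
h, l, s = cv2.split(hls)                    # modern equivalent of cvSplit

def mean_hue(points):
    # Average hue over a component given as a list of (row, col) points.
    return float(np.mean([h[y, x] for (y, x) in points]))

def same_color(comp_a, comp_b, tol=10):
    # Treat two components as the same color if their mean hues are close.
    # Hue wraps around at 180 in OpenCV, so take the shorter circular distance.
    diff = abs(mean_hue(comp_a) - mean_hue(comp_b))
    return min(diff, 180 - diff) <= tol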
