As the title says, how can I determine in OpenCV whether a particular pixel of an image (either grayscale or color) is saturated (for instance, excessively bright)?
Thank you in advance.
By definition, saturated pixels are those whose intensity (i.e. either the grayscale value or one of the color components) equals 255. If you prefer, you can also use a threshold smaller than 255, such as 240 or any other value.
Unfortunately, using only the image, you cannot easily distinguish pixels which are much too bright from pixels which are just a little too bright.
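In OpenCV terms, a simple way to do this is to build a mask of the pixels that reach the threshold. Here is a minimal sketch, assuming an 8-bit BGR input and a placeholder file name; for a grayscale image the single comparison img >= thresh is enough:

#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.png");  // 8-bit BGR (placeholder name)
    const int thresh = 255;                 // or e.g. 240 for "almost saturated"

    // Take the per-pixel maximum over the three channels...
    std::vector<cv::Mat> ch;
    cv::split(img, ch);
    cv::Mat maxCh;
    cv::max(ch[0], ch[1], maxCh);
    cv::max(maxCh, ch[2], maxCh);

    // ...so a pixel counts as saturated if ANY channel reaches the threshold.
    cv::Mat saturatedMask = (maxCh >= thresh);  // 255 where saturated, 0 elsewhere

    std::printf("%d saturated pixels\n", cv::countNonZero(saturatedMask));
    return 0;
}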
Related
I want to calculate the perceived brightness of an image and classify it as dark, neutral, or bright. And I have found one problem here!
I quote Lakshmi Narayanan's comment below. I'm confused by this method: what does "the average of the hist values from 0th channel" mean here? Does the 0th channel refer to the gray image or to the value channel of the HSV image? Moreover, what is the theory behind this method?
Well, for such a case, I think HSV would be better. Or try this method #2vision2: compute the Laplacian of the grayscale image and obtain the max value using minMaxLoc; call it maxval. Estimate your sharpness/brightness index as (maxval * average V channel value) / (average of the hist values from the 0th channel), as said above. This will give you certain values: low-brightness images are usually below 30, 30-50 can be taken as OK images, and above 50 as bright images.
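For reference, a literal transcription of those steps might look like the sketch below. Note that it has to pick an interpretation of the ambiguous part: it assumes the "0th channel" means the hue plane of the HSV image, which is exactly the point the question asks about.

#include <vector>
#include <opencv2/opencv.hpp>

double brightnessIndex(const cv::Mat& bgr) {
    // Laplacian of the grayscale image, and its maximum via minMaxLoc.
    cv::Mat gray, lap;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::Laplacian(gray, lap, CV_64F);
    double minVal, maxVal;
    cv::minMaxLoc(lap, &minVal, &maxVal);

    // Average of the V channel in HSV.
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    double avgV = cv::mean(ch[2])[0];

    // Histogram of channel 0 (assumed: hue), then the average bin height.
    int histSize = 180;                  // OpenCV hue range is 0-179
    int channels[] = {0};
    float range[] = {0, 180};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
    double avgHist = cv::mean(hist)[0];

    return (maxVal * avgV) / avgHist;    // the index described in the comment
}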
If you have an RGB color image, you can get the brightness by converting it to another color space that separates color from intensity information, such as HSV or LAB.
Gray images already show local "brightness" so no conversion is necessary.
Whether an image is perceived as bright depends on many things: mainly your display device, reference images, contrast, and the human observer.
Using a few intensity statistics values should give you an ok classification for one particular display device.
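As a concrete example of the "few intensity statistics" idea, here is a minimal sketch that classifies by the mean of the V channel; the cut-offs 85 and 170 are arbitrary placeholders you would tune for your display device and image set:

#include <string>
#include <vector>
#include <opencv2/opencv.hpp>

std::string classifyBrightness(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    double meanV = cv::mean(ch[2])[0];  // average brightness, 0-255

    if (meanV < 85)  return "dark";     // thresholds are placeholders to tune
    if (meanV < 170) return "neutral";
    return "bright";
}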
I would like to subtract one color from another. For example, I have two 100x100-pixel images, one with color R:236 G:226 B:43 and another with R:63 G:85 B:235. I would like to subtract the color R:236 G:226 B:43 from R:63 G:85 B:235. But I know it can't be done with plain per-channel arithmetic (R:236-63, G:226-85, B:43-235), because values below 0 or above 255 are undefined.
I also found the RYB color space, but I don't know how it really works.
Thank you for your help.
You cannot actually subtract colors, but you can certainly detect their difference. I suppose this is what you need, anyway.
Here are some thoughts and remarks:
- Convert your images to the HSV colorspace, which transforms RGB values to Hue, Saturation and Brightness (Value).
- All your images should be around a yellowish color (near 60 deg. on the Hue circle), so they should all have about the same Hue, with minor differences.
- Typically, if all images are taken under constant lighting conditions, they should have about the same Value (brightness).
- Saturation typically represents how intense you perceive a color to be; it corresponds to how little white is mixed into the color (a fully saturated color contains no white). It should also be about the same for all your images under constant lighting conditions.
According to your first description, the main difference should be detected in the Hue channel.
A good thing about HSV is that H (hue) is an angular position on a color circle, and colors are just positions on this circle, so both positive and negative differences make sense (search Google for a description of the HSV colorspace to see how it looks and works).
You may either detect differences by a subtraction, which gives you a value that is either positive or negative, or take the absolute value of the subtraction, which gives a measure of the difference between the two Hue values but discards the direction of the difference. If you need the direction of the difference, stick to a plain subtraction.
For example:
Hue_1 - Hue_2 = Hue_3 (typically a small value for your problem)
if Hue_3 > 0, this means that Hue_1 is a bit towards Green
if Hue_3 < 0, this means that Hue_1 is a bit towards Red
Of course, you may also need to look at the differences in the other channels, S and V, to see whether the colors are more saturated or brighter, but I cannot be sure you need to do this since we haven't seen any images here.
Of course, you can also do more sophisticated things, like applying clustering or classification techniques to the detected hues and assigning them to classes according to your problem's needs...
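To make the subtraction idea concrete, here is a sketch of a signed hue comparison between two images. It assumes 8-bit images, where OpenCV stores hue as 0-179, and that both images sit near the same hue (as in the yellowish example above), so a plain arithmetic mean of the hue plane is acceptable:

#include <vector>
#include <opencv2/opencv.hpp>

double signedHueDifference(const cv::Mat& bgr1, const cv::Mat& bgr2) {
    cv::Mat hsv1, hsv2;
    cv::cvtColor(bgr1, hsv1, cv::COLOR_BGR2HSV);
    cv::cvtColor(bgr2, hsv2, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> ch1, ch2;
    cv::split(hsv1, ch1);
    cv::split(hsv2, ch2);

    // Mean hue of each image (fine away from the 0/179 wrap-around).
    double hue1 = cv::mean(ch1[0])[0];
    double hue2 = cv::mean(ch2[0])[0];

    // Signed difference on the hue circle, mapped into (-90, 90].
    double diff = hue1 - hue2;
    if (diff > 90)   diff -= 180;
    if (diff <= -90) diff += 180;
    return diff;  // the sign tells you which way round the circle hue1 lies
}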
I have an image and I want to detect a blue rectangle in it. My teacher told me to:
convert it to the HSV color model
define a threshold to turn it into a binary image containing the color we want to detect
So why do we do that? Why don't we threshold the RGB image directly?
Thanks for any answer.
You can find the answer to your question here
The basic summary is that HSV is better suited for object detection.
OpenCV usually captures images and videos in 8-bit, unsigned-integer, BGR format. In other words, a captured image can be considered as 3 matrices, BLUE, GREEN and RED, with integer values ranging from 0 to 255.
[Image: how a BGR image is formed]
In the above image, each small box represents a pixel of the image. In real images, these pixels are so small that the human eye cannot differentiate them.
Usually one might think that the BGR color space is more suitable for color-based segmentation, but HSV is actually the most suitable color space for color-based image segmentation. So, in the above application, I converted the color space of the original video image from BGR to HSV.
The HSV color space consists of 3 matrices: 'hue', 'saturation' and 'value'. In OpenCV, the value ranges for 'hue', 'saturation' and 'value' are 0-179, 0-255 and 0-255 respectively. 'Hue' represents the color, 'saturation' represents the purity of the color (the lower the saturation, the more white is mixed in), and 'value' represents its brightness (the lower the value, the more black is mixed in).
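For example, the two steps from the question might look like this; the hue bounds 100-130 are a rough guess for "blue" in OpenCV's 0-179 hue range, and both they and the S/V floors should be tuned for your images:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.png");  // placeholder file name
    cv::Mat hsv, mask;

    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);           // step 1: convert to HSV
    cv::inRange(hsv, cv::Scalar(100, 50, 50),
                     cv::Scalar(130, 255, 255), mask);   // step 2: binary mask of "blue"

    cv::imwrite("blue_mask.png", mask);
    return 0;
}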
According to http://en.wikipedia.org/wiki/HSL_and_HSV#Use_in_image_analysis :
Because the R, G, and B components of an object’s color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant.
Also some good info here
The HSV color space abstracts color (hue) by separating it from saturation and pseudo-illumination. This makes it practical for real-world applications such as the one you have provided.
R, G and B in RGB are all correlated with the color's luminance (what we loosely call intensity), i.e., we cannot separate color information from luminance. HSV, or Hue-Saturation-Value, is used to separate image luminance from color information. This makes things easier when we are working with, or need, the luminance of the image/frame. HSV is also used in situations where color description plays an integral role.
Cheers
I have an image which is multi-colored.
I want to calculate the dominant color of the image. The dominant color is red, and I want to filter the red out. I am using the following code in OpenCV, but it is not doing what I want:
inRange(input_image, Scalar(0, 0, 0), Scalar(0, 0, 255), output);
How can I get the dominant color otherwise? My final project should determine the dominant color of the object on its own. What is the best method for this?
You should quantize your image (reduce its number of colors) before searching for the most frequent color.
Why? Imagine an image that has 100 pixels of (0,0,255) (blue in RGB), 100 pixels of (0,0,254) (almost blue - you won't even see the difference) and 150 pixels of (0,255,0) (green). What is the most frequent color here? Obviously, it's green. But after quantization you will get 200 pixels of blue and 150 pixels of green, so blue is correctly identified as dominant.
Read this discussion: How to reduce the number of colors in an image with OpenCV?. Here's a simple example:
int coef = 200;                // quantization step (tune to taste)
Mat quantized = img / coef;    // integer division merges nearby colors into one bucket
quantized = quantized * coef;  // scale back up so the image is viewable again
And this is what I've got after applying it:
You can also use k-means or mean-shift to do this (these are much more efficient ways).
The best method is by analyzing histograms.
Your problem is a classical "find the peak and the area under the peak" problem. Take the histogram of the image (let's say we use only the third channel for simplicity):
You will have to find the highest peak in that histogram. The easiest method is to simply query the X for which Y is maximized. More advanced methods work with windows - they average the Y-values of 10 consecutive data points, etc.
Also, work in the HSV or YCrCb color space. HSV is good because the "Hue" channel translates very closely to what you mean by "Color". RGB is really not well suited for image analysis.
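A minimal sketch of this histogram-peak approach, using the hue channel and the simple "X for which Y is maximized" query (the file name is a placeholder):

#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.png");
    cv::Mat hsv;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);

    int histSize = 180;        // one bin per hue value (0-179)
    int channels[] = {0};      // the hue channel
    float range[] = {0, 180};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    // Find the highest peak: the bin (hue) with the largest count.
    double maxCount;
    cv::Point maxBin;
    cv::minMaxLoc(hist, nullptr, &maxCount, nullptr, &maxBin);
    std::printf("dominant hue: %d (count %.0f)\n", maxBin.y, maxCount);
    return 0;
}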
I split a color image into 3 channels and applied contrast enhancement to each channel.
Then I merged them back together. I like the resulting image, but it has different colors.
Black objects became yellow and so on...
EDIT:
The algorithm I used calculates the 5th and 95th percentiles as the min and max values, and then stretches the image values so that those percentiles map to 0 and 255. If there is a better approach, please tell me.
When doing contrast enhancement in color images, it is a good idea to only adjust the luminance (brightness) and leave the color information alone. This requires a colorspace conversion from RGB to something like YUV. In this colorspace, the Y component is similar to a grayscale version of the image, while the other components provide the color. This effectively allows you to adjust contrast (by running your algorithm on just the Y component) without distorting the color information. Finally, you can convert back to RGB.
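Here is a sketch of the question's percentile-stretch algorithm applied only to the luminance plane, as suggested; it uses YCrCb (YUV or LAB would work the same way):

#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

cv::Mat stretchLuminance(const cv::Mat& bgr) {
    cv::Mat ycrcb;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);

    std::vector<cv::Mat> ch;
    cv::split(ycrcb, ch);
    cv::Mat& y = ch[0];  // luminance plane; Cr/Cb stay untouched

    // 5th and 95th percentiles of Y, via a sorted copy of the pixel values.
    cv::Mat flat = y.reshape(1, 1).clone();
    cv::sort(flat, flat, cv::SORT_EVERY_ROW | cv::SORT_ASCENDING);
    int n = flat.cols;
    int lo = flat.at<uchar>(0, static_cast<int>(n * 0.05));
    int hi = flat.at<uchar>(0, static_cast<int>(n * 0.95));

    // Map [lo, hi] to [0, 255]; convertTo saturates values outside the range.
    double alpha = 255.0 / std::max(1, hi - lo);
    y.convertTo(y, CV_8U, alpha, -alpha * lo);

    cv::merge(ch, ycrcb);
    cv::Mat out;
    cv::cvtColor(ycrcb, out, cv::COLOR_YCrCb2BGR);
    return out;
}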
Use the CLAHE algorithm. OpenCV has an implementation of it: cv::createCLAHE().
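To keep the colors intact, CLAHE can be run on the lightness plane only; here is a sketch, with the clip limit and tile size left at common starting values worth tuning:

#include <vector>
#include <opencv2/opencv.hpp>

cv::Mat claheColor(const cv::Mat& bgr) {
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);

    std::vector<cv::Mat> ch;
    cv::split(lab, ch);

    // Equalize the L (lightness) plane only, leaving a and b untouched.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    clahe->apply(ch[0], ch[0]);

    cv::merge(ch, lab);
    cv::Mat out;
    cv::cvtColor(lab, out, cv::COLOR_Lab2BGR);
    return out;
}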