I want to segment an image but someone told me that the Euclidean distance for RGB is not as good as HSV -- but for HSV, as not all H, S, V are of the same range so I need to normalize it. Is it a good idea to normalize HSV and then do clustering? If so, how should I normalize on HSV scale?
Thanks
The HSV components signify the hue, saturation and gray intensity of a pixel, and they are not correlated with each other in terms of color; each component plays its own role in defining the property of that pixel. Hue gives you information about the color itself (the wavelength, in other terms), saturation shows how much white is mixed into that color, and value is simply the magnitude (intensity) of that color. That is why the components of HSV space do not follow the same scale: hue is cyclic and can even be represented with negative values on the scale, whereas intensity (V) is never negative. Normalization therefore will not help clustering much; a better idea is to apply clustering on hue alone if you want to do color clustering.
As for why Euclidean distance is not good for multi-channel clustering: its equal-distance surfaces around the mean are spherical (circular in 2D), so it cannot distinguish between colors such as (147,175,208) and (208,175,147) when both lie at the same distance from the center. It is better to use the Mahalanobis distance, because it uses the covariance matrix of the components, which makes the equal-distance surfaces ellipsoids aligned with the data rather than spheres.
So if you want to do color segmentation in RGB space, use the Mahalanobis distance (it is computationally expensive, so it will slow down the clustering process), and if you want to do clustering in HSV space, use hue for the segmentation of colors and then use V for fine-tuning the segmentation output.
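For illustration, here is a minimal numpy sketch of that Mahalanobis idea; the pixel array is only a stand-in for whatever region you are actually clustering, so treat it as an assumption rather than a drop-in implementation:

import numpy as np

# pixels: an N x 3 array of RGB samples from the data you are clustering (placeholder)
pixels = np.random.randint(0, 256, size=(1000, 3)).astype(np.float64)

mean = pixels.mean(axis=0)              # per-channel mean color
cov = np.cov(pixels, rowvar=False)      # 3x3 covariance of the R, G, B components
cov_inv = np.linalg.inv(cov)            # inverse covariance used by the distance

def mahalanobis(color):
    # Mahalanobis distance of one RGB color from the cluster mean
    d = np.asarray(color, dtype=np.float64) - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Distances of the two example colors mentioned above
print(mahalanobis((147, 175, 208)), mahalanobis((208, 175, 147)))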
Hope it will help. Thank You
Hue is cyclic.
Do not use the mean (and thus, k-means) on such data.
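(Hue 0 and hue 179 are both red in OpenCV's scale, yet their arithmetic mean, roughly 90, is cyan.) If you still need an average hue, for example as a cluster center, the usual workaround is a circular mean. A minimal sketch, assuming OpenCV's 0-179 hue range:

import numpy as np

def circular_mean_hue(hues):
    # Circular mean of hue values given in OpenCV's 0-179 range
    angles = np.asarray(hues, dtype=np.float64) * 2.0 * np.pi / 180.0   # map 0-179 to radians
    mean_angle = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
    return (np.degrees(mean_angle) % 360.0) / 2.0                       # back to 0-179

# Two reds on either side of the wrap-around: a naive mean gives ~88 (cyan/green),
# while the circular mean gives ~178.5, i.e. red again.
print(circular_mean_hue([2, 175]))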
Firstly, you need to know why HSV is preferred over RGB in image segmentation. HSV separates color information (chroma) from image intensity or brightness level (luma), which is very useful for segmentation. For example, if you use an RGB approach on a photo with the sea as the background, there is a big chance the dominant RGB component in the sea is not blue (usually because of shadow or illumination). But if you use HSV, value is separated out and you can construct a histogram or thresholding rules using only saturation and hue.
There is a really good paper that compared both the RGB and HSV approaches, and I think it will be a good read for you -> http://www.cse.msu.edu/~pramanik/research/papers/2002Papers/icip.hsv.pdf
I am implementing a classifier that recognizes vehicle color, and I am using the 3D color histogram of the region of interest as a feature vector, computed with OpenCV's calcHist() method. Specifically, to calculate the histograms I use hist = cv.calcHist([hsv_image], [0, 1, 2], None, (8,8,8), [0, 180, 0, 256, 0, 256]). With these parameters, flattening the histogram gives a feature vector of 8x8x8 = 512 elements, and with these feature vectors the classifier works pretty well, but I'm looking to further improve the accuracy of my model. So, what I would like to know is whether there is any correlation between the number of bins and the range of the color channel values, so that I can choose the best possible number of bins.
If you want to improve accuracy, I'd suggest trying a perceptually accurate colorspace, and HSV isn't one. As color is not "real" but only a perception, it follows that using a perceptually accurate appearance model is a best practice for your application.
Perceptual Appearance Models
CIECAM02, CIECAM16, Jzazbz are pretty much state of the art, and there is ZCAM for HDR imagery, and also image appearance models such as iCAM. These might not be available in a library for OpenCV, but most aren't that difficult to implement.
CIELAB
A simpler model is CIELAB, which is part of OpenCV and is a better choice than HSV or HSL, particularly if your goal is to judge or select colors in a manner similar to human perception.
L*a*b* breaks the colors down based on human perception and the opponent process of vision. The channels are perceptual lightness, L* as a value from 0 to 100, and a* and b* which encode red/green and blue/yellow respectively, and are each nominally -128 to 127 if using signed 8bit integers.
LAB is a 3D Cartesian space, so the color difference is simply the Euclidean distance between two colors, in other words the square root of the sum of the squared differences:
ΔE = ((L*1 − L*2)² + (a*1 − a*2)² + (b*1 − b*2)²)^0.5
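As a rough OpenCV/numpy sketch of that distance (the sample colors are arbitrary; the image is converted as float32 so that L* comes out in 0-100 and a*, b* roughly in -127..127):

import cv2
import numpy as np

def bgr_to_lab(bgr):
    # Convert one 8-bit BGR color to CIELAB (L* 0-100, a*/b* about -127..127)
    pixel = np.float32([[bgr]]) / 255.0          # 1x1 float image in 0..1
    return cv2.cvtColor(pixel, cv2.COLOR_BGR2LAB)[0, 0]

def delta_e_76(bgr1, bgr2):
    # CIE76 color difference: plain Euclidean distance in L*a*b*
    return float(np.linalg.norm(bgr_to_lab(bgr1) - bgr_to_lab(bgr2)))

print(delta_e_76((208, 175, 147), (147, 175, 208)))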
Polar Version
CIELAB is also available with polar coordinates, LCh, for Lightness, Chroma, and hue.
Saturation is not available with LAB, only Chroma. The other color appearance models I mentioned above do have a saturation correlate, as well as brightness in addition to lightness. (Brightness being a different perception than lightness/darkness).
I would like to know the difference between contrast stretching and histogram equalization.
I have tried both using OpenCV and observed the results, but I still have not understood the main differences between the two techniques. Any insights would be much appreciated.
Let's define contrast first.
Contrast is a measure of the "range" of an image, i.e. how spread out its intensities are. It has many formal definitions; one famous one is Michelson's:
contrast = (Imax − Imin) / (Imax + Imin)
Contrast is strongly tied to an image's overall visual quality. Ideally, we'd like images to use the entire range of values available to them.
Contrast stretching and histogram equalisation have the same goal: making the image use the entire range of values available to it.
But they use different techniques.
Contrast stretching works like a mapping: it maps the minimum intensity in the image to the minimum value of the range (e.g. 84 ==> 0) and, in the same way, the maximum intensity in the image to the maximum value of the range (e.g. 153 ==> 255).
This is why contrast stretching is unreliable: if the image already contains even two pixels at intensities 0 and 255, the mapping changes nothing.
A more robust approach is histogram equalisation, which uses the intensity probability distribution. You can learn the steps here.
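A small side-by-side sketch of the two techniques on a grayscale image (the file name is just a placeholder):

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Contrast stretching: linearly map [min, max] of this image onto [0, 255]
lo, hi = int(img.min()), int(img.max())
stretched = ((img.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)
# (equivalent one-liner: cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX))

# Histogram equalisation: nonlinear remapping based on the cumulative distribution
equalized = cv2.equalizeHist(img)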
I came across the following points after some reading.
Contrast stretching is all about increasing the difference between the maximum intensity value in an image and the minimum one. All the rest of the intensity values are spread out between this range.
Histogram equalization is about modifying the intensity values of all the pixels in the image such that the histogram is "flattened" (in reality, the histogram can't be exactly flattened, there would be some peaks and some valleys, but that's a practical problem).
In contrast stretching, there is a one-to-one relationship between the intensity values of the source image and the target image, i.e., the original image can be restored from the contrast-stretched image.
However, once histogram equalization is performed, there is no way of getting back the original image.
In histogram equalization, you want to flatten the histogram into a uniform distribution.
In contrast stretching, you manipulate the entire range of intensity values, much like what you do in normalization.
Contrast stretching is a linear normalization that stretches an arbitrary interval of the intensities of an image and fits it to another arbitrary interval (usually the target interval is the possible minimum and maximum of the image, like 0 and 255).
Histogram equalization is a nonlinear normalization that stretches the regions of the histogram with highly abundant intensities and compresses the regions with less abundant intensities.
I think of it this way: contrast stretching broadens the histogram of the image intensity levels, so that the range of input intensities is mapped onto the full intensity range.
Histogram equalization, on the other hand, maps all of the pixels to the full range according to the cumulative distribution function, i.e. the intensity probabilities.
Contrast is the difference between maximum and minimum pixel intensity.
Both methods are used to enhance contrast; more precisely, they adjust image intensities to enhance contrast.
During histogram equalization the overall shape of the histogram changes, whereas in contrast stretching the overall shape of the histogram remains the same.
I have an image and I want to detect a blue rectangle in it. My teacher told me to:
convert it to the HSV color model
define a threshold to turn it into a binary image containing only the color we want to detect
So why do we do that? Why don't we threshold the RGB image directly?
Thanks for any answer.
You can find the answer to your question here
The basic summary is that HSV is better for color-based object detection:
OpenCV usually captures images and videos in 8-bit, unsigned integer, BGR format. In other words, captured images can be considered as 3 matrices, BLUE, GREEN and RED, with integer values ranging from 0 to 255.
[Figure: how a BGR image is formed. Each small box represents one pixel; in real images the pixels are so small that the human eye cannot differentiate them.]
One might think that the BGR color space is suitable enough for color-based segmentation, but HSV is the most suitable color space for color-based image segmentation. So, in the application above, I converted the original video frames from BGR to HSV.
The HSV color space consists of 3 matrices: 'hue', 'saturation' and 'value'. In OpenCV, the value ranges for 'hue', 'saturation' and 'value' are respectively 0-179, 0-255 and 0-255. 'Hue' represents the color, 'saturation' represents the amount to which that color is mixed with white, and 'value' represents the amount to which that color is mixed with black.
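Putting that together for the original blue-rectangle question, a rough sketch; the hue bounds 100-130 are only a typical guess for blue in OpenCV's 0-179 scale and will need tuning for your image:

import cv2
import numpy as np

img = cv2.imread("frame.png")                        # placeholder file name, read as BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

lower_blue = np.array([100, 80, 50])                 # assumed lower HSV bound for blue
upper_blue = np.array([130, 255, 255])               # assumed upper HSV bound for blue
mask = cv2.inRange(hsv, lower_blue, upper_blue)      # binary image of "blue enough" pixels

# Optionally locate the rectangle as the largest blue contour (OpenCV 4 return signature)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))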
According to http://en.wikipedia.org/wiki/HSL_and_HSV#Use_in_image_analysis :
Because the R, G, and B components of an object’s color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant.
Also some good info here
The HSV color space abstracts color (hue) by separating it from saturation and pseudo-illumination. This makes it practical for real-world applications such as the one you have provided.
R, G and B in RGB are all correlated with the color's luminance (what we loosely call intensity), i.e., we cannot separate the color information from the luminance. HSV, or Hue Saturation Value, is used to separate image luminance from color information. This makes things easier when we are working on, or need, the luminance of an image/frame. HSV is also used in situations where color description plays an integral role.
Cheers
I have been trying to obtain the image brightness in OpenCV, and so far I have used calcHist and taken the average of the histogram values. However, I feel this is not accurate, as it does not actually determine the brightness of an image. I performed calcHist over a grayscale version of the image and tried to differentiate between the average values obtained from bright images and those from moderate ones, but I have not been successful so far. Could you please help me with a method or algorithm, realisable through OpenCV, to estimate the brightness of an image? Thanks in advance.
I suppose that the HSV color model will be useful in your problem, where channel V is Value:
"Value is the brightness of the color and varies with color saturation. It ranges from 0 to 100%. When the value is ’0′ the color space will be totally black. With the increase in the value, the color space brightness up and shows various colors."
So use the OpenCV method cvCvtColor(const CvArr* src, CvArr* dst, int code), which converts an image from one color space to another; in your case code = CV_BGR2HSV. Then calculate the histogram of the third channel, V.
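With the modern Python API the same idea looks roughly like this (the file name is a placeholder; the mean of V is one simple brightness figure, and the histogram is there if you need the full distribution):

import cv2

img = cv2.imread("photo.jpg")                              # placeholder file name, BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

v = hsv[:, :, 2]                                           # Value channel, 0-255
mean_brightness = float(v.mean())                          # single brightness estimate
v_hist = cv2.calcHist([hsv], [2], None, [256], [0, 256])   # histogram of the V channel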
I was about to ask the same, but then found that a similar question gave no satisfactory answers. All the answers I've found on SO deal with human observation of a single pixel's RGB vs HSV.
From my observations, the subjective brightness of an image also depends strongly on the pattern. A star in a dark sky may look brighter than a cloudy sky by day, even though the average pixel value of the first image is much smaller.
The images I use are grey-scale cell-images produced by a microscope. The forms vary considerably. Sometimes they are small bright dots on very black background, sometimes less bright bigger areas on not so dark background.
My approach is:
Find the histogram maximum (HMax), using a threshold to remove hot pixels.
Calculate the mean value of all pixels between HMax * 2/3 and HMax.
The ratio 2/3 could also be increased to 3/4 (which reduces the range of pixels considered bright).
The approach works quite well, as different cell patterns with the same titration produce similar brightness values.
P.S.: What I actually wanted to ask is whether there is a similar function for such a calculation in OpenCV or SimpleCV. Many thanks for any comments!
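I am not aware of a single built-in for this, but a rough numpy version of the steps above could look like the following; the hot-pixel fraction and the 2/3 ratio are the assumptions from the answer, not fixed constants:

import numpy as np

def bright_region_mean(gray, hot_pixel_fraction=0.001, ratio=2.0 / 3.0):
    # Mean of the brightest pixels, after discarding a small fraction of hot pixels
    values = np.sort(gray.ravel())
    hmax = values[int(len(values) * (1.0 - hot_pixel_fraction)) - 1]   # robust maximum
    bright = gray[(gray >= ratio * hmax) & (gray <= hmax)]
    return float(bright.mean()) if bright.size else 0.0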
I prefer Valentin's answer, but for 'yet another' way of determining average-per-pixel brightness, you can use numpy and the per-pixel Euclidean norm of the channels instead of a plain arithmetic mean. To me it gives better results.
import numpy as np
from numpy.linalg import norm

def brightness(img):
    if len(img.shape) == 3:
        # Colored RGB or BGR (*do not* use HSV images with this function)
        # brightness = average of the per-pixel Euclidean norm, rescaled to 0-255
        return np.average(norm(img, axis=2)) / np.sqrt(3)
    else:
        # Grayscale
        return np.average(img)
A bit of OpenCV C++ source code for a trivial check to differentiate between light and dark images. This is inspired by the answer above provided years ago by #ann-orlova:
const int darkness_threshold = 128; // you need to determine what threshold to use
cv::Mat mat = get_image_from_device();
cv::Mat hsv;
cv::cvtColor(mat, hsv, CV_BGR2HSV);
const auto result = cv::mean(hsv);
// cv::mean() will return 3 numbers, one for each channel:
// 0=hue
// 1=saturation
// 2=value (brightness)
if (result[2] < darkness_threshold)
{
    process_dark_image(mat);
}
else
{
    process_light_image(mat);
}
I'm using OpenCv. For the purpose of comparison, I have to fetch data about the color histogram of an image.
In detail, I have a large number of images which I organize into many subsets, each subset consisting of a group of similar images. My goal is to be able to take a new image and determine which subset it belongs to, based on color similarity.
Now, I know how to build the histogram of an image, but my problem is how to reduce as much as possible the effect of the image's lightness on the color histogram. I have thought about using cvEqualizeHist() before calculating the histogram, but since I'm pretty new to OpenCV I'm not sure what the best way is.
Any advice is very much appreciated.
Transform your image from RGB to HSV color space using cvtColor() with the CV_BGR2HSV or CV_RGB2HSV option. H, S and V stand for Hue, Saturation and Value (intensity), respectively. Build your color histograms in this HSV space and use only a couple of bins for the V channel.
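A minimal sketch with the Python API; the bin counts are only an example, the point being that V stays very coarse so lightness has little influence on the descriptor:

import cv2

img = cv2.imread("photo.jpg")                          # placeholder file name, BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Many bins for H and S, only 2 for V, so brightness barely affects the histogram
hist = cv2.calcHist([hsv], [0, 1, 2], None, [30, 32, 2], [0, 180, 0, 256, 0, 256])
hist = cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX).flatten()

# Compare against another image's histogram, e.g. with correlation:
# score = cv2.compareHist(hist, other_hist, cv2.HISTCMP_CORREL)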