Grayscale conversion algorithm - image-processing

I am looking for a color-to-grayscale image conversion algorithm that preserves color contrast as much as possible after the conversion. Do you have any suggestions?
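To illustrate the problem: a standard weighted-sum conversion can map visually distinct colors to almost the same gray value. A minimal sketch, where the specific red/green pair is chosen (by assumption) to collide under scikit-image's Rec. 709-style luma weights:
import numpy as np
from skimage.color import rgb2gray
# Two visually distinct colors picked so their luma nearly coincides
# under rgb2gray's ~0.2125 R + 0.7154 G + 0.0721 B weighting.
patch = np.array([[[255, 0, 0],    # pure red
                   [0, 76, 0]]],   # a dark green
                 dtype=np.uint8)
print(rgb2gray(patch))  # both pixels come out near 0.213 -> contrast is lost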

Related

Converting colored image to grayscale

I am working on processing images that consist of colors that have the same grayscale value. In other words, each image is colored with random colors that share the same gray value.
When I converted such an image (using rgb2grey() from skimage or cv2.cvtColor() from OpenCV), the result had only one gray value (or slightly different gray values, imperceptible to the human eye), so the details of the image became unrecognizable.
My questions are:
What is the best preprocessing to apply before converting these images to grayscale? (Please note the colors of these images are not fixed.)
Are there any color combinations for which the color-to-gray conversion algorithms won't work?
How about using YCbCr?
Y is the luma (intensity) component, Cb is the blue-difference chroma component and Cr is the red-difference chroma component, both measured relative to luma.
So I think YCbCr can differentiate between multiple pixels with the same grayscale value.
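A minimal sketch of that idea with OpenCV (the file name is a placeholder; OpenCV exposes YCrCb rather than YCbCr, and min-max spreading of the chroma distance is just one heuristic, not a standard algorithm):
import cv2
import numpy as np
img = cv2.imread('iso_luminant.png')   # placeholder path, loaded as BGR
# A plain grayscale conversion collapses iso-luminant colors...
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# ...but the chroma planes of YCrCb still separate them.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)
# Spread the combined chroma distance from neutral (128) onto [0, 255]
# so regions that differ only in color remain distinguishable.
chroma = np.hypot(cr.astype(np.float32) - 128, cb.astype(np.float32) - 128)
chroma_gray = cv2.normalize(chroma, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)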

Normalization image rgb

I have a problem with normalization.
Let me explain what the problem is and how I attempted to solve it.
I take a three-channel color image, convert it to grayscale, and apply uniform or non-uniform quantization to it.
To this image I should then apply normalization, but I have a problem: even though the image looks grayscale, it still has three channels.
How can I apply normalization to a three-channel image?
Should the min and the max be computed across all three channels?
Could someone give me a hand?
The language I am using is Processing 2.
P.S.
Can you do the same thing with a color image instead of a grayscale image?
You can convert between the 1-channel and 3-channel representations easily. I'd recommend scikit-image (http://scikit-image.org/).
from skimage.io import imread
from skimage.color import rgb2gray, gray2rgb
rgb_img = imread('path/to/my/image')
gray_img = rgb2gray(rgb_img)  # single-channel float image in [0, 1]
# Now normalize the gray image so its maximum is 1
gray_norm = gray_img / gray_img.max()
# Now convert back to a 3-channel representation
rgb_norm = gray2rgb(gray_norm)
I worked with a similar problem sometime back. One of the good solutions to this was to:
Convert the image from RGB to HSI
Leaving the Hue and Saturation channels unchanged, simply normalize across the Intensity channel
Convert back to RGB
This logic can be applied across several other image-processing tasks, for example applying histogram equalization to RGB images.
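A minimal sketch of those steps, using HSV as a stand-in for HSI (scikit-image has no rgb2hsi; the path is a placeholder):
import numpy as np
from skimage.io import imread
from skimage.color import rgb2hsv, hsv2rgb
rgb = imread('path/to/my/image')
hsv = rgb2hsv(rgb)  # channels: H, S, V, each in [0, 1]
# Normalize only the V (intensity-like) channel; H and S stay untouched,
# so every pixel keeps its hue and saturation.
v = hsv[..., 2]
hsv[..., 2] = (v - v.min()) / (v.max() - v.min())
rgb_norm = hsv2rgb(hsv)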

Is there any function in OpenCV to quantize RGB values?

I need to quantize the RGB values to a uniform dictionary of 29 colors. I used rgb2ind(image, 29) in Matlab.
So, is there any function or efficient way to quantize the image color in OpenCV?
(I need to quantize the image colors because I want to compute a 29-bin color histogram.)
You will have to make your own. I can recommend using HSV instead of RGB (you can convert RGB to HSV with OpenCV). Once the image is converted, you can then simply use 29 ranges for the H value, as sketched below.
EDIT: I saw this answer might be a bit vague for those who have little experience in computer vision. This question gives a lot more information about the difference between HSV and RGB and why this is useful.
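A minimal sketch of that binning in Python (the path is a placeholder; OpenCV stores 8-bit hue in [0, 179], hence the divisor of 180):
import cv2
import numpy as np
img = cv2.imread('image.png')  # placeholder path; OpenCV loads BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h = hsv[..., 0]  # hue channel, values in [0, 179]
# Map each hue into one of 29 uniform bins.
bins = 29
h_quantized = (h.astype(np.int32) * bins // 180).astype(np.uint8)
# 29-bin hue histogram over the whole image.
hist = np.bincount(h_quantized.ravel(), minlength=bins)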

Why do we convert from RGB to HSV

I have an image and I want to detect a blue rectangle in it. My teacher told me to:
convert it to the HSV color model
define a threshold to turn it into a binary image containing only the color we want to detect
So why do we do that? Why don't we threshold the RGB image directly?
Thanks for any answer.
You can find the answer to your question here
the basic summary is that HSV is better for object detection,
OpenCV usually captures images and videos in 8-bit, unsigned integer, BGR format. In other words, captured images can be considered as 3 matrices, BLUE, GREEN and RED, with integer values ranging from 0 to 255.
(Figure omitted: a grid showing how a BGR image is formed. Each small box represents a pixel of the image; in real images these pixels are so small that the human eye cannot differentiate them.)
One might assume that the BGR color space is suitable for color-based segmentation, but HSV is in fact the most suitable color space for color-based image segmentation. So, in the application above, I converted the color space of the original video frames from BGR to HSV.
The HSV color space consists of 3 matrices: 'hue', 'saturation' and 'value'. In OpenCV, the value ranges for 'hue', 'saturation' and 'value' are 0-179, 0-255 and 0-255 respectively. 'Hue' represents the color, 'saturation' represents the amount to which that color is mixed with white, and 'value' represents the amount to which that color is mixed with black.
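For the blue-rectangle task above, the thresholding step might look like this sketch (the hue bounds are an assumption and depend on lighting; tune them for your footage):
import cv2
import numpy as np
frame = cv2.imread('frame.png')  # placeholder path, loaded as BGR
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Approximate range for blue in OpenCV's 0-179 hue scale; the exact
# bounds are an assumption and should be tuned per scene.
lower_blue = np.array([100, 80, 80])
upper_blue = np.array([130, 255, 255])
# Binary mask: 255 where the pixel is within the blue range, 0 elsewhere.
mask = cv2.inRange(hsv, lower_blue, upper_blue)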
According to http://en.wikipedia.org/wiki/HSL_and_HSV#Use_in_image_analysis :
Because the R, G, and B components of an object’s color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant.
Also some good info here
The HSV color space abstracts color (hue) by separating it from saturation and pseudo-illumination. This makes it practical for real-world applications such as the one you have provided.
R, G and B in RGB are all correlated with the color's luminance (what we loosely call intensity), i.e., we cannot separate color information from luminance. HSV, or Hue-Saturation-Value, is used to separate image luminance from color information. This makes things easier when we are working on, or need, the luminance of the image/frame. HSV is also used in situations where color description plays an integral role.
Cheers

SURF and OpenSURF to color image

I am using SURF features in OpenCV where the input images are converted to GRAY image.
cvtColor(object, object, CV_RGB2GRAY);
When I went through the documentation of OpenSURF, I realised that it's not in grayscale.
My confusion is: can we apply SURF to other image formats (YUV, HSV, RGB), or do we have to modify the program to achieve that?
Most feature detectors work on greyscale because they analyse the patterns of edges in the image patch. You can run SURF on any single colour channel from the colour formats you mention, i.e. you can run it on Y, U or V from YUV images, or on H, S or V from HSV images. I'm not sure how OpenSURF treats this, but they must be using the greyscale image internally.
Like OpenCV, if you give OpenSURF an image that is not single-channel, it calls cvtColor(src, dst, CV_BGR2GRAY). If you pass a 3-channel image in YUV, HSV, Lab, etc., things will go horribly wrong because an inappropriate colour conversion will be applied.
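A minimal sketch of running a detector on one channel explicitly, so no implicit conversion happens (SURF itself lives in cv2.xfeatures2d and needs an opencv-contrib build with the nonfree modules enabled; ORB is used here only because it ships with stock OpenCV):
import cv2
img = cv2.imread('object.png')  # placeholder path, loaded as BGR
yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(yuv)
# Feed the single Y channel to the detector directly. With a contrib
# build you could use cv2.xfeatures2d.SURF_create(hessianThreshold=400)
# in place of ORB.
detector = cv2.ORB_create()
keypoints, descriptors = detector.detectAndCompute(y, None)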
