I have not found an article that explains why, when transforming from the RGB color model to HSI, the Saturation is undefined when the color is black.
Question 1: What is the explanation for this (the mathematical reason)?
Question 2: If the singularity is at black, that means we cannot define what the Saturation is there. But then why is HSI also sensitive to error when the Saturation is small (not zero, but close to zero)? I have read that it is better not to allow the saturation to become very small.
The mathematical reason is an indeterminate form 0/0. It should be intuitive that "there is nothing to see" in black.
The saturation of RGB (0,0,0) cannot be calculated and is therefore defined as zero.
S = (max(r,g,b) - min(r,g,b)) / max(r,g,b)
You can see that r = g = b = 0 would result in a problem, as we cannot divide by 0.
The formula also shows that very small saturation values can only occur if we have very similar RGB values. If a pixel has a low saturation it is "more gray" or more achromatic. It doesn't make much sense to apply colour based rules to non-colours.
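As a quick illustration, here is a minimal Python sketch of that formula with the usual guard for the black case; the function name and the convention of returning 0 for black are illustrative choices, not part of any standard:

    def saturation(r, g, b):
        mx = max(r, g, b)
        mn = min(r, g, b)
        if mx == 0:
            # r = g = b = 0 (black): the formula becomes 0/0, so S is defined as 0
            return 0.0
        return (mx - mn) / mx

    print(saturation(0, 0, 0))        # 0.0 by convention (mathematically undefined)
    print(saturation(200, 198, 199))  # 0.01: nearly gray, almost achromatic
    print(saturation(255, 0, 0))      # 1.0: fully saturated red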
Given an image, how do I go about determining if a certain pixel is "white"? Based on Wikipedia, I understand that if the RGB values are (255,255,255) the pixel is considered white, and that a lower but similar set of values, e.g. (200,200,200), would mean that it is a "darker shade of white", i.e. gray.
Should I just set a threshold of, for example, 80% for each channel, and if the RGB at a certain pixel passes that condition, mark it as gray/white? Are there any papers that I can read for help?
Regards,
Haziq
The solution is to convert your color space from RGB to HSV. Here is a sample algorithm thread. Finally, apply a threshold in the Value (Lightness) channel to filter the bright region.
If you simply threshold all channels at, say, 200, you are allowing the Red to differ from the Green and that to differ from the Blue, which means you are allowing colour into your images: many tinted, non-grey colours would be permitted.
You need to ensure that not only are Red, Green and Blue above 200, but further that they are (nearly) equal. That way you only permit the grey-to-white range.
In the HSL model, you need Lightness to be above, say, 80%, but also the Saturation to be zero (or very close to it) to ensure white/grey.
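A small Python sketch of that rule, assuming 8-bit RGB values; the 200 threshold and the equality tolerance are illustrative numbers, not canonical ones:

    def is_white_or_light_grey(r, g, b, min_level=200, tolerance=10):
        bright = min(r, g, b) >= min_level                      # all channels bright
        achromatic = max(r, g, b) - min(r, g, b) <= tolerance   # channels (nearly) equal
        return bright and achromatic

    print(is_white_or_light_grey(255, 255, 255))  # True  (white)
    print(is_white_or_light_grey(210, 205, 208))  # True  (light grey)
    print(is_white_or_light_grey(230, 230, 120))  # False (bright but yellowish)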
I would like to subtract one color from another. For example, I have two 100x100 pixel images, one with color R:236 G:226 B:43 and another with R:63 G:85 B:235. I would like to subtract the color R:236 G:226 B:43 from R:63 G:85 B:235. But I know it can't be done by simple per-channel arithmetic, i.e. R:236-63, G:226-85, B:43-235, because values less than 0 or greater than 255 are not defined.
I found another color space, the RYB color space, but I don't know how it really works.
Thank you for your help.
You cannot actually subtract colors. But you surely can detect their difference. I suppose this is what you need, anyway.
Here are some thoughts and remarks:
- Convert your images to the HSV colorspace, which transforms RGB values to Hue, Saturation and Brightness (Value).
- All your images should be around a yellowish color (near 60 deg. on the Hue circle), so they should all have about the same Hue with minor differences.
- Typically, if all images are taken under constant lighting conditions, they should have the same Value (brightness).
- Saturation, which corresponds to the mixture of white in a color, typically represents how intense you perceive a color to be. This would typically be about the same for all your images under constant lighting conditions.
According to your first description, the main difference should be detected in the Hue channel.
A good thing about HSV is that H (hue) is represented as an angle on a (counterclockwise) color circle and colors are just positions on this circle, so positive and negative differences both make sense (search Google for a description of the HSV colorspace to get a view of how it looks and works).
You may either detect differences by a subtraction, which will give you a value that is either positive or negative, or take the absolute value of the subtraction, which gives just the magnitude of the difference between the two Hue values (but no information about the direction of the difference). If you need the direction of the difference, stick to a plain subtraction.
For example:
Hue_1 - Hue_2 = Hue_3 (typically a small value for your problem)
if Hue_3 > 0, Hue_1 is a bit towards Green
if Hue_3 < 0, Hue_1 is a bit towards Red
Of course you may also need to take a look at the differences in the other channels, S and V to see if colors are more saturated or more bright, but I cannot be sure you need to do this since we haven't seen any images here.
Of course you can do a lot more sophisticated things, like applying clustering or classification techniques to the detected hues and grouping them into classes according to your problem's needs.
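If it helps, here is a rough Python/OpenCV sketch of the hue-difference idea; OpenCV is an assumption (any RGB-to-HSV conversion would do) and the file names are placeholders:

    import cv2
    import numpy as np

    img1 = cv2.imread("img1.png")
    img2 = cv2.imread("img2.png")

    hue1 = cv2.cvtColor(img1, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
    hue2 = cv2.cvtColor(img2, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)

    # OpenCV stores hue as 0..179 (degrees / 2), so wrap the signed difference
    # into [-90, 90) to respect the circular nature of the hue channel
    diff = (hue1 - hue2 + 90) % 180 - 90

    print("mean hue shift:", diff.mean())
    # positive -> img1 shifted towards green relative to img2 (for yellowish hues),
    # negative -> shifted towards red; np.abs(diff) gives the magnitude only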
Does anyone know an algorithm to non-linearly change lightness using the HSI model?
I am currently doing something like this.
new intensity = old intensity^(1/4)
It increases the lightness of dark colors more than the lightness of bright colors.
The problem is that, before enhancement, some pixels look black because of very low lightness; after enhancement their lightness increases and their actual colors appear, which makes the black areas of the photo take on different colors (e.g. gray, blue). I have tried quite a few ways to solve it by lowering the new lightness of those black spots, but I have had no luck so far.
Is there any way to solve it, or is there a better algorithm? The problem only occurs with colors that appear to be black before enhancement.
Please help. Thanks a lot.
The HSI values of dark pixels are usually degenerate. This is because, for example, a fully saturated but maximally dark blue (= black) is identical in appearance to a completely de-saturated (gray) pixel at its darkest (= black). This is the reason the 3D color-space shape usually has a pointed tip at the degenerate/singular colors.
You should not enhance pixels below a certain threshold value, or alternatively, use a weighting function that inhibits enhancement at very dark values.
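Here is one way that weighting could look, sketched in Python/NumPy with HSV standing in for HSI; the threshold, the linear weight and the file names are illustrative choices, not the only possibility:

    import cv2
    import numpy as np

    img = cv2.imread("photo.png")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2] / 255.0                   # intensity in [0, 1]

    enhanced = v ** 0.25                       # the original intensity^(1/4) curve

    # weight is 0 for very dark pixels and 1 above the threshold, so black
    # areas keep their (degenerate) color instead of being amplified
    threshold = 0.05
    weight = np.clip(v / threshold, 0.0, 1.0)
    hsv[:, :, 2] = (weight * enhanced + (1.0 - weight) * v) * 255.0

    out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    cv2.imwrite("enhanced.png", out)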
I'd like to design a chart and set its colors from a single exemplar, the same way Excel does when it generates a palette of shades from one color.
Is there some sort of formula or algorithm to generate the next shade of color from a given shade or color?
That looks to me like they just took the same hue (basic color) and turned the brightness up and down. That can be done easily enough with an HSL or HSV transformation. Check Wikipedia for the HSL and HSV color spaces to get some understanding of the theory involved.
Basic idea: Computers represent color with a mixture of red intensity, green intensity and blue intensity, called RGB, because that's the way the screen displays color. HSL (Hue, Saturation, Lightness) and HSV (Hue, Saturation, Value) are two alternative models for representing color that are more intuitive and closer to the way human beings tend to think about how colors look.
Hue is the basic color, represented (more or less) as an angle on a color wheel. Saturation is a linear value from 0 (gray) up to its maximum (a bright, vibrant color), and Lightness/Value represents brightness, from 0 (black) up to its maximum (white); the exact numeric ranges (0-255, 0-100 or 0-1) depend on the implementation.
The algorithms to transform from RGB -> HSL and HSL -> RGB (or HSV instead of HSL) are pretty straightforward. Try transforming your color to HS*, adjusting the brightness, and transforming back. By taking several different brightness values from low to high, and arranging them as wedges in a pie chart, you can duplicate that picture pretty easily.
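To make that concrete, here is a small sketch using Python's standard colorsys module; the base color, the number of shades and the 30%-100% brightness range are arbitrary example values:

    import colorsys

    def shades_from(rgb, count=6):
        """Return `count` shades of the same hue, ordered dark to light."""
        r, g, b = (c / 255.0 for c in rgb)
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        shades = []
        for i in range(count):
            vi = 0.3 + 0.7 * i / (count - 1)   # spread brightness from 30% to 100%
            ri, gi, bi = colorsys.hsv_to_rgb(h, s, vi)
            shades.append((round(ri * 255), round(gi * 255), round(bi * 255)))
        return shades

    print(shades_from((41, 128, 185)))         # six blues sharing one hue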
Look into the HSV colour space. Using it you can produce different shades or tints starting from a given colour. There is a page with Pascal / Delphi code for conversion between RGB and HSV at efg's Computer Lab.
Roderick, the links from mghie are great to start with; additionally, try out the Colorlib Delphi Library, which lets you convert between color models and also provides HTML color conversion utilities. It is very complete, full source code included, and freeware ;).
Check the demo application; the image there shows a blue palette generated using this library.
I split a color image into 3 channels and applied contrast enhancement to each channel.
Then I merged them back together. I like the resulting image, but it has different colors:
black objects became yellow, and so on...
EDIT:
The algorithm I used is to calculate the 5th percentile and the 95th percentile as min and max values, and then stretch the image values so that this min and max map to 0 and 255. If there is a better approach, please tell me.
When doing contrast enhancement in color images, it is a good idea to only adjust the luminance (brightness) and leave the color information alone. This requires a colorspace conversion from RGB to something like YUV. In this colorspace, the Y component is similar to a grayscale version of the image, while the other components provide the color. This effectively allows you to adjust contrast (by running your algorithm on just the Y component) without distorting the color information. Finally, you can convert back to RGB.
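A short Python/OpenCV sketch of that idea, reusing the asker's 5th/95th-percentile stretch but applying it to the luminance channel only; the file names are placeholders:

    import cv2
    import numpy as np

    img = cv2.imread("input.png")
    yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
    y = yuv[:, :, 0].astype(np.float32)

    # percentile-based contrast stretch on the luminance channel only
    lo, hi = np.percentile(y, (5, 95))
    y = np.clip((y - lo) / (hi - lo) * 255.0, 0, 255)

    yuv[:, :, 0] = y.astype(np.uint8)
    out = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)   # color (U, V) is untouched
    cv2.imwrite("stretched.png", out)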
Use the CLAHE algorithm. OpenCV has an implementation of it: cv::createCLAHE()
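For reference, a minimal usage sketch in Python (cv2.createCLAHE is the Python counterpart of cv::createCLAHE); applying it to the L channel of Lab keeps the color information intact, and the clip limit and tile grid size are just common defaults:

    import cv2

    img = cv2.imread("input.png")
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
    l, a, b = cv2.split(lab)

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))

    out = cv2.cvtColor(lab, cv2.COLOR_Lab2BGR)
    cv2.imwrite("clahe.png", out)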