I'm reading a paper about image processing and came across this color histogram: [histogram image]. But I'm not sure how to interpret it. The three curves are for red, green, and blue, but what is on the X- and Y-axis? My guess would be that the X-axis goes from 0 to 255 for the 'intensity' of the color and the Y-axis is the number of pixels in the image that have this intensity. Could anyone confirm this or correct me if I'm wrong?
If I remember correctly (someone please correct me if I am wrong), the X-axis represents the possible values of a color in one of the RGB channels (a value in the [0, 255] interval), and the Y-axis represents the number of pixels having that value.
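If it helps to see how such a plot is typically produced, here is a minimal sketch using OpenCV and matplotlib (the file name is just a placeholder). For each channel it counts, for every intensity from 0 to 255, how many pixels have that intensity, which is exactly the X/Y interpretation above:

```python
import cv2
from matplotlib import pyplot as plt

# Load an image (path is a placeholder); OpenCV stores channels as B, G, R
img = cv2.imread("example.jpg")

for i, color in enumerate(("b", "g", "r")):
    # 256 bins over [0, 256): x-axis = intensity, y-axis = pixel count
    hist = cv2.calcHist([img], [i], None, [256], [0, 256])
    plt.plot(hist, color=color)

plt.xlabel("intensity (0-255)")
plt.ylabel("pixel count")
plt.show()
```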
I want to detect red objects in an image, so I convert the RGB image to HSV. To find the range of the red color I used the color picker on this site:
https://alloyui.com/examples/color-picker/hsv
I found that H (hue) falls between 0 and 10 for the lower range and between 340 and 359 for the upper range, and that the maximum value of S (saturation) and V (value) is 100. But the problem is that some people say the ranges for red are H: 0 to 10 for the lower range and 160 to 180 for the upper range:
https://solarianprogrammer.com/2015/05/08/detect-red-circles-image-using-opencv/
OpenCV better detection of red color?
They also say the maximum S and V is 255. This is the color I got when I tried to find the upper limit of red.
There are different definitions of HSV, so the values your particular conversion function gives are the ones you should use. Measuring them is the best way to know for sure.
In principle H is an angle, so it goes from 0 to 360, with red centered around 0 (and with the understanding that 360 == 0). But some implementations divide that by 2 to fit it into 8 bits; others scale it to the full 0-255 range of the 8 bits.
The same is true for S and V. Sometimes they're values between 0 and 100, sometimes they go up to 255.
To measure, create an image with pure red pixels (RGB value 255,0,0) and convert it. That gives you the center of the red hue (H) and the maximum saturation (S). Then make an image that shades from orange to purple; these colors are near red, so you will see the range of H. Finally, make a pure white image (255,255,255); this has the maximum intensity (V).
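To make that measurement concrete, here is a minimal sketch assuming OpenCV's cvtColor is the conversion you use. Note that OpenCV's 8-bit HSV puts H in 0-179 (the angle divided by 2) and S, V in 0-255, which is why the 100 vs. 255 numbers differ between sources:

```python
import numpy as np
import cv2

# A 1x1 pure red pixel; OpenCV expects B, G, R order
red = np.uint8([[[0, 0, 255]]])
print(cv2.cvtColor(red, cv2.COLOR_BGR2HSV))    # [[[  0 255 255]]] -> H=0, S=255, V=255

# Pure white gives the maximum V with zero saturation
white = np.uint8([[[255, 255, 255]]])
print(cv2.cvtColor(white, cv2.COLOR_BGR2HSV))  # [[[  0   0 255]]] -> V=255

# Orange and magenta sit on either side of red on the hue circle,
# so their H values bracket the range you need for "red"
orange = np.uint8([[[0, 128, 255]]])
magenta = np.uint8([[[255, 0, 255]]])
print(cv2.cvtColor(orange, cv2.COLOR_BGR2HSV))
print(cv2.cvtColor(magenta, cv2.COLOR_BGR2HSV))
```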
I would like to subtract one color from another. For example, I have two 100x100-pixel images, one with color R:236 G:226 B:43 and another with R:63 G:85 B:235. I would like to subtract the color R:236 G:226 B:43 from R:63 G:85 B:235. But I know it cannot be done as a straightforward per-channel subtraction (R: 236-63, G: 226-85, B: 43-235), because values less than 0 or greater than 255 are not defined.
I found another color space, the RYB color space, but I don't know how it really works.
Thank you for your help.
You cannot actually subtract colors. But you surely can detect their difference. I suppose this is what you need, anyway.
Here are some thoughts and remarks:
Convert your images to the HSV colorspace, which transforms RGB values to Hue, Saturation and Brightness (Value).
All your images should be around a yellowish color (near 60 deg. on the Hue circle), so they should all have about the same Hue with minor differences.
Typically, if all images are taken under constant lighting conditions, they should have the same Value (brightness).
Saturation, which corresponds to the mixture of white in a color, typically represents how intense you perceive a color to be. This should also be about the same for all your images under constant lighting conditions.
According to your first description, the main difference should be detected in the Hue channel.
A good thing about HSV is that H (hue) is represented by a counterclockwise circle and colors are just positions on this circle, so positive and negative values all make sense (search google for a description of HSV colorspace to get a view of how it looks and works).
You may either detect differences by a subtraction, which gives a value that is either positive or negative, or take the absolute value of the subtraction, which gives only the magnitude of the difference between the two Hue values (but without any information on the direction of the difference). If you need the direction of the difference, stick to a plain subtraction.
For example:
Hue_1 - Hue_2 = Hue_3 (typically a small value for your problem)
If Hue_3 > 0, this means that Hue_1 is a bit towards Green.
If Hue_3 < 0, this means that Hue_1 is a bit towards Red.
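As a rough sketch of that signed difference (assuming hue measured in degrees, 0-360, and handling the wrap-around at the 0/360 boundary, which matters because red sits right on it):

```python
def hue_difference(hue_1, hue_2):
    """Signed difference between two hues in degrees, wrapped into (-180, 180]."""
    return (hue_1 - hue_2 + 180) % 360 - 180

# 10 deg and 350 deg are only 20 deg apart across the 0/360 boundary
print(hue_difference(10, 350))       # 20  -> Hue_1 is "ahead" of Hue_2
print(hue_difference(350, 10))       # -20
print(abs(hue_difference(58, 62)))   # 4   -> magnitude only, direction lost
```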
Of course, you may also need to look at the differences in the other channels, S and V, to see whether colors are more saturated or brighter, but I cannot be sure you need this since we haven't seen any images here.
Of course, you can do a lot more sophisticated things, like applying clustering or classification techniques to the detected hues and grouping them into classes according to your problem's needs.
I am trying to apply histogram normalization to create a dense color histogram.
Split the channels into R, G, B
Normalize the individual histogram
Merge
I think these are the usual steps; if I am wrong, please let me know. Now,
for a rainbow image as shown below I get a max of 255 and a min of 0 for all three channels. Using the formula
(Pixel - Min) / (Max - Min) * 255
I will get the same image as the original one back. What is the critical step that I am missing? Please advise me. Thank you!
Ref: http://www.roborealm.com/help/Normalize.php (this is the reference I used).
White = (255,255,255). Black = (0,0,0). So your program finds the white background, and the black line in the bottom right.
Remove the white and change it to black. Then make your program ignore black.
Images containing pure white and pure black pixels cannot be normalized this way; your formula just gives back the same values. Try ignoring all white and black pixels and normalizing the remaining pixels one by one.
As far as I can see, you already have a well-distributed image in all channels, so normalizing this one may not work well anyway.
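If it helps, here is a hedged sketch of that suggestion with NumPy/OpenCV (the file name is a placeholder): the min and max are computed only over pixels that are neither pure white nor pure black, so the background and the black line no longer force Min=0 and Max=255.

```python
import numpy as np
import cv2

img = cv2.imread("rainbow.png")  # placeholder file name

# Mask out pure white and pure black pixels before taking min/max
not_white = ~np.all(img == 255, axis=2)
not_black = ~np.all(img == 0, axis=2)
mask = not_white & not_black

out = img.astype(np.float32)
for c in range(3):
    channel = out[:, :, c]
    lo, hi = channel[mask].min(), channel[mask].max()
    if hi > lo:
        # (Pixel - Min) / (Max - Min) * 255, computed per channel
        out[:, :, c] = np.clip((channel - lo) / (hi - lo) * 255, 0, 255)

out = out.astype(np.uint8)
```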
How can I find white/black pixels between two points in JavaCV? I have tried cvInitLineIterator, but I don't know if I am on the right track.
Many thanks for considering my request.
Convert your image data into grayscale
Get list of pixels between your two points using cvInitLineIterator
Now check each pixel value on the line: 0 is ideally black (#000000) and the maximum value for your pixel depth is white.
I think you are moving in the right direction.
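For intuition, here is a language-agnostic sketch in Python/NumPy of what those three steps compute; the file name, coordinates, and the threshold of 128 are just illustrative, and the sampling stands in for what cvInitLineIterator would give you in JavaCV:

```python
import numpy as np
import cv2

def pixels_between(gray, p0, p1):
    """Sample grayscale values along the segment p0 -> p1 ((x, y) tuples)."""
    num = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    xs = np.linspace(p0[0], p1[0], num).round().astype(int)
    ys = np.linspace(p0[1], p1[1], num).round().astype(int)
    return gray[ys, xs]

img = cv2.imread("page.png")                      # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # step 1: grayscale
values = pixels_between(gray, (10, 20), (200, 20))  # step 2: pixels on the line
# step 3: classify each value as dark or light (threshold chosen arbitrarily)
print((values < 128).sum(), "dark pixels,", (values >= 128).sum(), "light pixels")
```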
I am trying to blend two images using the Poisson blending technique. I have written the program and solved the system of linear equations separately for each R, G, B channel. After solving the equations, the RGB values go out of bounds, with values greater than 255. If I clamp each value to 255, the resulting image becomes white because all three channels are now 255. My question is: can the RGB values be greater than 255 after solving the Poisson equation? How can I get a properly blended image in this case?
I think you need to change your scale for color values. According to the formulas given on most online references (the set of equations), the color values are assumed to be in the 0 to 1 range. Convert your 0-255 scale to floating-point values between 0 and 1 and see.
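A small sketch of that rescaling, where solve_poisson is a placeholder for your existing per-channel solver; the point is to work in floats in [0, 1], then clip the slight overshoot instead of clamping large 8-bit values:

```python
import numpy as np

def blend_channel(source, target, mask, solve_poisson):
    """Solve one channel in the 0-1 range, then clip before going back to uint8."""
    src = source.astype(np.float64) / 255.0
    tgt = target.astype(np.float64) / 255.0
    result = solve_poisson(src, tgt, mask)   # your existing solver, unchanged
    # Even in the 0-1 range the solution can overshoot slightly; clip it
    return (np.clip(result, 0.0, 1.0) * 255.0).astype(np.uint8)
```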