Histogram in OpenCV

I have two questions:
1) I have segmented an image from the background, and the result is a colored circle on a black background. When I compute a histogram with calcHist(), will the result be the histogram of the colored circle only, or will it also include the black background?
2) After calculating a histogram for each channel (L, a, and b) of a Lab image and computing the mean of each histogram, I get the same mean for every channel, while the standard deviation and the mode differ. Is this correct, or am I doing something wrong?
Any help would be appreciated.
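Regarding the first question: calcHist() counts every pixel of the image unless you pass a mask, so a black background piles up in bin 0. A minimal numpy sketch of the effect (cv2.calcHist accepts an analogous `mask` argument; the toy circle image here is an illustrative assumption):

```python
import numpy as np

# Toy grayscale "image": a bright circle on a black background.
yy, xx = np.mgrid[0:64, 0:64]
circle = (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2   # segmentation mask
img = np.zeros((64, 64), dtype=np.uint8)
img[circle] = 200

# Histogram of the whole image: bin 0 is dominated by the background.
full_hist, _ = np.histogram(img, bins=256, range=(0, 256))

# Histogram restricted to the segmented region, which is what
# cv2.calcHist computes when the segmentation mask is passed as `mask`.
masked_hist, _ = np.histogram(img[circle], bins=256, range=(0, 256))

print(full_hist[0], masked_hist[0])   # background counted vs. excluded
```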

Related

Lane detection with brightness changes and shadows on lanes?

I am currently working on a lane detection project, where the input is an RGB road image "img" from a racing game, and the output is the same image with colored lines drawn on the detected lanes.
The steps are:
Convert the RGB image "img" to HSL, then apply a white color mask (only white lanes are expected in the image) to discard any parts of the image with colors outside the white range (setting their values to zero); let the output of this step be "white_img".
Convert "white_img" to grayscale, producing "gray_img".
Apply Gaussian blurring to "gray_img" to smooth the edges, so fewer noisy edges are detected, producing "smoothed_img".
Apply edge detection to "smoothed_img", producing "edge_img".
Crop "edge_img" by selecting a region of interest (ROI), approximately the lower half of the image, producing "roi_img".
Finally, apply a Hough transform to "roi_img" to detect the lines, which are taken as the detected lanes.
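The masking and ROI steps above can be sketched in numpy (a simplified stand-in: the real pipeline would use cv2.cvtColor for HSL, cv2.GaussianBlur, cv2.Canny, and cv2.HoughLinesP; the plain RGB threshold and the toy frame are illustrative assumptions):

```python
import numpy as np

def white_mask(img_rgb, lo=200):
    """Step 1 (simplified): keep near-white pixels, zero everything else.
    The real pipeline thresholds in HSL; a high threshold on all RGB
    channels is a rough stand-in here."""
    keep = np.all(img_rgb >= lo, axis=2)
    out = np.zeros_like(img_rgb)
    out[keep] = img_rgb[keep]
    return out

def to_gray(img_rgb):
    """Step 2: luminance-weighted grayscale conversion."""
    w = np.array([0.299, 0.587, 0.114])
    return (img_rgb @ w).astype(np.uint8)

def roi_lower_half(img):
    """Step 5: keep only the lower half of the frame."""
    out = np.zeros_like(img)
    out[img.shape[0] // 2:] = img[img.shape[0] // 2:]
    return out

# Toy frame: a dark road with one white lane stripe in the lower half.
frame = np.full((80, 120, 3), 60, dtype=np.uint8)
frame[50:, 58:62] = 255

gray = to_gray(white_mask(frame))
roi = roi_lower_half(gray)
print(np.count_nonzero(roi))   # only the lane-stripe pixels survive
```

The brightness problem described below shows up directly in `lo`: a fixed threshold that admits shadowed lanes also admits most of a bright frame.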
The biggest problems I am facing now are brightness changes and shadows on the lanes. In a dark image with shadows on the lanes, the lane color can become very dark. I tried widening the accepted white color range in step 1, which worked well for that kind of image. But in a bright image with no shadows on the lanes, most of the image survives step 1, so the output contains many things irrelevant to the lanes.
Examples of input images:
Medium brightness without shadows on the lanes
Low brightness with shadows on the lanes
High brightness without shadows on the lanes
Any help to deal with these issues will be appreciated. Thanks in advance.

GPUImage Histogram Equalization

I would like to use GPUImage's Histogram Equalization filter (link to .h) (link to .m) for a camera app. I'd like to use it in real time and present it as an option to be applied on the live camera feed. I understand this may be an expensive operation and cause some latency.
I'm confused about how this filter works. When selected in GPUImage's example project (Filter Showcase), the filter shows a very dark image biased toward red and blue, which does not seem to be how equalization should work.
Also what is the difference between the histogram types kGPUImageHistogramLuminance and kGPUImageHistogramRGB? Filter Showcase uses kGPUImageHistogramLuminance but the default in the init is kGPUImageHistogramRGB. If I switch Filter Showcase to kGPUImageHistogramRGB, I just get a black screen. My goal is an overall contrast optimization.
Does anyone have experience using this filter? Or are there current limitations with this filter that are documented somewhere?
Histogram equalization of RGB images is done on the luminance, because equalizing the RGB channels separately would render the colour information useless.
You basically convert RGB to a colour space that separates colour from intensity, equalize the intensity channel, and then convert back to RGB.
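A minimal numpy sketch of that recipe's equalization step (in practice you would convert with cv2.cvtColor to YCrCb, run cv2.equalizeHist on the intensity channel, and convert back; the low-contrast test channel is an assumption for illustration):

```python
import numpy as np

def equalize(channel):
    """Classic histogram equalization of an 8-bit channel via its CDF."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level so the output histogram is as flat as possible.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[channel]

# Low-contrast intensity channel: values squeezed into [100, 150].
y = np.linspace(100, 150, 64 * 64).reshape(64, 64).astype(np.uint8)
y_eq = equalize(y)
print(y.min(), y.max(), y_eq.min(), y_eq.max())
```

Applied to the intensity channel only, this stretches contrast while the chroma channels, and hence the colours, stay put.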
According to the documentation: http://oss.io/p/BradLarson/GPUImage
GPUImageHistogramFilter: This analyzes the incoming image and creates an output histogram with the frequency at which each color value occurs. The output of this filter is a 3-pixel-high, 256-pixel-wide image with the center (vertical) pixels containing pixels that correspond to the frequency at which various color values occurred. Each color value occupies one of the 256 width positions, from 0 on the left to 255 on the right. This histogram can be generated for individual color channels (kGPUImageHistogramRed, kGPUImageHistogramGreen, kGPUImageHistogramBlue), the luminance of the image (kGPUImageHistogramLuminance), or for all three color channels at once (kGPUImageHistogramRGB).
I'm not very familiar with the programming language used, so I can't tell whether the implementation is correct. But in the end, colours should not change much; pixels should just become brighter or darker.

Line detection against darker shades

I want to detect lines in an image with black line drawings on white paper.
It would be easy if the image were ideal black and white; a histogram threshold would do.
But, as the attached image shows, some lines (e.g. in the light red circle) are a lighter gray than the shadows (e.g. in the dark red circle), so a histogram threshold picks up some shadows before the light lines.
Are there any ideas for separating lines from shadows using some 'knowledge'? Thanks!
Edit:
Here are the raw images; they are a bit small because they are at their original resolution.
Thanks :-)
I would add another method using Gaussian blur, as an alternative to erosion + dilation, for your reference:
file='http://i.stack.imgur.com/oEjtT.png';
I=imread(file);
% Difference of Gaussians: heavily blurred copy minus lightly blurred copy
h = fspecial('gaussian',5,4);    % 5x5 kernel, sigma = 4 (strong blur)
I1=imfilter(I,h,'replicate');
h = fspecial('gaussian',5);      % 5x5 kernel, default sigma = 0.5 (light blur)
I2=imfilter(I,h,'replicate');
I3=I1-I2;                        % uint8 subtraction clips negative values to 0
I3=double(I3(:,:,1));            % keep a single channel
I3=I3.*(I3>5);                   % suppress weak responses
imshow(I3)
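For reference, the same difference-of-Gaussians idea in a numpy-only sketch (in OpenCV, cv2.GaussianBlur would replace the blur helper; the toy line image is an assumption, while the 5x5 kernels, sigmas, and >5 cutoff mirror the MATLAB snippet):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    x = np.arange(size) - (size - 1) / 2
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, size, sigma):
    """Separable Gaussian blur with edge replication ('replicate')."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    p = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, p)
    return p

# Toy image: one bright vertical line on a dark background.
img = np.zeros((32, 32))
img[:, 16] = 100.0

heavy = blur(img, 5, 4.0)               # like fspecial('gaussian',5,4)
light = blur(img, 5, 0.5)               # like fspecial('gaussian',5), default sigma
dog = np.clip(heavy - light, 0, None)   # clip negatives, as uint8 subtraction does
dog[dog <= 5] = 0                       # suppress weak responses
```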

Histogram Normalization

I am trying to apply histogram normalization to create a dense color histogram.
Split the image into its R, G, and B channels
Normalize each channel's histogram individually
Merge the channels
I think these are the usual steps; if I am wrong, please let me know. Now, for a rainbow image such as the one shown below, I get a max of 255 and a min of 0 for all 3 channels. Using the formula
(Pixel - Min) / (Max - Min) * 255
I get the same image as the original back. What is the critical step I am missing? Please advise me. Thank you!
Ref: http://www.roborealm.com/help/Normalize.php (this is the reference I used).
White = (255,255,255) and black = (0,0,0), so your program finds the white background and the black line in the bottom right.
Remove the white and change it to black, then make your program ignore black.
Images containing both pure white and pure black pixels cannot be normalized this way: with Min = 0 and Max = 255, your formula returns every pixel unchanged. Try ignoring all white and black pixels and normalizing the remaining pixels one by one.
As I see it, you already have a well-distributed image in all channels, so normalizing this one may not work well anyway.
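One way to act on that advice is to clip at percentiles instead of the absolute min/max, so a few isolated black and white pixels no longer force the formula into an identity; the stretch helper and the synthetic test channel below are illustrative assumptions:

```python
import numpy as np

def stretch(channel, lo_pct=1, hi_pct=99):
    """Contrast-stretch one channel, ignoring extreme pixels by
    clipping at percentiles instead of the absolute min and max."""
    lo, hi = np.percentile(channel, [lo_pct, hi_pct])
    if hi <= lo:
        return channel.copy()
    out = (channel.astype(float) - lo) * 255 / (hi - lo)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# A channel whose min/max already span 0..255 because of two outliers:
chan = np.full((50, 50), 120, dtype=np.uint8)
chan[0, 0], chan[0, 1] = 0, 255
chan[10:40, 10:40] = np.linspace(100, 140, 900).reshape(30, 30).astype(np.uint8)

# The plain (Pixel - Min) * 255 / (Max - Min) formula changes nothing here:
naive = np.round((chan.astype(float) - chan.min()) * 255
                 / (chan.max() - chan.min())).astype(np.uint8)
print(np.array_equal(naive, chan))    # True: Min 0 and Max 255 give the identity

fixed = stretch(chan)                 # the interior ramp actually spreads out
```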

Getting the dominant color with OpenCV

I have an image which is multi-colored.
I want to calculate the dominant color of the image. The dominant color is red, and I want to filter the red out. I am using the following code in OpenCV, but it is not performing well.
inRange(input_image, Scalar(0, 0, 0), Scalar(0, 0, 255), output);
How can I get the dominant color otherwise? My final program should determine the dominant color of the object on its own. What is the best method for this?
You should quantize (reduce the number of colors in) your image before searching for the most frequent color.
Why? Imagine an image that has 100 pixels of (0,0,255) (blue in RGB), 100 pixels of (0,0,254) (almost blue; you won't even see the difference) and 150 pixels of (0,255,0) (green). What is the most frequent color here? Obviously, it's green. But after quantization you will get 200 pixels of blue and 150 pixels of green.
Read this discussion: How to reduce the number of colors in an image with OpenCV?. Here's a simple example:
int coef = 200;
Mat quantized = img/coef;    // integer division collapses each channel to a few levels
quantized = quantized*coef;  // scale back to the original value range
And this is what I got after applying it:
You can also use k-means or mean-shift clustering to do this (a much more effective way).
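The quantize-then-count idea can be sketched in Python (OpenCV images are numpy arrays, so this applies directly; coef=64 is an assumption that keeps 4 levels per channel, unlike the coef of 200 above, which leaves only 2):

```python
import numpy as np

def dominant_color(img, coef=64):
    """Quantize each channel by integer division, then return the most
    frequent quantized color. coef=64 keeps 4 levels per channel."""
    q = (img // coef) * coef
    colors, counts = np.unique(q.reshape(-1, 3), axis=0, return_counts=True)
    return colors[counts.argmax()]

# The scenario from the answer: 100 px of (0,0,255), 100 px of (0,0,254)
# and 150 px of (0,255,0).
img = np.concatenate([
    np.tile([0, 0, 255], (100, 1)),
    np.tile([0, 0, 254], (100, 1)),
    np.tile([0, 255, 0], (150, 1)),
]).reshape(350, 1, 3).astype(np.uint8)

print(dominant_color(img))   # the two near-identical blues merge into one 200-pixel bin
```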
The best method is to analyze histograms.
Your problem is the classical one of finding the peak and the area under the peak. Given an image file (let's say we take only the third channel for simplicity):
You will have to find the highest peak in that histogram. The easiest method is simply to take the X for which Y is maximal. More advanced methods work with windows; they average the Y-values of 10 consecutive data points, etc.
Also, work in the HSV or YCrCb color space. HSV is good because the Hue channel translates very closely to what you mean by "color". RGB is really not well suited for image analysis.
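Both peak-finding variants described above can be sketched in numpy; the synthetic hue data (using OpenCV's 0-179 hue range) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
hue = np.concatenate([
    rng.integers(0, 180, 500),   # uniform background clutter
    np.full(300, 120),           # dominant hue (OpenCV hues run 0-179)
]).astype(np.uint8)

hist = np.bincount(hue, minlength=180)

# Easiest method: the X for which Y is maximal.
peak = int(hist.argmax())

# Windowed variant: average the Y-values of 10 consecutive bins first,
# which makes the peak estimate robust to single-bin noise.
smooth = np.convolve(hist, np.ones(10) / 10, mode='same')
windowed_peak = int(smooth.argmax())
print(peak, windowed_peak)
```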
