GPUImage Histogram Equalization - iOS

I would like to use GPUImage's Histogram Equalization filter (link to .h) (link to .m) for a camera app. I'd like to use it in real time and present it as an option to be applied on the live camera feed. I understand this may be an expensive operation and cause some latency.
I'm confused about how this filter works. When selected in GPUImage's example project (Filter Showcase), the filter shows a very dark image that is biased toward red and blue, which does not seem to be the way equalization should work.
Also, what is the difference between the histogram types kGPUImageHistogramLuminance and kGPUImageHistogramRGB? Filter Showcase uses kGPUImageHistogramLuminance, but the default in the init is kGPUImageHistogramRGB. If I switch Filter Showcase to kGPUImageHistogramRGB, I just get a black screen. My goal is an overall contrast optimization.
Does anyone have experience using this filter? Or are there current limitations with this filter that are documented somewhere?

Histogram equalization of RGB images is done using the luminance, because equalizing the RGB channels separately would render the colour information useless.
You basically convert RGB to a colour space that separates colour from intensity information, equalize the intensity image, and finally convert it back to RGB.
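As a rough illustration of that recipe (this is not GPUImage itself, just a Python/OpenCV sketch, and the file names are placeholders):

import cv2

# Convert to YCrCb, equalize only the luma (intensity) channel, convert back.
bgr = cv2.imread("input.jpg")
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)
y_eq = cv2.equalizeHist(y)  # equalize intensity only; colour channels untouched
equalized = cv2.cvtColor(cv2.merge([y_eq, cr, cb]), cv2.COLOR_YCrCb2BGR)
cv2.imwrite("equalized.jpg", equalized)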
According to the documentation: http://oss.io/p/BradLarson/GPUImage
GPUImageHistogramFilter: This analyzes the incoming image and creates an output histogram with the frequency at which each color value occurs. The output of this filter is a 3-pixel-high, 256-pixel-wide image with the center (vertical) pixels containing pixels that correspond to the frequency at which various color values occurred. Each color value occupies one of the 256 width positions, from 0 on the left to 255 on the right. This histogram can be generated for individual color channels (kGPUImageHistogramRed, kGPUImageHistogramGreen, kGPUImageHistogramBlue), the luminance of the image (kGPUImageHistogramLuminance), or for all three color channels at once (kGPUImageHistogramRGB).
I'm not very familiar with the programming language used, so I can't tell whether the implementation is correct. But in the end, the colours should not change much; pixels should just become brighter or darker.

Related

Calculate the perceived brightness of an image

I want to calculate the perceived brightness of an image and classify it as dark, neutral, or bright. And I found one problem here!
I quote Lakshmi Narayanan's comment below. I'm confused by this method. What does "the average of the hist values from the 0th channel" mean here? Does the 0th channel refer to the gray image or to the value channel of an HSV image? Moreover, what is the theory behind that method?
Well, for such a case, I think HSV would be better. Or try this method, #2vision2: compute the Laplacian of the grayscale of the image, obtain the max value using minMaxLoc, and call it maxval. Estimate your sharpness/brightness index as (maxval * average V channel value) / (average of the hist values from the 0th channel), as said above. This would give you certain values: low-brightness images are usually below 30, 30-50 can be taken as OK images, and above 50 as bright images.
If you have an RGB color image, you can get the brightness by converting it to another color space that separates color from intensity information, like HSV or LAB.
Gray images already show local "brightness", so no conversion is necessary.
Whether an image is perceived as bright depends on many things: mainly your display device, reference images, contrast, and the human observer.
Using a few intensity statistics should give you an OK classification for one particular display device.
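A minimal sketch of that kind of intensity statistic in Python/OpenCV (the thresholds and file name are assumptions for illustration, not values from the answers above):

import cv2
import numpy as np

# Mean of the HSV value channel as a simple brightness statistic.
img = cv2.imread("photo.jpg")  # placeholder path
v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]
mean_v = float(np.mean(v))
if mean_v < 70:        # assumed cutoff for "dark"
    label = "dark"
elif mean_v < 160:     # assumed cutoff for "neutral"
    label = "neutral"
else:
    label = "bright"
print(mean_v, label)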

Should I use HSV/HSB or RGB and why?

I have to detect leukocyte cells in an image that also contains other blood cells, but the difference can be distinguished through the color of the cells: leukocytes have a denser purple color, as can be seen in the image below.
Which color model should I use, RGB or HSV, and why?
sample image:
Usually when making decisions like this, I just quickly plot the different channels and color spaces and see what I find. It is always better to start with a high-quality image than to start with a low-quality one and try to fix it with lots of processing.
In this specific case I would use HSV, but unlike most color segmentation I would actually use the saturation channel to segment the images. The cells are nearly the same hue, so using the hue channel would be very difficult.
Hue (at full saturation and full brightness): very hard to differentiate the cells.
Saturation: huge contrast.
Green channel: actually shows a lot of contrast as well (it surprised me).
Red and blue channels: it is hard to actually distinguish the cells.
Now that we have two candidate representations, the saturation or the green channel, we ask which is easier to work with. Since any HSV work requires converting the RGB image first, we can dismiss it, so the clear choice is to simply use the green channel of the RGB image for segmentation.
Edit:
Since you didn't include a language tag, I'd like to attach some MATLAB code I just wrote. It displays an image in all four color spaces so you can quickly make an informed decision about which to use. It mimics MATLAB's Color Thresholder colorspace selection window.
function ViewColorSpaces(rgb_image)
% ViewColorSpaces(rgb_image)
% displays an RGB image in 4 different color spaces. RGB, HSV, YCbCr,CIELab
% each of the 3 channels are shown for each colorspace
% the display mimics the MATLAB Color Thresholder window
% http://www.mathworks.com/help/images/image-segmentation-using-the-color-thesholder-app.html
hsvim = rgb2hsv(rgb_image);
yuvim = rgb2ycbcr(rgb_image);
%cielab colorspace
cform = makecform('srgb2lab');
cieim = applycform(rgb_image,cform);
figure();
%rgb
subplot(3,4,1);imshow(rgb_image(:,:,1));title(sprintf('RGB Space\n\nred'))
subplot(3,4,5);imshow(rgb_image(:,:,2));title('green')
subplot(3,4,9);imshow(rgb_image(:,:,3));title('blue')
%hsv
subplot(3,4,2);imshow(hsvim(:,:,1));title(sprintf('HSV Space\n\nhue'))
subplot(3,4,6);imshow(hsvim(:,:,2));title('saturation')
subplot(3,4,10);imshow(hsvim(:,:,3));title('brightness')
%ycbcr / yuv
subplot(3,4,3);imshow(yuvim(:,:,1));title(sprintf('YCbCr Space\n\nLuminance'))
subplot(3,4,7);imshow(yuvim(:,:,2));title('blue difference')
subplot(3,4,11);imshow(yuvim(:,:,3));title('red difference')
%CIElab
subplot(3,4,4);imshow(cieim(:,:,1));title(sprintf('CIELab Space\n\nLightness'))
subplot(3,4,8);imshow(cieim(:,:,2));title('green red')
subplot(3,4,12);imshow(cieim(:,:,3));title('yellow blue')
end
you could call it like this
rgbim = imread('http://i.stack.imgur.com/gd62B.jpg');
ViewColorSpaces(rgbim)
and the display is this
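If you then go with the green channel, a minimal segmentation sketch in Python/OpenCV could look like the following (it assumes the purple cells appear dark in the green channel and that the sample image has been downloaded locally; Otsu is just one convenient threshold choice, not part of the reasoning above):

import cv2

bgr = cv2.imread("gd62B.jpg")  # the sample image from the question, saved locally
green = bgr[:, :, 1]
# Cells are darker than the background in green, so invert the Otsu threshold.
_, mask = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# mask is 255 roughly where the cells are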
In DIP and CV this is always a valid question. But it has no universal answer, because each task is unique, so use what is better suited for it. To choose correctly you need to know the pros and cons of each, so here is a short summary:
RGB
This is easy to handle and you can easily access the r, g, b bands. In many cases it is better to check just a single band instead of the whole color, or to mix the bands to emphasize a wanted feature or even dampen an unwanted one. It is hard to compare colors in RGB because intensity is encoded directly into the bands; to remedy that you can use normalization, but that is slow (it needs a per-pixel sqrt). You can do arithmetic on RGB colors directly.
Example of a task better suited for RGB:
finding the horizon in a high-altitude photo
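The normalization mentioned above could look like this in Python (a sketch under one interpretation: each pixel's RGB vector is scaled to unit length so comparisons ignore intensity; the per-pixel sqrt is what makes it slow; the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread("photo.jpg").astype(np.float32)
norm = np.sqrt(np.sum(img ** 2, axis=2, keepdims=True)) + 1e-6  # per-pixel magnitude
chromaticity = img / norm  # intensity-free color direction, comparable across lighting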
HSV
is better suited for color recognition, because CV algorithms using HSV behave much more like human visual perception, so if you want to recognize areas of distinct colors, HSV is better. The conversion between RGB and HSV takes a bit of time, which can be a problem for big resolutions or high-fps apps. For standard DIP/CV tasks this is usually not the case.
Example of a task better suited for HSV:
Compare RGB colors
Take a look at:
HSV histogram
to see the distinct color separation in HSV. Segmenting an image based on color is easy in HSV. You cannot do arithmetic on HSV colors directly; instead you need to convert to RGB and back.
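A sketch of that kind of color segmentation in Python/OpenCV (the hue range below is an illustrative "blue" range and the file name a placeholder, not values from the answer):

import cv2
import numpy as np

bgr = cv2.imread("photo.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
lower = np.array([100, 80, 50])    # OpenCV hue runs 0-179
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)            # 255 where the pixel falls in the range
segmented = cv2.bitwise_and(bgr, bgr, mask=mask)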

OpenCV: Get over the lightness effect on the color histogram

I'm using OpenCV. For the purpose of comparison, I have to fetch data about the color histogram of an image.
In detail, I have a large number of images which I organize into many subsets, each consisting of a group of similar images. My goal is to be able to take a new image and determine which subset it belongs to, based on color similarity.
Now, I know how to build the histogram of an image, but my problem is how to decrease as much as possible the effect of the image's lightness on the color histogram. I have thought about using cvEqualizeHist() before calculating the histogram, but since I'm pretty new to OpenCV I'm not sure what the best way is.
Any advice is very much appreciated.
Transform your image from RGB to the HSV color space using cvtColor() with the CV_BGR2HSV or CV_RGB2HSV option. H, S and V stand for hue, saturation and value (intensity), respectively. Build color histograms in this HSV space and use only a couple of bins for the V channel.
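A sketch of that in Python/OpenCV (bin counts and the file name are illustrative): a 3-D HSV histogram with fine hue and saturation bins but only two bins for V, so lightness contributes little when two histograms are compared.

import cv2

bgr = cv2.imread("photo.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
# 30 hue bins, 32 saturation bins, only 2 value bins
hist = cv2.calcHist([hsv], [0, 1, 2], None, [30, 32, 2], [0, 180, 0, 256, 0, 256])
hist = cv2.normalize(hist, hist).flatten()
# Two such histograms can then be compared with cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)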

Algorithm for determining the prominent colour of a photograph

When we look at a photo of a group of trees, we are able to identify that the photo is predominantly green and brown, or for a picture of the sea we are able to identify that it is mostly blue.
Does anyone know of an algorithm that can be used to detect the prominent color or colours in a photo?
I can envisage a 3D clustering algorithm in RGB space or something similar. I was wondering if someone knows of an existing technique.
Convert the image from RGB to a color space with brightness and saturation separated (HSL/HSV)
http://en.wikipedia.org/wiki/HSL_and_HSV
Then find the dominating values for the hue component of each pixel. Make a histogram of the hue values of all pixels and analyze in which angular region the peaks fall. A large peak in the quadrant between 180 and 270 degrees means there is a large portion of blue in the image, for example.
There can be several difficulties in determining one dominant color. Pathological example: an image whose left half is blue and right half is red. Also, the hue will not deal very well with grayscales obviously. So a chessboard image with 50% white and 50% black will suffer from two problems: the hue is arbitrary for a black/white image, and there are two colors that are exactly 50% of the image.
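A sketch of the hue-histogram idea in Python/OpenCV (the bin count and file name are illustrative):

import cv2
import numpy as np

bgr = cv2.imread("photo.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
# OpenCV stores hue as 0-179 (degrees / 2); 36 bins means each bin spans 10 degrees
hist = cv2.calcHist([hsv], [0], None, [36], [0, 180]).flatten()
peak_bin = int(np.argmax(hist))
print("dominant hue starts around", peak_bin * 10, "degrees")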
It sounds like you want to start by computing an image histogram or color histogram of the image. The predominant color(s) will be related to the peak(s) in the histogram.
You might want to change the image from RGB to indexed, then you could use a regular histogram and detect the peaks (MATLAB does this with rgb2ind(), as you probably already know), and then the problem would be reduced to your regular "finding peaks in an array".
Then
n = hist(Y,nbins) bins the elements in vector Y into nbins equally spaced containers and returns the number of elements in each container as a row vector.
The values in n tell you how many elements fall in each bin. Then it's just a matter of fiddling with the number of bins to make them wide enough, deciding how many elements a bin needs before you count it as a predominant color, taking the bins that contain that many elements, calculating the index that corresponds to their middle, and converting that index back to RGB.
Whatever you're using for your processing probably has similar functions to those.
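In case it helps, here is a rough Python equivalent of the idea (the quantization to 8 levels per channel is a crude stand-in for rgb2ind, chosen just for illustration, and the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread("photo.jpg")                        # BGR, uint8
step = 256 // 8                                      # 8 levels per channel -> 512 possible "indexed" colors
quant = (img // step) * step + step // 2             # snap each pixel to its bin center
colors, counts = np.unique(quant.reshape(-1, 3), axis=0, return_counts=True)
dominant = colors[np.argmax(counts)]                 # most frequent quantized color (B, G, R)
print("dominant BGR color:", dominant)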
1. Average all pixels in the image.
2. Remove all pixels that are farther away from the average color than the standard deviation.
3. Go back to step 1 with the remaining pixels, until arbitrarily few are left (1, or maybe 1%).
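A sketch of that loop in Python (one interpretation: "standard deviation" is taken as the standard deviation of the per-pixel distances to the current average; the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread("photo.jpg").astype(np.float32)
pixels = img.reshape(-1, 3)
min_left = max(1, int(0.01 * len(pixels)))          # stop at roughly 1% of the pixels
while len(pixels) > min_left:
    mean = pixels.mean(axis=0)
    dist = np.linalg.norm(pixels - mean, axis=1)
    keep = dist <= dist.std()
    if keep.all():                                  # nothing farther than one std; done
        break
    pixels = pixels[keep]
dominant = pixels.mean(axis=0)                      # B, G, R of the dominant color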
You might also want to pre-process the image, for example apply a high-pass filter (removing only very low frequencies) to even out the lighting in the photo; see http://en.wikipedia.org/wiki/Checker_shadow_illusion

What is a good way of Enhancing contrast of color images?

I split a color image into its 3 channels and enhanced the contrast of each channel.
Then I merged them back together. I like the resulting image, but it has different colors:
black objects became yellow, and so on...
EDIT:
The algorithm I used is to calculate the 5th percentile and the 95th percentile as min and max values, and then stretch the values of the image so that its min and max become 0 and 255. If there is a better approach, please tell me.
When doing contrast enhancement in color images, it is a good idea to only adjust the luminance (brightness) and leave the color information alone. This requires a colorspace conversion from RGB to something like YUV. In this colorspace, the Y component is similar to a grayscale version of the image, while the other components provide the color. This effectively allows you to adjust contrast (by running your algorithm on just the Y component) without distorting the color information. Finally, you can convert back to RGB.
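A sketch of that combination in Python/OpenCV, reusing the 5th/95th-percentile stretch from the question but applying it only to the luminance (YCrCb is one convenient choice of such a colorspace, and the file name is a placeholder):

import cv2
import numpy as np

bgr = cv2.imread("photo.jpg")
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
y = ycrcb[:, :, 0].astype(np.float32)
lo, hi = np.percentile(y, (5, 95))                  # min/max from the question's algorithm
y = np.clip((y - lo) * 255.0 / max(hi - lo, 1.0), 0, 255)
ycrcb[:, :, 0] = y.astype(np.uint8)
result = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)   # colors preserved, contrast stretched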
Use the CLAHE algorithm (contrast-limited adaptive histogram equalization). OpenCV has an implementation of it: cv::createCLAHE().
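For example, applied to lightness only in the Python bindings (the clipLimit and tileGridSize below are common defaults, not values from the answer, and the file name is a placeholder):

import cv2

bgr = cv2.imread("photo.jpg")
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
result = cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_LAB2BGR)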
