What is the difference between modifying the alpha and the contrast? - opencv

Could someone explain what the difference is between contrast and alpha? (Is there any difference...?)
When talking about OpenCV, they seem to be the same thing...

Contrast is the color difference that makes objects distinguishable. Alpha, on the other hand, is the variable that indicates the transparency of a pixel. (Part of the confusion is that OpenCV's brightness/contrast tutorial uses the name alpha for the linear gain in g(x) = alpha*f(x) + beta, which acts as a contrast control, and addWeighted calls its first blending weight alpha; neither has anything to do with the alpha channel.)
If you want to increase the contrast, you can do this by widening the histogram of an image. See histogram equalization; CLAHE usually gives very good results.
If you want to add transparency to an image, you can write out the 4-channel image (blue - green - red - alpha) as .png. If you want to blend two images, you can use addWeighted or write a function of your own.
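For concreteness, here is a minimal Python/OpenCV sketch of the operations mentioned above; the file names and parameter values are placeholders, not anything the answer prescribes:

import cv2

img = cv2.imread('input.png')                  # placeholder file name

# Contrast: a plain linear gain, g(x) = alpha*f(x) + beta. OpenCV's own
# brightness/contrast tutorial calls this gain "alpha", which is one
# source of the naming confusion.
contrasty = cv2.convertScaleAbs(img, alpha=1.5, beta=0)

# Transparency: append a fourth (alpha) channel and write out a PNG.
bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
bgra[:, :, 3] = 128                            # 50% transparent everywhere
cv2.imwrite('transparent.png', bgra)

# Blending: addWeighted also names its first weight "alpha".
other = cv2.imread('other.png')                # must match img in size and type
blended = cv2.addWeighted(img, 0.7, other, 0.3, 0)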

Related

Convert a Picture to RGB Dots Image (Half Toning Like Effect)

I'm trying to show students how the RGB color model works to create a particular color (or moreover to convince them that it really does). So I want to take a picture and convert each pixel to an RGB representation so that when you zoom in, instead of a single colored pixel, you see the RGB colors.
I've done this, but for some fairly obvious reasons the converted picture comes out either washed out or darker than the original (a minor inconvenience, but I think the demonstration would be more powerful if the result looked closer to the original).
Here are two pictures "zoomed out":
Here is a "medium zoom", starting to show the RGB artifacts in the converted picture:
And here is a picture zoomed in to the point that you can clearly see individual pixels and the RGB squares:
You'll notice the constant color surrounding the pixels; that is the average RGB of the picture. I put that there so that you could see individual pixels (otherwise you just see rows/columns of shades of red/green/blue). If I take that space out completely, the image is even darker and if I replace it with white, then the image looks faded (when zoomed out).
I know why displaying it this way makes it darker: a "pure red" pixel comes with completely black green and blue stripes. In a sense, if I were to take a completely red picture, the converted version would essentially be 1/3 the brightness of the original.
So my question is:
1: Are there any tools available that already do this (or something similar)?
2: Any ideas on how to get the converted image closer to the original?
For the 2nd question, I could of course just increase the brightness of each "RGB pixel" (the three horizontal stripes in each square), but by how much? I certainly can't just multiply the RGB ints by 3 (in apparent compensation for what I said above). I wonder if there is some way to adjust the background color to compensate. Or is it just something that has to be fiddled with for each picture?
Your instinct to recover the brightness by multiplying everything by 3 was basically right. There's just one small problem: the RGB values in an image are gamma-corrected, so the stored values are not linear in intensity. You need to remove the gamma, multiply, then re-apply the gamma correction.
You also need to lose the borders around each pixel. Those borders take up 7/16 of the final image, which is just too much to compensate for. I tried rotating every other pixel by 90 degrees instead; while it gives the result a definite zig-zag pattern, it does make clear where the pixel boundaries are.
When you zoom out in an image viewer you might see the gamma problem too: many viewers don't bother with gamma correction when they resize. For an in-depth explanation see "Gamma error in picture scaling", and use the test image supplied at the end. It might be better to forgo scaling altogether and simply step back from the monitor.
Here's some Python code and a crop from the resulting image.
from PIL import Image

im = Image.open(filename)   # filename is the path to your source image
im2 = Image.new('RGB', (im.size[0]*3, im.size[1]*3))
ld1 = im.load()
ld2 = im2.load()
for y in range(im.size[1]):
    for x in range(im.size[0]):
        rgb = ld1[x, y]
        # undo the gamma so the intensities are linear (Python 3 division)
        rgb = [(c/255)**2.2 for c in rgb]
        # triple the intensity to compensate for the 2/3 of each
        # 3x3 block that stays black, clamping at full brightness
        rgb = [min(1.0, c*3) for c in rgb]
        # re-apply the gamma
        rgb = tuple(int(255*(c**(1/2.2))) for c in rgb)
        x2 = x*3
        y2 = y*3
        if (x + y) & 1:
            # every other pixel uses horizontal stripes...
            for x3 in range(x2, x2+3):
                ld2[x3, y2] = (rgb[0], 0, 0)
                ld2[x3, y2+1] = (0, rgb[1], 0)
                ld2[x3, y2+2] = (0, 0, rgb[2])
        else:
            # ...and the rest vertical ones, to mark the pixel boundaries
            for y3 in range(y2, y2+3):
                ld2[x2, y3] = (rgb[0], 0, 0)
                ld2[x2+1, y3] = (0, rgb[1], 0)
                ld2[x2+2, y3] = (0, 0, rgb[2])
Don't waste too much time on this. You cannot make two images look the same when one of them carries less information, and the computer will still subsample your image in weird ways while zooming out.
Just pass a magnifying glass around the class so the students can see it for themselves on their phones or other screens, or show pictures of a screen at different magnification levels.
If you want to stick to software: triple the resolution of your image, don't use empty rows and columns (or at least make them black to increase contrast), and scale the RGB components to the full range.
Why don't you keep the magnified image for the background? That would let the two images look identical when zoomed out, while the RGB strips remain clearly visible when zoomed in.
If not, use the average color of the whole image as the background to keep a similar intensity, but the washed-out effect will remain.
An intermediate option is to apply a strong lowpass filter to the image to smooth out all details and use that as the background, but I don't see a real advantage over the first approach.

GPUImage Histogram Equalization

I would like to use GPUImage's Histogram Equalization filter (link to .h) (link to .m) for a camera app. I'd like to use it in real time and present it as an option to be applied on the live camera feed. I understand this may be an expensive operation and cause some latency.
I'm confused about how this filter works. When selected in GPUImage's example project (Filter Showcase), the filter shows a very dark image that is biased toward red and blue, which does not seem to be how equalization should work.
Also what is the difference between the histogram types kGPUImageHistogramLuminance and kGPUImageHistogramRGB? Filter Showcase uses kGPUImageHistogramLuminance but the default in the init is kGPUImageHistogramRGB. If I switch Filter Showcase to kGPUImageHistogramRGB, I just get a black screen. My goal is an overall contrast optimization.
Does anyone have experience using this filter? Or are there current limitations with this filter that are documented somewhere?
Histogram equalization of RGB images is done using the luminance, as equalizing the RGB channels separately would render the colour information useless.
You basically convert RGB to a colour space that separates colour from intensity information, equalize the intensity image, and finally convert back to RGB.
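GPUImage aside, the same idea can be sketched in a few lines of Python/OpenCV to show what luminance-only equalization looks like (the file name is a placeholder):

import cv2

img = cv2.imread('input.jpg')            # placeholder file name
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)             # separate intensity (Y) from colour
y = cv2.equalizeHist(y)                  # equalize only the luminance channel
out = cv2.cvtColor(cv2.merge((y, cr, cb)), cv2.COLOR_YCrCb2BGR)

The colours are left untouched; only the brightness distribution changes.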
According to the documentation: http://oss.io/p/BradLarson/GPUImage
GPUImageHistogramFilter: This analyzes the incoming image and creates an output histogram with the frequency at which each color value occurs. The output of this filter is a 3-pixel-high, 256-pixel-wide image with the center (vertical) pixels containing pixels that correspond to the frequency at which various color values occurred. Each color value occupies one of the 256 width positions, from 0 on the left to 255 on the right. This histogram can be generated for individual color channels (kGPUImageHistogramRed, kGPUImageHistogramGreen, kGPUImageHistogramBlue), the luminance of the image (kGPUImageHistogramLuminance), or for all three color channels at once (kGPUImageHistogramRGB).
I'm not very familiar with the programming language used, so I can't tell whether the implementation is correct. But in the end the colours should not change much; pixels should just become brighter or darker.

Should I use HSV/HSB or RGB and why?

I have to detect leukocyte cells in an image that also contains other blood cells. The difference can be distinguished through the color of the cells: leukocytes have a denser purple color, as can be seen in the image below.
Which color model should I use, RGB or HSV, and why?
sample image:
Usually when making decisions like this I just quickly plot the different channels and color spaces and see what I find. It is always better to start with a high-quality image than to start with a low-quality one and try to fix it with lots of processing.
In this specific case I would use HSV, but unlike most color segmentation I would actually use the saturation channel to segment the images. The cells are nearly the same hue, so using the hue channel would be very difficult:
- hue (at full saturation and full brightness): very hard to differentiate the cells
- saturation: huge contrast
- green channel: actually shows a lot of contrast as well (it surprised me)
- red and blue channels: hard to distinguish the cells
Now that we have two candidate representations, the saturation and the green channel, we ask which is easier to work with. Since any HSV work involves first converting the RGB image, we can dismiss it; the clear choice is to simply use the green channel of the RGB image for segmentation, as sketched below.
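A rough Python/OpenCV sketch of that suggestion; the file name and the choice of Otsu's method for picking the threshold are my own illustrative assumptions, not something the answer prescribes:

import cv2

img = cv2.imread('blood_cells.jpg')      # placeholder file name
b, g, r = cv2.split(img)                 # OpenCV loads images as BGR
# Otsu picks a threshold automatically; the dark purple nuclei fall
# below it, so the binary output is inverted to select them.
_, mask = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imwrite('leukocyte_mask.png', mask)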
Edit:
Since you didn't include a language tag, I'd like to attach some Matlab code I just wrote. It displays an image in all 4 color spaces so you can quickly make an informed decision about which to use. It mimics Matlab's Color Thresholder colorspace selection window.
function ViewColorSpaces(rgb_image)
    % ViewColorSpaces(rgb_image)
    % displays an RGB image in 4 different color spaces: RGB, HSV, YCbCr, CIELab
    % each of the 3 channels is shown for each colorspace
    % the display mimics the Matlab Color Thresholder window
    % http://www.mathworks.com/help/images/image-segmentation-using-the-color-thesholder-app.html
    hsvim = rgb2hsv(rgb_image);
    yuvim = rgb2ycbcr(rgb_image);
    % CIELab colorspace
    cform = makecform('srgb2lab');
    cieim = applycform(rgb_image, cform);
    figure();
    % RGB
    subplot(3,4,1); imshow(rgb_image(:,:,1)); title(sprintf('RGB Space\n\nred'))
    subplot(3,4,5); imshow(rgb_image(:,:,2)); title('green')
    subplot(3,4,9); imshow(rgb_image(:,:,3)); title('blue')
    % HSV
    subplot(3,4,2); imshow(hsvim(:,:,1)); title(sprintf('HSV Space\n\nhue'))
    subplot(3,4,6); imshow(hsvim(:,:,2)); title('saturation')
    subplot(3,4,10); imshow(hsvim(:,:,3)); title('brightness')
    % YCbCr / YUV
    subplot(3,4,3); imshow(yuvim(:,:,1)); title(sprintf('YCbCr Space\n\nLuminance'))
    subplot(3,4,7); imshow(yuvim(:,:,2)); title('blue difference')
    subplot(3,4,11); imshow(yuvim(:,:,3)); title('red difference')
    % CIELab
    subplot(3,4,4); imshow(cieim(:,:,1)); title(sprintf('CIELab Space\n\nLightness'))
    subplot(3,4,8); imshow(cieim(:,:,2)); title('green red')
    subplot(3,4,12); imshow(cieim(:,:,3)); title('yellow blue')
end
You could call it like this:
rgbim = imread('http://i.stack.imgur.com/gd62B.jpg');
ViewColorSpaces(rgbim)
and the display looks like this:
In DIP and CV this is always a valid question, but it has no universal answer because each task is unique, so use whatever is better suited for it. To choose correctly you need to know the pros and cons of each, so here is a summary:
RGB
This is easy to handle, and you can access the r, g, b bands separately. In many cases it is better to inspect just a single band instead of the whole color, or to mix the bands to emphasize a wanted feature or dampen an unwanted one. Comparing colors is hard in RGB because intensity is encoded directly into the bands; you can remedy that with normalization, but that is slow (it needs a per-pixel sqrt). You can do arithmetic on RGB colors directly.
Example of task better suited for RGB:
finding the horizon in a high-altitude photo
HSV
This is better suited for color recognition, because CV algorithms working in HSV behave much closer to human color perception; if you want to recognize areas of distinct colors, HSV is the better choice. The RGB/HSV conversion takes a bit of time, which can be a problem for big resolutions or high-fps applications; for standard DIP/CV tasks this is usually not the case.
Example of task better suited for HSV:
Compare RGB colors
Take a look at:
HSV histogram
to see the distinct color separation in HSV. Segmenting an image based on color is easy in HSV, as the sketch below shows. You cannot do arithmetic on HSV colors directly; instead you need to convert to RGB and back.
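To illustrate how easy color segmentation is in HSV, here is a small Python/OpenCV sketch; the purple-ish bounds and the file name are illustrative assumptions that need tuning per image (OpenCV stores hue as 0..179):

import cv2
import numpy as np

img = cv2.imread('input.jpg')            # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([125, 50, 50])          # lower bound of a purple-ish range
upper = np.array([155, 255, 255])        # upper bound of a purple-ish range
mask = cv2.inRange(hsv, lower, upper)    # 255 where the pixel falls in range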

subtract one color from another in RGB color space

I would like to subtract one color from another. For example, I have two 100x100-pixel images, one with the color R:236 G:226 B:43 and another with R:63 G:85 B:235. I would like to subtract the color R:236 G:226 B:43 from R:63 G:85 B:235. But I know it can't be done as a plain per-channel subtraction (R:236-63, G:226-85, B:43-235), because values below 0 or above 255 are undefined.
I also found the RYB color space, but I don't know how it really works.
Thank you for your help.
You cannot actually subtract colors. But you surely can detect their difference. I suppose this is what you need, anyway.
Here are some thoughts and remarks:
- Convert your images to the HSV colorspace, which transforms RGB values to Hue, Saturation and Brightness (Value).
- All your images should be around a yellowish color (near 60 deg. on the Hue circle), so they should all have about the same Hue with minor differences.
- Typically, if all images are taken under constant lighting conditions they should have the same Value (brightness).
- Saturation, which corresponds to the mixture of white in a color, typically represents how intense you perceive a color to be. This would typically be about the same for all your images under constant lighting conditions.
According to your first description, the main difference should be detected in the Hue channel.
A good thing about HSV is that H (hue) is represented as a position on a (counterclockwise) circle, so positive and negative differences both make sense (search Google for a description of the HSV colorspace to see how it looks and works).
You may detect differences either by a plain subtraction, which gives a positive or a negative value, or by taking the absolute value of the subtraction, which gives the magnitude of the hue difference but loses its direction. If you need the direction of the difference, stick to plain subtraction.
For example:
Hue_1 - Hue_2 = Hue_3 (typically a small value for your problem)
if Hue_3 > 0, Hue_1 is a bit towards green
if Hue_3 < 0, Hue_1 is a bit towards red
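A small Python/OpenCV sketch of such a signed hue difference; the modular wrap accounts for hue being circular (OpenCV stores hue as 0..179), and the file names are placeholders:

import cv2
import numpy as np

img1 = cv2.imread('color1.png')          # placeholder file names
img2 = cv2.imread('color2.png')
h1 = cv2.cvtColor(img1, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
h2 = cv2.cvtColor(img2, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
diff = (h1 - h2 + 90) % 180 - 90         # signed difference, wrapped into [-90, 90)

A positive difference means the first hue is rotated one way around the circle relative to the second, a negative one the other way, matching the green/red interpretation above.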
Of course you may also want to look at the differences in the other channels, S and V, to see whether colors are more saturated or brighter, but I cannot be sure you need to do this since we haven't seen any images here.
And of course you can do far more sophisticated things, like applying clustering or classification techniques to the detected hues and grouping them into classes according to your problem's needs...

What is a good way of Enhancing contrast of color images?

I split a color image into its 3 channels and enhanced the contrast of each channel, then merged them back together. I like the resulting image, but it has different colors: black objects became yellow, and so on...
EDIT:
The algorithm I used calculates the 5th and the 95th percentiles as the min and max values, then stretches the image values so that min and max map to 0 and 255. If there is a better approach, please tell me.
When doing contrast enhancement in color images, it is a good idea to only adjust the luminance (brightness) and leave the color information alone. This requires a colorspace conversion from RGB to something like YUV. In this colorspace, the Y component is similar to a grayscale version of the image, while the other components provide the color. This effectively allows you to adjust contrast (by running your algorithm on just the Y component) without distorting the color information. Finally, you can convert back to RGB.
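A minimal Python/OpenCV sketch of this approach, reusing the asker's 5th/95th-percentile stretch but applying it to the luminance channel only (the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread('input.jpg')            # placeholder file name
yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(yuv)                 # Y is the grayscale-like channel
lo, hi = np.percentile(y, (5, 95))       # the asker's min/max estimates
y = np.clip((y.astype(np.float32) - lo) * 255.0 / (hi - lo), 0, 255)
yuv = cv2.merge((y.astype(np.uint8), u, v))
out = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)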
Use the CLAHE algorithm; OpenCV has an implementation of it: cv::createCLAHE().
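For example, a sketch that applies CLAHE to the lightness channel of CIELab only, following the advice above to leave the color information alone (the file name, clip limit and tile size are illustrative):

import cv2

img = cv2.imread('input.jpg')            # placeholder file name
lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l = clahe.apply(l)                       # enhance contrast of lightness only
out = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_Lab2BGR)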
